
The algorithmic public sphere and democracy

Aniek van den Brandt
20/02/2019

For years, the public sphere was controlled by humans, but in the last decade power has shifted from human gatekeepers to algorithmic ones. This essay describes the problematic consequences of this algorithmic culture for society and the public sphere.

Algorithms and the changing public sphere

Today’s public sphere is highly influenced by digitalization. The concept of the public sphere, however, was defined by political philosopher Jürgen Habermas decades before digitalization. He says:

“By ‘public sphere’, we mean first of all a domain of our social life in which such a thing as public opinion can be formed. Access to the public sphere is open in principle to all citizens. A portion of the public sphere is constituted in every conversation in which private persons come together to form a public. (…) Citizens act as a public when they deal with matters of general interest without being subject to coercion; thus with the guarantee that they may assemble and unite freely, and express and publicize their opinions freely.” (Habermas, 1974: p. 49)

Thus, following Habermas, the public sphere is a space in which politics and its consequences are discussed and evaluated: a space to make one’s voice heard, criticize politicians, and reflect on society. According to Habermas, this public sphere emerged between the 16th and 18th centuries, when the development of printing brought social circles of men together in coffee houses to discuss trade, labour, and politics. In these coffee houses, public opinion was formed of the people, by the people, and that is what the public sphere is all about.

According to Habermas, mass media were the first to change the public sphere. With digitalization, even more has changed in recent years. By now we can say that the public sphere has shifted from coffee houses, via traditional mass media, to Facebook and Twitter. Debates happen mostly online, and public opinion is shaped within an online space. With the introduction of online personalization, the public sphere is no longer ruled by the citizen, but by computational products. Human beings are no longer the gatekeepers of the public sphere; algorithms have taken over this role.

Algorithmic culture

Algorithms have become standard in the online public sphere; they are now a cultural phenomenon in their own right. Therefore, in 2015, Ted Striphas introduced the term ‘algorithmic culture’, which he describes as follows:

“What one sees in Amazon, and in its kin Google, Facebook, Twitter, Netflix and many others, is the enfolding of human thought, conduct, organization and expression into the logic of big data and large-scale computation, a move that alters how the category culture has long been practiced, experienced and understood.” (Striphas, 2015: p. 396)

According to Striphas, this algorithmic culture arose over the last few decades, as human beings delegated the work of culture more and more to computational products. The sorting, classifying and hierarchizing of people and things is no longer done by ourselves, but by algorithms. That has consequences for the public sphere.

Networked gatekeeping

Algorithmic culture started small and gradually grew into something big. At first, many were optimistic about computational products and bots taking over tasks from human beings. Many even saw algorithms as an extension of democracy, because they seemed to challenge a media elite that decided the media agenda.

Meraz and Papacharissi (2016) describe how the idea that the elite, as gatekeeper, decided the agenda was replaced by the idea of ‘networked gatekeeping’, defined as “a process through which actors are crowdsourced to prominence through the use of conversational, social practices that symbiotically connect elite and crowd in the determination of information relevancy” (p. 99).

Social media users decide the recirculation of news. The result is a pluralization of the status of gatekeeper.

According to Meraz and Papacharissi, networked gatekeeping becomes especially visible during social movements. Social media users receive and share news through social filtering: the recommendations of their friends and the selection mechanisms of the platform itself.

Twitter users see tweets about what is happening during a social movement like #MeToo or #BlackLivesMatter. They then decide whether they find the movement interesting by ‘Retweeting’ and ‘Liking’ the tweets they see. In this way they determine the recirculation of news items and influence which gatekeepers get attention and prominent status. The result, following Meraz and Papacharissi, is a pluralization of the status of gatekeeper.
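To make being ‘crowdsourced to prominence’ concrete, the sketch below is purely illustrative (it is not Twitter’s actual ranking logic, and all account names, numbers and weights are invented): it aggregates retweets and likes under a single hashtag to see which accounts the crowd pushes into gatekeeper status.

```python
# Illustrative sketch of 'crowdsourced prominence' (not Twitter's real algorithm):
# the crowd's retweets and likes determine which accounts rise to gatekeeper status.
from collections import defaultdict

# Invented engagement data: (author, retweets, likes) for tweets under one hashtag.
tweets = [
    ("established_outlet", 120, 300),
    ("ordinary_user_a",    950, 2200),   # a previously unknown account
    ("ordinary_user_a",    400, 900),
    ("celebrity_account",  500, 1500),
    ("ordinary_user_b",     30, 80),
]

prominence = defaultdict(float)
for author, retweets, likes in tweets:
    # Retweets recirculate content, so they are weighted more heavily than likes.
    prominence[author] += 2.0 * retweets + 1.0 * likes

# Whoever the crowd amplifies most becomes a de facto gatekeeper,
# regardless of whether they belong to the traditional media elite.
for author, score in sorted(prominence.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{author:20s} {score:8.0f}")
```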

A good example of this can be seen in the Indian #MeToo movement, where Mahima Kukreja, a woman from outside that media elite, ended up on the front line of the movement. In a Twitter message, comedian Utsav Chakraborty shared a story about an incident of Indian men behaving badly on a cruise ship in Australia, which he called an embarrassment to fellow citizens. However, that same Chakraborty had sent Mahima Kukreja a picture of a penis two years earlier.

Kukreja decided to come out with her story and Chakraborty answered with a public apology within less than a day. After that, Kukreja received many messages from other women who expressed solidarity, but also asked her to share their own experiences of harassment and assault by men. Kukreja, who was not part of India’s media elite at all before sharing her story, became a voice for these women and one of the Twitter gatekeepers for the #MeToo movement in India, chosen by other users of Twitter (Altstedter, Chaudhary and Shrivastava, 2018).

Transparency, privacy and responsibility

The question, however, is whether Meraz and Papacharissi were too optimistic about algorithms. Tufekci (2015) sees clear downsides to an algorithmic culture in which computational processes take over the human task of gatekeeping.

Tufekci (2015: p. 208) compares the publication of a story in a newspaper to the publication of a story on Facebook. The story of a traditional journalist is reviewed and revised by factcheckers, editors, and copyeditors. They are responsible for the story that everyone who buys the newspaper gets to see. A story published on Facebook, however, is algorithmically edited, and this editing is not transparent; it is invisible. According to Tufekci, that is the danger: algorithms can act like responsible and potent gatekeepers, yet there is no transparency or visibility in their gatekeeping process.

The guess of the algorithm is right in many cases, but can also be totally wrong

Furthermore, according to Tufekci (2015: pp. 209-210), no one can be held responsible for the decisions algorithms make. That is dangerous as well, because algorithms also make ‘mistakes’. First, they rely purely on the data they have and make guesses based on that. An algorithm uses the clicks of an individual to set up a profile of that individual. The algorithm decides whether this individual is male or female, young or old, gay or straight, happy or depressed, religious or non-religious. The guess of the algorithm is right in many cases, but can also be totally wrong.

Researchers from Cambridge University developed an algorithm that used a combination of dimensionality reduction and logistic regression to infer this kind of information about users. Their model used only the Likes individuals had given to certain posts and pages to label them. The results revealed that fewer than 5 percent of the users labelled as gay by the model were connected with explicitly gay groups (Graepel, Kosinski and Stillwell, 2013: p. 5803). Although the algorithms used by social media platforms draw on more than just Likes, this study illustrates how easily algorithms can make wrong guesses and label individuals wrongly.
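To show how such an inference pipeline works in principle, here is a minimal sketch with invented data and illustrative parameters, in the spirit of the study rather than the researchers’ actual code: a user-by-Like matrix is compressed with dimensionality reduction, and a logistic regression then guesses a binary trait for each user.

```python
# Minimal sketch (invented data, not the study's actual code) of inferring a
# private trait from a user-by-Like matrix with dimensionality reduction
# followed by logistic regression, as described by Graepel, Kosinski and
# Stillwell (2013).
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy data: 1,000 users, 500 possible Likes (1 = the user liked that page/post).
likes = (rng.random((1000, 500)) < 0.05).astype(float)
signal = likes[:, :25].sum(axis=1)                     # a handful of Likes carry weak signal
trait = (signal + rng.random(1000) > 1.5).astype(int)  # the binary attribute to infer

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

model = make_pipeline(
    TruncatedSVD(n_components=50, random_state=0),  # compress Likes into latent dimensions
    LogisticRegression(max_iter=1000),              # predict the trait from those dimensions
)
model.fit(X_train, y_train)

# The model produces a guess for every user, right or wrong: this is the kind
# of labelling the essay describes.
print("accuracy on held-out users:", model.score(X_test, y_test))
```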

Following Tufekci (2015: pp. 209-210), the problem goes beyond collecting wrong information: developments in computational processes that have grown alongside big data now allow algorithms to make inferences about private information that may never have been disclosed to an online platform. The profile that the algorithm creates may therefore contain not only wrong information, but also information about a person that the person themselves did not even know.

The biggest problem is not the collection of this information, but its revelation. Because algorithms do not take the ethical norms and values of society into account, they might reveal information that was never meant to be public. In this way, algorithms can be seen as a serious threat to our privacy rights. What makes this problem even bigger is that when such information is revealed, no one can be held responsible, because of the lack of transparency. The algorithm is an invisible gatekeeper, which makes it hard for citizens to act against it or to protect themselves from it.

The manipulative nature of algorithms

Not only do algorithms collect and reveal information about people, they also determine which information we see. The perception of almost everything can be manipulated by algorithms through the selection of information they show. Whereas Meraz and Papacharissi believed that people still had some choice in what they got to see, Tufekci (2015: pp. 215-216) rightly doubts this. She describes an experiment in which Facebook demonstrated that it was able to alter US electoral turnout by hundreds of thousands of votes by providing people with a certain type of information through algorithms. Tufekci writes the following about the experiment:

“Facebook has stated explicitly that they had tried to keep their 2010 experiment from skewing the election. However, had Facebook not published the result, and had they intended to shape the electorate to favour one candidate over another, the algorithmic gatekeeping enabled through computational agency would have been virtually unnoticeable, since such algorithmic manipulation is neither public, nor visible, nor easily discernible” (p. 216).

Facebook’s algorithms are able to influence, maybe even decide close elections

The experiment makes clear that Facebook’s algorithms are able to influence, and perhaps even decide, close elections by manipulating the flow of information. It also shows that algorithms can change people’s perception of the world by providing them with one-sided information. Human gatekeepers may not be totally objective, but apparently algorithmic gatekeepers are not either.

We see another example of this in an article by Daniels (2018), in which she describes how social media platforms and algorithms have changed the way White nationalists use the internet in the United States. She states, “Algorithms speed up the spread of White supremacist ideology (…). And algorithms, aided by cable news networks, amplify and systematically move White supremacist talking points into the mainstream of political discourse” (p. 62). She calls White nationalists ‘innovation opportunists’ (p. 62) and illustrates this with the way they changed Pepe the Frog from an innocent cartoon character into an online hate symbol. A group of hackers, tech people, libertarians and White supremacists deliberately built up an association of Pepe the Frog with hate and White nationalism.

According to Daniels, they have now succeeded in getting their ideology into the mainstream. “Among White supremacists, the thinking goes: if today we can get “normies” talking about Pepe the Frog, then tomorrow we can get them to ask the other questions on our agenda: “Are Jews people?” or “What about black on white crime?”” (p. 64).


Personalization and filter bubbles

In the past few years, algorithmic culture has been taken to the next level, and so has the provision of one-sided information. The change in perception described above has also become a change at the individual level. Eli Pariser writes about this in The filter bubble: What the Internet is hiding from you, in which he describes the danger of personalizing algorithms acting as the gatekeepers of the online public sphere.

Personalization has become a trend for social media platforms like Facebook and Twitter. Everything you see online is selected by an algorithm that decides whether something is particularly relevant for you. This includes not just the content your online friends like, share, and post, but also companies’ advertisements. As Pariser (2011) describes it: “Search for a word like ‘depression’ on Dictionary.com, and the site installs up to 223 cookies and beacons on your computer so that other Web sites can target you with antidepressants” (p. 6). Companies have started a war for personalized data, because “the more personally relevant their information offerings are, the more ads they can sell, and the more likely you are to buy the product they’re offering” (p. 7).

Personalized news feeds are becoming the primary source for news

But the personalization reaches further than shaping what we buy; it is not just the commercial side that becomes personalized. Pariser (2011: p. 8) describes how, for an increasing number of people, personalized news feeds are becoming the primary source of news. As a consequence, social media users end up in what Pariser calls a ‘filter bubble’: “a unique universe of information for each of us, which fundamentally alters the way we encounter ideas and information” (p. 9). We no longer get to see things that the algorithm thinks we will not like. We no longer see political perspectives that differ too much from our own, even though such exposure is important for maintaining a balanced democracy. Although some studies (Beam, 2014; Brosius, Graefe & Haim, 2018) have shown that it is not that bad yet, it is definitely a possible scenario for the future if algorithms keep growing as they have in the last few decades.
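To illustrate this narrowing mechanism, here is a toy sketch, not any platform’s real ranking system: items are scored against a user’s interest profile and only the closest matches are shown. The topics, weights and items are all invented.

```python
# Toy sketch of relevance-based personalization (not any platform's real
# ranking system): items are scored against a user's interest profile and
# only the best matches are shown, which is the narrowing Pariser describes.
import numpy as np

def rank_feed(user_interests, items, keep=3):
    """Return the titles of the `keep` items whose topic mix best matches the user."""
    user = np.asarray(user_interests, dtype=float)
    scores = []
    for title, topic_mix in items:
        item = np.asarray(topic_mix, dtype=float)
        # Cosine similarity between the user's interests and the item's topics.
        score = user @ item / (np.linalg.norm(user) * np.linalg.norm(item))
        scores.append((score, title))
    return [title for _, title in sorted(scores, reverse=True)[:keep]]

# Topic axes: [politics_left, politics_right, sports, celebrity, foreign_news]
user = [0.8, 0.0, 0.1, 0.9, 0.1]   # clicks mostly on celebrity news and one political leaning
items = [
    ("Election analysis (left)",  [1.0, 0.0, 0.0, 0.0, 0.2]),
    ("Election analysis (right)", [0.0, 1.0, 0.0, 0.0, 0.2]),
    ("Celebrity breakup",         [0.1, 0.0, 0.0, 1.0, 0.0]),
    ("Foreign policy briefing",   [0.2, 0.2, 0.0, 0.0, 1.0]),
    ("Football results",          [0.0, 0.0, 1.0, 0.1, 0.0]),
]
# The opposing political view and the 'hard' foreign news rarely make the cut.
print(rank_feed(user, items))
```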

Besides these ideological consequences, our news consumption is also filtered and personalized. A possible result is that we no longer see ‘hard’ or ‘boring’ news, which may be very important for our general knowledge of the world. A study by Beam (2014) has already confirmed that the use of personalized news systems has a negative effect on knowledge gain. Pariser (2011) adds that, instead of all the important news, we see information that fits our own interests and is fun and easy to process.

This is how we end up all alone in our personal bubble, isolated from the rest of the world. No debate, no conversation and no shared facts, just a collection of individuals in a sphere we cannot even call public anymore. In the process, we forget that without debate and conversation there is no democracy.

The algorithmic public sphere and democracy

Although at first glance algorithmic culture seemed to offer an opportunity for democracy, the downsides now seem to have the upper hand. Algorithms have become too powerful. They collect our personal information without any notion of the moral norms and values of our society. From many small puzzle pieces they create a profile of every individual, which may be right but may just as well be wrong. When this personal profile is revealed or falls into the wrong hands, it is a threat to our privacy rights. Algorithms do not just collect information about people; they can also manipulate what people do and think by deciding what information people see. This too can be very dangerous, as shown by the Facebook experiment on elections.

Algorithms personalize, and in doing so influence, what we see on social media platforms. In this way, we could end up alone in our personal filter bubble, without seeing other perspectives and without noticing what is most important. Algorithmic gatekeepers isolate individuals from one another, with the result that discussion and debate will soon no longer be part of the online public sphere. If this development continues, the question is whether we can still speak of a ‘public’ sphere in a few years.

Eli Pariser (2011) sees a solution to the problem and danger of algorithmic gatekeepers in balance: a balance between personalized and general information. Only then do we see what we like, but also what reality looks like. We can profit from some of the benefits of algorithmic gatekeepers, but we also need human gatekeepers to protect the public sphere. Although this will probably not solve all the problems algorithms bring to the public sphere, it would be a step in the right direction, and might even rescue democracy.

References

Altstedter, A., Chaudhary, A. & Shrivastava, B. (2018, October 22). #MeToo’s Twitter Gatekeepers Power a People’s Campaign in India. Bloomberg.

Beam, M.A. (2014). Automating the news: How personalized news recommender system design choices impact news reception. Communication Research, 41(8), 1019-1041.

Brosius, H.B., Graefe, A. & Haim, M. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330-343.

Daniels, J. (2018). The Algorithmic Rise of the “Alt-Right”. Contexts, 17(1), 60-65.

Graepel, T., Kosinski, M. & Stillwell, D. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802-5805.

Habermas, J. (1974). The Public Sphere: An Encyclopaedia Article. New German Critique, 3, 49-55.

Meraz, S. & Papacharissi, Z. (2016). Networked Framing and Gatekeeping. In Witschge, T., Anderson, C., Domingo, D. & Hermida, A. (Ed.) The Sage handbook of digital journalism (pp. 95-112). London, England: Sage.

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412.

Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Journal on Telecommunications and High Technology Law, 13, 203.