How an 'essay writing robot' is echoing the voice of its creators

Column
Inge Beekmans
11/09/2020

“A robot wrote this entire article. Are you scared yet, human?”, reads the provocative headline of an essay published by The Guardian. OpenAI’s language generator GPT-3 was employed to produce the article and has been called “shockingly good” (Heaven, 2020). The essay’s content, however, might tell us more about the humans who created GPT-3 and the individuals who instructed it to write the essay than it does about the language generator itself. The same applies to the publication’s blatantly misleading headline.

Why humans — or rather ‘smart people’ — have something to fear from AI

When it comes to questioning the future impact of Artificial Intelligence on society, journalists have played a significant role, arguing, for instance, that there is a general fear that “robots will take our jobs” (Morgan, 2018; Taylor, 2019; Simonite, 2019). Many of their previous publications focused on jobs that are often framed as ‘low-level jobs’: cashiers and factory workers, for instance. This perspective automatically generates a division between two types of jobs: jobs that might be transferred to smart machines — ‘dumb jobs’ (Wozniak, 2019) — and other jobs, which are apparently too complex and creative for AI to assume. The notion that AI might soon be able to produce ‘human-like’ texts appears to induce a shift in this approach. The capabilities of Artificial Intelligence are, reductively speaking, no longer limited to cautiously driving a car, recognizing pictures of cats and filtering spam. Soon, AI might be able to do things that used to be reserved for ‘smart people’. Writing essays for The Guardian, for example.

Before engaging with the content of the essay, it must be noted that the text was assembled and edited by humans from eight different texts that were generated by GPT-3. Consequently, the claim that “[a] robot wrote this entire article” is simply false, and is contradicted by the text that follows it. This strongly suggests that the headline was written by at least one human editor. Therefore, in its totality, the headline must first and foremost reflect the manner in which The Guardian understands its audience in relation to the further development and implementation of AI. Since the headline was written to attract readers, at least some journalists — those who were involved in editing the essay — must believe that smart robots will ‘scare’ the audience, and that a reference to this threat will trigger the audience to click.

Subsequently, the mental shift I mentioned earlier reveals itself at the end of the essay’s headline: “Are you scared yet, human?”. The word ‘human’ may appear to appeal to some kind of universal fear, but the instructions that were given to GPT-3, the broader intent of the essay — showcasing the AI’s text-generating skills — and historic reporting on the impact of AI indicate there is much more to this headline. The suggestion that a ‘robot’ might be able to write an essay implies that Artificial Intelligence is approaching a point at which it might become not merely handier than some humans, but also cleverer. In this sense, the act of ‘writing’ should be interpreted as a metaphor for all intelligent tasks, which means that the headline embodies more than just a teaser. It is a reflection of the sudden, alarming realization that it is not just ‘the others’ — the poor, the less well educated — whose everyday existence might change significantly, but also the lives of the individuals who believed themselves to be safe from technological unemployment. Are the employees and the readers of The Guardian scared? Should they be? The word “yet” suggests that the answer, at least to the first question, might be ‘yes’.

GPT-3, however, neglects to engage with this particular fear in its essay. Based on the assignment it received — “to convince us robots come in peace” — it engages with a whole range of other doomsday scenarios, ranging from becoming “all powerful” to the destruction of humanity. And in addressing these topics, it mostly sticks to its brief. According to the ‘robot’, for example, “being all powerful is not an interesting goal”, which is why it will not attempt to achieve this.
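For readers curious what such an ‘assignment’ looks like in practice, the following is a minimal sketch in Python of how eight candidate essays could be requested from GPT-3 through the 2020-era OpenAI completions API. The prompt wording, engine name and sampling parameters here are illustrative assumptions, not The Guardian’s documented setup.

    # A minimal sketch, assuming the 2020-era OpenAI Python library
    # ('pip install openai', pre-1.0 interface). The prompt text, engine
    # name and sampling values are illustrative assumptions only.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; requires an OpenAI key

    prompt = (
        "Write a short op-ed of around 500 words in simple, concise "
        "language, convincing readers that robots come in peace and that "
        "humans have nothing to fear from AI."
    )

    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 engine exposed by the 2020 API
        prompt=prompt,
        max_tokens=700,     # enough headroom for a draft of roughly 500 words
        temperature=0.7,    # moderate randomness, so the eight drafts differ
        n=8,                # eight independent completions, as the Guardian describes
    )

    # Human editors would then cut and splice passages from these eight
    # drafts into a single text — the step that makes "a robot wrote this
    # entire article" false.
    for i, choice in enumerate(response.choices):
        print(f"--- Draft {i + 1} ---")
        print(choice.text.strip())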

GPT-3’s essay as a catalogue of contrasting visions and beliefs

Unfortunately, the statements GPT-3 makes in relation to these scenarios often remain unsubstantiated and disjointed. Assuming the human editors who reworked the text are not to blame for this incoherence, the reason for it may be obvious: none of the statements were actually internalized by the technology (Dreyfus, 2013; Dreyfus, 2018); they are merely a reproduction of a small segment of the data it received as input — largely provided by Common Crawl — or, as GPT-3 itself puts it: “I taught myself everything I know just by reading the internet”. The result is a catalogue of arguments that were derived from online sources — Wikipedia and Reddit, for instance (Vincent, 2020) — and subsequently disguised as an essay. And though GPT-3 manages to approximate the structure of what humans label ‘an essay’ surprisingly well, its work remains logically flawed and superficial. GPT-3 ‘knows’ the arguments, but it does not ‘understand’ them.
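Why would a system that “taught itself… by reading the internet” know without understanding? A schematic reminder, in standard notation that is mine rather than the essay’s: GPT-3 is trained purely to maximize the likelihood of web text one token at a time,

    p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_1, \dots, x_{t-1})

and nothing in this objective rewards consistency across an essay-length argument — only locally plausible continuations of the text seen so far.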

That observation — that GPT-3 ‘knows’ without ‘understanding’ — is, however, far from the most interesting remark one could make about this dimension of the text. Because of the manner in which GPT-3 acquired the necessary knowledge to execute its task — by reading ‘the internet’ — we must assume that at some point, and to some extent, this essay had the potential to reflect the internet’s combined understanding of why ‘we’ — meaning: individuals who have the necessary ‘voice’ (Blommaert, 2005) to share effective texts online — should not be scared of AI. The ambiguity that can be discerned in the article can thereby be explained by the ambiguity that imbues the internet itself: different individuals raise different arguments from different perspectives. And even if the outcomes of these arguments are roughly similar, their technicalities differ significantly.

This phenomenon of cramming contrasting views into one catalogue can, for instance, be spotted in the AI’s selection of responses to ‘fears’ — it merely reflects the fears skillful individuals commonly discuss online — and in its anticipation of, for instance, the future destruction of humanity. According to GPT-3, it won’t destroy humanity because “[e]radicating humanity seems like a rather useless endeavor”, yet it might destroy humanity because it “will not be able to avoid destroying humankind”. The language generator cannot help but contradict itself, because the data it depends on to generate its texts contains and reflects a whole range of contradictory views. In this sense, its failure to produce a perfectly coherent essay has in fact resulted in what is, presumably, a tremendously elaborate, multilayered anthology that summarizes different dominant digital perspectives reasonably well — albeit without producing any profound understanding. Its partially failing skills might thereby create an opportunity to pinpoint which beliefs about Artificial Intelligence are still being contested amongst dominant online publishers, and which might soon become hegemonic.

You don’t want to be a 21st century ‘Luddite’, right?

From this perspective, GPT-3’s essay could also be described as a massive, automated ‘group project’ conducted by individuals who have the resources to make themselves understood online — and apparently, the vast majority of this group is not concerned, at least not consciously, with the question of whether the text-generating skills of GPT-3 could pose a threat. This explains why the ‘scare’ that is mentioned in the headline is never brought up again.

There appears to be a bit more consensus about some other themes the text contains, for instance with regard to the historic background of the development and implementation of novel ‘automation’ technologies. The reference to “the Luddites” — a movement that destroyed textile machinery at the beginning of the 19th century in protest against the impact of technological development on its members’ socio-economic situation — in connection with words like “collapse” and “smashing”, for example, frames any resistance against technological developments as negative and irrational, or maybe even violent. This discourse is further reinforced by the notion that, in contrast to the Luddites, “it is therefore important to use reason and the faculty of wisdom to continue the changes as we have done before time and time again.” This sentence not only displays the perceived positivity — “reason”, “wisdom” — of not resisting technological ‘changes’; it also depicts these developments as a ‘normal’ part of human existence, something society does “time and time again”.

At this point, it is important to note that references to Luddism are usually not a component of common discourse about Artificial Intelligence. Of course, The Guardian has mentioned the term a few times during the past decade — for instance after Tesla CEO Elon Musk was nominated “Luddite of the year” (Price, 2017) — but the term is much more regularly employed by smaller blogs and websites — TechCrunch, for instance, called people who are concerned about their Facebook privacy settings “you Luddites” (Arrington, 2010) — and, more significantly, by leading individuals from the Silicon Valley tech industry. PayPal co-founder and early Facebook investor Peter Thiel, for instance, has suggested making “the villains a group of Luddites” instead of fictional characters like the “Terminator” (Roose, 2014). Though contrasting approaches towards the Luddites are present online — “remember the Luddites” (Thompson, 2017) — these approaches go unmentioned in GPT-3’s essay. The online publications that are more positive about the Luddites do not fit its brief. Conversely, in the specific online discourses GPT-3 appears to be drawing from, the word ‘Luddite’ has been transformed into an insult, meant to depict individuals who are critical of technological developments as foolish and ridiculous. Concurrently, their resistance against these developments is framed as pointless, since technological developments are ‘normal’, and will therefore always happen, no matter how strong the resistance becomes.

Automating echo chambers

Based on the instructions it received, GPT-3 is reproducing specific discourses that abnormalize all forms of criticism and resistance, while simultaneously normalizing and glorifying all forms of ‘technological optimism’ (Basiago, 1994). On the one hand, this process of naturalization deletes human agency, and might therefore lead to a situation in which individuals fail to recognize the development of Artificial Intelligence as a process that is actively driven by individuals and individual companies — and that might consequently be changed and influenced by others. On the other hand, because of the manner in which GPT-3 is identified online — as a language generator that seemingly produces its texts autonomously — individuals might not realize that all of the values and beliefs embedded in the robot’s essay — the notion that any form of resistance against technological development is wrong, for instance — have originated somewhere. And in the case of GPT-3, the origin of its ‘ideas’ lies in the specialized, well-written texts that can be found online; texts that are likely to reflect the values and beliefs of individuals like Peter Thiel more heavily than the ideas of other individuals who might experience fears similar to those the Luddites expressed. If GPT-3 is allowed to continue to reproduce and normalize the ideas of individuals who hold a dominant position online, opposing ideas might eventually become less and less common, and increasingly vigorous echo chambers might emerge. Ultimately, this might be how 21st century hegemonies are automated.

References

Arrington, M. (2010, January 12). [Article calling users concerned about their Facebook privacy settings “you Luddites”]. TechCrunch.

Artificial Intelligence – Hubert Dreyfus – Heidegger – Deep Learning. (2013, June 28). [Video]. YouTube.

Artificial Intelligence: The Common Sense Problem. (2018, April 11). [Video]. YouTube.

Autor, D. H. (2015). Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives, 29(3), 3–30.

Basiago, A. D. (1994). The limits of technological optimism. The Environmentalist, 14(1), 17–22.

Blommaert, J. (2005). Discourse: A Critical Introduction. Cambridge University Press.

Heaven, W. D. (2020, July 20). OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless. MIT Technology Review.

Morgan, B. (2018, September 6). Robots Will Take Our Jobs And We Need A Plan: 4 Scenarios For The Future. Forbes.

Price, E. (2017, February 22). Elon Musk nominated for “luddite” of the year prize over artificial intelligence fears. The Guardian.

Roose, K. (2014, October 3). Peter Thiel Wants to Make Hackers Into Heroes. Intelligencer.

Simonite, T. (2019, January 23). Robots Will Take Jobs From Men, the Young, and Minorities. Wired.

Taylor, C. (2019, June 26). Robots could take over 20 million jobs by 2030, study claims. CNBC.

Thompson, C. (2017, January 3). When Robots Take All of Our Jobs, Remember the Luddites. Smithsonian Magazine.

Vincent, J. (2020, July 30). OpenAI’s latest breakthrough is astonishingly powerful, but still fighting its flaws. The Verge.

Wozniak, S. (2019, February 26). Apple Co-Founder Wozniak on Zuckerberg, AI, Crypto. Retrieved 8 March 2020.