Digital rendering of a dark skinned male face, dark background

Generative AI and the uncanny valley (and a call to action)

Ruben den Boer

Maybe you have come across it in videogames, animated movies or robotics: that uneasy feeling you get from an android or avatar that tries to be human, but just misses the mark. This is called the uncanny valley phenomenon. Now, the increasing accessibility of generative AI offers a new and exciting spin on research into this phenomenon.

What is the uncanny valley?

The uncanny valley hypothesis was first introduced by roboticist Masahiro Mori in 1970. It posits that “as a robot increasingly resembles a person, its familiarity increases until a point at which it abruptly drops to a negative value and elicits strong repulsion” (Wang et al., 2015). The jury is still out on whether Mori’s exact hypothesis is true. But the eerie feeling some of us get when encountering a nearly human avatar or entity is undeniable.

Explanations of the uncanny valley phenomenon

Over the past decades, cognitive scientists have proposed several explanations for this phenomenon. Popular hypotheses aiming to explain the uncanny valley include:

  • Violation of Expectation hypothesis: Human replicas elicit the uncanny feeling by creating expectations for a human but failing to match them (Saygin et al., 2011; Gray & Wegner, 2012).
  • Categorical Uncertainty hypothesis: Uncanniness is associated with a lack of orientation that arises when individuals experience uncertainty at a category boundary. They are not certain to which category (human/non-human) an entity belongs (Ramey, 2006).
  • Mind Perception hypothesis: The uncanny feeling is linked to violating the expectation that robots lack subjective experience, which characterizes humans. The uncanny feeling is therefore not limited to the appearance of characters but also extends to their ‘inner mind’ (Mitchell et al., 2011).

All these hypotheses can be understood as forms of cognitive conflict: we perceive something – a human-like entity – that clashes with our expectations or presupposed notions of that something – it is not human (enough). There are still a lot of uncertainties about the actual cognitive processes underpinning the uncanny valley phenomenon, mostly due to conflicting evidence and inconsistent methodologies across studies (Wang et al., 2015; Zhang et al., 2020).

The uncanny valley phenomenon in generative AI

Most research into the uncanny valley phenomenon has so far focused on its occurrence in videogames, animated movies and robotics. This makes sense: for years, these were the fields where the uncanny valley most often appeared. But recently, an exciting new technology has been gaining traction that brings a new twist to the uncanny valley phenomenon: generative AI.

Generative Artificial Intelligence (GAI) is a term for deep learning systems that create artificial content such as images and audio from textual prompts. Generative AI tools such as DALL-E 2, ChatGPT and MidJourney detect the underlying pattern related to the input and produce similar content (Dilmegani, 2023). Since 2022, these tools have – to varying degrees – become publicly accessible, leading to an influx of AI-generated content on social media, in newsletters and at art exhibitions. This development intersects with the uncanny valley phenomenon in two ways that invite new avenues of research: the uncanny feeling elicited by AI-generated images of humans, and the uncanny feeling that follows the realization that those pictures were not created by humans – even though they (almost) look like they were.

AI generated humans

The first and most obvious intersection of GAI and the uncanny valley phenomenon is the creation of human-like images by AI that incite the familiar unease. Most AI tools do pretty well at generating human faces. This is because these tools look for patterns in existing pictures posted online. Since there are a lot of human faces online, there is a lot of data, which allows for a finer-grained (re)production of a new human face. But it is exactly this amazing capability to recreate humans that can lead to the uncanny valley phenomenon.

For illustration, I generated four images with DALL-E 2 using the prompt “A human eating spaghetti, studio photo” (Figure I). At first glance, these look like pretty convincing pictures of humans. But once you start scrutinizing the details, unease can quickly set in. The empty and depressed gaze of the man in the first picture, the circular ears in the second picture and the half-generated teeth in the fourth picture all point to the fact that these are not actual humans. They are almost-humans, ripe ground for the uncanny valley phenomenon according to the Violation of Expectation hypothesis (see above). 

Figure I: four images generated with DALL-E 2 using the prompt: "A human eating spaghetti, studio photo"

Note that it is not just the content of these AI-generated pictures that makes them interesting subjects for investigation. It is their public accessibility that makes this new breeding ground for the uncanny valley phenomenon demand study. Video games, animated movies and robots require highly specialized skills to make. But generative AI requires little more than a prompt and the click of a button.

This raises questions about cultural production in our post-digital society. How will our culture (even our idea of culture) change when realistic-enough images, texts, videos and audio clips can be created with a few clicks? When an essay is written in seconds? When AI makes better Instagram posts than you ever could? It will be up to us, students and scholars of digital culture, to investigate these questions.

AI as ‘humans’

The second and more novel intersection of generative AI and the uncanny valley phenomenon is the fact that these images look like they are created by a human, while they actually aren’t. The human-ness of GAI can be so convincing that an art piece generated with MidJourney recently won first place in an art competition (Roose, 2022). This invites us as researchers to adapt our conception of the uncanny valley phenomenon towards a more layered understanding.

Up to this point, the uncanny feeling set in when a human created an as-human-as-possible non-human. But in the case of GAI, I hypothesize a more layered conceptualization of the uncanny valley phenomenon. Since generative AI attempts to depict humans by seeking patterns in existing data created by humans, we are actually dealing with an as-human-as-possible representation of a human by a non-human, based on human input.

Read that again if you have to. 

Is there a truly human ‘essence’ to human discourse?

This layered approach to the uncanny valley phenomenon does not require us to discard all cognitive science research to date. Indeed, this approach is in line with the Mind Perception hypothesis (see above), which extends the phenomenon beyond appearance to human qualities like empathy and creativity experienced in avatars or androids.

But the cognitive labour required to fully comprehend this layered reality serves as another interesting subject for future research. And it makes salient even more questions regarding digital culture. Have we reached a point where AI-generated discourse is indistinguishable from human discourse? Will we ever reach that point? What will happen to (digital) society when we do? Is there a truly human ‘essence’ to human discourse? And if that is the case: what does this tell us about human-computer interactions in the past, present and future?

A call to action

Understanding how the uncanny valley phenomenon is experienced in relation to GAI can have big implications because of the enormous potential of GAI as a new mode of communication. The uncanny feeling might dissipate as AI tools become more refined. But, following Wang et al. (2015), I propose that this makes research into the uncanny valley phenomenon all the more pressing. The uncanny valley phenomenon in videogames and animated movies remains rather innocent. But increasingly convincing GAI opens the door to possibly democracy-threatening content like deepfakes and bot spam. Understanding how we perceive AI-generated content is therefore not only new and exciting. It is urgent. A layered understanding of the uncanny valley phenomenon will be an important piece of that puzzle.

Developing this understanding is not just a job for cognitive scientists. It is also up to us, students and researchers of digital culture, to develop an empirically grounded understanding of the cultural and societal significance and impact of generative AI. If we do this in tandem with the leaps being taken in the fields of data science and cognition, we can hopefully contribute to a future where we reap the benefits of generative AI and diminish its potentially destructive dangers.


Dilmegani, C. (2023, February 20). Generative AI: 7 Steps to Grow with the AI boom in 2023. AIMultiple.

Generative AI. (2022, November 14). Diggit Magazine.

Gray, K. and Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.

Mitchell, W. J. et al. (2011). A Mismatch in the Human Realism of Face and Voice Produces an Uncanny Valley. i-Perception, 2(1), 10–12.

Ramey, C. H. (2006). An inventory of reported characteristics for home computers, robots, and human beings: Applications for android science and the uncanny valley. In Proceedings of the ICCS/CogSci-2006 Long Symposium ‘Toward Social Mechanisms of Android Science’, Vancouver, Canada.

Roose, K. (2022, September 2). AI-Generated Art Won a Prize. Artists Aren’t Happy. The New York Times.

Saygin, A. P. et al. (2011). The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social Cognitive and Affective Neuroscience, 7(4), 413–422.

Wang, S. et al. (2015). The Uncanny Valley: Existence and Explanations. Review of General Psychology, 19(4), 393–407.

Zhang, J. et al. (2020). A Literature Review of the Research on the Uncanny Valley. Cross-Cultural Design. User Experience of Products, Services, and Intelligent Environments, 255–268.