Deep fakes - an emerging risk to societies

Paper
Tunde Farago
15/11/2019

Deep fake technology is rapidly evolving and can easily fool not just our human senses, but also the software created to counter it. This paper explores the potential social, cultural and political implications of deep fakes.

Deep fakes and the post-truth era

Digitalization and the rise of the internet have revolutionized the way people access information and share it with others. They have also turned citizens themselves into active producers of knowledge, a privilege that once belonged strictly to governments, media outlets and academia (Hanell and Salö, 2017: pp. 155-156). With traditional institutions losing their authority as conveyors of facts, fake news, misinformation and lies have become commonplace (Llorente, 2017: p. 9).

Although fake news, propaganda and lies are as old as human communication itself, it was not until 2016, when major online misinformation campaigns contributed to Donald Trump becoming president of the United States and to the British vote to leave the European Union, that these phenomena became the center of attention in Western societies. These events, truly shocking to many, led Oxford Dictionaries to name “post-truth” the word of the year in 2016 (Oxford Dictionaries, n.d.).

In YouTube’s early days, a doctored video would have been regarded as a fun prank, but today, when people are unable to distinguish between what is real and what is fake, a decent deep fake can pass as the real thing

This so-called post-truth era that we live in now is said to be guided by our emotions, as facts are cherry-picked and shaped to fit our own versions of reality (McIntyre, 2018: chapter 1, paragraph 12). But humanity’s ability to twist reality has taken an even bigger step forward in recent years with the creation of deep fake technology. With this technology one can create deep fakes: videos of real people saying and doing things they have never actually said or done (Chesney and Citron, 2018: n.p.).

In 2017 the first deep fakes emerged, in which the faces of young Hollywood actresses, such as Scarlett Johansson, were superimposed onto porn movies. The technology that was once only in the hands of experts was soon democratized, and web pages and free apps promising the ability to create video forgeries of celebrities, ex-girlfriends, colleagues and even strangers popped up all around the world. This has led to a situation in which anybody can suddenly fall victim to a deep fake (Schwartz, 2018).

Moreover, the machine learning techniques used to create deep fakes are developing so fast that it is becoming very hard for software, not to mention our human senses, to detect the forgeries (Chesney and Citron, 2018: n.p.). And it is exactly in this inability of people to tell deep fakes apart from reality that their true danger lies.
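Why is detection losing this race? Part of the answer lies in how many deep fakes are made: with generative adversarial networks (GANs), a generator network is trained directly against a discriminator network that plays the role of a detector, so every improvement in detection is immediately recycled into more convincing forgeries. As a purely illustrative aside, the sketch below shows this adversarial feedback loop in PyTorch; the toy dimensions and random “data” are placeholder assumptions, and this is not the pipeline of any actual deep fake tool.

```python
# Minimal, illustrative GAN training loop (PyTorch). Toy sketch only:
# it shows why detectors lag, since the forger is optimized directly
# against the detector's own judgments.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # placeholder sizes standing in for noise and image data

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(256, DATA)  # placeholder for a dataset of real images

for step in range(1000):
    # 1) Improve the detector: label real samples 1, current fakes 0.
    real = real_data[torch.randint(0, real_data.size(0), (32,))]
    fake = generator(torch.randn(32, LATENT)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Improve the forger: update the generator so that the
    #    just-updated detector classifies its output as real.
    fake = generator(torch.randn(32, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In this loop, any signal the detector learns to use is handed straight back to the forger as a training objective, which is why detection software tends to stay one step behind.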

In YouTube’s early days, a doctored video would have been regarded as a fun prank, but today, when people are unable to distinguish between what is real and what is fake, a decent deep fake can pass as the real thing. And it is not only the sophistication of the technology that allows forgeries to fool people so easily. Nowadays, even a crudely doctored video will do that.

For instance, a poorly altered video of U.S. House Speaker Nancy Pelosi, slowed down to make her sound drunk, gained more than 2 million views on the first day it was posted online (Harwell, 2019). In this media era, fake news can very quickly become widely known and accepted, as algorithmic boosting and filter bubbles help spread such content extremely fast. The Pelosi case also shows that the threat of deep fakes is real, especially when people rely on their emotions and personal beliefs rather than facts when deciding what is true and what is not.

Deep fakes and their potential to harm societies

Our technological capacity to create videos that mimic reality quite accurately comes at a time when society is already vulnerable to misinformation and fake news. The production of knowledge is no longer in generally trusted hands. Everybody can create their own versions of truth and facts and distribute them to the rest of the world.

Our confirmation biases play a role in the acceptance of misinformation, and algorithms help spread it further (Chesney and Citron, 2018: p. 9). This also means that “[d]eep fakes are not just a threat to specific individuals or entities. They have the capacity to harm society in a variety of ways” (Chesney and Citron, 2018: p. 20). The fact that they have not yet been deployed on a much greater scale does not make the harm they may cause any less real.

Since deep fakes are such a new phenomenon, we have yet to understand the specific risks they present. Chesney and Citron (2018) were among the first to outline the potential risks and dangers they pose to societies. They also argue that deep fakes can have many beneficial applications, especially in the arts, education and gaming.

For instance, Hollywood has already utilized deep fake technology to bring deceased actors back to life for certain roles. The most famous examples are in the Star Wars franchise, where the characters played by the late Carrie Fisher and Peter Cushing were recreated with the help of this technology. The gaming industry is yet another area where deep fakes might come in handy: gaming platforms such as the Nintendo Wii have developed games in which users can create their own customizable avatars.

When it comes to education, Chesney and Citron argue that this technology could also be used to alter existing films, documentaries or shows for pedagogical purposes (Chesney and Citron, 2018: pp. 14-16). For instance, “[w]ith deep fakes, it will be possible to manufacture videos of historical figures speaking directly to students, giving an otherwise unappealing lecture a new lease on life” (Ibid: p. 14).

Once out there on the World Wide Web, deep fakes can have a life of their own

However, it is the potential harm that we should be focusing on, considering our already vulnerable digital environment and current societal developments. Chesney and Citron (2018: pp. 16-21) point out that the threat deep fakes present to society is systemic in character, since they have the potential to reach and harm all levels of society:

“the damage may extend to, among other things, distortion of democratic discourse on important policy questions; manipulation of elections; erosion of trust in significant public and private institutions; enhancement and exploitation of social divisions; harm to specific military or intelligence operations or capabilities; threats to the economy; and damage to international relations.” (Ibid: p. 21)

However, there is no way of accurately predicting what a specific deep fake might do. The potential harm depends on the context of its creation and circulation, and its full impact can only be understood once it has had time to reach an audience. By that time, of course, some of the damage might be irreversible.

The Belgian deep fake featuring Trump

In May 2018 a deep fake of Donald Trump addressing the Belgian public emerged. In this crudely doctored video the fake Trump calls on the Belgian public to urge their government to withdraw from the Paris climate agreement. The video was created by a Belgian social-democratic party, Socialistische Partij Anders (sp.a), and posted on its official social media accounts. It was made with the intention of sparking people’s interest in climate change, but it received hundreds of angry comments stating that Trump had no right to express his opinion on a Belgian political matter (Schwartz, 2018).

One outraged tweet read: “Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools” (as cited by Schwartz, 2018). Another went even further, throwing insults at the entire American nation: “Trump shouldn’t seek the moral high ground because the Americans are themselves as dumb” (Ibid). Needless to say, the deep fake managed to provoke and anger parts of the Belgian public. Even the fact that it was very badly doctored – the fake Trump’s lips and the sound were out of sync, and the whole video was of very low quality – did not keep some people from being genuinely convinced that Trump had really said those things, and understandably they directed their anger towards him.

If one watches the video very carefully, at the end the fake Trump himself admits that he is appearing in a fake video. However, at the exact moment he says those words, the sound level drops drastically and the Dutch subtitles that run through the entire video disappear, making it very difficult for Belgians who do not understand English to realize that what they have seen is not actual footage of Trump.

Information travels faster on social media platforms “if it looks and feels true on a visual and emotional level.”

Also, those who paid attention only to the subtitles may be forgiven for having missed this final part. As was later revealed, the sp.a had commissioned the forgery from a production house that uses AI to generate highly realistic fake videos of people. When the video failed to generate the expected response, the party was left to manage the fallout: its social media team was forced to explain over and over again to outraged followers that it was just a silly prank video and nothing more (Schwartz, 2018).

Although the creators of the Trump deep fake intended to alert the Belgian public to the immediate threat of global warming and to inspire them to act on it, the effects were something they did not anticipate. And that is the problem, because once out there on the World Wide Web, deep fakes can have a life of their own. The members of sp.a defended their actions by stating that, given the low-tech quality of the video, they had assumed their followers would instantly recognize it as a fake and understand its hidden message, which would eventually lead them to sign the climate change petition the video was meant to popularize (Schwartz, 2018). This illustrates one of the true dangers: the inability of creators to predict how people will react, and what consequences deep fakes might have for the wider society. What we are dealing with here is a clear discrepancy between the creators’ intentions and the public’s usage and perception.

Deep Fakes, virality and affect

Negative emotions such as fear and insecurity are incredibly strong motivators. They can create instability in diverse societies, eroding trust in civic and democratic institutions and resulting in a state of nervousness (Davies, 2018: p. 20). According to Davies, “[m]uch of this nervousness that influences democracy today is not simply because feelings have invaded a space previously occupied by reason, but because the likely sources and nature of violence have become harder to specify” (Ibid: p. 17). In this sense violence does not necessarily mean physical violence; it can also be the mere threat of violence, a sense of danger or a feeling of insecurity. Nonetheless, this nervous state can shape public opinion so much that even if a deep fake is proved false by outsiders, the “belief in it becomes an article of faith, a litmus test of one's adherence to that community's idiosyncratic worldview” (Donath, 2016 as cited in Zuckerman, 2017).

One reason why people fall for fake news and share it online is that information travels faster on social media platforms “if it looks and feels true on a visual and emotional level” (Davies, 2018: p. 15). Another thing to consider is that humans tend to care more about negative as well as novel information, which can evoke stronger emotions, like surprise and disgust: “Negative information not only is tempting to share, but it is also relatively ‘sticky.’ As social science research shows, people tend to credit—and remember—negative information far more than positive information” (Chesney and Citron, 2018: p. 12). Moreover, humans are predisposed to pay close attention to things that stimulate them, for instance things that are violent, sexual, disgusting, embarrassing or humiliating (Ibid: p. 13). It is therefore no surprise that many Belgians who saw the video fixed on the fake Trump’s disrespectful behavior and mistook him for the real Trump.

Reports about the video made international headlines as well, but it did not cause any major reactions on a global scale and was largely ignored by the Trump administration. This might simply be because, by the time news of the video reached the rest of the world, it had already been debunked, and news outlets were stating clearly that it was a fake video meant as a practical joke (Schenk, 2018).

Another explanation for why the video did not have a bigger global impact may be that the fake Trump was very similar in character to the real one, especially when it comes to declaring climate change a hoax or calling on other nations to follow the American example in dealing with political issues. Many people are already used to the real Trump; another video of him ranting about climate change being fake news is nothing new to them.

The consequences of a fake video circulating can be more significant in cases where the stakes are higher, or the emotional appeal stronger

A final thing to consider is the content of the video: climate change. People are highly motivated to avoid immediate threats, like a barking dog or a dangerous street, yet when it comes to climate change they are much harder to move. The topic may not have much emotional appeal for individuals who have their own personal problems to deal with on a daily basis and do not necessarily feel the direct impact of global warming (Markman, 2018). In fact, “many effects of climate change are distant from most people” (Ibid). It is therefore no surprise that the Belgian public was more focused on (the fake) Trump addressing their nation and commenting on their politics than on the issue of climate change.

Nonetheless, even if this was just a small-scale deep fake incident, the fact that it was created by a political party in order to induce action on a political issue makes the scenario quite alarming. If we add to this “the nature of today’s communications environment”, in combination with our confirmation biases and algorithmic networks, we have a clear recipe for potential disaster, even on a more global scale (Chesney and Citron, 2018: p. 19).

The consequences of a fake video circulating can be more significant in cases where the stakes are higher, or the emotional appeal stronger, for example if instead of trying to inspire people to care about the environment, the intent of the video would be to make them believe there has been a terrorist attack. As Davies (2018: p. 123) points out, even just “[s]mall acts of transgression can have major political effects, if the right tool and target are carefully selected.”

The power of a fake video

The Belgian deep fake came out just a month after Buzzfeed published an article warning about the possible risks of using deep fakes in political campaigns. To demonstrate the capacity of the technology, and for the sake of the argument, they created a deep fake featuring a fake Barack Obama trash-talking Trump. For this they used FakeApp, a free application released by the Reddit user “Deepfakes”, who had also created the fake celebrity porn videos of 2017 mentioned above.

The Buzzfeed article, which also showcased the forgery, argued that although the technology for creating doctored videos is still in its infancy, requiring a fair amount of IT skills, time and fast computers, the potential for it to become more sophisticated and democratized is just around the corner. The article argues that if anyone will soon be able to make a fake video that passes for reality, then there are pretty perilous times ahead of us (Silverman, 2018).

At the same time, some commentators oppose these apocalyptic predictions, claiming that the technology is not so far advanced that deep fakes could actually pose a real threat to society. An article that appeared in The Verge in March 2019, for instance, stated that “deepfake propaganda is not a real problem” and won’t become an issue in the near future (Brandom, 2019). The author points out that the predictions of deep fakes becoming a threat to our democratic systems have not materialized yet, and that the main reason for this is simple: it’s just not worth the trouble. He claims that the algorithms that can detect the forgeries are widely available online, so doctored videos can easily be proved fake. One of his arguments is that doctored videos are not as useful as troll campaigns: “Most troll campaigns focused on affiliations rather than information, driving audiences into ever more factional camps. Video doesn’t help with that; if anything, it hurts by grounding the conversation in disprovable facts” (Ibid). Brandom also makes the point that deep fakes are in general more dangerous for individuals, given the huge amount of fake porn, of celebrities and unknown people alike, circulating online.
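To make Brandom’s point about available detection tools slightly more concrete: detectors of this kind typically score a video frame by frame with a binary real/fake classifier and then aggregate the scores. The Python sketch below shows only that generic shape, under stated assumptions: load_detector and predict_fake_probability are hypothetical placeholders for a trained model, not functions of any real library, and actual detectors (and their accuracy) vary widely.

```python
# Hypothetical sketch of how a typical deepfake detector is applied:
# score sampled frames with a real/fake classifier, then average.
# `load_detector` / `predict_fake_probability` are placeholders,
# not real library functions.
import cv2  # OpenCV, used here only to read video frames


def load_detector():
    """Placeholder: return any model exposing predict_fake_probability(frame)."""
    raise NotImplementedError("plug in a trained classifier here")


def fake_score(video_path: str, sample_every: int = 10) -> float:
    detector = load_detector()
    capture = cv2.VideoCapture(video_path)
    scores, frame_index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        if frame_index % sample_every == 0:  # sample frames, not all of them
            scores.append(detector.predict_fake_probability(frame))
        frame_index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0


# A video whose average score exceeds some threshold would be flagged as fake.
```

The catch, as the rest of this paper argues, is that even a reliable score like this only helps if people check it before sharing and believing the video.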

The lies and misinformation that have crept into our information networks and democratic systems have destabilized institutions and created fear and suspicion among the population.

However, one might disagree. If the public reactions to the badly doctored Nancy Pelosi video and the Belgian deep fake are any indication, we do have something to worry about. The fact that no deep fake has yet caused a major societal disruption does not mean that it cannot happen in the future. It seems that “[e]ven at this early stage [of technological advancement of deep fakes] it’s proving difficult for humans to consistently separate” them from reality (Silverman, 2018). If a picture says more than a thousand words, then what does a video say? “In some instances, the emotional punch of a fake video or audio might accomplish a degree of mobilization to action that written words alone could not” (Chesney and Citron, 2018: p. 24). What we have seen so far are doctored images, altered videos and fake news intended to distort reality and change people’s opinions on certain issues. We have yet to experience a deep fake with the capacity to disrupt our sense of reality and produce bigger social and political ramifications. This remains a real possibility, since even if our algorithms are able to detect a fake, that alone will not be enough (Vincent, 2019).

One piece of evidence for this is that some badly doctored videos have already managed to create quite a lot of friction on the American political scene. The best recent example, apart from the Nancy Pelosi case, is the doctored video of CNN reporter Jim Acosta seemingly hitting a White House assistant in 2018. This was an even more chilling episode, considering that White House Press Secretary Sarah Sanders shared the forged video on Twitter while defending the White House’s decision to permanently revoke Acosta’s credentials. The part of the original video where the CNN reporter takes the microphone back from the White House assistant, who was determined to stop him from further questioning the president, was sped up to make it seem that Acosta actually hit the assistant (Aratani, 2018).

The forgery was created by InfoWars, an online conspiracy website best known for its conspiracy theory about the 2012 Sandy Hook Elementary School shooting, claiming that the shooting, in which 28 people were killed, never actually happened. Keeping this in mind, the fact that the White House chose to back its decision to ban the CNN reporter with a forged video created by a conspiracy website raises serious concerns. At the same time, it demonstrates the growing risk of deep fakes being weaponized in the name of politics (Aratani, 2018).

Deep fakes, democracy and a polarized society

What these examples also show is just how polarized a society can become through fake news, doctored videos and deep fakes. Lies and misinformation have crept into our information networks and democratic systems and have managed to destabilize institutions and create fear and mutual suspicion among the population (Davies, 2018: p. 22).

Fake news and misinformation can spread with enormous speed online, due to people’s confirmation biases, algorithmic boosting and filter bubbles. Social media platforms create an environment in which individuals can find themselves enclosed within an informative bubble that reinforces their personal beliefs and opinions while suffocating any other narratives that could challenge the status quo (Prego, 2017: p. 20). As Chesney and Citron (2018: p. 13) claim, “[f]ilter bubbles can be powerful insulators against the influence of contrary information.” Opposing narratives that enter these bubbles are immediately discredited, leaving the filter bubble intact in the process (Prego, 2017: p. 20): “In this atomized world that is self-strengthening, it is actually a huge weakness because it is the perfect breeding ground for spreading fake news” (Ibid: p. 21). People will not think to fact-check the information they receive in their bubble, since they genuinely believe it to be true, as it confirms their own beliefs about a certain topic (Ibid). However,

“[o]ne of the prerequisites for democratic discourse is a shared universe of facts and truths supported by empirical evidence. In the absence of an agreed upon reality, efforts to solve national and global problems will become enmeshed in needless first order questions like whether climate change is real. The large scale erosion of public faith in data and statistics has led us to a point where the simple introduction of empirical evidence can alienate those who have come to view statistics as elitist” (Chesney and Citron, 2018: p. 21).

Lawmakers and experts already warn that deep fakes might hinder and disrupt the upcoming U.S. elections in the year 2020

One possible solution to this problem lies in educating the public to distinguish facts from fiction and in increasing media literacy. According to danah boyd, in order to achieve this we need to be very creative and develop a structural base for people to communicate with each other across divisions in a meaningful way (boyd, 2017a; boyd, 2017b). That is,

“[w]e need to enable people to hear different perspectives and make sense of a very complicated — and in many ways, overwhelming — information landscape. We cannot fall back on standard educational approaches because the societal context has shifted. We also cannot simply assume that information intermediaries can fix the problem for us, whether they be traditional news media or social media.” (boyd, 2017a)

What about the law?

Apart from this, there is also the legal side of deep fakes to consider. Lawmakers and experts are already warning that deep fakes might hinder and disrupt the upcoming U.S. elections in 2020. During a House Intelligence Committee hearing in Washington on June 13, 2019, experts from various fields as well as politicians discussed the potential dangers of deep fakes and ideas on how to prevent them from creating widespread damage (George, 2019). Using the doctored Nancy Pelosi video as an example, they argued that the era of deep fakes will have “the capacity to disrupt entire campaigns, including that for the presidency” (Ibid). This is all the more plausible given that the public is already struggling to separate facts from fiction.

One expert suggested that tech companies need to step up their game and ban such content from their platforms. However, giving these companies the freedom to decide on their own what content should be removed was seen as too risky a move. Danielle Citron, a University of Maryland law professor who attended the hearing, told the lawmakers that most of the legislation regulating the use of online videos is decades old and needs urgent revision (Ibid).

“If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way.”

However, new laws restricting the creation and distribution of such forgeries will not be enough to combat the problem, not when those laws have no jurisdiction outside their country of origin. For instance, “U.S. officials determined Russia carried out a sweeping political disinformation campaign on U.S. social media to influence the 2016 election” (George, 2019). It is not unimaginable that they or others will try again in 2020. A deep fake incriminating a popular political candidate, released the night before the election, could potentially flip the results in favor of the other candidate. Even if the fake video were debunked and proven false, this might well come too late. “When events are unfolding rapidly and emotions are riding high, there is a sudden absence of any authoritative perspective on reality” (Davies, 2018: p. xi). In the heat of the moment, people might not stop to consider the facts before they act (Ibid).

Deep fakes are like a new way of falsely shouting fire in a crowded theater. A well-timed deep fake may tip an election, “particularly if the attacker is able to time the distribution such that there will be enough window for the fake to circulate but not enough window for the victim to debunk it effectively” (Chesney and Citron, 2018: p. 22). This is also because “[m]ore than anything else, the dynamics that define the web — frictionless sharing and the monetization of attention — mean that deepfakes will always find an audience” (Vincent, 2019).

Moreover, depending on the context in which a deep fake is shared, people’s willingness to accept facts that align with their own opinions and beliefs will lend video forgeries further credibility. After all, if there is already some doubt among the public, deep fakes will deepen the mistrust even more. This is what happens with believers in conspiracy theories: those who believe in one are likely to believe in many (Barkun, 2016: p. 2). On the other hand, if the target audience and the timing are not chosen carefully, the video can easily be discarded. This means that apart from the content of a deep fake, its publication and circulation strategy is also key in determining its potential effects.

Deep fakes as slow violence

However, the effects of deep fakes do not necessarily have to materialize overnight, as online attacks are often a form of “slow violence” (Varis, 2018): “[A] violence that occurs gradually and out of sight, a violence of delayed destruction that is dispersed across time and space, an attritional violence that is typically not viewed as violence at all” (Nixon, 2013 as cited in Varis, 2018). In other words, small triggers here and there can gradually create an avalanche of mistrust and mutual suspicion among the public. These continuous online persuasions eventually wear people down and, on a large scale, help sway public opinion on a certain political matter (Varis, 2018). Modern political campaigners often deploy this strategy: they are well aware that public opinion is best swayed with small-scale interventions, which sometimes go unnoticed, rather than through big formal public statements (Davies, 2018: p. 13).

Therefore, a carefully executed misinformation campaign unfolding over a period of time might eventually lead to the desired result. And once it manages to polarize a society, the threats to democratic systems become visible: “If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way” (Chesney and Citron, 2018: p. 29). In such a vulnerable state, when society is suffering from the erosion of truth and trust, authoritarian leaders will strive to exploit public opinion even further.

This is already happening all around Europe. What we are witnessing is the rise of populism, which has managed to divide what was once an integrated union of European countries with open borders and a shared currency into smaller, conflicted islands. Populist leaders, such as Viktor Orban in Hungary, use the vulnerability of the European Union, and specifically the refugee crisis, to widen the divide. Orban’s misinformation campaign specifically targets political opponents, academia, the media, NGOs, prominent individuals and basically everyone who does not support his populist ideas and is in a position to influence society (Shattuck, 2019). If the credibility of the individuals and institutions that have the capacity to produce and verify knowledge and information is undermined, the public is left with no choice but to believe those who hold power, resulting in the erosion of democratic systems (Chesney and Citron, 2018: p. 29).

In this climate, deep fakes can be exceptionally effective. All kinds of fake news, conspiracy theories and misinformation can spread when the public is already struggling to tell facts from fiction, because people who encounter them might act first and ask questions later or, in the worst case, never at all. However, the capacity of deep fakes to mimic reality and fool not just our senses but also the technological tools created to counter them goes beyond other forms of fake news, misinformation campaigns or lies. Even just one realistic video forgery could have effects for which our democratic systems are simply not prepared.

References

Aratani, L. (2018). Altered video of CNN reporter Jim Acosta heralds a future filled with 'deep fakes'.

Barkun, M. (2016). Conspiracy theories as stigmatized knowledge. Diogenes: pp. 1-7.

boyd, d. (2017a). Did media literacy backfire?

boyd, d. (2017b). Why America is self-segregating.

Brandom, R. (2019). Deepfake propaganda is not a real problem.

Chesney, R. and D. K. Citron (2018). Deep Fakes: A looming challenge for privacy, democracy, and national security [Draft version].

Davies, W. (2018). Nervous states: How feeling took over the world. London: Jonathan Cape.

George, S. (2019). 'Deepfakes' called new election threat, with no easy fix.

Hanell, L. and L. Salö (2017). Nine months of entextualizations: Discourse and knowledge in an online discussion forum thread for expecting parents. In Kerfoot, Caroline and Kenneth Hyltenstam (eds.), Entangled discourses: South-North orders of visibility. London: Routledge: pp. 154-170.

Harwell, D. (2019). Faked Pelosi videos, slowed to make her appear drunk, spread across social media.

Llorente, A. J. (2017). The post-truth era: Reality vs. perception. UNO. The Post-truth Era: Reality vs. Perception, 17: p. 9.

Markman, A. (2018). Why people aren't motivated to address climate change.

McIntyre, L. (2018). Post-truth [Kobo Aura version].

Oxford Dictionaries (n.d.). Post-truth.

Prego, V. (2017). Informative bubbles. UNO. The Post-truth Era: Reality vs. Perception, 17: pp. 20-21.

Schenk, M. (2018). Fake news: Belgian social democrat party uses faked Trump video in climate change campaign.

Schwartz, O. (2018). You thought fake news was bad? Deep fakes are where truth goes to die.

Shattuck, J. (2019). How Viktor Orban degraded Hungary's weak democracy.

Silverman, C. (2018). How to spot a deepfake like the Barack Obama–Jordan Peele video.

Varis, P. (2018). What is the wholesome internet? How wholesome memes became a trend.

Vincent, J. (2019). Deepfake detection algorithms will never be enough.

Zuckerman, E. (2017). Fake news is a red herring.