TikTok

The Troubling Appearances Of Deepfake Videos On Internet Platforms

Blog
Blean Tsige
21/09/2023

More Than Fake News: Deepfake Videos On TikTok & The Impact They Have On The Globe

 

Misinformation, fake news, or more simply put, fictional narratives have been around for a long time. Content carrying false information can reach larger crowds through the introduction of new media, because new media tends to be accessible and affordable enough for more people to use it. Consequently, a larger share of the public can be misinformed at an increasing rate if the new media is not appropriately regulated. This can have atrocious outcomes when public discourse is shaped by wrong information. Matters have become even worse since the introduction of social media and deepfake videos. Both of these technologies have brought challenges that make it difficult to regulate and control the content they spread. 

 

| INTRODUCTION

 

The truth has been manipulated for centuries for various reasons. Some manipulate the truth to tell a greater story, which is commonly referred to as fiction. This sort of literature is appreciated as a form of relaxation, since it allows the reader to escape reality for pleasure. At other times, the truth has been distorted with harmful intent, to the point where the information becomes entirely fake. Fake news stories, for example, have circulated around the globe and caused distrust in established media institutions. Society puts its trust in institutions to regulate and control sources so that the information they deliver is correct. However, with the constant introduction of new media that make information travel much faster, regulation has struggled to appear quickly enough to keep up with these new sources. 

This is especially true of new media such as social media platforms. These Internet platforms have made it extremely challenging to design mandates that protect public discourse from misinformation, because on social media misinformation can travel at extraordinary speed and reach a virtually unlimited number of people, regardless of their demographic. Social media companies have nevertheless taken a significant step in combating fake news by shutting down fake accounts, also known as bots, which tend to spread these stories online (West, 2022). 

It seemed as if social media companies were getting a handle on the matter; nonetheless, the technology keeps advancing and new problems keep appearing. The latest issue concerns AI-generated fake videos referred to as ‘deepfakes’ (Sample, 2023). These videos are currently circulating on different social media platforms and have become both common and very convincing (Sample, 2023). This makes them extremely dangerous, because they spread misinformation on a growing scale. The type of people they affect is also a cause for concern, since social media demographics show that these platforms host many young, impressionable users. 

Consequently, the main aim of this essay is to scrutinize whether deepfake videos pose more of a threat than fake news. Social media companies have dealt with fake news by shutting down the bots that spread this sort of misinformation. Deepfake videos, however, pose a new issue, because they are much more realistic and thus harder to combat. It is therefore vital to understand how much deepfake videos will affect society in the future. 

This is done by first reviewing previous literature on fake news and the problems identified so far. The first article, by Waisbord (2018), explains what fake news entails and helps clarify how truth can be defined. The second article, by Habgood-Coote (2018), claims that terms such as fake news should be abandoned because they do not serve society well, and that the focus should instead be on media literacy. Here it is plausible to consider that deepfakes should not be treated as a separate entity but as part of this bigger problem, and that social media users should find ways to detect these videos. 

The next part analyses how deepfakes have affected the globe so far, drawing on reporting from The Guardian and the BBC. The two news stories explain how videos of public figures and of the war in Ukraine have been distorted by fake content and deepfake videos. This analysis feeds into the discussion, which examines the case of the Zelensky deepfake video and its devastating consequences. The conclusion summarizes the main parts and the development of fiction as it is found today in deepfake videos.  

 

| BACKGROUND INFORMATION

 

Before delving into the subject of contemporary videos made up of deepfake content, it is important to first understand what is considered to be true. According to Waisbord (2018), truth is not an inherent quality, particularly in news, since it is made up of a complex web of factors. In this sense, Waisbord (2018) describes truth as an objective and verifiable representation of reality. Nonetheless, when a news story is presented, it is shaped by a broad range of choices that affect how it will be interpreted (Waisbord, 2018). In fact, journalists, editors and news organizations make choices about the narrative while being influenced, often subconsciously, by economic, political, and cultural factors (Waisbord, 2018). 

Waisbord (2018) therefore believes that truth is not an absolute entity but a consequence of negotiations and constructions. In other words, Waisbord (2018) points to multiple inadequacies in the definition of truth, arguing that truth is consistently shaped by social, cultural and political factors. Waisbord (2018) suggests calling this influence a “truth regime”, owing to the persistent power dynamics that shape the news. 

He has furthermore observed that misinformation has increased with the digitalisation of media (Waisbord, 2018). He believes that fake news has spread faster and intensified in the digital era, which has pushed those in the news world to compete for legitimacy and authority (Waisbord, 2018). Waisbord (2018) argues that truth claims are ignored or undermined, which has produced a fragmented and polarized media landscape. He therefore concludes that the public needs to be aware and improve their media literacy skills to avoid falling victim to fake news (Waisbord, 2018). In addition, Waisbord (2018) states that there needs to be a nuanced understanding of truth in journalism. This last point about truth can be applied beyond journalism: finding the truth in deepfake videos is also vital to avoid being misinformed. 

Habgood-Coote (2018) believes that public discourse about fake news should move towards more critical and nuanced discussions. Habgood-Coote (2018) argues that instead of fixating on the term fake news, people should focus on improving their media literacy to tackle misinformation. This is because Habgood-Coote (2018) considers fake news a catch-all phrase with no real substance. Instead, he says, fake news functions as a weapon for political purposes, with no accuracy or reliability behind it (Habgood-Coote, 2018). As a result, the media comes to be seen merely as a biased, clickbait-driven source shaped by media ownership, rather than as something trustworthy (Habgood-Coote, 2018). He therefore proposes that society should not fixate on fake news but should instead address the structural problems behind it (Habgood-Coote, 2018). Hence, Habgood-Coote (2018) suggests that people increase their media literacy skills and always think critically about the media environment. 

These observations by Habgood-Coote (2018) point to a much larger issue: as long as technology keeps developing, the outcome will stay the same, namely that the speed at which information spreads will keep increasing. This also means that misinformation will become more common. To keep this under control, it is important to reflect on what Waisbord (2018) has said about truth, namely that there needs to be a distinct understanding of what most people consider to be true. Habgood-Coote's (2018) advice that people improve their media literacy skills in order to identify fake news in this climate can likewise be taken as good advice when it comes to deepfake videos. 

 

| ANALYSIS

 

The Guardian published an article titled “You thought fake news was bad? Deep fakes are where truth goes to die”, which scrutinizes the devastating effects deepfake videos have had on the public in comparison to fake news (Schwartz, 2018). The article acknowledges that misinformation was traditionally a threat to the news and labelled as fake news; deepfake videos, however, have become the contemporary way for misinformation to spread (Schwartz, 2018). The underlying issue with these videos is their impact on truth and information warfare, since deepfake videos make the content appear hyperrealistic (Schwartz, 2018). 

The technology behind deepfakes makes it seem as if anyone has said or done whatever appears in the video (Schwartz, 2018). The videos are generated by artificial intelligence algorithms, which has caused them to appear more frequently on social media platforms (Schwartz, 2018). The more these videos appear on platforms with large audiences, the more people will be influenced by the misinformation they potentially carry. In addition, the videos tend to be filled with fabricated content that is exploited with malicious intent (Schwartz, 2018). The outcome has been an increase in the spread of misinformation and the manipulation of public opinion, which in turn has made people unable to trust visual evidence (Schwartz, 2018). 

The Guardian suggests that the only way for the public to recognize content containing false information is to look out for unnatural movements, blurriness, inconsistent lighting and audio inconsistencies in deepfake videos (Schwartz, 2018). Nonetheless, this is arguably not enough to protect society from these videos, especially since social media audiences include many young adults who might overlook these cues and, in turn, be misinformed about certain topics. 
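To illustrate how crude such manual cues are in practice, the sketch below automates a single one of them, blurriness, by measuring frame sharpness with OpenCV. It is a minimal illustration under stated assumptions, not a deepfake detector: the video file name and the sharpness threshold are placeholder values, and low sharpness alone says nothing definitive about whether a clip is fake.

```python
# Minimal sketch: flag unusually blurry frames in a video, one of the visual
# cues The Guardian lists. Requires the opencv-python package.
import cv2

def frame_blurriness(frame) -> float:
    """Return the variance of the Laplacian; lower values suggest a blurrier frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def flag_blurry_frames(video_path: str, threshold: float = 100.0):
    """Yield (frame_index, score) for frames whose sharpness falls below the threshold.

    The threshold of 100.0 is a rough, illustrative cut-off, not a calibrated value.
    """
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        score = frame_blurriness(frame)
        if score < threshold:
            yield index, score
        index += 1
    capture.release()

if __name__ == "__main__":
    # "suspect_clip.mp4" is a hypothetical file name used only for illustration.
    for idx, score in flag_blurry_frames("suspect_clip.mp4"):
        print(f"frame {idx}: sharpness {score:.1f} (below threshold)")
```

Even this automated version only captures one cue; a convincing deepfake can easily pass it, which is the essay's broader point about how weak individual heuristics are.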

The BBC article titled “Ukraine war: False TikTok videos draw millions of views” offers a stark example of how damaging fake videos have been to public discourse (Sardarizadeh, 2022). The videos included fictional content that contained false information and misled the public about Ukraine's situation during the war (Sardarizadeh, 2022). For example, some of the videos staged scenarios of Russian aggression and Ukrainian military victories (Sardarizadeh, 2022). The videos used actors and special effects, or footage taken from a different time, to depict what appeared to be real combat between the opposing armies. This content appears more frequently on TikTok than on most other social media platforms (Sardarizadeh, 2022), because, according to the BBC, the platform's algorithms have unintentionally favoured false content, causing misinformation to spread faster (Sardarizadeh, 2022). These videos routinely reach a million views or more, which has a devastating impact on public discourse. 

It has become challenging for people to stay updated about the war in Ukraine via social media platforms, because fake or deepfake videos have made it almost impossible to know which narrative is true and which is simply fiction. The videos pose a serious threat in terms of information warfare, since the fabricated content cannot be told apart from genuine footage. The BBC has responded by urging fact-checking organizations and journalists to expose videos containing misinformation (Sardarizadeh, 2022), so that accurate information and context can counter the falseness of fake or deepfake videos. In the article, the BBC criticizes TikTok's fake and deepfake videos as harmful, especially those connected to the Ukraine war (Sardarizadeh, 2022). 

 

| DISCUSSION

 

One of the many deepfake videos circulating on the Internet features Ukrainian President Volodymyr Zelensky, and it had devastating effects on public discourse. The video showed Zelensky giving a speech about how Ukraine should surrender in the war (Metz, 2022). The deepfake was produced with artificial intelligence technology, and a hyperrealistic Zelensky appeared on screen against a background that looked exactly like the one he uses when addressing the news media (Metz, 2022). This caused massive confusion on social media and left the Internet divided on how to combat such issues. Facebook and YouTube decided to remove the video, treating it as an infringement of their policies on manipulated media. Both platforms hoped their action would be seen as helping to combat the spread of misinformation and to protect the integrity of their platforms (Metz, 2022). 

 

Figure 1. President Zelensky is used in a deepfake video that has spread on social media platforms such as TikTok.

 

However, deepfake videos such as the one of Zelensky showcase the growing challenge various Internet platforms face in preventing the spread of misinformation, because the videos are harder to detect. This is why social media platforms are now investing in technology and resources intended to improve the identification and removal of such manipulated media, as the sketch below illustrates (Metz, 2022). The main issue, however, remains: it will become continuously more difficult to tell fiction from truth. Regardless of the many technologies being developed, there seems to be less and less transparency around the content that appears on Internet platforms. This means that harmful content can spread without being detected early enough to prevent damage, because some people use advanced technology for malicious purposes, with the main objective of spreading misinformation that can damage reputations and ultimately undermine trust in the media or in public figures (Metz, 2022).  
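As one illustration of the kind of tooling platforms could invest in, the sketch below uses perceptual hashing to flag re-uploads of a clip that moderators have already labelled as manipulated, which is how the Zelensky video kept reappearing after its initial removal. This is a hypothetical example, not Facebook's or YouTube's actual system: the file names, the choice of the imagehash library and the distance cut-off are all assumptions, and the approach only catches near-copies of already-known fakes.

```python
# Minimal sketch: detect re-uploads of known manipulated media by comparing
# perceptual hashes of frames. Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

# Hashes of frames taken from clips that moderators have already labelled as manipulated.
# "known_deepfake_frame.png" is a hypothetical file used only for illustration.
KNOWN_MANIPULATED = [
    imagehash.phash(Image.open("known_deepfake_frame.png")),
]

def looks_like_known_fake(frame_path: str, max_distance: int = 8) -> bool:
    """Return True if the frame is perceptually close to known manipulated media.

    The Hamming-distance cut-off of 8 is an illustrative assumption, not a tuned value.
    """
    candidate = imagehash.phash(Image.open(frame_path))
    return any(candidate - known <= max_distance for known in KNOWN_MANIPULATED)

if __name__ == "__main__":
    # "uploaded_frame.png" stands in for a frame extracted from a newly uploaded video.
    print(looks_like_known_fake("uploaded_frame.png"))
```

The design limitation is obvious: a genuinely new deepfake has no known hash to match against, which is why detection remains an open problem even for well-resourced platforms.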

It therefore becomes questionable whether one can keep a clear head about this, even when considering Waisbord's (2018) definition of truth. Waisbord (2018) defined truth as an objective and verifiable representation of reality. Nonetheless, given how hyperrealistic deepfake videos are, and that millions of people have been fooled by them, it becomes hard to consider Waisbord's (2018) definition sufficient. Waisbord (2018) has acknowledged the gaps in his definition, but the outside influences on media he describes do not seem to cover the fact that deepfake videos are extremely convincing. Most of these videos are created by anonymous users who cannot be traced, which means that no one can confront the outside sources driving this mess (Metz, 2022). Likewise, Habgood-Coote's (2018) advice that people improve their media literacy to spot misleading claims, rather than fixate on fake news, seems to overlook how challengingly realistic the content of deepfake videos is. In practice, it can be impossible for a person to know whether a video is a deepfake through media literacy alone, because the videos use special effects and artificial intelligence technology to create realistic-looking public figures, as in the case of Zelensky (Metz, 2022). It might thus be more adequate to give social media platforms the responsibility to take action and remove deepfake videos (Metz, 2022). 

One point that has become clearer as technology advances is that deepfakes are the new vehicle for manipulated and misleading content. Fake news now looks like yesterday's problem, while deepfakes pose the new threat of the 21st century. Nothing on the Internet has misled crowds more dangerously than deepfake videos presenting hyperrealistic versions of reality. It has become virtually impossible to tell fiction apart from truth. Artificial intelligence could be an entertaining tool, used in movies to create scenarios for people's enjoyment; however, some people have malicious intent and have used this technology to cause more harm than good.   

 

| CONCLUSION

 

Deepfake videos pose a threat to the global community. The average social media user can barely tell fiction-filled videos apart from real events. This has caused serious problems for society, because misinformation can spread faster than ever before. People are now more exposed to being misled, which becomes especially dangerous when the misinformation carries messages that could cause harm. This essay has presented various examples that illustrate these problems. 

In conclusion, Waisbord's (2018) definition of truth within media cannot fully capture the devastating effects deepfake videos have on public discourse. The videos are made by anonymous sources, which makes it impossible to expose the influences behind them. Habgood-Coote's (2018) idea of increasing media literacy likewise does not adequately account for more advanced technology, such as artificial intelligence, that can create hyperrealistic scenarios. This means that when deepfake videos surface on the Internet, people cannot protect themselves from the false narrative being told to them. It may now be solely the Internet platforms' responsibility to combat this issue, so that the world can still see the difference between fiction and truth. The main point is that deepfake videos present a much bigger problem than fake news ever did. Misinformation will therefore increase and spread at a much higher speed, because more people are exposed to it and its hyperrealistic nature has made it almost impossible to recognize. In the end, one can only hope that an adequate solution can be found to combat this issue. 

 

| REFERENCES

 

Barnhart, B. (2023). Social media demographics to inform your brand’s strategy in 2023. Sprout Social. https://sproutsocial.com/insights/new-social-media-demographics/ 

 

Habgood-Coote, J. (2018). Stop talking about fake news! Inquiry: An Interdisciplinary Journal of Philosophy. 

 

Metz, R. (2022, March 16). Facebook and YouTube say they removed Zelensky deepfake. CNN. https://edition.cnn.com/2022/03/16/tech/deepfake-zelensky-facebook-meta/...

 

Sample, I. (2023, March 3). What are deepfakes – and how can you spot them? The Guardian. https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-an...

 

Sardarizadeh, S. (2022, April 25). Ukraine war: False TikTok videos draw millions of views. BBC News. https://www.bbc.com/news/60867414 

 

Schwartz, O. (2018, November 12). You thought fake news was bad? Deep fakes are where truth goes to die. The Guardian. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-...

 

Waisbord, S. (2018). Truth is What Happens to News. Journalism Studies, 19(13), 1866–1878. 

 

West, D. M. (2022, March 9). How to combat fake news and disinformation. Brookings. https://www.brookings.edu/research/how-to-combat-fake-news-and-disinform...