Fake news, hybrid media, politics and democracy

Defining fake news in a hybrid media system

Judith Evens

The Covid-19 pandemic has once again shown us that fake news is thriving in the hybrid media system and that the boundary between what is fake and what is real is often blurry. For example, can news promoting the malaria medication hydroxychloroquine be considered fake news, or is it misinformed content published in an honest attempt to battle a pandemic?

The virality of fake news

In this article I will unravel the various definitions of fake news and situate them within a political context. The sheer number of competing definitions can be overwhelming, so I aim to bring some order to this chaotic field. I will first turn to a relatively recent definition by Venturini (2019). He argues that using the term fake news can be deceptive because it implies a dichotomy of real/fake or true/false: it assumes that there is a clear distinction between truth and falsehood and ignores the fluidity of that distinction.

We have to keep in mind that what is considered true and false can change over time and may differ from place to place. This tension is also reflected in the following quote from Venturini: “the notion of ‘fake news’ is misleading because it supposes that malicious pieces of news are manufactured, while reliable ones correspond directly to reality” (p. 2).

This binary way of thinking is often also seen in discussions of the democratic and political value of digital media: digital media either benefit democracy by empowering citizens or they undermine democracy by primarily empowering elites; either digital media entirely replace legacy media or they are fully absorbed by the older media systems (Chadwick et al., 2015).

Dichotomous thinking is unhelpful when it comes to analysing fake news and the hybrid media system. According to Venturini, the consumption of fake news can be compared to the consumption of fast food in that it can be addictive. He therefore proposes the term “junk news”. The danger of junk news is that it distracts its consumers from more important information and it prevents the circulation of other stories. Fake news can also be used as a convenient “weapon to discredit opposing sources of information” (p. 2).

Throughout this article I will use Chadwick et al.'s (2015) analysis of politics in the hybrid media system to show how fake news relates to politics and democracy. First, it is important to understand what the hybrid media system is. Chadwick et al. define the hybrid media system as a system that “is built upon interactions among older and newer media logics – where logics are defined as bundles of technologies, genres, norms, behaviors, and organizational forms – in the reflexively connected social fields of media and politics” (p. 4). Moreover, actors adjust the flow of information to adapt to their needs and objectives in manners that also “modify, enable, and disable other’s agency, across and between a range of older and newer media settings” (p. 4). In the hybrid media system power plays an important role. Power is understood as the relationship between different social actors, but also as the relationship between social actors and media technologies.

Circulation vs. content

The existence of fake news is nothing new, but the current overload of junk news or junk information is a relatively recent phenomenon. It can be attributed to the development of a system of virality that is, according to Venturini (2019), simultaneously economic, communicational, technological, cultural, and political in nature. This brings us to his argument about circulation versus content.

The economy of junk news is connected to the attention economy. In this economy there is no scarcity of information to sell; instead, consumer attention and engagement are what is of high value. Traffic and attention, not digital content or services, are now the commodity. Google recognizes this and creates opportunities for every website to sell its traffic. The more clicks an advertisement generates, the more highly that attention is valued. As a consequence, more advertisements are created that strive only for a high number of clicks. This is where junk news can play a valuable role as a clickbait technique: sensationalist, misleading, and possibly false content that exploits political interests is used to generate clickbait attention.

To make engagement easier many digital platforms have turned to a technology of virality. The audience or information consumers are encouraged to engage in one-click-labour: by simply clicking on a “like” or “share” button we can engage with content with just one tap on the screen or computer mouse.

This is what Venturini refers to as the communication mechanism of junk news. This mechanism “encourages the circulation of messages that are not only sticky, but also ‘spreadable’, i.e. designed to be circulated and engaged with” (p. 5). Junk news is highly spreadable in this sense. The technology of junk news strives to maximize content virality, since this also maximizes the profit generated by clicks and viewing time. This is also why digital platforms do not favour changing their algorithms as a solution to fake news: it runs counter to their business model. As Venturini points out: “they will not oppose the very virality that generates their profits” (p. 7).

Algorithms are not the only forces at play in the circulation of junk news. Micro-celebrities and certain online subcultures thrive in a culture of virality and junk news. The concept of virality that Venturini uses is also discussed by Chadwick et al. (2015), but in a political context: digital technologies such as social media give political campaigners direct and easy access to potential voters. Republishing on digital platforms content that was originally aired on television or printed in newspapers further stimulates virality and engagement. All these elements of junk news counteract a fruitful digital sphere and democratic debate. For Venturini, junk news can only be combatted if we consider the full context of virality, rather than focusing solely on fact-checking and counter-information.

Defining fake news

In addition to Venturini (2019), various other researchers have concerned themselves with defining fake news. Gelfert (2018) proposes that the term fake news should only be used when information is deliberately designed to mislead. Further, we should be careful not to associate fake news primarily with digital media platforms, because “a piece of online fake news does not suddenly cease to be fake news, just because it gets picked up and repeated on AM talk radio or makes its way into an op-ed piece by a newspaper columnist” (p. 98).

For Gelfert, it makes more sense to define fake news according to its content than according to the medium that circulates it. However, he acknowledges that this is not a simple matter: fake news regularly combines elements of truth with deliberate deceptions, but it can also be based on truthful information that is wrongfully portrayed or summarised.

To make future conceptual analysis easier, Gelfert proposes the following definition: “Fake news is the deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design” (p. 109). His distinction of “by design” points not only to the design of the content, but also to how processes of the hybrid media system are deployed to spread false or misleading information.

Within the hybrid media system, social media can be used to target a specific audience, and content can be designed to exploit the heuristics and cognitive biases of the audience in order to perpetuate the circulation of fake news. This is where Gelfert localises the potency of fake news. Social media provides new ways of circulating and reporting news-like information. Gelfert acknowledges that debunking fake news is not an effective solution because this can lead to “source confusion, belief perseverance, and the backfire effect” (p. 113). However, Gelfert does not offer other solutions to the contemporary problems of fake news.

Is news satire fake news?

To illustrate how broadly the term fake news can be defined, I will draw on an article by Tandoc, Lim and Ling (2018) that presents a typology of different types of fake news based on other academic articles. The typology they present is very broad and includes news satire, news parody, news fabrication, photo manipulation, advertising and public relations, and finally propaganda.

What connects these types of fake news is that they all look or feel like genuine news. But there are also important differences, situated along a continuum of facticity and intention to deceive. Just as Gelfert (2018) acknowledges that some fake news partly portrays actual facts, Tandoc et al. use the level of facticity as one domain to determine how much a type of fake news relies on facts.

The second domain refers to how much the author intends to mislead or deceive the reader. News satire websites such as The Onion score high on facticity but low on immediate intention to mislead the reader, according to Tandoc et al. Most types of fake news, however, such as propaganda and manipulation, do intend to deceive the reader. Furthermore, most types of fake news score high on facticity.

However, Tandoc et al. stress that this typology is based on previous research and make clear that they do not, for example, agree with classifying news satire as fake news. They acknowledge that the contemporary discourse on fake news should be examined critically. It is important to approach their typology with a critical and nuanced eye; otherwise the concept and definition of fake news will only become fuzzier.

It is also important to consider different definitions and typologies of fake news in their socio-political and historical context. As Dale (2019) points out, prior to the 2016 United States presidential election, the term fake news was often used to describe news satire such as The Onion. Additionally, as many researchers point out, the term fake news is often thrown around to discredit journalists, news outlets, politicians or public figures. In this context, argues Dale, the term is adjusted to fit the needs of individuals or organizations.

The role of digital technologies

Few researchers would deny the significant impact of the hybrid media system and its technologies on the spread of fake news. However, less attention has been given to the analysis of how exactly these digital technologies have contributed to the noteworthy reach of fake news. Dale (2019) argues that digital technologies enable content creation and the spread of fake news in three ways:

  1. by democratizing content creation
  2. by blurring the distinctions between different types of digital content
  3. by appointing the role of gatekeeper and content distributor to an algorithm

Dale acknowledges that not every individual on the internet is a content creator, but it has become increasingly easy to create and publish content online. Online blogs are an excellent example. On the one hand, online publishing by non-professional or amateur journalists can benefit democracy because it bypasses the gatekeeping of legacy media and can reach a worldwide audience.

On the other hand, this tool is also available to those with malevolent intentions who can profit from spreading misinformation. In their analysis of the hybrid media system, Chadwick et al. (2015) also recognize how digital technologies make it easier for individuals to become involved in the production of news. This does not automatically mean that a hybrid media system benefits the inclusivity of democracy: “hybridity presents opportunities for non-elites to exert power, but media and political elites can, and do, adapt to these new environments” (p. 22).

Similar to Venturini (2019), Dale (2019) directs our attention to how Google advertising has played a role in incentivizing behaviour that results in higher monetary gain, primarily by prioritizing stories that result in a higher amount of clicks, likes and shares. This makes the creation of digital fake news appealing for those who aim to make a profit. This also touches upon the concept of virality. Organizations and individuals are competing for the attention of the consumer. Traffic and attention are the new commodity.

The second factor that contributes to the spread of fake news, according to Dale, is the fact that the quality of online content is increasing. This in turn makes it harder for media consumers to distinguish between fake news and professionally produced news content. He puts this into historical context by referring to advertorials (advertisements that look like editorials). Legacy media, such as newspapers and magazines, introduced the concept of mimicking the design of professional news content by implementing advertorials. Producers of fake news use similar tactics to trick information consumers into perceiving their content as news content.

The final factor that contributes to the increased spread of fake news is the role of algorithms. Much emphasis is placed on media literacy and readers' skill in recognizing fake news. However, placing responsibility on the information consumer ignores the impact of digital media technologies, especially their algorithms. Algorithms have become very powerful in the digital media landscape because they prioritize content with higher user interaction or engagement in the form of likes, shares and views.

If we take YouTube as an example, content with a lot of user engagement is more visible on the homepage because the algorithm recognizes that this content is popular or trending. Dale compares this with the front page of a newspaper. The combination of a high number of views and the appearance on the home page “lends it a quality of legitimacy to the user” (p. 132).

Most importantly, algorithmic tools “are not unbiased and free from human influence as many believe them to be. They are created by people, process data generated by people and, as such, are subject to the same human biases in their operation and the interpretation of their output” (p. 132). This is important to keep in mind, because this also gives us room to change these algorithms. They are not some ungraspable and uncontrollable force of nature, but can be changed to fit our needs. After all, they are created by human beings.

Relating politics, democracy and fake news

Allcott and Gentzkow (2017) studied the spread of fake news during the 2016 United States presidential election. They found that the average American citizen came across and could recall at least one fake news story, especially stories that favoured Donald Trump over Hillary Clinton.

However, it is important to note that fake news does not stay within U.S. borders. Some tremendously popular Twitter accounts that tweeted in favour of Donald Trump and spread fake news turned out to be Russian bots, not actual people (Ladd, 2017). Some of these bots also managed to get picked up by legacy media, illustrating how in a hybrid media system digitally spread fake news can cross media platform borders.

Dale (2019) also notes how fake news can affect media consumers by “creating feelings of inefficacy, alienation, and cynicism regarding the election process and results” (p. 124). This complicates the role of the hybrid media system in contributing to democracy. This culture of virality also affects politics. By using fake news to spread doubt, attacking online discussions, or spreading misinformation that is created to go viral, public opinion is manipulated and political campaigns are influenced (Venturini, 2019).

How to combat fake news?

Researchers from various disciplines can play an important role in offering possible solutions to slow the spread of fake news. As we have seen, Gelfert (2018) and Venturini (2019) acknowledge that debunking fake news and fact-checking are not the be-all and end-all solution to its prevalence. Combatting fake news is not a simple task, because the digital ecology in which it thrives is incredibly complex.

Take, for example, the emphasis many educators place on media literacy as a solution to fake news. They argue that we should become more skilled at distinguishing fake news from professional news content. However, this argument does not acknowledge the incredibly refined skills of fake news producers.

Consider deepfake videos: extremely realistic videos, created with machine learning techniques, that often show people saying or doing something they never actually did (Fallis, in press). It is difficult to determine whether a video is a deepfake, and emphasizing personal responsibility in distinguishing deepfakes from real videos will not solve the problem.

Since it is problematic and difficult to let tech giants such as Google and Facebook decide which information is true or false, we have to dig deeper and look at the main contributor to the spread of fake news: the digital ecology of virality. We cannot rely on the goodwill of tech giants, but governments should institute regulations that ensure these companies take responsibility.

However, as Venturini also acknowledges, we should not forget that people also play a role in the spread of fake news. We have to be mindful of the content that we consume and how we, oftentimes unintentionally, contribute to the spread of fake news.

Finally, we should not forget the responsibility of journalism. Journalists should pay attention to which stories they prioritize and aim to prevent the reproduction of digital fake news in legacy media. In my opinion, all these different solutions should be integrated into a holistic approach, in order to successfully slow the spread of fake news.


Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236.

Chadwick, A., Dennis, J., & Smith, A. P. (2015). Politics in the Age of Hybrid Media: Power, Systems, and Media Logics. In Bruns, A., Enli, G., Skogerbø, E., Larsson, A.O., & Christensen, C. (Eds.), The Routledge Companion to Social Media and Politics (pp. 7-22). Routledge.

Dale, T. (2019). The Fundamental Roles of Technology in the Spread of Fake News. In Chiluwa, I. E., & Samoilenko, S. A. (Eds.), Handbook of Research on Deception, Fake News, and Misinformation Online (pp. 122-137). IGI Global.

Fallis, D. (in press). The Epistemic Threat of Deepfakes. Philosophy & Technology.

Gelfert, A. (2018). Fake News: A Definition. Informal Logic, 38(1), 84–117.

Ladd, C. (2017, November 21). Jenna Abrams Is Not Real And That Matters More Than You Think. Forbes. 

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A Typology of Scholarly Definitions. Digital Journalism, 6(2), 137–153.

Venturini, T. (2019). From Fake to Junk News, the Data Politics of Online Virality. In D. Bigo, E. Isin, & E. Ruppert (Eds.), Data Politics: Worlds, Subjects, Rights. Routledge.