Kristof Verfaillie

Diggit profile: Kristof Verfaillie on security and surveillance

Jan Blommaert

Kristof Verfaillie is a lecturer in Criminology at the Vrije Universiteit Brussel, Belgium. His current research focuses on the relationships between democracy, politics and crime control; and he is particularly interested in the effects of counterterrorism policies, the use of expert knowledge in crime control and surveillance systems, and the sociology of crime control transformations and reform.

Jan Blommaert: Kristof, your work systematically and consistently addresses the social and political dimensions of crime and punishment, the connection between crime, justice and their broader contexts, one can say. That includes the ways in which new discourses have emerged on these topics, discourses of threat, danger and suspicion, accompanied by discourses of what one might call totalized securitization, maximized prevention and zero-tolerance retribution. That makes you a bit of a special breed of criminologist, no?

Kristof Verfaillie: Early on in my training as a criminologist I was introduced to abolitionism, a perspective that offered radically different conceptualizations of crime and its control. Abolitionist scholars, like Herman Bianchi, Louk Hulsman, Nils Christie, Thomas Mathiesen, and later on Willem de Haan and John Blad, were an exciting and innovative read. They offered a forceful critique of the criminal justice system, they showed how none of the practices we take for granted in crime control should be taken for granted, and they explored alternatives to the formal systems of crime control. Although the ideas of the (early) abolitionists are now perceived by many as utopian, politically irrelevant, untenable or even irresponsible, their work still resonates in many important contemporary debates and research programs. Think about alternative drug control policies, self-regulation and sex work, restorative justice, reductionist penal policies, and the growing importance of human rights in crime control and social justice, to name a few.

While many tend to focus on its activist and social agenda, as a researcher I began taking an interest in a dimension of abolitionism that I felt many seemed to ignore, which is its theoretical basis, particularly its engagement with language and power and questions of social order and change. Studying the abolitionists made me pursue questions like: which assumptions underpin our crime control arrangements? Why do we control crime the way we do? Why do crime control practices change? Can change be engineered? These questions seem to be a common thread throughout my work.

The slippery slope of security

Jan Blommaert: Can we start with the question about assumptions? What I see in the wake of 9/11, and increasingly revolving around migration, is the rise of a weird assumption about human behavior – that of the ‘slippery slope’. The idea that terrorists become what they are after having gone through a linear set of stages, starting with so-called ‘small signs’ of extraordinary banality (e.g. the terrorist-to-be starts talking back to teachers and parents). Such small signs need to be detected early, in childhood, by all those involved with the terrorist-to-be. And if we do that well, we will get rid of terror. I find such views of human behavior not just ridiculously naive, but also dangerous.

Kristof Verfaillie: They are. You see, one of the dominant assumptions in the counterterrorism debate is that a violent act of terrorism doesn’t just happen; it is the outcome of a process, or what is commonly referred to as a pathway. The idea is that terrorism implies a transformation of the way people think. Terrorists are individuals who are believed to have evolved from “normal” people to “people who are willing to commit a violent act”. This idea is then combined with the assumption that this process, this transformation of thought, is reflected in the way people dress, talk and behave (which in itself is a very old idea in criminology). Thinking in terms of pathways now seems helpful because it opens up the possibility of control: because a pathway is something that can be detected, it can potentially be interrupted so that the terrorist act never materializes.

While the appeal of pathway thinking is obvious, the fundamental problem is that we don’t have a clear idea of what a pathway to terrorism is supposed to look like. We don’t know if the notion of pathways is even useful to begin with. It is at this point that such views become dangerous, as you suggest. If you don’t have an adequate theory of human behavior, if you don’t know what a pathway to terrorism looks like, but if you maintain that there is one, and that it can be read off of people, then prevention-through-detection becomes a nebulous practice that is based on “gut feeling” and stereotypical thinking. It is at this point that a wide range of issues - like young people acting out in school, people who dress, talk or act in ways we find strange or deviant - can all potentially become part of a counterterrorism discourse.

It is the wide range of unmonitored everyday practices and assessments performed by people without training or supervision that is most disruptive in society.

The most concerning part about all this to me is therefore not that police and intelligence services attempt to detect or uncover imminent terrorist threats, although we obviously need to remain critical about how such practices are performed. What is most concerning is that this prevention-through-detection paradigm has now spread throughout society. It is the wide range of unmonitored everyday practices and (informal) assessments performed by people without training or supervision that is most disruptive in society. We have come to believe that terrorism is the outcome of a process that is reflected in details of people’s behavior, and that this entails a moral obligation to detect, report and react to such behavior without really knowing what to look for or without ever having any guarantee that what we believe we have noticed has anything to do with terrorism at all. It is at this point that counterterrorism becomes counterproductive as it begins to produce the very effects it attempts to eliminate.

Security and overkill

Jan Blommaert: There are two things that worry me tremendously when you describe this pathway format. Let me start with the first one: it individualizes what might as well be seen as a social and political, collective phenomenon, a globalized one in addition.

Kristof Verfaillie: The European counterterrorism discourse initially never intended to reduce terrorism to individual pathologies (“terrorists are psychopaths”) or to easy moral qualifications (“we have to fight evil”). Early on, (security) experts and officials realized that success in counterterrorism was highly dependent on a thorough understanding of terrorism and its root causes. There was a firm intent not simply to detect individuals on a pathway to a terrorist act, but to take structural measures as well: to prevent terrorism in a much broader, structural sense of the word.

Experts like Rik Coolsaet, who were part of such initial expert groups, now conclude that these attempts have largely failed: we still don’t really know how to explain terrorism, there remain a great number of competing claims about the causes of terrorism, and while a clear terrorist profile is yet to be found, the dominant political discourse does seem to favor explanations in which terrorism is reduced to Muslim men who, influenced by radical forms of Islam, have come to feel a deep resentment toward Western society and the Western way of life.

Jan Blommaert: The second thing that worries me about the pathway format is the overkill aspect. In Belgium, we recorded a total of 700 Muslim jihadists who joined IS, out of a total Muslim population of more than 700,000, which is a very small minority in Belgium’s population of 11 million. Elsewhere in Europe, the proportion is even lower. Now, of course these 700 need to be kept under strict control, since I fully acknowledge the security risks they embody. But what we have seen is that the entire population – all 11 million – has become a pool of possible suspects due to the pathway format, and that the whole of society has been mobilized and incorporated into the security apparatus. Think of security instructions and reporting duties given to schools, health workers and so forth, and posters in railway stations calling on all of us to report any and all ‘suspicious’ individuals or activities and reminding us that ‘security is everyone’s business’. Well, this was the sort of thing we used to find so appalling about the Stasi-ruled former GDR: a nation of informers conditioned to look for any sign of deviance and abnormality…

There is a myriad of well-intended preventive practices out there but we have no idea whether they actually prevent terrorism.

Kristof Verfaillie: It’s true that from the outset preventing terrorism meant establishing connections between law enforcement and other actors in society (schools, social work, etc.). The initial idea, however, was not simply to improve information gathering. The idea was to pick up on problematic “radical” behavior and intervene before such behavior could evolve into something much worse. Today there is a consensus that these interventions have to be tailored to the specific context of a case, a local case-management approach if you will.

However, for such approaches to work or be meaningful, there needs to be a high degree of trust between partners (e.g. police, schools and social work) and there needs to be a clear prevention mechanism (“will this intervention effectively prevent a terrorist act?”). The debates about professional secrecy and information sharing have shown how difficult it can be to form useful partnerships in counterterrorism, and we know that a clear prevention mechanism does not exist. This essentially means that there is a myriad of well-intended preventive practices out there but that we have no idea whether they actually prevent terrorism. This is precisely why stereotypical thinking remains so dominant in counterterrorism: knowing there is a problem without knowing exactly what that problem is and what it is you need to be looking for creates uncertainty. Should I report this particular conflict or conduct or not? Who should I report it to? Should I intervene or not? After all, no one wants to be the one who overlooked a potential terrorist. And so we focus on stereotypes, on the dominant assumptions we have come to associate with terrorism.

So you are right, such associations result in overkill. But not so much because of the top-down demands of a security apparatus. It is because stereotypical thinking about terrorism has spread throughout society, and this process has no centre. In addition to formal information gathering practices and guidelines in which stereotypical thinking is indeed often reproduced, people have simply come to associate terrorism with specific kinds of behavior and with specific identities. The law enforcement and intelligence communities have openly warned against such thinking; Europol, for instance, has done so repeatedly in its annual EU terrorism situation and trend reports. For them it results in infobesitas, a massive flooding of the system with information that is of little or no use to them. It also results in tunnel vision, a failure to notice or take seriously terrorist threats that emerge beyond the stereotype.

Online counterterrorism and its pitfalls

Jan Blommaert: Can I, at this point, return to something you said earlier, “the wide range of unmonitored everyday practices and informal assessments performed by people without training or supervision that is most disruptive in society.” We now read and hear press reports about facial recognition technologies being deployed in various sites, about Huawei phones gathering sensitive data, about Cambridge Analytica and Facebook, about connected databases and so forth. A totalized panopticon, if you wish, and certainly an enormous range of “unmonitored daily practices and informal assessments” performed by people we do not know and who are not subject to democratic control. Companies such as Google now control an intelligence empire unmatched by that of any state. Does this technological revolution involve – paradoxically perhaps – an informalization of the domain of security?

Kristof Verfaillie: Informalization refers to making assessments (“prevention”) on the basis of what we happen to associate terrorism with (“what we think we know about terrorism”). Such informal criteria are important and they can be found in policy documents that openly connect the detection of problematic forms of radicalization to Islam. Think about the 3 I-model in the Belgian prevention policy, for instance, where one needs to look for “ideology”, “indicators of behavior”, and “identity and looks”. And they are the reason why practices that do not fit these associations can be trivialized (“those memes were just for fun” or “I thought much worse things when I was young”). The online environment is crucial in the creation and dissemination of such associations; it should therefore not be reduced to its potential for the law enforcement and intelligence communities to monitor and detect pathways to terrorism. Obviously it has that potential.

Security revolves around struggles over meaning, and providing security is about winning such struggles.

Much more important, however, is that in an online world, counterterrorism involves a complex politics of representation. Security revolves around struggles over meaning, and providing security is about winning such struggles. Think about how people use social media to promote terrorism or radical ideologies on the one hand with manifestos, clips and so forth, and the elimination of these attempts or the creation and online dissemination of counter-narratives by governments on the other. Think also about the struggles between groups unrelated to any government, for instance, the GhostSec group and its attacks on IS websites and the security implications this has. And think about the wider efforts by authoritarian regimes to destabilize Western democracies, by fuelling polarisation in these societies.

Such struggles are now moving beyond offering alternative framings of an issue online or eliminating other framings (taking down accounts and websites). We are not simply offered a plurality of narratives anymore. For instance, we were (or rather, are) able to see that one political party frames terrorism in one particular way, and another political party frames that same issue in a distinctly different way, and the struggle consists of accusing the other of creating fake news.

However, we are now becoming subject to attempts at imposing the correct or preferred meaning of issues in ways that erase that plurality, or our awareness of it. Think about deepfake technology, or the subtle ways AI learns to understand our preferences and the abilities behavioral economics offers to alter choice architectures, resulting in highly tailored ways to experience the online environment - ways that seem clear and uncontested, but that remain obscure in terms of how that environment and its truth claims are produced. “Security” is deeply affected by this elimination of plurality.

Truth and social engineering

Jan Blommaert: It’s fascinating to see how you describe online metapolitical struggles as crucial security issues. And that enables us to come full circle in this interview. At the beginning, you outlined the questions that have guided your research all along, and one of these questions was “can change be engineered?” Do you see these online battles over truth and meaning as important forms of – explicitly engineered – social change?

Kristof Verfaillie: If social engineering refers to the conscious attempt to direct human behavior towards certain ends and produce intended outcomes in the social world, then many of these online battles may indeed be described as explicit attempts to engineer social change. For instance, when governments take down websites or social media outlets of terrorist groups and organize online counternarrative campaigns, this can be seen as an attempt to engineer a particular outcome (“a safer world”). We must, however, distinguish between these rather basic attempts to create intended outcomes and the advent of much darker forms of social engineering.

The former are problematic because they are based on obsolete notions of human behavior and social change. They are essentially rooted in some form of magic bullet thinking, a stimulus-response theory about human communication in which human behavior is simply seen as an unmediated function of a particular message: we frame a topic in a particular way and we assume that the recipients of those messages will somehow adopt that frame and will act accordingly.

Based on such thinking, governments, researchers or interest groups produce counternarratives because they believe that these narratives will direct human behavior in the ways they intended. They create a narrative to de-legitimize terrorist propaganda and they believe that at-risk individuals will adopt this message and will therefore be less prone to participate in terrorism. We see similar examples in the offline world: right-wing extremists are sent to a holocaust museum because the judiciary believes this will prompt them to abandon their extremist beliefs and adopt the museum’s narrative (“put things in another perspective”).

Jan Blommaert: And this is an illusion?

Kristof Verfaillie: Well, it is, precisely because behavioral change doesn’t work that way. Magic bullet thinking locates the terrorist intent in the message (propaganda), which means that disengagement or even deradicalization is located in the counternarrative (“democratic values” or “the correct reading of Islam”). To change the message is to change behavior, that is the idea, and this is why governments believe they can draft counternarratives without really knowing what drives people to commit terrorist acts or without knowing who “at-risk” individuals are - at least not beyond the stereotype.

Because intent is located in the message, magic bullet thinking requires specific connections between social engineering and appeals to truth. “Truth” is what allows qualitative distinctions to be made among different messages. A message which is perceived to be true has more authority than a message that isn’t, and so a counternarrative wants to be more than just another message. It intends to be the message, the only valid account or representation of a particular issue. In addition to making propaganda inaccessible to an audience (shutting down websites), counternarratives therefore, directly or indirectly, de-legitimize propaganda, expose it as a false narrative, a lie, a misinterpretation or misrepresentation of particular texts and ideologies (e.g. religion). The counternarrative essentially wants to end the narrative it intends to counter, which is why online battles are multimodal practices: they don’t simply convey the explicit message that terrorist propaganda is a misrepresentation of religious texts or of life in the caliphate. The message is organized in specific ways to increase its impact - visual imagery, the use of multiple media, the use of authority figures to deliver the message, and so forth. The counternarrative needs to be the only relevant account and thus the one that guides future behavior.

A counternarrative wants to be more than just another message. It intends to be the message, the only valid account or representation of a particular issue.

Jan Blommaert: To me, this sounds quite familiar. It’s exactly what we found a while ago when I investigated aspects of the manosphere with my students.

Kristof Verfaillie: Indeed, these practices are obviously not restricted to governments. And the manosphere, with its red pill imagery drawn from the movie The Matrix, is a very good example, yes. Such online groups do not simply oppose feminism or disseminate misogynist messages. They use the red pill imagery to refer to an awakening, a process of becoming aware of reality, of what is really going on in the world as opposed to what only appears to be going on. Feminism, to them, is not a perspective on what it means to be a woman. It becomes a symptom of ignorance, of being disconnected from reality, and thus a moral disqualification, something undesirable that must be opposed or even fought. And so a distinction emerges between messages that are truthful and good and those that are false and morally reprehensible. The former imply a course of action that is worth pursuing, the latter don’t.

What they ultimately want to erase in these battles is the very idea of a battle. Hence the appeal to truth. Engineering security in these basic schemes therefore always seems to revolve around attempts to eliminate the very idea of opposing narratives on the one hand and presenting audiences with a desirable reality on the other.

Jan Blommaert: This is the idea behind “alternative facts” and “it could have been true” which we find in so many conspiracy theories and in the climate debates: your scientific truth is just an opinion.

Kristof Verfaillie: Yes, these are all attempts to disqualify particular discourses but what it also shows is that simply stating the facts or pointing to the scientific nature of a discourse is not what prompts behavioral change. Don’t get me wrong, it is obviously useful and necessary to point to scientific findings, to have informed debates, to educate people about human rights and about the processes leading up to the atrocities in our recent history. It is obviously necessary to counter terrorism but there is no point in doing so without an adequate theory of change, and I believe this requires an adequate theory of human communication.

Inadequate theories of change, like magic bullet thinking, make counterterrorism policies ineffective, and they do not allow us to make sense of the world we live in and the security issues in that world. This is a common problem in crime control, by the way: think about rational choice theory and its many variations. It remains popular today although its core assumptions have been obsolete for many decades.

Adequate theories are pressing because this connection between truth, social engineering and security is not restricted to the basic battles or attempts to influence behavior we discussed so far. According to Shoshana Zuboff we now live in an age of surveillance capitalism: we are increasingly caught up in what she refers to as a “ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces”. One of the consequences of this architecture is that it allows companies to use the massive amounts of behavioral data we produce to predict what we will do, and this has great commercial value.

Jan Blommaert: And value for security agencies as well, I suppose?

Kristof Verfaillie: Surely. For crime control this means that control is no longer reduced to monitoring behavior, at least not as traditional CCTV systems used to do. It is about predicting what we will do. While this is not at all new in crime control, Zuboff suggests that our notion of prediction has changed. Prediction is no longer the passive practice it used to be, i.e. making projections of future states based on past behavioral patterns; it is about actively creating the future through behavioral modification. Companies have learned that the traditional ways to predict behavior are in fact highly ineffective or of little added value. Shaping the future through behavioral modification seems much more promising.

This connection between truth and social engineering translates into an obscuration of choice, plurality and of manufactured realities, and this is what is most disconcerting.

Governments are gradually drawing similar conclusions. Think about the experiments with predictive policing, which turn out to have little benefit other than perhaps a more transparent allocation of police resources. Rather than assessing what will, might or will probably happen, it is now felt to be much more effective to actively shape the future. This implies going beyond knowing what people do and assessing what they will do; it refers to actually shaping what they do, to herding people, if you will, in subtle, covert and non-coercive ways toward intended outcomes – what Zuboff refers to as instrumentarian power.

And so we witness the advent of a new regime of control, ranging from the emergence of ‘nudge units’ in policymaking across western democracies to experiments with social credit systems in China, which will come into full effect in the next few years and combine these new forms of social engineering with older totalitarian notions of control.

This connection between truth and social engineering translates into an obscuration of choice, plurality and of manufactured realities, and this is what is most disconcerting. Whereas the basic narrative-counternarrative paradigm still manifests itself as a battle, the idea of a battle is truly lost in the new fusion of behavioral economics and technology. This fusion is about creating intended outcomes in ways we are unaware of. Engineering security, then, is about making behavior predictable and about creating conformity in ways the subject is unaware of. In other words, we are beginning to inhabit worlds in which our emotions, opinions and choices are only new to us.

Needless to say, this creates enormous challenges for liberal democracies, and this, to me, appears to be one of the important challenges of our time. We urgently need to better understand these compliance-producing architectures in which the very act of social engineering is obscured. We have to, if we want to build online worlds that further enhance democracy as a way of life.