On May 25, 2018, the new European General Data Protection Regulation (GDPR) comes into force. This regulation took a long time to prepare and implement, for its scope is vast and its effects will be huge. Reading the voluminous documentation about the GDPR, especially with a moderate dose of critical discourse analysis, yields some puzzling and potentially disconcerting insights.
What is the GDPR about?
On the EU website dedicated to the GDPR, the aims of the regulation are described as follows: the GDPR "was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens data privacy and to reshape the way organizations across the region approach data privacy." Even though the term "data privacy" serves as the unifying element here, we can anticipate something very complex.
We get a regulation that (a) creates legal uniformity regarding data privacy throughout the EU (and beyond); (b) in some way defines and delineates (and so presumably "empowers") the data privacy rights of EU citizens, and (c) regulates and streamlines the ways in which data privacy is currently handled by "organizations". The latter can be almost anything, from a boy scout troop to a multinational corporation, from a badminton club to a newspaper, from a university to a government department or an internet provider. Knowing how the EU proceeds in its legislative work, one can only imagine the density of lobbying work that has preceded the GDPR, for almost everyone is affected by it.
Surely, data privacy is one of the most pressing concerns these days. The Cambridge Analytica-Facebook data leaks scandal is still unfolding before the public, cybercrime (including identity theft, hacking and copyright violations) has become a top security priority, and all of us have had the pleasure of receiving emails from extraordinarily friendly people inviting us to re-activate our bank accounts or social security numbers by sending all our personal details over. So it is clear that something needed to be done about such new forms of risk in an online-offline world. And yes, the GDPR contains long and carefully worded sections "empowering" citizens when it comes to data privacy.
But much more attention is given to the transactional aspects of personal data: the conditions and rights to acquire, store and use such data in commercial, security, research and other contexts. The GDPR codifies the age of the Internet of Things, of Big Data and of the many ways in which Big Data generates social, economic, cultural and political effects, most of them entirely new and, consequently, poorly regulated. The citizen's rights to privacy are defined in a regulated world in which such data are traded, exchanged, merged, and analyzed by a tremendous range of other actors. And that is where things become intriguing.
From citizen to data subject
The text is what we would now call "posthuman": human beings are not central to it. This becomes clear from the standard use of the term "data subject" in the GDPR rather than, for instance, "citizen". To understand the implications of that, we need to take one step back and consider the most fundamental definition in the GDPR, that of "personal data". Personal data stands for "any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person".
The "identifiable natural person" is equated with "data subject". This is not self-evident, but the reason lies in the small phrase "directly or indirectly" when it comes to being identified. Such innocuous-looking phrases always look crucial to the critical discourse analyst, for they are deliberately vague while tremendously inclusive. They create an almost infinite scope for what counts as "identification" and, by extension, "identity".
For here is the thing: one can be "identified" not just by anything directly leading to the "natural person" but even more by aggregations of indirect data gathered by others and within others' range of legitimate agency through almost every instrument currently in place. The GPS data of your mobile phone, the algorithms of your social media profiles and your Amazon.com shopping routines, the IP address of your computer, records of medical check-ups and footage from surveillance cameras or drones placing you somewhere: all of these are "identifiers" that generate personal data. But without the "natural person" as an active, intentional agent - it is not the "natural person" who creates and controls these traces of identity. We are in the world of "profiling" here and, indeed, I can be "implicit in the algorithm" and identified as a bundle of correlated features only indirectly suggesting particular actions - to which actions I can be held accountable nonetheless.
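The mechanics of such indirect identification can be sketched with a toy "linkage" example. All names, datasets and attributes below are invented for illustration; the point is only that joining a few coarse, individually "anonymous" attributes can single out one "natural person":

```python
# Toy illustration of a linkage attack: no single dataset names the
# person, yet joining indirect identifiers singles one individual out.
# All records below are invented for illustration.

# An "anonymized" location trace: device ID, home cell tower, work cell tower
location_log = [
    {"device": "d1", "home_cell": "A", "work_cell": "K"},
    {"device": "d2", "home_cell": "B", "work_cell": "K"},
    {"device": "d3", "home_cell": "A", "work_cell": "M"},
]

# A separate dataset linking the same coarse locations to named subscribers
subscribers = [
    {"name": "Alice", "home_cell": "A", "work_cell": "K"},
    {"name": "Bob",   "home_cell": "B", "work_cell": "M"},
]

def link(trace, people):
    """Return the named persons whose attributes match a device trace."""
    return [p["name"] for p in people
            if p["home_cell"] == trace["home_cell"]
            and p["work_cell"] == trace["work_cell"]]

# Device d1 carries no name, but the combination of just two coarse
# attributes already identifies exactly one "natural person".
print(link(location_log[0], subscribers))  # -> ['Alice']
```

None of the inputs here is "personal" on its face, yet the output is an identified individual; this is what "directly or indirectly" quietly licenses.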
Are such data still "personal"? Can they be reasonably used as "identifiers" not just of people but of their individual character, behavior and actual (intentional) actions? In a 2015 interview, European Data Protection Supervisor Giovanni Buttarelli was clear: no. Buttarelli saw a post-personal (or posthuman, as I called it) future ahead:
"the concept of ‘personal data protection’ will disappear in the near future, as will the concept of ‘personal data’. We will all be easier to predict and identify even without data about our individual identities, it will be easier to reuse the information and group it together with other information and interpret it accordingly". He went on: "The concept of ‘data-subject’ and therefore of the individual will in all likelihood disappear too, as we will be grouped together according to segments of information (...) The difference between ‘personal’ and ‘non-personal’ will then be blurred".
We are, in the most literal sense of the term, the profiles constructed of us by Big Data. And since, in the view of the European Data Protection Supervisor, we are facing the end of the individual, one should wonder what is left of privacy.
Who owns "me"?
Most of the identifiers discussed in the GDPR are not created by the "natural persons" they index, I said. I must add: many of them are not even owned by these "natural persons". We return to the core purpose of the GDPR here: it should enable and regulate the trading, exchange, storage and use of data about "natural persons" by other commercial and noncommercial actors.
Yes, of course, the GDPR now stipulates that when personal data leading to an identifiable "natural person" are being used, these persons have to be informed and consent must be asked and obtained. The latter, however, is far from an absolute rule in the GDPR. There are several exceptions and there is a gigantic loophole under the label of "anonymized data" - the thing Buttarelli alluded to when he spoke of people being grouped according to "segments of information" rather than anything properly "personal". This, in fact, is what the term "data subject" actually means: people grouped and identified as "type" or "category" X, Y, Z on the basis of Big Data aggregations, of specific combinations of data leading to specific profiles for people. Such constructions can sometimes lead to "individual identities", as Buttarelli said. But they need not.
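What grouping people into "segments of information" amounts to can be shown in a minimal sketch. The attributes and records here are invented; the point is that the unit being produced, counted and traded is the profile, not the individual:

```python
# Minimal sketch of grouping people into "segments of information":
# individuals disappear behind category labels, yet decisions can still
# be made about everyone in a segment. All data are invented.
from collections import defaultdict

records = [
    {"age_band": "30-39", "postcode_area": "1010", "spend": "high"},
    {"age_band": "30-39", "postcode_area": "1010", "spend": "high"},
    {"age_band": "60-69", "postcode_area": "2020", "spend": "low"},
]

segments = defaultdict(int)
for r in records:
    # The segment key is a profile, not a person.
    key = (r["age_band"], r["postcode_area"], r["spend"])
    segments[key] += 1

# Two individuals collapse into one "type"; no name, no "individual
# identity" - and yet anyone matching the profile can be acted upon.
print(dict(segments))
```

Nothing in such a table is "personal" in the narrow sense, which is precisely why it passes through the "anonymized data" loophole.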
The thing is that even if data are not properly "personal", in the end they are still about, and from, "natural persons", whatever the degree of privacy protection. After all, the entire thing revolves around identifiability, and even if the data gathered on me are used in "non-personal" ways, what is done with such data can still have an impact on me. So a very wide space is left for constructions of "me" for which I can be held accountable in variable ways, but which have not been made by me, nor are owned by me. Profiles of me are creations and commodities made by others about me, and what the GDPR does well and in detail is to organize a firm legal footing for trading and exchanging such items in a Big Data market. Data subjects, in the sense just described, are the product in that market; if the specific data subject is me, I am a product, not an actor in the deal. Not much citizen empowerment can be observed in that process.
The clever play of words in EU texts such as the GDPR has, on many previous occasions, had the tactical advantage of silencing or delaying public debate. Public declarations on the GDPR will continue to foreground the phrase in which EU citizens are said to be empowered with respect to data privacy. A careful reading of the text, however, raises several large and rather fundamental questions left unaddressed by the decision makers - and for that, too, precedents are plentiful in the history of EU decision making.