Let’s talk about big tech (no, not you)

Linnet Taylor

We must regulate big tech. We must promote sustainable computing. We must use technology to combat poverty. We must use big data to predict migration. The technology field has a very recognisable ‘we’ – the ‘we’ of aligning AI with human values and creating a better environment for autonomous vehicles. And most importantly, of course, of embracing blockchain, without which we can never achieve our full potential as a species. The ‘we’ of these broad statements is fairly easy to parse: it is a ‘we’ that covers most of the people reading this column. Connected, educated, living in a high-income country and interested in digital technologies because they constitute a central feature of the environment in which we live.

The other 'we's of big tech

What would we gain from thinking about the other ‘we’s of tech? The 48.7 per cent of the world, for example, who are not connected to the internet but who are affected indirectly by its economy nonetheless? Or the 71 million refugees in the international system who are subject to experimentation by technology experts and vendors in the fields of ID, service provision and healthcare? Who is the ‘we’ of tech if you are a project rather than an innovator?

The hegemonic ‘we’ of global technology is not a purely Northern one. We can distinguish a set of debates rooted in high-income countries that mainly focus on social media platforms and the internet – their moderation, their applications, and their engagement with political and economic systems – and a different debate stemming from lower-income countries that focuses on the relationship of tech to power more broadly through discussions of fintech, censorship and access, connectivity and other relevant issues. The concerns being voiced have some similarities – people care about being connected and having their voices heard, and about not being exploited or sold – but on another level the first discussion can be very insular. Even though it takes in issues from around the world, it still relates mainly to the US because the tech being discussed, and those who developed it, are usually American. The assumptions underlying public discussions of technology are still based on a very old core-periphery dynamic: Silicon Valley is the core, the rest of us are the periphery.  

Even this hegemonic ‘we’ of technology, though, is comparatively recent. Looking down from a plane crossing Europe on a clear night, you can see cities, villages, and roads lit up like constellations. Journeying across North America, you will see something similar. But crossing the huge expanses of Latin America, Africa or parts of Asia by plane, you will not see lights. If you fly across DR Congo en route from East to West Africa, you will not see electric lights for easily half of your journey. A portion of humanity lights its nights with (reliable) electricity; it journeys by plane or train across borders using passports, and (through technology and connectivity) has access to much of the knowledge generated by our species so far.

Another portion, by far the majority of us, does not. The ‘we’ of technology only means the first group, and this group has only existed for tens, not hundreds, of years. What are we to make of the fact that the ‘we’ of tech is now contemplating space travel, that it will mostly be able to move out of the way of environmental disaster to new and higher living zones, and that it will spend the next decades experiencing new forms of education and communication? And is it relevant that the other ‘we’ will most likely not? 

A more inclusive technological 'we'

These questions should shape how researchers study technology, because technology is not separate from the social. I will bet any academic researcher reading this that they have never read a scholarly article that did not assume they were from a high-income country. A review of the literature on privacy published in 2015 cited only work from the EU and US. The biggest computer science conferences usually draw fewer than one in a hundred of their participants from low- or middle-income countries.

The only modes of analysis that produce a more inclusive technological ‘we’ point to a less joyous future. Being unsure of how the data you are giving to authorities will be used over its lifecycle, for instance, is a common feeling wherever you live in the world, whether you are applying for asylum in Europe, applying for welfare in the world’s richest country (or the Netherlands), or using Facebook anywhere at all. Being aware that your physical appearance is being captured, processed and traded is increasingly a great leveller globally, whether you are a Zimbabwean whose image is being sold to the Chinese, a Uighur on whom Chinese facial recognition AI is being used, or a Londoner being scanned for resemblance to criminals.

What unites the ‘we’ of tech does not seem to be the aspirational, let’s-head-to-Mars-and-solve-poverty rhetoric of those who develop and sell technology. Nor is it the discussions about how to preserve privacy and make technology more ethical, however worthwhile they are. Instead, the main thread that unites both users and non-users of tech worldwide is the notion of self-determination. Payal Arora’s new book on the next billion users argues that if we ask people around the world what they want from data technologies in particular, their answers will not align with either of these rhetorics. Instead they will say they want a space to explore, to play and to build their identity. Related findings are surfacing from research underway in the Global Data Justice project: people want the tech, but not at any cost. They also want to be able to say no to it when it is exploitative, a desire that connects to more than just technology markets and policy.

The consent problem of technology

Globally, technology has a consent problem. People who are already being exploited are becoming the subjects of experimentation and exploitation by the technology industry. Their consent to use technology they need, such as mobile phones, online platforms and payment services, is being taken to mean consent to an agenda that they seldom wish to be part of: when they go online, they are performing labour for a firm that is usually far away and that will sell that labour on the open market, whether it takes the form of communication, self-profiling or offering up bits and bytes of preferences and behaviour. The task of research is to interrogate and disaggregate the ‘we’ until the lumpy, inconvenient, political reality emerges, and then to study that – as if it, and not technology itself, were the phenomenon of interest.