Big Data Is Watching You

The biases we feed to Tinder algorithms

How a machine-learning algorithm holds up a mirror to society

Magdalena Rolle
16/01/2019

As the basis for one of the fastest growing social networking apps in the world, Tinder algorithms play an increasingly important role in the way people meet each other. As Tinder algorithms receive input from users' activity, they learn, adapt, and act accordingly. In a way, the workings of an algorithm hold up a mirror to our societal practices, potentially reinforcing existing racial biases.

Tinder Algorithms: Welcome to #swipelife

Tinder is one of the fastest growing social networking apps on a global scale. With users in 190 countries swiping 1.6 billion times a day and having generated some 20 billion matches to date, the location-based dating application plays a game-changing role in the dating world. (Liu, 2017) This article reflects on how the biases of Tinder algorithms hold up a mirror to our society by analyzing the human impact on their technological workings.

Online news outlets are cluttered with articles on how to win the Tinder game. In the realm of online forums such as Reddit, users collectively try to decode Tinder algorithms by analyzing their personal experiences with them. In order to get more matches, people try to make sense of how the algorithm works: they discuss which swiping behavior might be penalized or rewarded, why certain profiles disappear from the ‘field’, or why a profile gets ‘choked off’ from new profiles to swipe on.

“Tinder is more than a dating app. It's a cultural movement. Welcome to #swipelife.” (tinder.com)

What materializes in both news articles and forums are frequent claims that Tinder algorithms are somewhat biased. They discuss how online dating is tricky, not because of people, but because of the algorithms involved. Both user experiences and experiments indicate that online dating applications seem to be reinforcing racial prejudices within the swiping community. (Sharma, 2016; Hutson, Taft, Barocas & Levy, 2018) “Although partner preferences are extremely personal, it is argued that culture shapes our preferences, and dating apps influence our decisions.” (Lefkowitz, 2018)

The public relevance of algorithms

According to Gillespie, algorithms shouldn’t be perceived as ‘cold mechanisms’, because they are just as much constituted by ‘warm human and institutional choices’ as they are based on technical achievements. (2014: 169) Depending on how an algorithm is programmed, on the users' online behavior, and on the set of data it is given to process, certain cultural aspects will be highlighted while others are left out. Information about certain groups is prioritized, which affords them greater visibility, while other groups are rendered invisible. Through this, algorithms play a crucial role in overall participation in public life. Scholars stress the importance of interrogating algorithms as a “key feature (...) of the cultural forms emerging in their shadows” (Gillespie, 2014: 169; Anderson, 2011 & Striphas, 2010).

Approached from a sociological perspective, algorithms have several dimensions of public relevance. One of these is the promise of algorithmic objectivity. This refers to “the way the technical character of the algorithm is positioned as an assurance of impartiality, and how that claim is maintained in the face of controversy”. (Gillespie, 2014: 168)

Another dimension relates to the assumptions made by the algorithm's providers in order to know and predict their users' practices. Gillespie refers to these as the ‘cycles of anticipation’. (Gillespie, 2014: 168) This second dimension concerns the ways in which users reshape their online behavior to benefit from the algorithms they are dependent on. (Ibid.: 168)

An algorithm can only function when paired with a database, so in order to uncover possible biases of an algorithmic output, the human interference with algorithms needs to be included. This includes the input from both platform users and its developers. This is necessary because “Algorithms are made and remade in every instance of their use because every click, every query, changes the tool incrementally." (Gillespie, 2014: 173) So then, how are Tinder’s algorithms programmed, how are the user and provider influencing their workings, and what data flows into their calculations?

Machine-learning Tinder algorithms

The very notion of algorithms is rather elusive, and the specific workings of the underlying Tinder algorithms are not publicly revealed. This doesn't come as a surprise, as developers and platform providers in general rarely give insight into the coding of their underlying programs. They stress not only that algorithms must not be tampered with, since their value rests on a claim of technological neutrality, but also that they would likely be copied and re-used by competing providers. (Gillespie, 2014: 176)

However, certain features of Tinder algorithms are ‘known’, either through practical evaluation of user experiences or through the app's providers themselves.

Tinder is based on a collection of algorithms that work together to solve problems at a larger scale. In other words: each of the Tinder algorithms is programmed to collect a particular set of data and tabulate it into a relevant output. These outputs then work together to improve the overall user experience, which is achieved when there is a notable increase in matches and messages. Since each user has individual preferences, the app also needs a personalized recommendation system, which is obtained through collaborative filtering and algorithmic calculations. (Liu, 2017)
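Tinder does not publish this code, but the general idea of collaborative filtering over swipe data can be sketched in a few lines. The snippet below is purely illustrative: the swipe matrix, the users, and the neighbour count are invented, and the real system operates on millions of profiles with far more elaborate models.

```python
# Illustrative sketch only: Tinder's real recommendation code is not public.
# A minimal user-based collaborative filter over a toy swipe matrix, where rows
# are swipers, columns are profiles, 1 = right swipe and 0 = left swipe / unseen.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two swipe vectors (0 if either has no swipes)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(swipes, user, k=2, n=3):
    """Score unseen profiles for `user` from the right swipes of the k most similar users."""
    sims = np.array([cosine_similarity(swipes[user], swipes[u]) if u != user else -1.0
                     for u in range(swipes.shape[0])])
    neighbours = sims.argsort()[::-1][:k]            # the k most similar swipers
    scores = sims[neighbours] @ swipes[neighbours]   # similarity-weighted vote per profile
    scores[swipes[user] > 0] = -np.inf               # do not re-recommend existing likes
    return scores.argsort()[::-1][:n]

# Invented data: 4 users x 6 profiles.
swipes = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
])
print(recommend(swipes, user=3))  # profiles liked by users who swipe like user 3
```

The point of the sketch is simply that recommendations are driven by the accumulated swipes of similar users, which is exactly where collective patterns, including biased ones, enter the output.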

If you are losing the Tinder game more often than not, you will likely never get to swipe on profiles clustered in the upper ranks

One part of this collection is the Elo score, also referred to as the ‘algorithm of desire’. This is, as confirmed by Tinder’s founder Sean Rad, a scoring system that ranks people according to their ‘desirability’. The term itself is derived from the chess world, where the Elo rating is used to rank a player’s skill level. Accordingly, this score is set up to compare users and match people who have similar levels of desirability: if you are losing the Tinder game more often than not, you will likely never get to swipe on profiles clustered in the upper ranks. (Carr, 2016)
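Tinder has never disclosed its actual formula, but the chess rating system the term borrows from is well documented. As a hedged illustration of the general principle rather than Tinder's implementation, the snippet below treats a received right swipe as a 'win' against the swiping user's rating; the ratings and the K-factor are invented for the example.

```python
# Illustrative only: Tinder has never published its 'Elo' formula.
# Classic chess-style Elo update, reinterpreted so that receiving a right swipe
# counts as a 'win' against the swiping user's rating, and a left swipe as a 'loss'.

def expected(rating_a, rating_b):
    """Probability that A 'wins' against B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating, opponent_rating, won, k=32):
    """Nudge `rating` up or down depending on the outcome and the opponent's rank."""
    return rating + k * ((1.0 if won else 0.0) - expected(rating, opponent_rating))

# A right swipe from a highly ranked user lifts a low rating far more than the
# same swipe from a low-ranked user would.
print(update(1200, 1800, won=True))   # large gain
print(update(1200, 1000, won=True))   # small gain
```

Under such a scheme, who swipes right on you matters as much as how often it happens.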

Desire, though, depends on various factors that are based on personal preferences, which aren’t universal. These are most definitely not objective, but very much subjective in nature. So how can Tinder algorithms objectively calculate a person’s desirability?

Tinder algorithms detect a user’s swiping patterns and use those for future recommendations. (Carr, 2016) Basically, people who are on a similar level of giving and receiving right ("like") and left ("pass") swipes are understood by Tinder algorithms to be equally often desired by other users. This makes it likely that their profiles are rendered visible to one another. Rad, however, argues: “It is not just how many people swipe right on you… it's very complicated. It took us two and a half months just to build the algorithm because a lot of factors go into it.” (Cited in Carr, 2016) Nonetheless, details of those factors are not revealed, just as the score itself is not publicly accessible to users.

Being rejected is something that people will try to avoid as much as possible. “The beauty of Tinder, after all, is that rejection has been removed entirely from the process, since you have no idea who dismissed your profile.” (Cited in Carr, 2016) This process is kept hidden from the users, even though it might be considered knowledge about the self that one is entitled to in order to know one's position in the ‘playing field’.

Surprisingly though, it is not only the process of rejection, the number of left swipes received, that is kept from the user. The same goes for the reception of right swipes. (Bowles, 2016) Tinder algorithms can actively decide to deny you a match, or several matches, simply by not showing them to you. Tinder programmed this ‘behavior’ into the algorithm to slow down the upper percentages of most ‘desirable’ people: by rendering their profiles less visible to other users, it gives people with lower rankings a chance.
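How exactly this throttling works has not been disclosed. Purely as a hypothetical sketch of the idea, one could damp the exposure of profiles whose desirability score falls in the top percentile; the scores, quantile cutoff, and damping factor below are all invented.

```python
# Purely hypothetical sketch: Tinder has not disclosed how this throttling works.
# One simple way to 'slow down' top-ranked profiles is to damp their exposure
# probability once their desirability score crosses a percentile cutoff.
import numpy as np

def exposure_weights(scores, top_quantile=0.9, damping=0.5):
    """Return per-profile sampling weights, damping the most 'desirable' profiles."""
    scores = np.asarray(scores, dtype=float)
    weights = scores / scores.sum()              # baseline: visibility follows the score
    cutoff = np.quantile(scores, top_quantile)
    weights[scores >= cutoff] *= damping         # intervention: throttle the top tier
    return weights / weights.sum()

# Invented scores: the top-rated profile receives less than its proportional share of exposure.
print(exposure_weights([1000, 1200, 1400, 2400]))
```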

Jonathan Badeen, Tinder’s senior vice president of product, sees it as their moral obligation to program certain ‘interventions’ into the algorithms. “It’s scary to know how much it’ll affect people. […] I try to ignore some of it, or I’ll go insane. We’re getting to the point where we have a social responsibility to the world because we have this power to influence it.” (Bowles, 2016)

Swipes and swipers

As we are shifting from the information age into the era of augmentation, human interaction is increasingly intertwined with computational systems. (Conti, 2017) We are constantly encountering personalized recommendations based on our online behavior and data sharing on social networks such as Facebook, eCommerce platforms such as Amazon, and entertainment services such as Spotify and Netflix. (Liu, 2017)

As a tool to generate personalized recommendations, Tinder implemented TinVec: a machine-learning algorithm that is partly paired with artificial intelligence (AI). (Liu, 2017) Algorithms are designed to develop in an evolutionary manner, meaning that the human process of learning (seeing, remembering, and creating a pattern in one’s mind) aligns with that of a machine-learning algorithm, or that of an AI-paired one. An AI-paired algorithm can even develop its own point of view on things, or in Tinder’s case, on people. Programmers themselves will eventually not even be able to understand why the AI is doing what it is doing, for it can develop a form of strategic thinking that resembles human intuition. (Conti, 2017)

A study released by OKCupid confirmed that there is a racial bias in our society that shows in the dating preferences and behavior of users

At the 2017 machine learning conference (MLconf) in San Francisco, Tinder’s chief scientist Steve Liu gave an insight into the mechanics of the TinVec approach. For the system, Tinder users are defined as 'Swipers' and 'Swipes'. Each swipe made is mapped to an embedded vector in an embedding space. The vectors implicitly represent possible characteristics of the Swipe, such as activities (sport), interests (whether you like pets), environment (indoors vs outdoors), educational level, and chosen career path. If the tool detects a close proximity between two embedded vectors, meaning the users share similar characteristics, it will recommend them to one another. Whether or not this results in a match, the process helps Tinder algorithms learn and identify more users whom you are likely to swipe right on.
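To make the idea of 'close proximity in an embedding space' concrete, here is a toy sketch: users represented as vectors and ranked by cosine similarity. The feature dimensions, names, and numbers are invented for illustration; actual TinVec embeddings are learned from swipe behavior rather than written by hand.

```python
# Toy sketch of the idea described above: users as vectors in an embedding space,
# recommended to one another when their vectors lie close together.
import numpy as np

# Hypothetical dimensions: [sporty, likes pets, outdoorsy, education level]
users = {
    "ana":   np.array([0.9, 0.1, 0.8, 0.5]),
    "ben":   np.array([0.8, 0.2, 0.7, 0.6]),
    "chris": np.array([0.1, 0.9, 0.2, 0.4]),
}

def nearest(name, k=1):
    """Rank the other users by cosine similarity to `name` and return the k closest."""
    target = users[name]
    sims = {other: float(v @ target / (np.linalg.norm(v) * np.linalg.norm(target)))
            for other, v in users.items() if other != name}
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(nearest("ana"))  # ['ben'] -- the most similarly embedded user
```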

Additionally, TinVec is assisted by Word2Vec. Whereas TinVec’s output is user embeddings, Word2Vec embeds words. This means that the tool does not learn through large numbers of co-swipes, but rather through analyses of a large corpus of texts. It identifies languages, dialects, and forms of slang. Words that share a common context are placed closer together in the vector space and indicate similarities between their users' communication styles. Through these results, similar swipes are clustered together and a user’s preference is represented through the embedded vectors of their likes. Again, users in close proximity to preference vectors will be recommended to each other. (Liu, 2017)
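As a rough illustration of the Word2Vec idea rather than Tinder's actual pipeline, the snippet below trains tiny word embeddings on a handful of invented profile bios using the gensim library (an assumption made only for this example), so that words appearing in similar contexts end up with similar vectors.

```python
# Illustration of the Word2Vec idea, not Tinder's pipeline. Uses gensim 4.x;
# the tiny 'corpus' of profile bios below is invented for the example.
from gensim.models import Word2Vec

bios = [
    ["love", "hiking", "and", "craft", "beer"],
    ["into", "hiking", "camping", "and", "dogs"],
    ["craft", "beer", "enthusiast", "and", "foodie"],
    ["netflix", "wine", "and", "quiet", "nights"],
]

# Train word embeddings from the co-occurrence of words within these bios
# (with a corpus this small the vectors are only a demonstration).
model = Word2Vec(sentences=bios, vector_size=20, window=2, min_count=1, sg=1, seed=1)

# Words used in similar contexts end up with similar vectors.
print(model.wv.most_similar("hiking", topn=3))
```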

But the shine of this evolution-like growth of machine-learning algorithms also casts the shadows of our cultural practices. As Gillespie puts it, we need to be aware of 'specific implications' when relying on algorithms “to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions.” (Gillespie, 2014: 168)

A study released by OKCupid (2014) confirmed that there is a racial bias in our society that shows in the dating preferences and behavior of users. It shows that Black women and Asian men, who are already societally marginalized, are additionally discriminated against in online dating environments. (Sharma, 2016) This has especially dire consequences in an app like Tinder, whose algorithms run on a system of ranking and clustering people that literally keeps the 'lower ranked' profiles out of sight for the 'upper' ones.

Tinder Algorithms and human interaction

Algorithms are programmed to collect and categorize a vast amount of data points in order to identify patterns in a user’s online behavior. “Providers also take advantage of the increasingly participatory ethos of the web, where users are powerfully encouraged to volunteer all sorts of information about themselves, and encouraged to feel powerful doing so.” (Gillespie, 2014: 173)

Tinder can be logged onto via a user’s Facebook account and linked to Spotify and Instagram accounts. This gives the algorithms user information that can be rendered into their algorithmic identity. (Gillespie, 2014: 173) The algorithmic identity gets more complex with every social media interaction, with the clicking or ignoring of advertisements, and with the financial status derived from online payments. Besides the data points of a user’s geolocation (which are indispensable for a location-based dating app), gender and age are added by users and optionally supplemented through ‘smart profile’ features, such as educational level and chosen career path.

Gillespie reminds us how this reflects on our ‘real’ self: “To some degree, we are invited to formalize ourselves into these knowable categories. When we encounter these providers, we are encouraged to choose from the menus they offer, so as to be correctly anticipated by the system and provided the right information, the right recommendations, the right people.” (2014: 174)

“If a user had several good Caucasian matches in the past, the algorithm is more likely to suggest Caucasian people as ‘good matches’ in the future”

So, in a way, Tinder algorithms learn a user’s preferences based on their swiping habits and categorize them within clusters of like-minded Swipes. A user’s past swiping behavior influences the cluster in which their future vector gets embedded. New users are evaluated and categorized through the criteria Tinder algorithms have learned from the behavioral models of past users.

This creates a situation that calls for critical reflection. “If a user had several good Caucasian matches in the past, the algorithm is more likely to suggest Caucasian people as ‘good matches’ in the future”. (Lefkowitz, 2018) This may be harmful, for it reinforces societal norms: “If past users made discriminatory decisions, the algorithm will continue on the same, biased trajectory.” (Hutson, Taft, Barocas & Levy, 2018 in Lefkowitz, 2018)
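The feedback loop described in that quote can be made visible with a small, entirely synthetic simulation: if the system suggests groups in proportion to a user's past matches, an early skew keeps amplifying itself even when both groups would match at the same underlying rate. The group names, starting counts, and probabilities below are invented.

```python
# Entirely synthetic illustration of the feedback loop described above, not Tinder's code.
import random

random.seed(0)
matches = {"group_a": 3, "group_b": 1}   # invented starting history of one user

for _ in range(1000):
    total = sum(matches.values())
    # Recommend a group in proportion to how often it produced matches before ...
    group = random.choices(list(matches), weights=[matches[g] / total for g in matches])[0]
    # ... while assuming an identical 50% chance of an actual match for either group.
    if random.random() < 0.5:
        matches[group] += 1

print(matches)  # the initially favoured group ends up recommended and matched far more often
```

The early imbalance is never corrected, because the algorithm only ever observes the outcomes of its own suggestions.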

In an interview with TechCrunch (Crook, 2015), Sean Rad remained rather vague on the topic of how the newly added data points that are derived from smart-pictures or profiles are ranked against each other, as well as on how that depends on the user. When asked if the pictures uploaded on Tinder are evaluated on things like eye, skin, and hair color, he simply stated: “I can’t reveal if we do this, but it’s something we think a lot about. I wouldn’t be surprised if people thought we did that.”

According to Cheney-Lippold (2011: 165), mathematical algorithms use “statistical commonality models to determine one’s gender, class, or race in an automatic manner”, as well as defining the very meaning of these categories. Such features of a user can be inscribed in underlying Tinder algorithms and used, just like other data points, to render people of similar characteristics visible to each other. So even if race is not conceptualized as an explicit feature of Tinder’s filtering system, it can be learned, analyzed, and conceptualized by its algorithms.

We are seen and treated as members of categories, but are oblivious as to what categories these are or what they mean. (Cheney-Lippold, 2011) The vector imposed on the user, as well as its cluster-embedment, depends on how the algorithms make sense of the data provided in the past, the traces we leave online. However invisible or uncontrollable by us, this identity does influence our behavior through shaping our online experience and determining the conditions of a user’s (online) possibilities, which ultimately reflects on offline behavior.

While it remains hidden which data points are incorporated or overridden, and how they are measured and weighed against one another, this may reinforce a user’s suspicion of algorithms. Ultimately, the criteria by which we are ranked are “open to user suspicion that their criteria skew to the provider’s commercial or political benefit, or incorporate embedded, unexamined assumptions that act below the level of awareness, even that of the designers.” (Gillespie, 2014: 176)

Tinder and the paradox of algorithmic objectivity

From a sociological perspective, the promise of algorithmic objectivity seems like a paradox. Both Tinder and its users are engaging and interfering with the underlying algorithms, which learn, adapt, and act accordingly. They follow changes in the program just like they adapt to social changes. In a way, the workings of an algorithm hold up a mirror to our societal practices, potentially reinforcing existing racial biases.

However, the biases are there in the first place because they exist in society. How could that not be reflected in the output of a machine-learning algorithm, especially one that is built to detect personal preferences through behavioral patterns in order to recommend the right people? Can an algorithm be judged for treating people like categories, while people are objectifying each other by partaking in an app that operates on a ranking system?

We influence algorithmic output just as the way an app works influences our decisions. In order to balance out the adopted societal biases, providers actively interfere by programming ‘interventions’ into the algorithms. While this can be done with good intentions, those intentions, too, could be socially biased.

The experienced biases of Tinder algorithms are based on a threefold learning process between user, provider, and algorithms. And it’s not that easy to tell who has the biggest impact.

References

Bowles, N. (2016). Mr (swipe) right? After a year of tumult and scandal at Tinder, ousted founder Sean Rad is back in charge. Now can he – and his company – grow up?

Carr, A. (2016). FastCompany.com. I Found Out My Secret Internal Tinder Rating and Now I Wish I Hadn’t.

Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of control. Theory, Culture & Society 28 (6), 164-181.

Conti, M. (2017). YouTube.com. TED: The incredible inventions of intuitive AI.

Crook, J. (2015). TechCrunch.com. Tinder introduces a new matching algorithm.

Gillespie, T. (2014). The relevance of algorithms. In Gillespie, Tarleton, Pablo J. Boczkowski & Kirsten A. Foot (eds.) Media technologies: Essays on communication, materiality and society. MIT Scholarship Online, 167-193.

Hutson, J.A., Taft, J. G., Barocas, S. and Levy, K. (2018). “Debiasing Desire: Addressing Bias & Discrimination on Intimate Platforms” Proceedings of the ACM On Human-Computer Interaction (CSCW) 2: 1–18, https://doi.org/10.1145/3274342.

Lefkowitz, M. (2018). Redesign dating apps to lessen racial bias, study recommends.

Liu, S. (2017). In SlideShare: Personalized Recommendations at Tinder: The TinVec Approach.

MLconf (2017). MLconf.com. MLconf 2017 San Francisco.

Sharma, M. (2016). Tinder has a race problem nobody wants to talk about.