Self-Tracking Data and its Commercial Uses

Marieke Hendriks
07/11/2019

Self-tracking data introduces new privacy risks as companies commercialize users' health information. This is a consequence of the 'biovalue' that health and medical information have in the digital data economy. 

On November 1st, 2019, Google announced it was buying Fitbit, a company selling self-tracking devices, for a whopping 2.1 billion dollars. While Google announced that its collaboration with the Fitbit team would bring “innovation in wearables and build products that assist people better” (NOS, 2019), the public's first response to this development was not necessarily enthusiastic. In the comment sections of news articles on social media, comments emerged such as “Is Google going to break their promise right away and use our data for other purposes?”, “There goes our privacy...” and “Google will know how you sleep now.” The main concern of the public appears to be related to privacy. What happens to our data? Is it safe? Who has access to it? And what can be done with it?

This is not necessarily a surprising reaction, considering that Google did not simply buy a company: it automatically bought an enormous database of personal health information generated by self-tracking practices through Fitbit devices. As a result, Google owns not only the company and its customers, but also the data. After the acquisition was made public, Heather Dickinson (head of corporate communications for Google Cloud) immediately issued a privacy statement: “Fitbit will come under Google’s privacy policy”, she said. “Similar to our other products, we will be transparent about the data we collect and why. We will never sell personal information to anyone. Fitbit health and wellness data will not be used for Google ads and we’ll empower Fitbit users to review, move or delete their data.” But even with Google promising that the collected data will not be used for other purposes, doubt and skepticism about its intentions remain among the public.

“Data can either be useful or perfectly anonymous, but never both”.

While the public is becoming more concerned about data privacy and security issues, the popularity of self-tracking practices (especially through the use of apps and wearables) continues to grow. Self-tracking goes beyond being a technology of the self and can also be considered a data practice (Lupton, 2016). While the data collected about the self may be valuable for self-improvement and for gaining insight into one’s habits, this practice is not without risks. In this article, I will discuss the commercialization of self-tracking data and the consequences of this new form of commercial value: the privacy risks that emerge for people who collect health and medical information through self-tracking practices.

Self-tracking data practices

Over the last few decades, technology has become increasingly important in the pursuit and maintenance of good health. Self-tracking health technologies range from mobile phone apps to wearables, and cover a broad range of health aspects: individuals can track their food intake, daily exercise, mood, medicine usage, sleep and countless other variables. Moreover, present-day technology allows users both to analyze their collected data and to share it with others through social media platforms such as Facebook and Instagram.

Self-tracking can be understood as the individual’s use of technology to record, monitor and reflect upon features of daily life (Lupton, 2014; 2016). A dominant discourse in these data collection practices is that of self-optimization. Practitioners of self-tracking can use the information they collect to achieve self-awareness, better insight into their habits and deeper self-knowledge (Wolf, 2010). Using digital technologies to monitor details about an individual’s body is presented as a more reliable and accurate way of tracking health than human perception alone, and therefore as offering greater truth and meaning. This, in turn, offers users the possibility to optimize and improve their lives based on these insights. However, as Lupton emphasizes, it is critical to acknowledge that self-tracking data is not always reliable, which undermines the claim that self-tracking is more accurate. Next to self-optimization, another core dimension of self-tracking is the sharing of the collected data online. Many self-tracking technologies (such as wearable devices and mobile phone applications) allow users to connect and communicate with other people through online communities, either within the technologies themselves or on social media platforms. Sharing brings a socially meaningful dimension to the practice of self-tracking, in which the individual is supported and celebrated in achieving personal goals. This strengthens the feeling of self-improvement through collecting data.

The valorization of self-tracking data: ‘biovalue’ 

However, generating self-tracking data and sharing this data on social media platforms does not appear to benefit only the self. Lupton (2018) emphasizes that personal health and medical data are increasingly used for commercial, research and managerial purposes by entities beyond the individual. After all, most health-related data do not come from clinical information, but from people’s use of search engines, self-tracking devices and social media. The benefits of tracking one's own health and collecting medical information for the aim of self-knowledge, reflection and self-improvement are therefore countered by the potential for this information to be used by other parties, in ways that may be detrimental to the person who generated the data (Lupton, 2017). An example of these disadvantageous practices can be found in the largely privately funded healthcare system of the United States, where healthcare providers regularly use databases to determine whether patients are credit risks (Lupton, 2017). When private health and medical information leaks into the digital data economy without the patient being aware of it, this can have negative effects for the individual: discrimination based on personal health information, such as not receiving adequate medical help, being excluded from health insurance, or having to pay a higher premium.

Data about ‘life itself’ has become exploitable in many ways, creating a new form of commercial value: ‘biovalue’.

With the growing popularity and spread of self-tracking practices, the digital data economy has expanded, and entrepreneurs, researchers, managers, businesses and cybercriminals have recognized the possibilities and value of personal digital data. This data about ‘life itself’ has become exploitable in many ways, creating a new form of commercial value: the so-called ‘biovalue’ (Lupton, 2016; 2018). Biovalue is produced from the surplus commercial value attributed to biological objects such as human body tissues, cells and organs (Vesna, 2000; Rose, 2007). Nowadays, it is not only actual human body parts that have value: data about the human body (and its functions and behaviors) generated through self-tracking practices has also become a valuable commodity. “This incorporation of biometric self-tracked data into the digital data economy and the subsequent exploitation of this data is an instance of biopolitics and biopower,” states Lupton (2018, 13).

Commercialization practices on social media

One of these commercialization practices lies in the personal health and medical data that is uploaded to social media sites. Public health professionals use various social media tools to extract information about health risks and disease outbreaks and to collect data on the incidence of illness and disease (Lupton, 2018). An example can be found in Google’s systems for tracking influenza outbreaks by monitoring online search queries and social media status updates. Li (2015) emphasizes that there are many privacy threats involved in uploading personal health and medical information to social media platforms, including the misuse of data, accidental data releases, disclosures to third parties and user profiling across sites. Here, the commercialization of self-tracking data clashes with a core dimension of self-tracking, namely sharing data online. While this practice of sharing has only social and supportive value for the individual who engages in it, the information they share has financial value for other parties. This not only undermines the potential for self-optimization in self-tracking, but also imposes the risk of harmful effects on the individual who engages in this practice. “The social media platforms that have been developed to promote the uploading of self-tracking data”, Lupton remarks (2018, 13), “their sharing and their subsequent entry into the digital data knowledge economy provide routes for the translation of biodata into a new form of biovalue that generates biocapital.” As is apparent, this introduces a conflict of interest in the goals underlying the practice of collecting data on the self.
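To illustrate the idea behind such query-based disease surveillance, consider the toy sketch below. The keyword list and query log are hypothetical illustrations, not Google's actual data or method; real systems weigh far more signals, but the principle of aggregating health-related queries over time is the same.

```python
# Toy sketch of query-based outbreak monitoring, in the spirit of
# systems such as Google Flu Trends. The keyword list and query log
# are hypothetical illustrations, not real data.
from collections import Counter

FLU_KEYWORDS = {"flu symptoms", "fever", "influenza", "chills"}

# (week, search query) pairs, as might be drawn from a search log.
query_log = [
    ("2019-W44", "flu symptoms"),
    ("2019-W44", "best running shoes"),
    ("2019-W45", "fever"),
    ("2019-W45", "flu symptoms"),
    ("2019-W45", "influenza"),
]

# Count flu-related queries per week as a crude outbreak signal.
signal = Counter(week for week, query in query_log if query in FLU_KEYWORDS)

for week in sorted(signal):
    print(week, signal[week])  # a rising count suggests rising flu activity
```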

Commercial uses of medical and health data raise many questions concerning privacy and ethics.

However, the exploitation of personal health and medical data is not only apparent in the legal circuit. Cybercriminals and hackers also recognize the financial value of self-tracking data, which fetches a high price on the black market: hackers can make ten times more profit from individuals' medical information than from their credit card details (Lupton, 2017). As a result, digital data sets containing personal health and medical details have become a target for cybercriminals. Considering the high value of this information on the black market, it is easy to imagine that this breach of privacy has no positive effects for the individual to whom the health data originally belonged.

There are, then, many ways in which the commercialization of private health and medical data counters the self-optimization aims of self-tracking. This new form of commercial value called ‘biovalue’ imposes several risks on individuals who engage in self-tracking practices.

Countering self-improvement: privacy concerns in self-tracking

The expanding commercial uses of medical and health data raise many questions concerning privacy and ethics. Perhaps the greatest overall risk faced by self-tracking practitioners is the loss of privacy (Barcena, Wueest & Lau, 2014). These privacy concerns can have implications that range from the mildly annoying, such as the use of personal data for personalized advertising, to severe breaches of privacy, such as the spread of sensitive information about individuals to third parties. One of the concerns that emerges is the extent to which people give informed consent when allowing access to this data. Many users of self-tracking applications and devices are unaware of what can, and does, happen with their collected information. Other concerns include inadequate measures to protect data security and prevent breaches and hacks, questions about the ownership of personal data, and the ways in which data may be used to perpetuate social disadvantage. Paul & Irvine (2014) emphasize that the recent rise in the popularity of wearable personal health-monitoring devices (such as Fitbit and Garmin) raises a number of concerns regarding user privacy, especially with regard to how providers of these devices make use of the data obtained from them and how that data is safeguarded.

People who actually generate data don't receive financial compensation for sharing their experiences through online platforms.

In essence, the data collected in self-tracking is for personal use and the goal behind it is non-commercial. However, for-profit companies render the data into a form that is valuable to commercial entities. In exchange, the people who actually generate this data receive no financial compensation for sharing their experiences through online platforms, apps and wearables. What individuals often do not realize is that when they accept the terms and conditions of developers of self-tracking devices and software, they often agree to provide their data, which may then be used by second and third parties for purposes that go well beyond the original intentions of self-trackers (Lupton, 2014). In many cases, developers even explicitly state in their terms and conditions and privacy policies that they hold the right to share or sell data to third parties.

An example can be found in the terms and conditions of Fitbit, which state that Fitbit is allowed to “use and commercially exploit any text, photographs or other data and information you submit to the Fitbit services” and that users “waive any rights of publicity and privacy” (Paul & Irvine, 2014). When individuals do not read the fine print of terms and conditions or privacy policies, they are often unaware of the various ways in which their data is used for commercial purposes. And even if self-trackers actively search for privacy policies in order to examine the privacy of their data, their efforts may be in vain: a report by Barcena et al. (2014) shows that 52 percent of the fitness apps they examined did not make a privacy policy available at all. While the sharing or selling of personal health data is common practice, the individual users who collected this data are often granted far less access to it, since developers limit their ability to export their personal data from the databases.

A rhetoric that companies like Fitbit often rely on is the promise of anonymization: they underline that collected personal data is anonymized and de-identified, and that therefore no privacy issues arise even when personal data is shared or sold. However, such procedures carry the well-documented risk of re-identification: data-mining and data-profiling practices can de-anonymize this information, re-associating previously anonymized data with the identity of the individual who collected it. Ohm (2010) emphasizes that “data can either be useful or perfectly anonymous, but never both”. By combining diverse data sets on people, Lupton (2017) explains, data-mining and profiling methods can begin to reveal detailed and sensitive information about them in ways these individuals may not have anticipated when they granted consent to the use of their data (or without their knowledge). These methods use data to construct detailed profiles or predictive analytics that may be used to make decisions about people’s eligibility for employment, insurance and credit (Lupton, 2018). Even when using anonymized data, companies therefore still put the privacy of users at risk.
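A minimal sketch of such a re-identification (or ‘linkage’) attack is shown below. All records, names and field names are hypothetical illustrations; the point is that the handful of quasi-identifiers left in an ‘anonymized’ data set is often enough to join it against a public data set that carries names.

```python
# Minimal sketch of a linkage (re-identification) attack.
# All records and field names are hypothetical illustrations.

# "Anonymized" fitness records: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth year, sex) remain.
anonymized_fitness = [
    {"zip": "10001", "birth_year": 1985, "sex": "F", "resting_hr": 58},
    {"zip": "10001", "birth_year": 1990, "sex": "M", "resting_hr": 72},
    {"zip": "94105", "birth_year": 1985, "sex": "F", "resting_hr": 64},
]

# A public auxiliary data set (say, a voter roll or social media
# profiles) pairing the same quasi-identifiers with names.
public_records = [
    {"name": "Alice Example", "zip": "10001", "birth_year": 1985, "sex": "F"},
    {"name": "Bob Example", "zip": "10001", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def key(record):
    """Project a record onto its quasi-identifiers."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the public data by quasi-identifier combination.
index = {key(person): person["name"] for person in public_records}

# The join: every unique match re-attaches a name to a
# supposedly anonymous health record.
for row in anonymized_fitness:
    name = index.get(key(row))
    if name is not None:
        print(f"{name}: resting heart rate {row['resting_hr']}")
```

This is why Ohm's dictum holds: the very attributes that make the data useful (location, age, sex) are the ones that make it linkable.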

Responses to data privacy and security concerns

Developers of self-tracking devices and applications are not oblivious to the privacy concerns that individuals have. “In the wake of numerous scandals and events that have highlighted vulnerabilities in personal data privacy, there are a growing number of indications that digital technology developers are beginning to recognize that the public is becoming more concerned about data privacy and security issues”, says Lupton (2017). Apple is an example of a company that takes a strong stance on the importance of its users’ privacy. Apple CEO Tim Cook emphasizes that Apple’s profits come from selling its devices, not its customers’ personal data. He even publicly advocates for new digital privacy laws in the United States, warning that the collection of huge amounts of personal data is harming society. “Private and everyday information is weaponized against us with military efficiency”, he states.

Individuals who engage in self-tracking practices are often unaware of what happens to their data.

However, there are also companies that maintain a different revenue model than Apple's. Companies like Google and Facebook rely on profiting from users' data to generate income. On the one hand, Google implements strong security features to protect users' personal data from unauthorized outsiders; on the other hand, the company continues to maintain the right to access users' data for its own purposes. While guidelines for ensuring data privacy (such as clear and easily understandable privacy policies) came into being in an attempt to win consumer trust, individuals remain skeptical about the privacy of their data. This is not surprising when we consider that many developers of health and fitness tracking apps fail to secure the personal data that is uploaded to them, and even often leak personal data in concealed ways (Adhikari, Richards & Scott, 2014; Huckvale, Prieto, Tilney, Benghozi & Car, 2015).

Golden information

The sharing of data with third parties may be detrimental to the person who generated the data. These harmful implications range from the mildly irritating, such as the use of personal data for personalized advertising, to severe breaches of privacy that can cause serious damage: the spread of sensitive information about individuals to third parties, for example, can in turn lead to financial loss or discrimination.

People’s fears for the safety of their data upon hearing about Google's purchase of Fitbit are therefore not unjustified. The commercial uses of the medical and health data that people collect on themselves or share on social media platforms are expanding rapidly. We can recognize a contradictory development here: the promise of self-optimization that underlies self-tracking is countered by the commercialization of the collected data, and this commercialization gives rise to multiple privacy issues. Individuals who engage in self-tracking practices are often unaware of what happens to their data, who has access to it, or whether it can and will be shared with third parties. In addition, discovering what happens to their data is not as easy for individuals as it should be.

References

Adhikari, R., Richards, D. and Scott, K. (2014). Security and Privacy Issues Related to the Use of Mobile Health Apps. 25th Australasian Conference on Information Systems, Auckland, New Zealand.

Barcena, M. B., Wueest, C., & Lau, H. (2014). How Safe is Your Quantified Self? Symantec Security Response (Version 1.1, August 11, 2014).

Huckvale, K., Prieto, J. T., Tilney, M., Benghozi, P., & Car, J. (2015). Unaddressed Privacy Risks in Accredited Health and Wellness Apps: a Cross-Sectional Systematic Assessment. BMC Medicine, 13(1).

Li, J. (2015). A Privacy Preservation Model for Health-Related Social Networking Sites. Journal of Medical Internet Research, 17(7).

Lupton, D. (2014). Self-tracking Cultures: Towards a Sociology of Personal Analytics. In: Proceedings of the 26th Australian Computer-Human Interaction Conference on Designing Futures: the Future of Design (pp. 77–86). New York: ACM.

Lupton, D. (2016). You are Your Data: Self-Tracking Practices and Concepts of Data. In: Lifelogging (pp. 61–79).

Lupton, D. (2017). Digital Health: Critical and Cross-Disciplinary Perspectives. London: Routledge.

Lupton, D. (2018). Lively Data, Social Fitness and Biovalue: the Intersections of Health Self-tracking and Social Media. In: J. Burgess, A. Marwick, & T. Poell (Eds.), The Sage Handbook of Social Media. London: SAGE Publications Ltd.

NOS. (2019, November 1). Google koopt Fitbit, maker van fitnesstrackers, voor ruim 2 miljard dollar [Google buys Fitbit, maker of fitness trackers, for over 2 billion dollars].

Ohm, P. (2010). Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review, 57(6).

Paul, G., & Irvine, J. (2014). Privacy Implications of Wearable Health Devices. Proceedings of the 7th International Conference on Security of Information and Networks - SIN '14. 

Rose, N. (2007). Molecular Biopolitics, Somatic Ethics and the Spirit of Biocapital. Social Theory & Health, 5(1), 3–29.

Vesna, V. (2000). The Visible Human Project: Informatic Bodies and Posthuman Medicine. AI & Society, 14(2), 262–263. 

Wolf, G. (2010, April 28). The Data-Driven Life. The New York Times.