Algorithmic world-making: a speculative view on representation
The proliferation of artificial intelligence over the last decade has drawn our attention back to automation and hence to algorithms. One can be relatively sure that if one asked around “what is an algorithm?”, the most popular answer would sound more or less like this: “an algorithm is a sequence of instructions or a set of rules that are followed to complete a given task”. A possible variation, or a condensed version, might be: “the algorithm is like a recipe... indeed, it is a recipe”. Generally speaking, such an answer is essentially correct. Yet one cannot help but notice how a mathematical view that frames computation as function-based validates this definition of the algorithm while simultaneously downplaying a broader comprehension of computation itself.
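To make the recipe view concrete before questioning it, consider a minimal sketch of an algorithm in this textbook sense - Euclid's method for the greatest common divisor, written here in Python (a choice of mine, not something the text prescribes) as a plain sequence of steps followed until the task is complete:

```python
# A minimal illustration of the "recipe" view of the algorithm:
# Euclid's method for the greatest common divisor, expressed as a
# bare sequence of instructions repeated until the task is done.

def gcd(a: int, b: int) -> int:
    """Return the greatest common divisor of a and b."""
    while b != 0:          # step 1: repeat while the remainder is non-zero
        a, b = b, a % b    # step 2: replace (a, b) with (b, a mod b)
    return a               # step 3: when b reaches 0, a holds the answer

print(gcd(48, 18))  # -> 6
```

Nothing in this recipe, however, tells us anything about the material conditions under which it runs, which is precisely the point at issue below.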
Beyond the question of what sorts of computational problems an algorithm can solve, it would in any case be reductive to claim that it corresponds to the bare and crude implementation of a set of instructions. Computation, as such, is more than a series of reversible processes from which patterns emerge (Parisi, 2013; Rutz, 2018). If not understood this way, a reality immanent to the algorithm and equally relevant in terms of material processes could be overlooked: that of the incomputable (Parisi, 2015; Galloway, 2015). In brief, the incomputable is everything that allows the code to run, namely what contributes to approaching the algorithm as a performative object, with qualities that are untranslatable and cannot be implemented in a logical sequence such as that of programming languages.
It is therefore essential to grasp how cultural and normative constructs increasingly, and ever more imperceptibly, determine how our lives take shape. First and foremost, in this sense, are the constructs produced by the statistical models at the basis of machine learning, i.e. the continuous and generalized learning from data that is finding application in the most diverse fields and that contributes to redefining notions such as identity, justice, mobility, and health. My aim here, however, is not so much to analyze the functioning of algorithms as to take stock of the epistemic and interpretative ambiguity they induce.
Epistemic algorithmic practices
With the rise of industrialization, artefacts such as images, audio, or video recordings have given rise to archives, that is, technical objects currently shaping and synchronizing our individual and collective experience. Archives, in fact, play a crucial role in the shift towards automation based on algorithmic constructions, and to fully grasp their importance we should consider their testimonial value against the notion of tertiary retention (Stiegler, 2010). In brief, tertiary retention presupposes the materialization of memory into a technical object (e.g. a book or a photograph) that has the ability to enable different temporal dimensions. Hence, technical objects can be past/present/future-oriented and allow “humans to temporalise themselves and their world” (Beardsworth, 2010, p. 210). Therefore, in the historical consolidation of practices of knowing - or epistemic practices - archives should be approached in their double nature of temporal and technical objects.
To understand this better, let us think of the abovementioned audio/visual recordings which, as philosopher Regina Rini (2020) points out, have for some 150 years played an essential role in disseminating and sedimenting the logic of our testimonial activities. Recordings, Rini suggests, actively correct errors in past testimonies and passively regulate everyday witnessing practices, to the point that they act as an epistemic backstop for us: they mark the tipping point beyond which our ability to discern gives way to interpretative uncertainty and ambiguity.
In other words, recordings represent a critical element in determining, for instance, whether something did or did not occur in a certain place and at a certain time. Today, given the proliferation and ubiquity of the digital, the role of recordings in our contemporary epistemic practices seems to have become predominant. Digital technology has amplified this role, turning practically everything in our lives into an archive, a trace, or a dataset, revealing at once the far-reaching implications of recording anything at any time in the ethical debate about big data and AI.
Humans and algorithms
On this note, we notice that the new recordings generated by algorithms further extend the epistemic aim of pre-digital ones to include identification technologies that abstract and represent the human body - from the macro to the micro level - monitor and recreate the movement of humans and objects, and ultimately predict patterns and behaviors. As such, the datafication of bodies, identities, culture, and the environment also sheds new light on the archive itself. The archive acquires a heterogeneous status, reflected, for example, in its use for creating and enforcing digital borders, as well as for developing techniques used in counter-forensics investigations with the aim of "understand[ing] incidents that slip between" sound and images, and of "compos[ing] evidence that is simultaneously real, media-based and testimonial" (Weizman, 2017, p. 100).
In a similar way, the multifold nature of the archive also emerges from the striking juxtaposition between the mobilization of online media-based testimonies - e.g. testimonies of citizens, migrants, and activists during protests, wars, or catastrophes (Schankweiler et al, 2018) - and the simultaneous rise of the post-truth society.
These examples briefly illustrate how the archive in digital times reveals different logics that define its artefacts, instruments, and techniques, and, as a consequence, produce new, conflicting, and complementary kinds of evidence and truth. More specifically, archival evidence seems to build simultaneously on a database logic (Manovich, 2002; Napolitano, 2020), where knowledge is stored and then operationalized, and on an abductive logic (Kitchin, 2014; Napolitano, 2020), in which knowledge is instead obtained via a training process and is thus performative. The latter is of particular interest here, as examining the technical archive from this perspective allows us to return to issues of computation and machine learning, reflecting upon their nature without being specifically concerned with human agency. So far, in fact, we have looked at digital computation and algorithms as a human attempt to “mechanise reason” and build an “instrument of knowledge magnification” (Pasquinelli and Joler, 2020, p. 2) based on an epistemic logic that assumes "a fixed universal correspondence between images and concepts, appearances and essences" (Crawford & Paglen, 2019, p. 34). In such circumstances, when an algorithmic bias emerges we attribute it to the statistical model, while, on second thought, we find out that the bias is inherently human and that the machine in turn embeds and reproduces it.
We might therefore assume that the training dataset is always biased in one way or another, rejecting the rhetoric that statistical models are neutral entities. This further suggests that including an increasing amount of data to address issues of representation and ethics “works as a convenient cover for problems that are […] not separate from but intrinsic to technologies predicated on surveillance, social sorting, and optimization” (Hoffmann, 2021, p. 2). That said, we could actually start focusing on something else, namely the algorithm and its incomputable logic. In this light, we might soon realize that although our algorithm is performing its recognition task - for instance matching a certain image to a certain individual, validating someone's identity from her biometric data, or recognizing a person's speech - in reality there are several ways to invalidate the human hypothesis that a single set of practices can deliver unquestionable evidence of individual identities. In other words, algorithms and computation can be wrong, or better, can undermine, in ever more effective ways, our own epistemic backstop.
Consider the case of the face, which has always been central to theories of identity (Goffman, 1967; Ting-Toomey, 1994; Brown & Levinson, 1987; Scollon, 1995) as well as to both anthropometric and physiognomic approaches in the 19th and 20th centuries. Regarded “as the privileged body part bearing the user’s ‘singularity’” (Azar, 2018, p. 31), and currently the source of most bio-political capital as well as of tensions in social discourse (Leone, 2021), the face is one of the main targets of this epistemic crisis. For instance, researchers have demonstrated that Automatic Border Control (ABC) systems can be successfully attacked using morphing alterations in the pictures of passengers (Ferrara et al., 2014).
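By way of illustration - and only as a minimal sketch, not the actual attack pipeline of Ferrara et al. (2014), which relies on landmark-based warping - a pixel-wise cross-dissolve of two aligned portraits conveys the principle of a morphing alteration. NumPy and Pillow are assumed here, and the file names are placeholders:

```python
# A deliberately simplified sketch of the idea behind a morphing
# attack: blending two (already aligned) face images into a single
# picture that a face matcher may accept for either subject. The
# real attack uses landmark-based warping; this cross-dissolve only
# conveys the principle.

import numpy as np
from PIL import Image

def cross_dissolve(path_a: str, path_b: str, alpha: float = 0.5) -> Image.Image:
    """Blend two same-sized images: alpha * A + (1 - alpha) * B."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    assert a.shape == b.shape, "images must be aligned and equally sized"
    morph = alpha * a + (1.0 - alpha) * b
    return Image.fromarray(morph.astype(np.uint8))

# hypothetical usage, with placeholder file names:
# cross_dissolve("subject_a.png", "subject_b.png").save("morph.png")
```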
Similarly, the spread of apps that modify and retouch the face shows that it is indeed trackable, its characteristics alterable and therefore hackable (Azar, 2018). To make matters worse, Generative Adversarial Networks (GANs), a class of machine-learning systems used to create deepfakes, show the ability of artificial intelligence to further deteriorate our epistemic backstop through an "algorithmically constructed hermeneutic ambiguity” (Azar, 2018, p. 32). Such deterioration comes to the point where, as shown by a recent series of perceptual studies, “synthetically generated faces have emerged on the other side of the uncanny valley” (Nightingale & Farid, 2022).
The aesthetics of uncertainty
The interpretative ambiguity provoked by algorithms is symptomatic. It reveals a lack of existing criteria by which we might assess algorithmic outputs, while the realization that those outputs are the product of (in)computable performances feeds our epistemological uncertainty. We no longer know, to put it differently, whether what we see is true or false. Using the example of GANs, we observe that the method succeeds in undermining our judgment by modeling a probability distribution over real images and sounds - a function on a high-dimensional space - and by creating a contest between two networks, a generator and a discriminator (Goodfellow et al, 2014; Kahng et al, 2018). In order to reach its output, the generator produces fake samples which gradually improve and become ever more realistic, while the discriminator is committed to telling them apart from real ones: the competition at play sees the two influencing each other as they iteratively update themselves (Kahng et al, 2018).
In doing so, they renegotiate a decision boundary that separates real and fake samples, until the discriminator no longer distinguishes between the two different distributions, that is, until the GAN finally learns to select samples from the 'generative part' of its network (Kahng et al, 2018). At this point, while the algorithmic process has reached its optimum, we are instead confronted with a deepfake, an AI-generated deception that leaves us with questions and implications about the nature of truth and authenticity. Yet, displacing the attention from the human to the non-human, this condition of undecidability may equally suggest the emergence of a “new conceptual approach to the aesthetic qualities of artefacts produced using machine learning” (Lee, 2019, p. 261). Indeed, at a closer look, not just the issues of truth and authenticity appear to be at stake, but more generally that of representation within the frequent situations of aporia arising from algorithmic-generated content.
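The contest just described can be sketched in a few lines of code. What follows is a deliberately minimal illustration - assuming PyTorch, and substituting a one-dimensional "real" distribution for images - not the architecture of any deployed deepfake system; it merely shows the generator and the discriminator renegotiating their decision boundary through iterative updates:

```python
# Minimal GAN sketch (Goodfellow et al, 2014, in spirit): the generator
# G learns to imitate a 1-D "real" distribution, while the discriminator
# D learns to separate real samples from generated ones.

import torch
import torch.nn as nn

def real_dist(n):          # "real" data: samples from N(4.0, 1.5^2)
    return torch.randn(n, 1) * 1.5 + 4.0

def noise(n):              # latent input for the generator
    return torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # discriminator update: push D(real) towards 1 and D(fake) towards 0
    real, fake = real_dist(64), G(noise(64)).detach()
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator update: produce samples the discriminator accepts as real
    loss_g = bce(D(G(noise(64))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# after training, generated samples should approximate N(4.0, 1.5^2)
with torch.no_grad():
    samples = G(noise(1000))
print(f"mean ~ {samples.mean().item():.2f}, std ~ {samples.std().item():.2f}")
```

When the two distributions become indistinguishable to the discriminator, the process has reached the optimum described above; with images in place of a one-dimensional distribution, the same dynamic yields a deepfake.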
In this sense, Lee (2019) provides a speculative approach to the logic of machine learning by examining the eigenfaces method used for face recognition, whose result is obtained through the mathematical decomposition of a large volume of images and the approximation of human vision that the analysis entails. The resulting eigenfaces appear to us as a sequence of indefinable and spectral faces that arouse uncertainty, but which underline the existing link between the human tendency to interpret/represent on the one hand, and the modus operandi of the algorithm based on a non-representational logic on the other. In this regard, Lee asks us to evaluate the aesthetic potential of eigenfaces in improving our grasp of the machine logic and its incomputable operations, rather than dismissing our uncertainty as a simple reaction to the representational dysfunctionality of eigenfaces.
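Lee's object of analysis can also be made tangible. Here is a compact sketch of the eigenfaces computation - assuming NumPy, and standing in a random array for a real stack of aligned grayscale faces - showing how the spectral faces emerge from centring the images on a mean face and decomposing the result (via SVD, one standard route to PCA):

```python
# Eigenfaces in brief: flatten each face into a vector, subtract the
# mean face, and take the leading principal directions of the centred
# data; reshaped back to image size, these directions are the ghostly
# "eigenfaces" Lee (2019) discusses.

import numpy as np

def eigenfaces(faces: np.ndarray, k: int = 10):
    """faces: (n_images, height, width) grayscale stack.
    Returns (mean_face, k eigenfaces), each of shape (height, width)."""
    n, h, w = faces.shape
    flat = faces.reshape(n, h * w).astype(np.float64)
    mean_face = flat.mean(axis=0)     # the "average" face
    centred = flat - mean_face        # remove the mean from every image
    # rows of vt are the principal directions in pixel space
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face.reshape(h, w), vt[:k].reshape(k, h, w)

# demo on random data, standing in for a real aligned-face dataset
mean, components = eigenfaces(np.random.rand(50, 64, 64), k=5)
print(components.shape)  # (5, 64, 64)
```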
Interestingly, both eigenfaces and deepfakes come into existence by means of reconfigurations and iterations in which the elements of an algorithmic object appear, disappear, or change position among themselves (Rutz, 2018). A mechanism, in short, that abandons the idea that the algorithm is simply reducible to code and patterns and rather evokes Karen Barad’s intra-actions that iteratively shape phenomena by rearranging their properties from within (Barad, 2007). This way, identities derived from images of bodily features, originating from a fixed set of relations that inform stable representations, seem to be at odds with an entangled view that questions the distinction between object and apparatus and makes us responsible for the formation of particular boundaries. In other words, conceiving identities in terms of stable representations would deny the fact that we perform arbitrary extrapolations of inclusive and exclusive criteria from an assemblage that actually never stops reconfiguring itself. As a matter of fact, GANs and the mathematical analysis that generates the eigenfaces produce their output regardless of our intervention, and, similarly, algorithmic processes neither limit nor prevent the simultaneous co-existence of many different material realities. On the other hand, our choice to make cuts and shape representations is far from inconsequential, and for this reason, as Barad argues, we should be ethically responsible for the creation and use of differentiation mechanisms, especially on a technoscientific basis.
The 'making of worlds' by algorithmic technology
The epistemic confusion that comes from an aesthetics of uncertainty immanent to machinic reconfigurations (Lee, 2019) draws attention to the ontological indeterminacy at the very core of the algorithmic process, a radical openness recalling once more the unnatural nature “not given, not fixed, but forever transitioning and transforming itself” that Barad ascribes to matter (Barad, 2015, p. 401). Not unlike matter, an algorithm explores virtually infinite spatio-temporalities, reconfiguring itself intra-actively. Therefore, if we look at it as a performing object, our aesthetic value-attribution eventually shifts from the assessment of its formal qualities expressed by patterns, prediction, and probabilities (Rutz, 2016; Parisi, 2013) to a speculative logic of the sensible that acknowledges the resistance of the algorithm to “become determinate” (Rutz, 2016, p. 32). Understood this way, algorithmic agency recalls the challenge posed by techno-scientific invention which, in destabilizing the real, transforms it into a modality of the possible (Stiegler, 2010).
A potential opportunity to subvert, or at least intervene in, the real lies at the heart of an aesthetic that contemplates the emergence of algorithmic regimes of truth, and in so doing offers us the possibility of exploring new onto-epistemological dimensions. To paraphrase, the prospect of 'making worlds' is realised through the performative capacity of algorithms, which give shape to alternative and co-existing versions of what we are accustomed to calling 'reality', i.e. the actualisation of the real. The metaphor, or even the identity, of the algorithm as a recipe is thus disrupted by an aesthetics of uncertainty that does not focus on representations considered as fixed givens, but on the production of algorithmic outputs. Such outputs can be better understood by introducing “a more speculative dimension to an otherwise merely functional concept” (Schwab, 2018, in Arlander et al., p. 8) such as that of reconfiguration, which in turn embodies the incomputable nature of algorithms themselves.
Thanks to reconfigurations we become aware of the limits and ontological biases of algorithms-as-recipes, but also of the inseparability and irreducibility of the two poles, the human and the algorithmic. Both, in fact, taking up Barad (2010), appear cut-together-apart according to a vision that places the agency of matter, no longer considered inert, transversally at the origin of all phenomena (physical, natural, social). Shifting the attention to such intra-actions, we are required to examine the uncertain relationship between the atelic logic of the machine and the human quest for meaning, and further, we are obliged to acknowledge the limits of our language in engaging with alien, namely machinic and algorithmic, worlds (Leach, 2020). Therefore, “to encompass and theorise [the] alien”, and arrest “the effects of anthropomorphism”, we should broaden “the terms describing human relation with the world” (Leach, 2020, p. 226), and try to become the object (Harman, 2018) in the attempt to grasp the aesthetic experience of algorithms, that is, the machine sensation (Leach, 2020). In other words, a performative understanding of the machinic - that is, a focus on its incomputable component - gives the bias an ambivalent status, which ultimately exposes its discriminatory and normative patterns but also shows its inherent ability to undermine the ontology of the world the bias itself was supposed to uncritically reproduce, paving the way for the creation of new possible ones.
References
Arlander, A., De Assis, P., Braidotti, R., Kirkkopelto, E., D'Errico, L., Gonzalez, L., ... & Weiberg, B. (2018). Transpositions: Aesthetico-Epistemic Operators in Artistic Research. Leuven University Press.
Azar, M. (2018). Algorithmic Facial Image. A Peer-Reviewed Journal About, 7(1), 26-35.
Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.
Barad, K. (2010). Quantum entanglements and hauntological relations of inheritance: Dis/continuities, spacetime enfoldings, and justice-to-come. Derrida today, 3(2), 240-268.
Barad, K. (2015). Transmaterialities: Trans*/matter/realities and queer political imaginings. GLQ: A Journal of Lesbian and Gay Studies, 21(2-3), 387-422.
Beardsworth, R. (2010). Towards a critical culture of the image: J. Derrida and B. Stiegler, Échographies de la télévision. Entretiens filmés (Paris: Galilée-INA). Tekhnema.
Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage (Vol. 4). Cambridge University Press.
Crawford, K., & Paglen, T. (2019). Excavating AI: The politics of images in machine learning training sets. Retrieved March 25, 2022, from https://excavating.ai.
Derrida, J., & Stiegler, B. (2002). Echographies of television: Filmed interviews. Cambridge: Polity.
Ferrara, M., Franco, A., & Maltoni, D. (2014). The magic passport. In IEEE International Joint Conference on Biometrics (pp. 1-7). IEEE.
Galloway, A. R. (2015). Issue 25: Apps and Affect. Apps and Affect, 10.
Goffman, E. (1967). Interaction ritual: Essays on face-to-face behavior. Garden City, NY: Doubleday.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
Harman, G. (2018). Object-oriented ontology: A new theory of everything. Penguin UK.
Hoffmann, A. L. (2021). Terms of inclusion: Data, discourse, violence. New Media & Society, 1-18.
Kahng, M., Thorat, N., Chau, D. H. P., Viégas, F. B., & Wattenberg, M. (2018). GAN Lab: Understanding complex deep generative models using interactive visual experimentation. IEEE Transactions on Visualization and Computer Graphics, 25(1), 1-11.
Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 1-12.
Leach, T. G. (2020). Machine Sensation: Anthropomorphism and ‘Natural’ Interaction with Nonhumans. Open Humanities Press.
Lee, R. (2019). Aesthetics of Uncertainty. In Proceedings of the Conference on Computation, Communication, Aesthetics & X (pp. 256-262).
Leone, M. (2021). From fingers to faces: Visual semiotics and digital forensics. International Journal for the Semiotics of Law-Revue internationale de Sémiotique juridique, 34(2), 579-599.
Manovich, L. (2002). The language of new media. MIT Press.
Napolitano, D. (2020). The cultural origins of voice cloning. In xCoAx 2020 (pp. 59-73). Universidade do Porto.
Nightingale, S. J., & Farid, H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, 119(8).
Parisi, L. (2013). Contagious architecture: Computation, aesthetics, and space. MIT Press.
Parisi, L. (2015). Instrumental reason, algorithmic capitalism, and the incomputable. In M. Pasquinelli (Ed.), Alleys of Your Mind: Augmented Intelligence and Its Traumas (pp. 125-137). Meson Press.
Pasquinelli, M., & Joler, V. (2020, May 1). The Nooscope Manifested: AI as Instrument of Knowledge Extractivism. Retrieved March 25, 2022, from https://nooscope.ai/
Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers' Imprint, 20(24), 1-16.
Rutz, H. (2016). Agency and algorithms. Journal of Science and Technology of the Arts, 8(1), 73.
Rutz, H. (2018). Algorithms under reconfiguration. In M. Schwab (Ed.), Transpositions: Aesthetico-Epistemic Operators in Artistic Research (pp. 149-176). Leuven: Leuven University Press.
Schankweiler, K., Straub, V., & Wendl, T. (Eds.). (2018). Image testimonies: Witnessing in times of social media. Routledge.
Scollon, R. (1995). Plagiarism and ideology: Identity in intercultural discourse. Language in Society, 24(1), 1-28.
Stiegler, B. (2010). Technics and time, 3: Cinematic time and the question of malaise. Stanford University Press.
Ting-Toomey, S. (Ed.). (1994). The challenge of facework: Cross-cultural and interpersonal issues. SUNY Press.
Weizman, E. (2017). Forensic architecture: Violence at the threshold of detectability. MIT Press.