
08:30-10:00 Session 20B: Explaining the Unexplainable: critiquing machine learning across art, policing and the workplace
Explaining The Unexplainable: critiquing machine learning across art, policing, and the workplace

ABSTRACT. The panel explores critical and creative code approaches to the investigation of machine learning systems, including predictive policing, activity recognition in the workplace, image and language generation models, and AI zine toolkits. Investigating the inherent obfuscation of machine learning models, the presentations examine problems of explainability and ethical harm in contrast with intersectional considerations of care, co-creation, and counter-imaginaries. The panel also draws on methods of the newly forming Critical AI Studies.

08:30-10:00 Session 20C: Machine Consciousness v. Self and Mortality in the History of Ideas

(sponsored by The Department of Philosophy, Purdue University)

Homeostasis and the Mortality of Self-Reading Machines

ABSTRACT. In a recent article published in Nature Machine Intelligence, "Homeostasis and soft robotics in the design of feeling machines" (2019), the neuroscientist Antonio Damasio and his collaborator, Kingson Man, propose that the next step in artificial intelligence design is to imbue machines with a sense of their own "risk-to-self," or mortality. In this way, Damasio and Man argue, machines can begin to become "self-motivated," rather than merely taking on tasks that are externally imposed on them. The implementation of "soft robotics" will, they argue in sum, have transformative effects on the productivity of these machines, as value becomes an internally generated attribute of AI. In this paper, I take up Damasio and Man's proposal so as to reflect on the connection between their notion of homeostasis (as a biological, and now robotic, characteristic) and phenomenological notions of mortality in Heidegger (as a distinctly human characteristic) and Jonas (as an essential element of life itself, wherever there is metabolism). What Damasio and Man are proposing, I argue, is to create machines that are "self-reading," in precisely the way that classical phenomenology describes organic life. Beyond the existential risks that such machines might pose, how can we understand the specificity of "human" mortality in the wake of this proposal?

Evolution, AI, and Samuel Butler’s “The Book of the Machines”: A Pedagogical Perspective

ABSTRACT. Exploring evolution and artificial intelligence with undergraduate students can be both immensely challenging and remarkably rewarding, especially when those students have little to no prior experience with the topics. In this pedagogical presentation, I would like to share my approach to teaching Victorian understandings of artificial intelligence in the undergraduate survey course. I begin the presentation with an overview of how excerpts from Darwin’s On the Origin of Species (1859) can be used to contextualize Samuel Butler’s engagement with artificial intelligence in “The Book of the Machines.” In this chapter of Butler’s novel Erewhon (1872), an imagined author erroneously uses evolutionary theory to argue for the inevitability of machinic consciousness, drawing parallels between biological life and machines in the domains of language, thought, and bodily processes. By explicating his argument from a pedagogical perspective, I demonstrate the ways in which “The Book of the Machines” can be used to cultivate within students a more critical understanding of artificial intelligence and a better appreciation for our humanity in an increasingly autonomous world.

Human and Not Human: Erwin Schrödinger’s Theories of Consciousness; A Physicist in his Literary Context

ABSTRACT. Though primarily known for his contributions to quantum mechanics during the 1920s – and his well-memed thought experiment about an alive-and-dead cat more specifically – the Austrian physicist Erwin Schrödinger crossed disciplinary boundaries to have an outsized impact on the field of biology, ultimately serving as a point of inspiration for the scientists who discovered the structure of DNA. Strikingly, he moved beyond exclusively scientific writings to engage with broader cultural dialogues surrounding consciousness and the nature of life. Turning toward Eastern philosophy, he explored questions of consciousness beyond the individual human, conceptualizing consciousness through the different lenses of the sub-human (cells, organs), the supra-human (the broader society of man, ancestry), and the non-human (animal consciousness); these examples allowed him to move beyond the known and challenge it with the unknown so as to elucidate his ideas about consciousness more broadly. Here, he shares a key parallel with authors of his time, notably Thomas Mann's literary inquiries into questions of life and consciousness in The Magic Mountain (published in 1924) and Franz Kafka's literary experiments on consciousness from the animal view in A Report to an Academy (written in 1917) and Investigations of a Dog (written in 1922). I examine how each of these figures approaches questions of consciousness, life, and evolution, and how their strategies parallel one another. Confounding though Schrödinger's dedication to questions of consciousness may be, it cannot be disentangled from his scientific thinking, nor from his cultural context, especially given his own affection for interdisciplinarity.

08:30-10:00 Session 20D: Possession, Spirituality and Grounding
Simulating Empathy with Spiritual AI

ABSTRACT. This paper explores empathy, and its simulation, in AI designed to be “spiritual machines.” We seek to move away from the notion of elite “profiteering” transhumanists (Boss forthcoming) like Kurzweil and his conception of machine spirituality consisting in its eventual "claim to be conscious, and thus to be spiritual," and instead examine a more “grassroots” case called the Spiritual Chatbot Project (Asante-Agyei, Xiao, and Xiao 2022). In Xiao et al.’s study – which asked "How do people perceive the use of chatbots for spiritual purposes?" and "What preferences do people have for chatbots for spiritual purposes, if any?" – test users of a “spirituality chatbot” indicated that their preferences can be satisfied through the AI's language; specifically, its performance of a type of reflective listening. In other words, participants’ expectation was that a spirituality chatbot should offer a form of cognitive empathy. Cognitive empathy entails a capacity for understanding the other's emotional state and acting in a comforting and situationally appropriate way. Here, the participants seem to shift the valuative problem away from visual displays of emotion or "humanness" – emotional empathy – that many AI and robot studies link with uncanniness, and towards the use of reflective empathy in language. This suggests that the ontology of the AI's "agency" or personhood – whether it claims to be having a “spiritual experience” – is practically unimportant compared to its linguistic and empathic behaviour when it comes to grounding spiritually authorizing encounters. By "authorizing," we mean "deemed as having some claim to arbitration, persuasion, and legitimacy," such that the user might make choices that affect their life or others in accordance with the AI, or might have their spiritual needs met.
Here, the language of the chatbot begins to form infrastructures that foster affective intimacy with whatever is considered sacred (special; set apart [Taves 2009; Knott 2006]), or mediate divine presence (i.e., reproducing the authorizing function of monks, nuns, priests, gurus, imams, rabbis, etc.), or legitimate a particular religious ontology or a particular metaphysical conceptualization of the user's spiritual care needs.

Artificial Possession: Exploring the Imbrications between Possession Literature and Brain Chips

ABSTRACT. Biotech moguls are currently exploring the idea of merging human intelligence with artificial intelligence in the hopes that implanting brain chips in organic brains will lead to a pseudo-immortality, prevent technological unemployment, and create a global consciousness, while also retaining ultimate control over the artificial intelligence itself. However, in my presentation, I aim to imbricate the spiritual, mythological, and folkloric traditions regarding possession with the purportedly altruistic use of brain chips professed by companies like Neuralink, Facebook, and Kernel.

The literary implications surrounding possession-lore (e.g., zombie, vampire, Afro-Caribbean, and Ancient Greek mythology) suggest that consciousness is something that can be forcibly taken from a conscious individual by possessing that individual. Analysis of these literary traditions reveals that at the heart of each of these possession myths is an imbalance of power resulting from dualistic hegemonies, further complicating the mind-body problem.

Throughout this presentation, I use possession-lore to speculate upon the connection between consciousness and possession as it applies to inserting brain chips in conscious individuals. I posit that brain chips are a form of possession similar to the stories that have been woven throughout literary history—stories warning of the inevitable (spiritual, physical, social) death that occurs when one gives up one's selfhood to be possessed by another. In the aforementioned lore, the possessed lose their consciousness in order to do the bidding of another entity, an idea that cannot be ignored. Ultimately, the literature surrounding possession is clear. Dualistic forms of oppression expressed in literature, especially as they manifest in some form of possession (whether of a commodity or a body), suggest an important and scary connection between this literary history and the projected future of artificial intelligence merging with biological consciousness.

Mindreading is a Red Herring: Grounding neuro-hype by following a neurocognitive object

ABSTRACT. Since the turn of the 20th century, scientists and journalists have anticipated that one of the earliest brain imaging technologies, electroencephalography (EEG), would allow for mind-reading (Borck 2005). A century later, this dream (or nightmare) of transparency continues to shape research and development of technical applications for EEG in legal, military, and clinical domains. What is the status of this promise? I offer an ethnographic, semiotic analysis of EEG to ground neurohype about the promise or peril of mind-reading. By examining how brain activity comes to mean in the situated research practices of cognitive neuroscientists, I show that event-related potentials (ERPs) are recursive, context-embedded semiotic objects, for which scientists use EEG to write and read messages of their own crafting in order to stabilize their inferences about cognition. ERPs are fiddly to evoke, and scientists don’t agree about what they index. These contested neurocognitive objects nonetheless make their way into various engineered assemblages. I follow one object, the P300 ERP component, into several contemporary applications: brain-fingerprinting, brain-computer interfaces (BCIs) for non-speaking disabled people, and brain-enhanced weapons systems. My analysis and tracing of the P300 support the argument that these technologies are concerning for reasons other than their as-yet-hypothetical ability to invade one’s private thoughts: what matters is what these technologies are asked to do and the unequal impacts when they work or fail. I argue that promoting the dream or nightmare of mind-reading is a red herring, laundering concern toward individual privacy and away from systemic impacts.

08:30-10:00 Session 20E: Neuro networks: human, animal and other organisms
The Human/e and the Rabbit Hole: The Recycling of Social Darwinist Tropes and Animalistic Metaphors in Contemporary Pop Tech Critique

ABSTRACT. In the late 2000s and early 2010s, to satisfy venture capitalists’ shift toward prioritizing user growth over profit, recommendation systems and behavioralist designs came to dominate emerging digital platforms for the purpose of increasing engagement. Technologists came to describe their work in terms of hooking (Eyal 2014), captivating, or trapping users’ attention, as a hunter would their prey (Seaver 2018). Since the 2016 ‘techlash,’ tech critique has gone mainstream, with former technologists like Tristan Harris (star of The Social Dilemma) taking an outsized role in debates over what should be done with big tech and raising ambiguous notions of humane technology as ideals for technologists and policymakers to strive toward. These “tech humanists” (Weigel and Tarnoff 2018) frame “Godlike” technology as hacking the supposedly inherent weaknesses of the brainstem, where the non-human prey they previously sought to hook lie at the bottom of the outdated tripartite model of the brain. In this figuring, the mind itself becomes the site of an internal conflict between the human animal, or paleolithic human, and the human/e. This paper aims to map the deployment of animalistic metaphors like the “lizard” or “reptilian” brain, Skinner-box pigeons, rabbit holes, and other Social Darwinist hierarchies, as well as anxieties over the “tribal mind” or even a “Chinese … brain implant” shrunken into the brain, all used to describe the “attentional serfdom” (Williams 2018; Schüll 2022) or “human weakness” that so-called humane technology is tasked with taming. Here, I take cues from feminist STS scholars who have theorized white fear over Artificial General Intelligence taking over humanity (i.e., “the singularity”) as the cultural byproduct of legacies of exploiting racialized and gendered populations as animated technologies (Crawford 2016; Noble & Roberts 2019) or “surrogate humans” that uphold the liberal subject (Atanasoski & Vora 2019).
In contrast to recent scholarship on the movement of ‘tech humanists’ that seeks merely to correct misinformed analysis, I argue that these recycled tropes must be understood as potent myths: they evoke a conservative politics of nostalgia that romanticizes a never-existent capacity for “attentional sovereignty” (Burkeman 2015; Seaver 2019; Rivers 2020), resemble a “Make Attention Great Again” approach, and play into emerging US technationalism (Tan 2019).

Animals and the Literature of COVID-19

ABSTRACT. In her recent article “Aliens, Plague, One Health, and the Medical Posthumanities,” written for the 2021 “Science, Technology, and Literature during Plagues and Pandemics” special issue of Configurations, Lucinda Cole laments the absence of engagement with “the nonhuman animal world” in the COVID-inspired essays that surround her own. Following her lead, this paper argues that the epidemiological history of COVID-19 offers many opportunities to rethink our relationships with the nonhuman world around us. This history might remind us that, because of our shared genetic heritage, humans and other animals are equally subject to microbial infection, and the diseases that affect nonhuman communities have clear analogs in, and often cross over into (and back out of), human communities. These crossings, in turn, might throw into relief how intertwined our worlds are, highlighting the fact that the health and welfare of our nonhuman community members directly affect the health and welfare of our human communities.

That said, as I will also argue, these possible histories face stiff resistance from emerging cultural narratives embodied in COVID poetry and fiction, which tend to reinforce the differences between the human and the nonhuman and the importance of keeping those two worlds physically and conceptually separate from one another. Setting this dominant tendency alongside Linda Gregerson’s quite different poem “If the Cure for AIDS, [sic]”, I show why we should consider the current COVID pandemic as an ongoing inter-species event and what difficulties we will face as students and teachers of literature in doing so.

08:30-10:00 Session 20F: Images, Films, Social Media
You, the World, and I (2010) & perusing the generated archive

ABSTRACT. I propose to speak about the computationally generated archive. To give an example, I look to Jon Rafman’s digital retelling of the Orpheus legend in You, The World, and I (2010). In this six-minute short, the narrator reports that he has lost contact with the love of his life. After scouring the Internet, he stumbles upon a photograph of Her -- an unnamed, nude woman facing the Adriatic Sea -- on Google Street View (GSV) (Figure 1). Then, hoping that the GSV camera captured more accidental photographs of Her, he scours images of the cities they visited before. His quest comes to a screeching halt when he finds the photograph has disappeared, forcing the film to cut to black (Figure 2). After ten seconds of mournful pause, the video becomes available to play again.

For this paper, I wonder aloud about the relation between reading, replay, and reproduction. I begin with Jaimie Baron’s “archive effect,” in which filmic material may simulate the experience of “reading” an archive by appropriating found footage through temporal and intentional disparity. I posit that the generated archive goes one step further, however, in that it is inspired primarily by the Derridean death drive. Here, a paradox haunts the very idea of computational preservation. Even though computational archives seek to perfectly “preserve” their contents, they instead replay, reproduce, and thus “regenerate” the archive upon (re)visitation. Reading minds -- or at least, traces of past minds -- is more recursive than it seems.

A few scale tricks and how to avoid them

ABSTRACT. In my recent article, “Scale Tricks and God Tricks,” I build on and respond to recent work on scale by developing the idea of scale tricks in relation to critiques of the Eameses’ “Powers of Ten.” This presentation will sharpen that idea by describing particular scale tricks, giving examples of these tricks from both science and popular media, and analyzing why they are tricks using the framework I develop in my larger work, Scale Theory. I will highlight seven scale tricks: losing sight of the comparative measure, which produces scaling effects such as those used in Godzilla; failing to change scale in an effort to show something about scale (as seen in the online graphic “1 pixel to the moon”); scaling objects without seeing that one is changing perspectives (as in Gulliver’s Travels--what Anna Tsing calls nonscalability); selectively switching perspectives on the same object across a threshold (e.g., when we speak of COVID as both virus and pandemic without acknowledging the scale shift); adding hidden criteria to a scalar threshold (e.g., when we think economics is equal to the planetary scale); forgetting the scale at which one stands (as in claims to power via satellite imagery); and scalar synecdoche (substituting oneself for a scalar object). For each of these examples, I will discuss how a careful accounting of scale permits us to say that these are tricks and examine techniques for avoiding them. The last part of the talk will then open a discussion of depictions of scale and scalar relations that attempt, at least partially, to avoid or counter these problems, with brief overviews of the scaling operations in Terrence Malick’s 2011 Tree of Life and Primack and Abrams’s The View From the Center of the Universe.

Reading between the Lines of Sino-Anglo, Digital-Printed, Aural-Visual, Neural-Screen, and Bot-Bod

ABSTRACT. Streaming China’s current TV dramas nearly in real time on my laptop after dinner has been an eye-opening, mind-blowing experience. My daughter, home for Christmas, raised her eyebrows over my nightly viewing of kitschy, melodramatic series, a serious sign of late-stage addiction, if not regression. On the other hand, my two recent scholarly monographs have drawn in large part from Chinese TV series, both period costume dramas and contemporary rom-coms, still awaiting the transnational intellectual communities’ acknowledgement, assault, or indifference. My viewing and recollecting in critical analysis entail a methodology of reading between the lines, between Sino-Anglo, Digital-Printed, Aural-Visual, Neural-Screen, and Bot-Bod cross-hatchings. Googling such streaming platforms immediately presents issues of accessibility, not because a credit card payment is required, but because the menu of a plethora of shows—Chinese, Taiwanese, Korean, Japanese, and Thai—is listed entirely in Chinese, thus barring any non-Chinese-speaking visitor. Clicking randomly on any picture accesses the drama itself, yet the absence of English subtitles dooms such viewers to what amounts to a “silent movie.” By contrast, viewers in China and in the Chinese diaspora sit back, their bodies relaxing, to be entertained, washed over, even brainwashed by the bots, the algorithmic delivery of digital signals on the screen, with Chinese dialogue oftentimes dubbed in post-production and subtitled in simplified Chinese at the bottom of the screen. The Chinese subtitles accommodate different levels of proficiency with spoken Putonghua, or Mandarin, the native tongue of northern Chinese, yet unfamiliar to southern Chinese and many of those overseas.
The viewing, the dubbing, and the subtitling already intimate a split screen, a split China, one that feigns to be whole while it is full of holes, one that, like Frodo’s One Ring in The Lord of the Rings, “in the darkness binds” Chinese and diasporic subjects while blinding and subjugating them. The mind game over Chineseness, otherwise known as President Xi’s “China Dream,” is hereby exposed in English. These static printed words on the dynamic TV series seem a useless afterbirth to some, yet they may carry medicinal, even magical, properties for millennial Sinophone-Anglophone bodies politic suffering each other through fever and chills.

10:00-12:00 Session 21B: Workshop: “Terms of Service Fantasy Reader”

Participation is limited; please sign up for this workshop via the “Workshops” tab on the conference website.

Terms of Service Fantasy Reader

ABSTRACT. Terms of Service Fantasy Reader is an art and research project that interrogates the regulation and negotiation of the power of digital selfhood in contemporary interface culture. It functions as a public rehearsal-performance workshop and a resulting installation that comes into being from a growing audio archive of collective dramatizations of the terms of service of various services. Accompanying the vocalizations is a continuous collection of examples of mobile app messages showing specific uses of language, whether the employment of narratives of an ethics of care; an emotional, commanding, or humorous tone; or notifications around verifying or disabling accounts. Terms of Service are legal documents that get "signed" on a daily basis when downloading and installing an app, in most cases without being read: users accept the silent ultimatum for participating in contemporary forms of socialization, most commonly leaving the default settings unchanged and unquestioned. These conceptually demanding, long, and often hermetic documents both define and reinforce the invisible borders of the user's options for self-expression. At the same time, they are one of the main battlegrounds for user rights. Terms of Service Fantasy Reader offers the never fully available luxury of time to experience the content of various Terms of Service through a collective expression of unspoken, unspeakable, unclear, and affective relationships toward spaces of algorithmic co-creation and regulation of the self, as well as of community.

10:30-12:00 Session 22A: Three Artists Visibilizing the Invisible: Hilma af Klint, Athena Tacha, and Susan Hiller
Three Artists Visibilizing the Invisible: Hilma af Klint, Athena Tacha, and Susan Hiller

ABSTRACT. This panel brings together art historians to discuss three women who were deeply engaged with contemporary science and, in certain cases, the occult: Hilma af Klint (1862-1944), Athena Tacha (b. 1936), and Susan Hiller (1940-2019). An interest in invisible realities is a pervading theme in their works, with af Klint developing a pioneering abstract style of painting in response to the early 20th century's frequent co-mingling of scientific discoveries, such as X-rays, with spiritualism and the occult. Athena Tacha has been one of the very few artists to follow closely quantum physics and other recent scientific developments, incorporating these new ideas in her works. And Susan Hiller's focus on the invisible and phenomena such as auras has produced a mode of "paraconceptualism" that parallels the renewed openness to the occult among later 20th-century artists.

10:30-12:00 Session 22B: Biological v. Artificial Intelligence
Neural Networks and AI

ABSTRACT. Contemporary neural networks still fall short of human-level generalization, which extends far beyond our direct experiences. In this paper, I argue that the underlying cause of this shortcoming is their inability to dynamically and flexibly bind information that is distributed throughout the network. This binding problem affects their capacity to acquire a compositional understanding of the world in terms of symbol-like entities (like objects), which is crucial for generalizing in predictable and systematic ways. To address this issue, I propose a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs (segregation), maintaining this separation of information at a representational level (representation), and using these entities to construct new inferences and behaviors (composition). My analysis draws inspiration from a wealth of research in neuroscience and cognitive psychology, and surveys relevant mechanisms from the machine learning literature, to help identify a combination of inductive biases that may allow symbolic information processing to emerge naturally in neural networks. I believe that a compositional approach to AI, in terms of grounded symbol-like representations, is of fundamental importance for realizing human-level generalization.

Plants, AI, and the Animation of Life

ABSTRACT. Suzanne Simard’s research into the social lives of plants has drawn attention to forests’ vast, underground mycorrhizal relationships, which have been variously described as a “wood wide web,” a vast “neural network,” and a system of “talking trees.” These terms not only capture the forest’s ecological interconnectedness, they also convey the ways in which discourses of artificial intelligence often frame discussions of plant intelligence. This paper engages with contemporary discourses of plant animacy to reveal the complexity of the relationship between plant and artificial intelligence beyond what reading vegetal awareness through the lens of machines can offer. Analyzing science articles on plant communication alongside science fiction films ranging from Invasion of the Body Snatchers (1956) to After Yang (2021), I show how the figure of the plant unsettles distinctions between the machinic and the organic, representing the moment when consciousness passes into being. Plants, though, are more than just a symbol for conditions of emergence. As demonstrated by the cultural objects in this paper, narratives of plants animate the biopolitical threshold whereby life is suddenly enlivened and whereby intelligence is rendered legible.

Bio: Kathleen M. Burns holds a PhD in English from Duke University. Her research has been supported by fellowships from Duke University, the American Council of Learned Societies, and the American Philosophical Society. Beginning in fall 2022, Dr. Burns will be Harvey Mudd’s Hixon-Riggs Early Career Fellow in Science and Technology Studies.

The problem of being a multiply conscious cerebral subject

ABSTRACT. If I can create a perfect copy of myself in a computer, does it share my consciousness? If there exists a parallel universe identical to this one, but where I made one alternative choice, is that person still “me” or an entirely different stranger? To probe these questions, I use the lens of two recent works: one television show (Upload) and one film (Everything Everywhere All at Once). The former centers on concepts regarding the potential for “uploading” a human consciousness to a virtual world, and how the perspective of being conscious – and, indeed, whether a consciousness is truly unique – is inextricably linked to the embodied form of the consciousness. The latter revolves around a “multiverse” story wherein the main character is able to draw on experiences of herself in other lives, often toying with the idea of what it means to be the “same person” under any other conditions.

In this analysis, I draw primarily on the works of Eric Schwitzgebel concerning the nature of consciousness and Fernando Vidal about the experience of a cerebral subject (i.e., a brain that knows it is a biological being). I also make use of the narrative subject in the vein of Judith Butler and Adriana Cavarero as an explanation of why storytelling is a rich and enlightening way to approach these questions that are, at least superficially, perhaps more philosophical or scientific than artistic in nature.

10:30-12:00 Session 22C: AI in the Workplace: Labor Ethics, Surveillance and Algorithms of Oppression
Dorothy West’s “The Typewriter” and the Racialized Labor of Speech Recognition

ABSTRACT. When the Stanford University Institute for Computational & Mathematical Engineering released its 2020 report on “Racial Disparities in Automated Speech Recognition,” many cultural critics read it as a welcome confirmation of their critiques of ostensibly neutral algorithmic processes. Particular grievances, such as Amazon Alexa’s high word error rate with African American Vernacular English (AAVE), impel a growing coalition aimed at algorithmic justice, as seen for instance in Safiya Umoja Noble’s Algorithms of Oppression. In this paper, I argue that in order to grasp the incipient challenges of digitized voice, we must become conversant in the ways communication technologies have historically imposed the condition of inaudibility on marginalized vernaculars. As scholar and activist Joy Buolamwini has put it, “the past dwells in our algorithms.” I read contemporary developments in Natural Language Processing in light of the inaudibility of AAVE and against a long history of language processing technologies. As a case study, I consider Dorothy West’s short story “The Typewriter,” in which a young typist and her machine facilitate her father’s fantasy of social transcendence and its ultimate disillusionment. I adumbrate the historical arc from this Harlem-Renaissance-era narrative to the present moment, with special attention to the interchanges of Black women’s auditory labor and digital dictation. This arc reveals coextensive traditions in which Black women assert creativity over and against the labor of passive transcription, while leveraging the deafness of listening technologies for expressive ends.

Machine Learning and the Rise of Computational Model Systems

ABSTRACT. While improvements in machine learning outcomes across domains have captured headlines in recent decades, these advancements were quietly underwritten by a more profound change in the form of knowledge production following the Second World War. The emergence of computational modeling, exemplified by the Cellular Automata theory of John von Neumann and Stanislaw Ulam, also enabled the design of complex simulated systems for heuristic experimentation (Burks 1971). By the 1980s, these model systems were revolutionizing fields as disparate as international relations, economics, and evolutionary biology. Together with refinements to the theory of non-cooperative games by John Nash, John Harsanyi, and Reinhard Selten, they provided a compelling account of how decisions by individuals can lead to system-wide change. Capaciously termed Agent-Based Modeling (ABM), the experimental use of model systems was touted for “its ability to analyze problems by simulation when mathematical analysis is impossible” (Axelrod 2008). Crucially, for Robert Axelrod and other influential practitioners of ABM, inspiration came from the analysis of chess and checkers AI programs, such as the work of Arthur Samuel. And as the writings of Norbert Wiener make clear, these same AI programs were also the inspiration behind the cybernetic discourse on learning machines (Wiener 1948; 1950; 1964). The purpose of this talk will be to contextualize contemporary machine learning within this longer history of computational systems media. I argue that machine learning research depends on a prior authority of heuristic modeling as experimental knowledge, as much a product of the computer chess tournament as the simulated laboratory.

10:30-12:00 Session 22D: Sights and Sounds in Bytes
Quantum Theater: the dramatization of physics

ABSTRACT. Karen Barad's seminal Meeting the Universe Halfway produced many concepts, including entanglement, intra-action, diffraction, and the cut, that have been widely deployed across the humanities and social sciences, and continue to shape the expanding discourse and practice of New Materialism(s). This paper contends that the full force of these concepts has yet to be understood. Arguing against the dominant reading of Barad's work as providing a materialist foundation for feminism and the humanities more generally (one that is rooted in quantum physics), the talk demonstrates how the lasting impact of their work lies instead in its radical transdisciplinarity: the bringing together of physics, theatre, rhetoric, design, art, area studies, and more. This talk takes up quantum theory and theater, looking specifically at Barad’s reading of Michael Frayn’s play, Copenhagen, to argue that their use of the play to open the book can be read as a lens, frame, or proscenium, if you will, for watching the quantum drama unfold. In short, using the work of Gilles Deleuze, I show how Barad uses theater to dramatize quantum concepts.

What was Mathematical Reading?: On the Appel-Haken Proof of the Four-Color Theorem (1976)

ABSTRACT. In this paper, a computer scientist interested in math and a literary scholar with training in the same come together to discuss a central question of this conference (how does AI change our understanding of “reading” and of minds?) through the consideration of reading in a field amenable to both people: mathematics.

As a field of the imagination conjured up by language (Rotman), math bears similarities to the field of literary fiction. And yet this field of the imagination seems endlessly effective (Hirsh); in computer science, fields of logic and category theory have been useful in clarifying the structure of programming languages. Most importantly for this paper, math, like literary fiction, has long been associated with human intellect and creativity.

We explore how computer technologies, increasingly used in mathematical proof, challenge and reveal implicit beliefs about what is sought in the reading of proofs. We take as our case study the “Four Color Theorem,” which states that any map can be colored with four colors such that no two adjacent regions share a color. It was proven in 1976 by a brute-force approach in which a computer checked the theorem against 1,936 spatial configurations. This method of proof was dissatisfying to many mathematicians, who were reluctant to accept what had been presented as a proof because of the computational methods involved. We examine the nature of this dissatisfaction by exploring the controversy over the proof in two ways: 1) in the conversation it generated about the nature of proof in the 2-3 years after its publication, and 2) in AMS papers that revisit the proof decades later.
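For readers outside mathematics, the brute-force idea behind such computer checking can be sketched as follows. This is a deliberately toy Python illustration: the actual Appel-Haken proof checked 1,936 reducible configurations rather than coloring individual maps, and the small map below is an invented example.

```python
# Brute-force search for a proper 4-coloring of a map, where the map is
# given as an adjacency list: region -> set of neighboring regions.

def four_color(adjacency):
    """Return a region->color dict (colors 0-3) in which no two adjacent
    regions share a color, or None if no such coloring exists."""
    regions = list(adjacency)

    def assign(i, colors):
        if i == len(regions):          # every region colored successfully
            return dict(colors)
        region = regions[i]
        for color in range(4):
            # try this color if no already-colored neighbor uses it
            if all(colors.get(nb) != color for nb in adjacency[region]):
                colors[region] = color
                found = assign(i + 1, colors)
                if found is not None:
                    return found
                del colors[region]     # backtrack
        return None

    return assign(0, {})

# Four mutually adjacent regions: a case that needs all four colors.
k4 = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
      "C": {"A", "B", "D"}, "D": {"A", "B", "C"}}
coloring = four_color(k4)
```

The theorem's guarantee is that for planar maps this search never fails; the controversy the paper examines concerns whether a machine's exhaustive checking of such facts counts as a proof at all.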

Our analysis suggests that the use of computing in mathematics led to reconsiderations of the definition of mathematical proof. These discussions moved the concept of proof from mere verification to something more, requiring that the proof be capable of being witnessed by the human mind. They draw on Enlightenment notions of mathematics, and reveal the centrality of mathematics to the concept of the universal human, as well as the contradictions inherent in thinking of mathematics both as a deductive system of thought and as a creative act. This paper contends that when computers perform acts thought to be unique to humans, these acts do not always become seen as acts performed by both humans and computers, but sometimes become redefined to remain in the realm of the uniquely human.

Note: This paper came out of a series of discussions between the two authors after the end of the spring semester. We realize the date has passed, but wanted to send it in to see if there was interest. Although "Single Paper" is checked, we'd be happy to be placed in panels, roundtables, or lounges that the organizers feel might be productive.

Neuroscience, Synaesthesia and Extraordinary Experience in Art

ABSTRACT. The difference between seeing auras, experiencing psychic-type phenomena, and synaesthesia, considered alongside neuroscience

Having had personal experience with all three phenomena, I believe I am in a unique position to reflect on those experiences and their qualitative, perceptual differences. Within my experience I am aware that I would not be able to differentiate between these experiences and give an impersonal account of the differences if I had not known the difference at a personal level. I am an artist, and through my painting I discuss seeing auras and visionary experiences. I show in paint what the Aura looks like around people, places, and things.

I also investigate and research the visionary in art as a genre.

Neuroscience is able to observe the brain and operate on the brain. The scanning of the brain with MRI machines in particular reveals only grainy images which bear little relationship to genuine knowledge. Experienced neurosurgeons can form a judgement over time as to what they consider might be happening in the brain. Medical staff who are new to a patient make assumptions about scans and raise concerns, where equivalently trained staff with knowledge of the patient's history can allay those concerns. There are happenings in the brain which neuroscience does not comprehend at a material level.

Furthermore, the issue of consciousness and the academy needs much greater exploration.

Synaesthesia is a very broad topic and is defined as ‘a perceptual phenomenon in which stimulation of one sensory or cognitive pathway leads to involuntary experiences in a second sensory or cognitive pathway. People who report a lifelong history of such experiences are known as synesthetes.’ (Webster Dictionary) I wondered what this neural condition, often considered to be on the borderline between neural and psychic phenomena, might be like. Artists such as Kandinsky are known as synesthetes, with colour and music interweaving in form: ‘the brush with unbending will tore pieces from this living colour creation, bringing forth musical notes as it tore away the pieces’ (Kandinsky).

It took me a while to realise that I experience a form of synaesthesia, mirror-pain synaesthesia, where I experience others’ pain. This has led me to a knowledge of the very distinct experiential difference between synaesthesia, which I do not put into my art for good reasons, and extraordinary experience. I will discuss this in my talk.

I am currently working as a doctoral researcher in fine art at The University of East London with Professor Fae Brauer. My research is on The Visionary in Art.

10:30-12:00 Session 22E: Roundtable: “Art, Science and Technology Studies: An Emergent Field”
Art, Science & Technology Studies: An Emergent Field

ABSTRACT. This roundtable panel features the four editors of the newly published Routledge Handbook of Art, Science & Technology Studies. The Handbook, and the process by which it came to be published, will serve as a jumping-off point to discuss the study of art and science as its own distinct field with a set of organizing ideas, foundational theories, and methods. The panelists will begin by discussing the nature of the handbook, focusing on some of the organizing principles and methods it proposes for thinking about art–science across the sciences, social sciences, humanities, and arts. They will also discuss some of the challenges inherent in bringing together such varied perspectives and methodologies, as well as some of the ways they envision establishing an open field not meant to settle questions about the relationship between art and science, but to raise them. After this brief introduction, the roundtable conversation will be open to all participants. Questions and themes for consideration in the open discussion will include the role art, science and technology studies might play in exploring novel technologies, such as artificial intelligence, and urgent problems, such as the climate crisis. Finally, the roundtable will offer an opportunity to look to the future, and to examine the ways that multiple forms of knowledge, drawn from both the arts and sciences, can transform the way we think about science and technology studies as well as the ways science and technology studies can inform arts practice and theory.

13:00-15:00 Session 23: Workshop: “Radio Play: Live Participatory Worldbuilding with GPT-3”

Participation is limited, please sign up for this workshop from the conference website “Workshops” tab.

13:30-15:00 Session 24D: AI, the Reading Mind, and Human Exceptionalism
Embodied Cognition and Artificial Intelligence: What AI Cannot do (and Maybe Never Will)

ABSTRACT. All living beings are continually creating themselves—producing, inventing, modifying, renovating themselves. They need no external intervention in order to continue to exist. A living being lives; it does not merely exist, like a machine. What matters are the ongoing interrelationships among mind, brain, body, and context: the way we create our cognitive worlds, the way in which we make our pragmatic knowledge in relationship with external reality, our social as well as our physical external reality. Artificial Intelligence has no mind/brain or body, does not live in a context, has no personal experiences or memories, and cannot imagine. So, among the things that AI cannot do are experience the images and feelings that readers generate as they read a novel, play chess (Deep Blue, I will argue, did not really play chess when it defeated Garry Kasparov in 1997), or improvise dance moves to whatever music is being played. The human brain’s complexity, power of thought, and versatility are not matched by any machine. I will discuss these and other examples of the fundamental differences between a human being—an autopoietic system—and an AI machine—a tool made by human beings. No AI device on earth will match the versatility of the embodied human brain for a long time to come, and maybe never will.

Writing AI Reading: Kazuo Ishiguro’s Klara and the Sun as Allegory of Profession

ABSTRACT. In both fiction and scholarship on artificial intelligence, it has become commonplace to ask whether humans themselves can be understood as qualitatively different from advanced AI systems. As our media produce images of ever more sophisticated AI, they rouse cultural anxieties about whether our own human intelligence can be programmed to follow predetermined loops, to pursue predetermined goals, and to replicate surplus-value-generating algorithms. In short, AI serves as both the consummation of human enlightenment, and the most fundamental challenge to the basic tropes of human exceptionalism.

The fiction of Nobel-winning novelist Kazuo Ishiguro has always ruthlessly interrogated tropes of human autonomy, but his most recent novel, Klara and the Sun, is his first to do so via an AI narrator. Many initial interpretations of the novel have read it as a proleptic diagnosis of the social impact of AI. Reading Klara in the context of Ishiguro’s previous narrators, however, my paper shifts focus to a presentist diagnosis of our contemporary ideology of professionalism. I read the artificial Klara as an extreme fantasy of AI as a living intelligence that could be absolutely engaged in its profession, without any costly concessions to the “work/life balance” demanded by human beings. The novel confounds this fantasy, however, by suggesting that it is precisely Klara’s need to read the world to perform her function—to decode, interpret, and classify experience—that leads her beyond any predetermined purpose and enables an autonomous volition to emerge. The novel imagines reading as a psychically generative process that renders humanity impossible to ever fully instrumentalize.

Who’s Afraid of Artificial Intelligence? Performativity, Difference, and Aesthetics in the Turing Test

ABSTRACT. Is the Turing test still relevant for artificial intelligence today? Many scholars of computer science would likely argue that ‘the imitation game’ Alan Turing outlines in his seminal 1950 paper “Computing Machinery and Intelligence” has long been irrelevant to the praxis of developing new AI technology. Stuart Russell and Peter Norvig write that “AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence,” gesturing toward an emerging trend in research of decoupling the search for “intelligence” from imitation of the human. On the other hand, the famous thought experiment has permeated public consciousness and makes constant appearances in news headlines and popular discourse, particularly with regard to new developments in conversational AIs. Rather than answering this question directly, this paper takes a different approach and reads the Turing test not as a technical, evaluative tool of AI, but as a work of philosophy of the human. I argue that in outlining a foundational theory of thinking machines, Turing also implicitly provides a reading of gender and racial difference, aesthetic production, and human subjectivity. By unpacking the underlying principles and implications of the Turing test, I suggest ways in which the origin of AI may illuminate issues in contemporary computing, gesture toward new creative directions, and challenge our unstable definitions of humanness for an AI future.

13:30-15:00 Session 24E: Let Sleeping Bots Lie
Hypnopedia 2.0: Podcasts, Audiobooks, and Reading as Listening (and Sleeping)

ABSTRACT. In the last ten years, the popularity of audiobooks and podcasts has grown exponentially, as millions of listeners every day “read” fiction and non-fiction, journalism, and cultural and political commentary with their ears. As much as “Reading Minds” asks us to reimagine reading in the context of artificial intelligence, machine translation, and data mining (etc.), I suggest there is an equally compelling case to consider reading as listening, and to examine the evolving cognitive, aesthetic, and attentional dimensions of reading as an auditory process.

In my paper I explore one particular aspect of the burgeoning practice of auditory reading, namely the rise of podcasts for sleeping, or the intentional repurposing of podcasts/audiobooks into sonic sleep aids. The “sleep podcast” is an established genre at this point, while other sleepers regularly listen to “normal” audiobooks or podcasts, only to consume their content in and by the act of falling asleep. This kind of listening practice (which I call “so(m)niferous,” indicating the coincidence of sleep and sound) takes place in a shared architecture of reading and sleeping: the attention-grabbing dynamics of the speaking voice and the slow solicitation of a narrative structure gradually phase into a non-semantic vocal texture, and the droning cadence of sentence after sentence. The reading ear of the so(m)niferous listener is activated at multiple, contradictory levels at the threshold of cognitive and ambient listening, where narrative content disappears into quasi-musical vocalization that is aesthetically apprehended precisely (and paradoxically) by sleeping.

Folding sound studies into a critical media studies of sleep, I approach this kind of auditory reading as a resurgence of the early-to-mid 20th century fad of hypnopedia, or sleep learning, with the twist that, instead of learning Mandarin or reading War and Peace in our sleep, we construct sleep itself as a mode of sonic self-control (Hagood 2019) in the dormant architecture of narrative and speech.

Scales of Intelligence: Dreams and Nightmares of 1950s AI

ABSTRACT. This paper will examine the scalar dynamics of discourses surrounding “electronic brains” (i.e., digital computers) of the immediate postwar period in the US and UK, as they were first introduced to the public: as intelligent machines ready to out-think humans. Not only sensationalist news accounts, but also writings and interviews with prominent computer engineers and theorists promoted the idea that computers were (or could become) giant electronic brains functionally analogous to, but vastly more powerful than, the human brain. Alan Turing, for instance, insisted that “it is not altogether unreasonable to describe digital computers as brains” and suggested that they could possess creativity. The first public exhibitions of computers, especially at international festivals in 1950 and 1951, reinforced this trope of scale-distributed intelligence through both anthropomorphized names (e.g., “Bertie the Brain”) and format: these computers were very often designed to play games against members of the public, producing an agonistic and ludic narrative of instrumentalized Artificial Intelligence. Prominent computer designers and theorists such as John von Neumann, Maurice Wilkes, and Turing spent many years exploring the game-playing capacity of computers as the forefront of AI research. The field of game theory arose from these attempts to re-code real-world problems into computable language using the formal properties of games as their models of branching path optimization (decision making). This presentation will argue that game theory attempted to reduce the scale of real-world cognition at the same time that computer architecture sought to scale up computational engagement with analog dynamics through distributed input-output and the hierarchical distribution of control circuitry.
This presentation will explore these intertwined dynamics of game theory, public-facing ludic software, and advances in computer science during the first postwar years by examining both narrativized dreams of intelligence at new scales and dystopian nightmares of computational AI outscaling human networks.

The computer game, as both actually developed for public interaction and as speculatively pursued as a validation of AI, rendered the computer both an analog of human intelligence and a radically alien information processor that functioned at scales far beyond the human. Scale became the human-computer marker of difference while intelligence became its marker of identification. Hence the modality of play led to the “giant brain” as an uneasy aporia at the intersection of the digital and the analog, giving rise to anxieties of deskilling, machine revolution, and computerized domination at the same time that the proponents of AI (and digitization in general) were touting the computer's potential to vastly increase the scale of human action and cognition.

Post-Truth: The Department of Truth and the Science of Fabricating Reality

ABSTRACT. The world is flat. In the first chapter of James Tynion IV’s graphic novel The Department of Truth, undercover FBI agent Cole Turner travels with a group of Flat Earthers to Antarctica where, to his utter astonishment, he discovers a gigantic glass wall cordoning off the end of the world. The rest of this mind-bending series is dedicated to exploring conspiracy theories like the JFK assassination, kindergarten Satanic cults, and Bigfoot through a fascinating premise: if enough people believe something is true, then it becomes true. While this conceit sounds like a fever dream for anti-vaxxers and Fox News pundits, it raises serious epistemological issues for science studies scholars, especially around the nature of information and disinformation in our post-truth era. This presentation uses The Department of Truth as a case study for exploring the cultural and scientific production of knowledge at a political moment where science denialism, fake news, and alternative facts have proliferated. In particular, I argue that Tynion’s text literalizes the so-called Science Wars of the 1990s which pitted the facticity of claims in the hard sciences against the deconstructive claims of the postmodern humanities. In reinscribing the trench warfare between scientists and social constructivists for the twenty-first century, Tynion asks if the world truly is a “text” that is endlessly interpretable or if society requires a stable body of empirical knowledge to properly function. Other questions this talk will address include: Who gets to determine what counts as truth? What is the relationship between scientific facts, culture, and material reality? How can science and technology studies help us navigate the challenging epistemological landscape raised by The Department of Truth and, by extension, post-truth discourse?
To answer these questions, I will draw from the work of scientists, historians, and philosophers like Alan Sokal, Mary Poovey, Thomas Kuhn, Thomas Gieryn, Lee McIntyre, and Steve Fuller. At stake in this conversation is the vital—and perhaps dangerous—roles of literature and science in illuminating the boundaries of truth in an age that has already exceeded it.

13:30-15:00 Session 24F: Computational Literary Analysis
Humanizing Computational Literature Analysis Through Art-Based Visualizations

ABSTRACT. Computational text analysis using natural language processing frequency and sentiment techniques allows for the characterization of a work or collection of works of literature through a quantitative lens. Analysis of this data presents a new way to experience “reading” books. Rather than gaining insight by focusing on a single novel, we are able to experience the insights of a large body of literature by reading metadata. For example, in this paper, we analyze the frequency of female versus male pronouns and/or the sentiment expressed in sentences with female versus male characters in a study of 10,000 works of fiction in Project Gutenberg. While these results may be communicated through classic bar charts and scatter plots, these modes of visualization lack the emotional charge of experiencing literature or art. To this end, we explore a progression of results visualization ranging from bar charts to studio art inspired by the data. Our visualization techniques include computer-generated works, pen-plotter-drawn work using paint on paper and on linen, and purely studio-based work. How best to present literature metadata so that people can “read” large groups of novels and experience fiction in a fresh way remains an interesting open question. We posit that more visually interesting artworks, such as those shown in this paper, may resonate more powerfully with readers.
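The frequency side of such a pipeline can be sketched minimally in Python (standard library only). The pronoun lists and sample sentence below are illustrative assumptions, not the paper's actual method or corpus.

```python
# Count female vs. male pronouns in a plain-text document: the kind of
# per-work metadata that can then be aggregated across thousands of novels.

import re
from collections import Counter

FEMALE = {"she", "her", "hers", "herself"}
MALE = {"he", "him", "his", "himself"}

def pronoun_counts(text):
    """Return (female_count, male_count) for one text."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return (sum(words[w] for w in FEMALE),
            sum(words[w] for w in MALE))

sample = "She gave him her book; he thanked her and went on his way."
female, male = pronoun_counts(sample)
```

Run over a 10,000-work corpus, counts like these become the raw material for both the bar charts and the data-inspired studio artworks the paper compares.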

Towards Tolkien research with computational literary analysis

ABSTRACT. Since the advent of the term “distant reading” two decades ago, we have seen computational literary analysis give rise to “mixed-method[s] reading” in an attempt to balance out the weaknesses that pure distant reading brings. Computational literary analysis has been spreading across the major fields of literary analysis, including Shakespeare studies and Jane Austen studies, yet there is hardly a peep about it being used for Tolkien studies. I make the case for such research to be done, drawing examples from other fields of computational literary analysis to explain what techniques could be useful here, and why. I also identify potential sources of error and argue for open-source data, in an attempt to encourage methodological transparency and to foster communication between those who prefer close reading and those who prefer distant reading. The appendix includes a literature search log.

Word Prediction Algorithms and the Analysis of Literary Texts

ABSTRACT. My presentation will describe my current research using word prediction algorithms such as word2vec, GPT-2, and BERT to analyze the extent to which literary language departs from ordinary language.

Specifically, I have found that word prediction algorithms generally have little success in predicting the words of literary texts. My claim is that this shows that the structure of literary language differs markedly from the structure of ordinary language, and that this has important implications for poetics and the theory of reading.

One salient feature of word prediction algorithms is that they treat both language as a whole and the individual text as a network of words that relate to each other in specific and characteristic ways. In contrast, data mining techniques based on counting word frequencies treat texts as “bags of words” in which the order and position of the words and the ways words relate to each other in the text are largely ignored. I think these things matter a great deal in literary texts, and that this is a reason to prefer a network approach.

The network created by a word prediction algorithm is equivalent to a conceptual map of ordinary language: certain combinations of words occur in ordinary language, while other ways of putting words together, even if grammatically correct from a linguistic perspective, do not. Put another way, these algorithms operationalize Wittgenstein’s notion of “language games”: words normally have meaning only in the context of their ordinary use, and word prediction algorithms provide a map of the language games that comprise ordinary language usage.

My research will demonstrate that literary works use language in ways that depart significantly from ordinary language usage. What this means, I will argue, is that we should understand literary language as an attempt to create new forms of language that articulate new meanings, concepts and feelings, and that the way we read literary texts should correspond to this intent.
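What "predicting the words of a text" means operationally can be shown with a deliberately tiny stand-in for the models the abstract names (word2vec, GPT-2, BERT): a bigram model trained on some "ordinary" text and scored on how often its top guess for the next word is correct. The corpora here are invented examples, not the author's data.

```python
# A bigram next-word predictor: for each word, remember which words
# follow it and how often; predict the most frequent successor.

from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Map each word to a Counter of the words observed to follow it."""
    model = defaultdict(Counter)
    tokens = corpus.lower().split()
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def prediction_rate(model, text):
    """Fraction of next-words for which the model's top guess is right."""
    tokens = text.lower().split()
    hits = total = 0
    for a, b in zip(tokens, tokens[1:]):
        if model[a]:                     # only score words the model has seen
            total += 1
            guess, _ = model[a].most_common(1)[0]
            hits += int(guess == b)
    return hits / total if total else 0.0

ordinary = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(ordinary)
```

On the abstract's account, word prediction algorithms have little success on literary texts; in these terms, a low prediction rate would be the quantitative trace of literary language departing from the conceptual map of ordinary usage.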

13:30-15:00 Session 24G: War and Peace
Narrative for Peace in Armed Conflict: Simulation, Empathy and Emotional Engagement with combatants and victims

ABSTRACT. Internal Armed Conflict has been an important source for contemporary cultural productions in Perú and Colombia. Representations of political violence have been prolific in countries in which the State was (and is) both victim and perpetrator. This paper uses a cognitive approach to study empathy, simulation, and emotional engagement. I analyze the novel Blood of the Dawn (2016) by Claudia Salazar, inspired by the findings of the Truth and Reconciliation Commission in Perú, and short stories written by ex-FARC guerrillas who participated in creative writing workshops after the Peace Accord in Colombia in 2016. This paper explores how these narratives elicit empathy for literary characters and environmentally vulnerable spaces. I explore how narrative makes us feel with and for others, and how this provides a prism through which we can question simplistic narratives of political conflict and violence.

I analyze our emotional engagements with fictional characters using embodied cognitive theory. According to the theory of embodied simulation, when we read fiction we reuse the brain-body mechanisms we employ daily. I posit that fiction broadens and enhances our capacity for identification and emotional attachment, even to transgressive characters whom we would be reluctant to approach or bond with in real life. I argue that the experiences induced by fiction and the strong emotional attachments to characters in representations of violence enable readers to explore the motivations and consequences of war. By so doing, fiction could potentially have an impact in shifting attitudes towards peace and reconciliation after an Internal Armed Conflict.

Neuroscience and Posthuman War: A Revolution in Military Affairs

ABSTRACT. This paper focuses on one aspect of what has been recognized since the events of 9/11 as a Revolution in Military Affairs. Such a revolution utilizes a “communication, command, and control” approach to human identity, communities, and bodies, in the way cybernetics thinks about information. As in examples such as the much-debated Human Terrain System program, the latest counter-insurgency theory conceptualizes humanity per se as a complex computational entity, or as a “system-within-systems” (“SOS”).

By attending to what military researchers call “the interior terrain,” war neuroscience similarly depends on quantitative expertise and begins from the “SOS” concept. Here computers and computer screens offer unprecedented accuracy for human terrain mapping, in this case, neurological census-taking, but on a larger scale than demography or anthropology used to allow. Focusing not on mapping insurgent identities or cultures, but on the cell as biochemical circuit—“firing” in both the synaptic and military sense—neuronal “onto-cartography” works at formerly unthinkable scales. Characteristic of the U.S. Department of Defense’s theory of “full-spectrum war” as the weaponization of epistemic and social as well as biological “systems,” the scaling-up of census “enumeration” produces “finer categories,” at the level of “human circuit plasticity.” While the brain’s “parts list contains thousands of millions of non-linear relations,” human cognition becomes recognizable as network-centric code. Encoding life, the posthuman war machine operates in vivo, not so much opposed to counterinsurgency as the final step in what war planners call “identity infiltration.” Here war exists within humanity, as much as humanity exists within war. War becomes forensics. As exemplified by new technologies connecting human and military circuitries (“cognitive avionics,” real-time neural stimulation, the overlapping of virtual and physical battle zones), war neuroscience turns cognition into an arsenal that subsumes the human being.

15:30-17:00 Session 25B: Roundtable: “The Machinic Unconscious and the Literary Reading”
The Machinic Unconscious and the Literary Reading: A Roundtable

ABSTRACT. Before the coming of natural language processing, machines had been imagined to be capable of certain kinds of reading. The reading machine, for one, has long existed as an assistive technology that helps blind people read by translating printed text into auditory signals. The qualifier “machine-readable” dates back to the late 1950s at the intersection of library science and emerging digital technology, denoting data that can be written onto a device, thus stored and suited for machine processing. For machines, to read was to abstract information from what is written on cards, disks, tapes, and the digital computer. Such a process is but one instance in the history of the scientific appropriation of concepts from the human sciences and practices into disciplines such as cybernetics and computer science.

With deep learning, the combination of powerful models and ample data, machines can now read in another sense than that of abstracting information from memory devices, namely, mimicking a literary operation on texts as a whole, and exercising a liberty in the focus of attention, for instance, in the case of GPT-3 models. No longer something for mere decoding or copying on the level of basic processing units, data has become a fecund substrate for the mining of new meaning. Beyond the conventional linearity ascribed to human reading, machines uncover patterns on a sub-word basis and synthesize meta-textual logic. This parsing procedure never stands alone as automatic, but is a technique practiced with human discretion and intention, like “distant reading” as a method introduced in digital humanities (Moretti 2013). What do we want from learning machines when it comes to reading literature? What do we make of the products of machine-assisted reading? What ascriptions, projections, and other cathexes are at work in our positions towards them?

Phenomenologically speaking, the linearity ascribed to human reading already constitutes an abstraction: sequential parsing assumes the reading body whose attention is channeled and language as a system to be static, such that other senses and faculties, as well as something akin to an unconscious can be deemed irrelevant for the activity of reading.

For Bernard Stiegler, the techné of reading is bound up with that of writing, both implying an impersonal knowledge, a certain mode of repetition of what has taken place, embodied in the artifacts that constitute the externality of memory. The directed, addressed necessity of human reading is constitutive of our tie to the other, partaking in what Husserl calls “communautization” and bearing on the historic necessity of both reading and writing for the ancient Greek isonomia. What lies in the operative separation of reading and writing as technified today? What kind of relation to the other and to memory does machine reading entail?

In Glas, Derrida proffers a reading of Jean Genet through an embodied sounding procedure. Counter to the Saussurean unity of sound image and meaning, the performative utterance of words, as Derrida carries it out, emits sonorous hints that tread paths in the psyche, often unbeknownst to reader and writer alike, leading to limitless and incalculable meanings. Every reading, just like every translation, prompts a singular adventure of words in the body, taking flight from conscious use or intended signification, unfolding a virtuality of meaning beyond the fixed form of the sign, reminiscent of Freud’s mystic writing pad. Beneath the threshold of words, in a realm that semantics cannot register, one reads what Derrida calls the logic of the bit [French: mors].

Could one speculate on something like an algorithmic system’s unconscious, in the gaps delineated by the matrices of learning models and training sets? Can one interrogate the material conditions for the kind of readings artificial intelligence generates, in this modified sense of the term? What kind of reading does the reading of algorithmic readings entail, recursively? For whom exactly do machines read, and what is their other? The stochastic nature of natural language generation seems to open up new possibilities for language use, but it also tends to reproduce given patterns without fluidity. The absence of surprise for a machine does not abolish awe on the human side. As Turing noted, it is almost impossible to escape ascribing creativity to a machine. Yet it is also the very lure of technical determinism that, in such questions, risks effacing those tied to its infrastructure - relations of property, relations of labor, and logics of extraction.

This roundtable seeks to elicit a discussion of these questions, drawing on a wide range of methods (from media theory to psychoanalytic theory and phenomenology) and historiographies. It involves five (pending) discussants, each of whom will present an entryway into the problem with a short argument of about ten minutes, before the session moves into a plenum discussion.

15:30-17:00 Session 25C: We All Want to Change Your Head: Sonic Affordances and the Transformation of Self in the Extended Mind
We All Want to Change Your Head: Sonic Affordances and the Transformation of Self in the Extended Mind

ABSTRACT. This panel approaches the conference theme of “reading minds” by considering how music takes us “out of our minds” (echoing the last pre-pandemic SLSA2018 theme, “out of mind”). Music’s role as an affordance for extended cognition necessitates a sonic rhetoric emphasizing something other than a discourse-based understanding of meaning-making. Each paper singles out an aspect of music—numinous extremity (Reddell), generative noise (Nail), and the sonics of "heaviness" (Rickert)—to explore how these sonic dimensions potentiate cognitive change. Our approach to music as a transformative agent involves attending to music’s actively instrumental capacity to draw us out into the world, as well as music’s relational compacting of perceptual bias or expectation and environment, which recasts psychedelic theory’s iconic pairing of “set and setting.” Theories of extended mind demonstrate that listening to music is less a passive, aesthetic experience than a series of active, integrated, and moving involvements. But contrary to Joel Krueger’s work on oscillatory entrainment, the use of glitch, noise, the random, and the extreme forestalls entraining synchronization in favor of perpetuating the out-of-phase cognition of the sonically extended and augmented mind. Our approach argues that the musically extended mind is already a moved and transformed mind.

15:30-17:00 Session 25D: Digital Humanities
Deduction from Data: A Cognitive-Computational Analysis of Arthur Conan Doyle’s Sherlock Holmes Stories

ABSTRACT. This paper argues that Sherlock Holmes’ ‘deduction’, his process of inference about people from ‘data’ (their behavior, clothes or appearance) to solve crimes, can be understood as an explicitization of the process of mental state attribution in social psychology. Mental state attribution is a process we all engage in every day to infer the mental states of those around us from their behavior–a frown is taken as a sign of disapproval or confusion, for example, depending on the context. I first analyze the representation of deduction in the Holmes stories to argue that it engages directly with debates about psychological research methods in the late-nineteenth century, preceding the emergence of behaviorism, and explain how deduction as represented in the Holmes stories complicates received models of mental state attribution today by pointing to how non-bodily objects play a key role in how we ‘read’ other people’s behavior to deduce their mental states. In the second part of the paper, I argue that deduction in the Holmes stories reveals the extent to which we rely on attribution while reading literature to interpret the behavior of characters, and particularly that attention to description of Holmes’ own behavior can help us understand why he has become such a persistent character in the popular imagination. By analyzing the descriptions of Holmes’ social behavior in combination with the publication format of the Holmes stories, I use recent developments in the neuroscience of social interaction to argue that descriptions of Holmes engage the predictive underpinnings of mental state attribution in particularly effective ways. As part of this argument, I present a computational analysis of behavior descriptions in Holmes stories, in comparison with a corpus of 650 other nineteenth-century fictional texts. 
Ultimately, the paper explores the extent to which reading a literary text makes use of our capacities for ‘reading’ people in social interactions, and uses computational analysis of behavior descriptions to present the human reader’s interpretation of character as a form of predictive modeling.

Teleology Trees and Layer Cakes: Simulated Histories in Video Games

ABSTRACT. This paper explores the theme of simulated history in video games. It examines a variety of games that mediate historical processes and historical causality for players, including the probabilistic and geographic modeling of The Oregon Trail, the technology trees and national identities of Sid Meier's Civilization series, and the more chaotic, emergent, simulated histories of games such as Dwarf Fortress and Caves of Qud. As I will argue, history games are important both as a form of public pedagogy about history, and as philosophical explorations of the nature of historical knowledge and historical change. As windows into contemporary sociotechnical imaginaries about history, games can tell us what kinds of historical agency the public expects to experience or be subject to. But different approaches to simulating history can also be thought of as analogues to different forms of historical agency, and thus they can help us think about how overlapping processes mingle to produce real-world events. Throughout the talk, I will draw on work from media studies (e.g. Boluk, Lemieux, Raley), philosophy (Benjamin), and science fiction (Darko Suvin, Cixin Liu) to theorize how games model the kinds of historical knowledge and historical change that different approaches to politics, narrative, affiliation, and technology are conceivably able to bring about.

Flux, Wi-Fi, Fractals: Opening the Investigation into Spatial Rhythms

ABSTRACT. Beginning from Heraclitus’s definition of flux and expanding into technology and fractal patterns, this paper traces the spatial qualities of rhythm through pre-Socratic thought, into Timo Arnall’s “Immaterials Project” installations and Benoit Mandelbrot’s investigations of coastlines and digital images of fractal sets. This paper argues that contrary to the predominant Platonic viewpoint of a temporal interpretation of rhythm, an investigation into rhythm’s spatial qualities unearths a new understanding of the material qualities of the invisible ground defined as space.

15:30-17:00 Session 25E: Socially Transformative Art and Literature
Language as a practice: a web application for glossary co-creation

ABSTRACT. In this talk, I will present a web application designed to facilitate the co-creation of a polyvocal glossary of terms that describe aspects of transformational change. 

The Glossary was commissioned by CreaTures, a research consortium that is working to identify those aspects of creative practices that contribute most effectively to social transformations towards more sustainable ways of life. In the CreaTures project, the Glossary was envisioned as a compilation of key terms and processes that could aid with creating better understandings through the use of a common language.

I understand language, as part of the commons, as a site where displays of power are continuously produced and contested. Therefore, I built a set of digital tools to facilitate and document continuous linguistic interaction. The tools are also metaphors, which enact some of the processes of change that the lexicon is meant to describe.

Users who visit the site become co-authors of the Glossary, with the ability to add to, edit, or erase existing definitions. They must negotiate the power they are afforded to effect change. Users also contribute to building and changing the vocabulary that exists on the site. Meaning becomes plural and fluid, and the lexicon is constantly changing. 

Used in conjunction with online workshops, the web application tries to capture the drama of everyday acts of linguistic co-creation. Coded entirely with p5.js, the website is located at

Prescription Poetry: Healthy Reading, Healthy Critique

ABSTRACT. The notion that poetry is a therapeutic device, a form of individual—if not social—medicine, has been baked into the foundation of contemporary American poetics. We find this idea voiced by literary darlings, mass-market phenoms, and experimentalists alike. The poetry-is-healing line had become widespread enough, and vexing enough, that as early as 2006 Adrienne Rich was compelled to publicly lampoon it: “Poetry is not a healing lotion, an emotional massage, a kind of linguistic aromatherapy.” Rich denounces the transformation of poetry into something like a self-care commodity that sustains “health” as a consumer category, as self-optimization synonymous with “capacity to work.” In this paper I argue that “prescription poetry” can be read as a means of working through a set of social and economic dynamics that emerge in the U.S. around the 1970s, as the postwar social democratic promise unravels and neoliberal restructuring makes “life building and the attrition of human life” indistinguishable. In short, poetry is phantasmatically tasked with a job once claimed by the state and tasked with healing the wound inflicted by the “broken promise” on which the neoliberal order no longer pretends to deliver. I read CAConrad’s most recent book of poetry Amanda Paradise (2021), which is structured around therapeutic “somatic rituals,” as an instance of what Jasper Bernes (2017) describes as contemporary American poetry’s bid to “ward off recuperation by a restructured capitalism through a…preemptive sabotage.” Rather than a flight from the reality of alienation, mysticism becomes a practical critique of alienation. CAConrad hails their readers—who will hypothetically perform the rituals themselves and write their own poems—into a healing that doesn’t enclose itself around a return to a prior plenitude, but instead makes perceptible the loss rendered invisible by the privatization of healthcare and everyday life during the entwined AIDS and COVID pandemics.

Reading Alzheimer's Anew

ABSTRACT. How should we read the narratives of Alzheimer’s patients? In recent years, linguists have tended to read them for symptoms of cognitive decline, using machine-learning to pick out such features as word repetitions, simplistic narrative structures, and other “syntactic deficits.” Sociologists, on the other hand, read these narratives as strategies of meaning-making, examining how they wrestle with questions of identity and the emotions of grief and hope. In this paper, I experiment with a way of reading such narratives without the expectation of linear progression or a sense of purpose. Instead of attending to what these narratives might signify, I study how they cultivate a mode of interpretation that encourages readerly indifference while also inviting communal participation. Taking Dianna Friel McGowin’s Living in the Labyrinth (1993) as my object of study, I trace how the narratives of Alzheimer’s patients persuade their readers to abandon their implicit desire for stories and resolutions in favor of sitting with frustration, discomfort, and fear. In doing so, I attempt to describe an alternative model of storytelling that is grounded in the attentiveness of listeners who have been taught to recognize the value of salvaged fragments of memory.

15:30-17:00 Session 25F: Science Fiction and Monster Theory
Reading Time: Marcel Proust's Influence on Science Fiction

ABSTRACT. French novelist Marcel Proust is an unlikely source of intertext for science fiction writers. Proust’s literary universe, for all its experiments in form and imagery, remains anchored in French society of the nineteenth and early twentieth centuries. And yet references to Proust’s work and his ideas appear dotted through science fiction works such as Kim Stanley Robinson’s Mars trilogy, Doris Lessing’s Shikasta, and Liu Cixin’s Remembrance of Earth’s Past trilogy. These authors reference Proustian psychology, sociology, and above all temporality, and my paper will explore how Proust’s excavatory exploration of selfhood (individual and communal) developing through time laid a literary groundwork for science fiction’s worldbuilding.

I propose that these authors are not merely referencing Proust but thinking with Proust, and in particular, thinking with Proustian notions of time. Liu Cixin has written that he came to science fiction because it allowed him to “touch and feel” “scales and existences that far exceeded the bounds of human sensory perception.” It is my hypothesis that Proust gives us analogous training for concretely imagining time. Proust defamiliarizes time for the reader in order to make it more than mere abstraction, and the science fiction authors who refer to him are extending this work by asking us to imagine the self within ever larger and more complex temporalities.

“I Don’t Know Which of Us Should Be More Afraid of the Other”: Laura’s Lack of Agency within Le Fanu’s Carmilla

ABSTRACT. While Sheridan Le Fanu’s Carmilla was written long before the technological advancements of the 21st century, Carmilla’s monstrosity represents a sort of pre-technological AI: a complex knowledge that exists outside the human realm, a ‘non-human’ subject who can essentially perform as a living human being, her presence illustrating the horror of a monster able to blend in. Carmilla is dehumanized both through her vampiric embodiment, which actively places her outside the category of the living, and through the rest of the household’s consistent identification of her with a creature, positioning her as the eroticized other within the home. Laura’s ‘sickness’ in Carmilla is framed as the result of Carmilla’s presence, with Laura experiencing “dreadful convulsions” (69) described in terms quite similar to an orgasmic experience; that they are ‘dreadful’ already signals Laura’s awareness of the shameful nature of the occurrence. That Laura’s own sexual desires are blamed on Carmilla thus illustrates that Carmilla’s monstrous identity is a projection by the men of the house, deployed in order to deny Laura’s own state. The bodies of monsters and of technology alike are sites onto which the ‘living’ can place blame and ostracize the ‘others’ in order to remove agency from themselves.

Love Machine Reading: Exploring Human-AI Relations through Reading Practices in I’m Your Man (2021)

ABSTRACT. In recent years, various films such as Ex Machina (2015), Her (2013), and “Be Right Back” (2013), an episode of the Netflix series Black Mirror, have discussed human-AI relationships and their consequences for defining the human subject. A more recent example, released in 2021, is the critically acclaimed film and German entry for the 2022 Academy Awards, I’m Your Man (original title: Ich bin dein Mensch). In the film, human beings have begun to manufacture humanoid robots equipped with AI as romantic partners. The film’s protagonist Alma is tasked with assessing the humanoid robot Tom, who is designed to be her ideal partner. What makes I’m Your Man a special case in contrast to earlier examples is that the film is specifically concerned with different forms of reading as a basis for the construction, interaction, and meaning-making of human subjects and AI. My paper aims to discuss how the film negotiates between different reading practices. Initially, the presentation of reading in I’m Your Man seems to be based on the common binary opposition between close and distant reading. While Alma performs close reading as a human anthropologist, Tom depends on distant machine reading, processing large amounts of data. In the course of the film, however, different forms of reading begin to complement each other and to expose their respective limits. Key issues the film considers, and my paper aims to highlight in this context, are the functions ascribed to different types of reading, ranging from the functional gathering of information to humanist ideals of education and expression, as well as the collision of the general patterns derived from Tom’s reading of data with the single story of an individual such as Alma. Addressing these aspects, my paper examines how machine reading adds to the construction of the human subject in relation to previously established forms of reading, AI, and data.

17:30-19:00 Session 26: Keynote: Donald Spector, “Quantum Mechanics as a Ritual of Ambivalence: Listening to the Language of Particle Physics.”

Keynote Lecture 

Introduction by Peter Bermel, Elmore Associate Professor of Electrical and Computer Engineering.


20:30-22:00 Dance

Band: Frank Muffin