Reading Owen’s Mind: Owen F Smith Reading Fluxus and Minding Intermedia
ABSTRACT. This double PechaFluxa panel proposes to calibrate and celebrate the life and work of Owen F Smith in the arena of Fluxus and Intermedia—which his scholarship and pedagogy helped to shape. In the festschrift tradition, but using a pechakucha-style format, the panels will explore any element of Smith’s activities—published, exhibited, sent and received, or planned in detail and in vain, etc.—to illuminate the value of his multifaceted thinking among and around Fluxus and Intermedia. Confirmed contributors include: Craig Saper (University of Maryland, Baltimore), Dennis Summers (Strategic Technologies for Art, Globe and Environment), Laurel Frederickson (Southern Illinois University), Hannah B Higgins (University of Illinois at Chicago), Lauren Sudbrink (independent artist), Roger Rothman (Bucknell University), Michael Filas (Westfield State University), Christopher Reeves (University of Illinois at Chicago), Simon Anderson (School of the Art Institute of Chicago) and more TBA.
ABSTRACT. These two panels both stage a conversation oriented around criticism that takes contemporary technological operations as its core object of analysis. With a specific focus on burgeoning critical AI studies and its implications for methodological aspects of literary, scientific, mediatic, and technological criticism at large, the two panels here (one more focused on literary methods and the other on infrastructural methods) sit at the cusp of interest in infrastructure and hardware on one hand and machine-generated art on the other. Featuring scholars with interdisciplinary interests, the panels will catalyze conversations across the vast variety of investigative nodes that regularly coagulate at SLSA.
The Ecological Essay: Robin Wall Kimmerer’s Lyric Consciousness
ABSTRACT. This paper considers the shape of consciousness that emerges from reading a contemporary environmental personal essay. The personal essay is one of the most visible genres through which ecological narratives and thoughts have been communicated in the U.S. since Thoreau. Today, Robin Wall Kimmerer’s Braiding Sweetgrass, a contemporary example of the genre, sits on the New York Times bestseller list. Despite this, environmental humanities critics have focused mainly on poetry, fiction, or forms of literary nonfiction other than the essay. If part of our ecological dilemma is anthropocentrism, might the personal essay be an inherent contradiction? Its development alongside the self of Western liberal humanism would seem to complicate the extent to which a personal essay could convey or induce an ecological consciousness. And yet, regardless of content, as Kimmerer’s work shows, a lyric “I,” whose self is less certain than in autobiography, can work in ambivalent ways to unsettle the assumptions of the human. Using the work of Sylvia Wynter and Karen Barad, as well as Min Hyoung Song’s recent Climate Lyricism, I explore the assumptions of and effects on a sense of ecological self—its limits and its affordances—in Kimmerer’s collection of essays. Are our popular understandings of an “environmental” text preventing us from addressing accelerating global warming? Do the texts and approaches traditionally taught as “environmental” limit what it means to be human? How might we think of reading, regardless of content, more ecologically?
ABSTRACT. In the North Fork Mono language, beings are organized through our relations to each other – with to-bopt (land), with kos (fire), with pi-a (water). Language shapes our beliefs, our being, our worlds. Philosopher Glenn Albrecht created the word solastalgia because English-speaking peoples do not have words for “home-heart-environment relationships." Solastalgia is the grief one experiences through the loss of one's home environment. We need more language – we need concepts that help us to understand and express the connection between human animals and the environment, the negative and positive dimensions of the psychoterratic (human mental health and the earth). As hurricanes shred the South and fires burn the West, we are finally trying to catch up to what Indigenous peoples have always known, what Native scholars and activists Winona LaDuke and Robin Wall Kimmerer write about – we are not above or separate from the land; we are a part of our environments. The impacts of climate change and toxicity are bringing destruction that eventually none will be able to escape, so the solastalgia that Indigenous peoples have experienced, and continue to experience under ongoing settler colonialism and extractive industries, is brought to all (in different ways): “self-imposed solastalgia.” Albrecht explains that “The existential and emplaced feelings of desolation and loss of solace are reinforced by powerlessness.” But there is a way. We must be in relationship with the land. We do not honor nature by leaving them alone to be wilderness (another problematic word that shapes our current world). Rather, we honor the land by interacting with them, by “tending the wild.” The short documentary "Good Fire" introduces solastalgia and gestures toward cultural burning as a response. I created this film in 2021 in consultation with the featured subject, Chairman Ron W. Goode of the North Fork Mono Tribe. Following this 5-minute film, I will explain the elements of cultural burning that require further exploration, and the possibilities of moving from solastalgia to soliphilia, "the political affiliation or solidarity needed between us all to be responsible for a place, bioregion, planet and the unity of interrelated interests within it.” With soliphilia, interconnection is nurtured; love and responsibility between place and peoples grow.
See film at http://www.ericatom.com/cultural-fire.
Technology As Fiction: The Role of Creative-Critical Writing in Reading Automation & Artificial Intelligence
ABSTRACT. This panel brings together a transcontinental complement of scholars working in the areas of theoretical fiction and fictocriticism as they relate to media theory and literary critiques of technoscience and technoculture. Through short individual paper-presentations followed by a roundtable discussion, panelists will explore the role of engaging creative-critical writing practices in engendering critical modes of reading our automation-driven media environment.
Specifically, the roundtable portion will imagine a more nuanced media literacy in the context of rapid advancements in artificial intelligence and an increasing presence of human-machine generated texts. This discussion is influenced by the fact that all panelists have published their own creative-critical texts in academic venues as part of a growing movement in the humanities toward research creation methodologies that aspire to displace false narratives of objectivity in the history and philosophy of technology.
This panel would be hybrid, with at least one of the presentations taking place in-person and at least one of the presentations delivered remotely.
On Abstraction: Designing Accessible Reading Experience
ABSTRACT. Abstraction refers to "the formation of general ideas or concepts by extracting similarities from particular instances" (APA). Abstraction makes it possible to extract sharable concepts out of the diverse and multiple stimuli of an experience perceived by different individuals. During this process of abstracting shareable information from an experience, some procedural and contextual details of how individuals see, listen, and feel are reduced into a different mode. While information can become more accessible across diverse sensory modes of communication, the reductive process of abstraction can cause miscommunication: for example, spatial relations among visual details are lost when web content is translated through a screen reader. When web content is abstracted superficially, decoding it back toward the original content may lead to imperfect communication through mistranslation or the omission of necessary information. Though the Web Content Accessibility Guidelines (WCAG) suggest how to support accessible content design, they remain quite limited. Situated in the non-traditional context of online reading, where any of us can feel disabled in accessing content, we need to reconsider accessibility in depth. We claim that how we design the 'abstraction' process of content matters for accessible and approachable online reading. We will review design theories and practices of abstraction and suggest Compression, Translation, and Association as conceptual mechanisms for folding and unfolding content. Discussion will follow on the use of machine learning as algorithmic assistance toward accessible content design that supports diverse modes of communication as alternative approaches to abstraction.
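A minimal illustration of the point above (not the authors' framework): when a richly structured document is linearized, as a screen reader ultimately must do, the spatial relations among its parts disappear unless the abstraction carries them along. The HTML snippet and the BeautifulSoup flattening step are hypothetical stand-ins for that reduction.

```python
# Sketch: linearizing a table discards the spatial relations that made it legible.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """
<table>
  <tr><th>Mode</th><th>Gain</th><th>Loss</th></tr>
  <tr><td>Visual</td><td>layout</td><td>--</td></tr>
  <tr><td>Auditory</td><td>sequence</td><td>spatial relations</td></tr>
</table>
"""

# A screen reader receives a linear stream; get_text() is a crude stand-in
# for that flattening step.
linear = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
print(linear)
# -> Mode Gain Loss Visual layout -- Auditory sequence spatial relations
# The row/column structure is gone unless the markup semantics (the
# "abstraction") are explicitly preserved and conveyed.
```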
ABSTRACT. It was in 1984, the year of the first Apple Mac and the introduction of DNS, that Paul de Man coined the phrase “lyric reading” to describe the essentially “defensive motion of the understanding” by which readers of poetry transform verbal texts, in necessary but specious sleights of figuration, into intelligible and enduring expressions of other minds. In the gripping final sentence of “Anthropomorphism and Trope in the Lyric” (1984), de Man opposes lyric reading to “non-anthropomorphic, non-elegiac, non-celebratory, non-lyrical, non-poetic, that is to say, prosaic, or, better, historical modes of language power.” Since 1984, proliferating digital tools and expanding computational environments have significantly transformed the media conditions of these “historical modes” and, I will argue, the lyric readings that obscure them. This paper examines emerging protocols of lyric reading occasioned by poetry’s circulation on contemporary social media platforms in particular. When a poem initially composed for print goes viral on Twitter or Instagram, its mass circulation entails a digital remediation not only of the poem itself, but also of the interpretive habits responsible for rendering intelligible its social meanings. With reference to recent scholarship on platform literary cultures, C.D. Blanton’s theory of the occasional lyric, and Lauren Berlant’s account of intimate publicity, this paper demonstrates how capitalized platforms are germinating new protocols of lyric reading, distributing interpretive agency from one author to swarms of users, re-centering sentimental affect as a conduit of political meaning, and heralding a new algorithmic dispensation for lyric’s rather old claims to critical autonomy vis-à-vis social life.
Making Meaning of Digital Literacy: Processes and Practices of Reading in Digital Media and Learning Ecologies
ABSTRACT. Technology is changing us. It is changing our reading habits and the ways that we source reading materials for information and narrative (Loan, 2009; Chauhan and Lal, 2012). This is the age of digital literacy. Our interactions with text, images, and symbols are regularly revealed to us by digital interfaces, which invites us to ask where digital literacy began. In the late 1990s, the E Ink Corporation developed the first prototype of electronic paper: a display technology that mirrors the appearance of ordinary ink (Jabr, 2013). Soon after came digital devices designed specifically for reading and information consumption. The Sony Reader (2006) and the Amazon Kindle (2007) have kindled new reading habits and behaviors tailored specifically to digital reading. With the development of electronic paper and text, the very nature of our exposure to information has changed (Leu Jr, 1996). We receive our information by searching and scanning digital content at alarming rates of exposure and consumption. What we read is conditioned by what our digital environments legitimize, from the character limits of social media posts to the styles and syntax accepted by digital journals and news media websites. In this regard, our inquiries and interests are also revealed to us by our search processes in digital systems with endless channels of information. This study encourages social and educational examination of the cognitive, perceptual, and semantic processes that help us to explore these black holes: the processes that students employ every day to scaffold ethical reading, information, and problem-solving habits using mobile and web-based applications (Phillip, 2017). Our understanding of these digital practices currently lacks clarity and anatomy in the Information and Learning Sciences (Liu, 2005; Feerrar, 2019). This study employed close reading and systematic review of 72 journal and news media articles to pose five key tenets of readers’ everyday practices in online environments. It aims to bring clarity to how reading is performed and experienced in digital media and learning ecologies. The study provides findings on how readers enact their digital literacies online when consuming content mediated by online materials. It establishes a framework that explains how meaning and information are generated from digital texts, images, and symbols. It explores how learners negotiate information literacies to locate, identify, and read materials online. And it inquires into how humans build digital literacy and navigation skills for locating information with particular intellectual or semantic merit.
References:
[1] Chauhan, P., & Lal, P. (2012). Impact of information technology on reading habits of college students. International Journal of Research Review in Engineering Science and Technology, 1(1), 101-106.
[2] Feerrar, J. (2019). Development of a framework for digital literacy. Reference Services Review.
[3] Jabr, F. (2013). The reading brain in the digital age: The science of paper versus screens. Scientific American, 11(5).
[4] Leu Jr, D. J. (1996). Sarah's secret: Social aspects of literacy and learning in a digital information age. The Reading Teacher, 50(2), 162.
Inhuman Sensibilities: AI Transindividuation in Recent Hollywood Cinema
ABSTRACT. Drawing on theories by Gilbert Simondon and Bernard Stiegler, this paper approaches three recent Hollywood blockbusters with AI protagonists (Her, 2013; Ex Machina, 2014; Free Guy, 2021) through the lens of transindividuation. I argue that their framings suggest an ideological shift—away from conventional AI companionship or anti-human villainy indebted to cybernetic Cartesianism, toward posthumanist openness and reticular permeability inspired by the contemporary omnipresence of cloud-based, weak AI.
Furthermore, I argue that all three AI protagonists, despite vastly different narrative foci, tones, and layers of corporealization in the films, are presented in interlinked processes of both psychic and collective individuation in juxtaposition to their algorithmically constrained precedents. As a result, these AI are no longer bound by embodiment or the virtual becoming humanoid, but more so interested in exploring a collective inhuman sensorium that lies beyond insular or ego-driven corporeality. However, this intradiegetic play is truncated by the extradiegetic Hollywood actors; the romantic subplots draw attention to humanist concerns, in alignment with Hollywood ideology, and away from nascent transindividual epistemologies. Such romantic superimpositions lead viewers to lament the impossibility of imminent trans-corporeal relationships and divert attention away from a more prescient aspect of these AIs—their concern with collectivity beyond the boundaries of individual embodiment.
Dreaming Cognitive Assemblages: Artificial Intelligence and Media Shifts in Robot Narratives from Philip K. Dick to Annalee Newitz
ABSTRACT. Even while, in science policy conversations, controversy swirls around the question of just how smart Artificial Intelligence can ever be, in popular narratives we keep telling stories about autonomous AIs whose abilities rival or surpass those of humans. This paper takes a media studies approach to the question of how AI is represented, which enables it to discern a pattern that holds across many representations of AI, whether their mode be realistic, speculative, or fantastical. Whatever their mode, such representations typically depict AIs as what Katherine Hayles calls “cognitive assemblages”--as plural entities that process information through socio-technical interaction. Whether or not this is an accurate way to represent AIs, their representation as cognitive assemblages brings AIs into alignment with traditional information tools such as books and films, which likewise, as Hayles details, unfold thought in unpredictable ways by leveraging socio-technical disjunctions between authors, audiences, and media artifacts. Might the futurity of AIs be more continuous than we have previously thought with the history of the media imaginaries that remain, after all, the domain in which such AIs have been most fully realized? Exploring this intuition, this paper will locate case studies of cognitive assemblages in two recent novels--Autonomous, by Annalee Newitz, and Kazuo Ishiguro’s Klara and the Sun--as well as in the supertext encompassing Philip K. Dick’s short story “Autofac” and its recent adaptation for Amazon Prime’s Blade Runner-flavored science fiction series “Electric Dreams.”
AI, Narratives, and the Paradox of Anthropomorphism
ABSTRACT. Is scientific knowledge—broadly construed—reliant upon narrative knowledge? More than forty years ago, Jean-François Lyotard described in the context of the postmodern condition “the relationship of scientific knowledge to ‘popular’ knowledge,” pointing to scientists on TV who “recount an epic of knowledge” if only because “the state spends large amounts of money to enable science to pass itself off as an epic” (Lyotard, 1979; 1984, 27-28). Social media platforms of the twenty-first century have, if anything, expanded and radically re-conceptualized the scope of this relationship, to the point that science has now been lured onto, say, YouTube and Twitter, into spaces where the glitter of celebrity frames the relationship between scientific knowledge and popular curiosity cum discourse. Of interest here is the privatization of the endeavor to popularize science. The government has little if anything to do with the corporations which provide and sustain the social media platforms—except indirectly through the FCC, for instance.
It is in this corporatized context that the future of, for instance, two distinct “types” of A1—one imaged at the turn of the century, roughly twenty years ago, and another more recent—will be considered. The first, the Crakers in Margaret Atwood’s Oryx and Crake are endowed with that physical perfection which Kate Darling argues is at the heart of the empathetic human response to robots. The survival of these creatures engineered by Crake depends upon the protagonist, Jimmy, who sports an unrecognizable fuzziness on his face which puzzles the Crakers who nonetheless recognize him as potentially one of their own. In Kazuo Ishiguro’s Klara and the Sun the titular entity’s “unique abilities” defy the constraints of her caste; a B2 grade might have been pre-ordained—a congenital condition—but, in fact, this Artificial Friend is marketed as having many if not all the distinctive attributes of a B3 (which one might think of as the Brahmin class). Using these two narratives of human-AI interface as bookends of a century still young, I will describe the limits of anthropomorphism with reference to the lessons of liberalism which, according to Domenico Losurdo, engenders in epic proportion the “catastrophe of the twentieth century” (Liberalism, 2014).
textBox: A Creative Computing Toolkit That Activates The History of Chinese Computing
ABSTRACT. textBox is a series of creative computing projects that activates the history of Chinese text processing. Due to the organizational differences between the ideographic writing system and the alphanumeric writing system, the trajectory of Chinese text processing was a process of achieving adaptation, inventiveness, and compatibility across various parts of the computer system: hardware, software, and algorithm design. textBox translates the core concepts of historically emergent Chinese text processing technologies such as telegraphy code, Unicode character encoding, Chinese word processor, input method, and machine learning into small scale creative computing projects that are implemented in current computer languages and hardware. The workshop is a hands-on exploration of textBox’s creative computing projects.
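For readers unfamiliar with the layers the toolkit works across, a minimal sketch of two of them follows: the Unicode code point and byte-level encoding of a single Chinese character, plus a toy stand-in for the four-digit telegraph codebook (the code numbers below are illustrative, not the historical assignments).

```python
# Sketch: one layer textBox engages -- how an ideographic character is represented.
char = "中"                      # the character for "middle"
print(hex(ord(char)))            # Unicode code point -> 0x4e2d
print(char.encode("utf-8"))      # UTF-8 byte encoding -> b'\xe4\xb8\xad'

# Chinese telegraph codes mapped characters to four-digit numbers; this
# dictionary is a hypothetical miniature of such a codebook.
telegraph_code = {"中": "0022", "国": "0948"}
print("".join(telegraph_code[c] for c in "中国"))  # -> 00220948
```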
Marching Cubes: an Inverted Machine Learning Workshop
ABSTRACT. In 1987, researchers at General Electric pioneered a method for generating computer graphics from medical scan data that featured an underlying language of faceted cubes. I wanted to make this seminal computational procedure — now known as Marching Cubes — into something people could build with. I translated the algorithm into 3D printed construction units that permit people to act out its logic. I also created a user’s guide: input any 3D scan or model, and a custom computer script outputs assembly instructions. Each assembly is unique, and created collaboratively: together, we perform the computer’s process. We also perform the fabrication process: in aggregating the units, layer by layer, we mimic the 3D printer that made them. Sometimes, we simply play: with humans doing the work, strict procedural logic is optional. For SLSA 2022, I propose to provide a set of construction units and facilitate an art-making workshop in which participants collaboratively create an assembly for exhibition. The final parameters of this assembly can be determined collaboratively, as they depend on the space, time, and conceptual framework constraints of the exhibition opportunities you intend to provide. By way of this proposal, I simply want to signal my willingness to provide this creative experience, should it prove to suit your interests. Please find attached a brochure that comprehensively illustrates the expressive possibilities of Marching Cubes.
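For context, a hedged sketch of the algorithm itself, using scikit-image's stock implementation rather than the artist's construction units or custom script: Marching Cubes walks every 2x2x2 cell of a voxel grid and emits triangular facets wherever the chosen level set crosses the cell.

```python
# Sketch: extract a faceted surface from a voxelized sphere with Marching Cubes.
import numpy as np
from skimage import measure  # pip install scikit-image

# Signed distance field for a sphere of radius 12, sampled on a 32^3 grid.
x, y, z = np.mgrid[-16:16, -16:16, -16:16]
volume = np.sqrt(x**2 + y**2 + z**2) - 12.0

# Triangles are emitted where the level set (distance == 0) crosses each cell.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(f"{len(verts)} vertices, {len(faces)} triangular facets")
```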
Note that there is a paper proposal associated with this Art proposal, entitled “Inverted Machine Learning with Marching Cubes.” The two proposals can be considered independently of one another — neither is reliant on the other.
ABSTRACT. “Not too much has changed in the way [redacted] thought about work vs. play, artificial vs. organic, but I think [redacted] captures a dynamic you can see playing out in contemporary philosophy. How should philosophy help address the moment where work and play are all we have left, a circumstance when the deepest qualia of creativity can be lost? But isn’t it an affront to human dignity to keep doing work when it’s no longer interesting? Or, what is it that becomes interesting?”
Sixty-six keywords contributed by collaborators in the SLSA 2021 “Artificial Ignorance” workshop and roundtable were used to condition a neural net (InferKit), which generated the text above. While the questions posed in the neural net’s text might appear philosophically (or philo-sophomorically) pertinent, their subtext is of even greater interest for artificial ignorance.
Artificial ignorance—an analytic tool for assessing AI—deploys ignorance, non-knowing, and strategic confusions. It reveals that at a technical level human and machinic faculties, material histories, and ways of knowing are inextricably entangled in AI.
Keeping all this in (reading) mind(s), our 2022 panels adopt keywords “creep” (uncanny irrationality/gradual shifts) and “infrastructure” (hidden structure/enabling stability) to put surface and substance into artificially ignorant tension.
“Artificial Ignorance 1: Artificialities of Creep” (Chair: Katherine Behar)
Neda Atanasoski (University of Maryland) & Nassim Parvin (Georgia Institute of Technology)
M. Beatrice Fazi (University of Sussex)
Shaka McGlotten (Purchase College, SUNY)
Louise Amoore (Durham University)
“Artificial Ignorance 2: Infrastructures of Ignorance” (Chair: Katherine Behar)
Simone Browne (University of Texas at Austin)
R. Joshua Scannell (The New School)
Jennifer Rhee (Virginia Commonwealth University)
Katherine Behar (Baruch College, CUNY)
Thinking with Cinema: Montage, Time, Revolution, Science, and Sickness. Sessions One and Two
ABSTRACT. Session One
Soviet film director and theorist Sergei Eisenstein wrote in the 1920s that montage is “the nerve of cinema,” and that “to determine the nature of montage is to solve ‘the specific problem of cinema.’” Since Eisenstein, much experimental film has foregrounded structural elements like camera angle, crowd movements, and montage to convey an idea or ideology. This stream considers how multiple construction strategies create meaning: meaning that is uniquely cinematic. The first session looks at cinematic technique in French art film and in political and science documentaries. More specifically: how does montage deliberately scramble time within the visual and aural channels of the film Last Year at Marienbad; how can audio/visual montage be used in politically radical documentary films to create transformative collective action; and how can the visual conventions of documentary science movies be scrambled for differing aesthetic purposes?
Session Two
Soviet film director and theorist Sergei Eisenstein wrote in the 1920s that montage is “the nerve of cinema,” and that “to determine the nature of montage is to solve ‘the specific problem of cinema’”. Since Eisenstein, much experimental film has foregrounded structural elements like camera angle, crowd movements, and montage to convey an idea or ideology. This stream considers how multiple construction strategies create meaning: meaning that is uniquely cinematic. The second session builds on the previous one and considers how ideas of avant-garde film techniques are used in the politically activist Argentinian documentary, The Hour of the Furnaces; how global militant film-making, in part, might counter-intuitively remove the human figure from the landscape in order to emphasize capitalist depredation; and inverting this thesis, how the movie Safe, in part, locates the human figure dwarfed by her polluted landscape, and reflects on the portrayal of sickness in film.
Modes of Reading: Three Medical Humanities Approaches to Texts
ABSTRACT. Moderator: Anne Hudson Jones
On this panel, three medical humanists will discuss the different modes of reading they are examining in their current research projects.
First, Ryan Hart will explain how an existential phenomenological reading of Kafka’s Metamorphosis can help give psycho-oncologists and other healthcare practitioners a better understanding of the major changes in body, mood, and mode of being that cancer and its treatments cause and that imperil the identities and essential meaning of the world for emerging adults.
Then Anne Hudson Jones will discuss the power of intertextuality in Hamaguchi’s award-winning film Drive My Car, which centers on a multilingual production of Chekhov’s Uncle Vanya, to show how reading and performing such a complex text can help actors and audience face traumatic losses in their respective lives and move into the future with acceptance and grace.
Last, Rebecca Permar will explore the importance of creating technology literacy, a skill that would allow people from all backgrounds to critically engage with their own relationship with emerging technologies. This specific form of literacy is developed through reading popular culture examples of technologies as well as the reading of us through technologies, such as algorithms.
These presentations offer a glimpse into the variety of ways the medical humanities engage with the practice of reading. Texts of all sorts—although we focus on classic literature, film adaptations, and other popular culture examples here—are of utmost importance in this interdisciplinary field, and reading is a powerful skill that we highlight on this panel.
ABSTRACT. A perennial area of research and inquiry for engineers and philosophers alike, artificial intelligence remains one of the most important subjects for thinking through the opacity of computers. AI algorithms have not only helped structure the development of new technologies, but also contemporary media landscapes and dynamics of interaction. And while artificial intelligence programs operate microtemporally and across data sets at scales beyond the threshold of human cognition, they have also shaped the development of new works, genres, modes of address, and aesthetic engagements with our senses. As such, they have become ordinary and braided into the fabric of everyday life and media cultures. This development poses a number of fundamental questions: how are personhood and understandings of human capacities changed through this reconfiguration? What aesthetic practices and sensual modes of engagement exist in the wake of artificial intelligence's incorporation into so many media? What attachments to algorithmic forms and worlds emerge through networked engines and types of machinic intelligence?
This panel examines the ways in which AI operations provide a potent and reflexive means through which to assess the experiential and technological logics of the contemporary media landscape. It asks what it means to encounter computational objects and AI that have interwoven with cinematic spectacles, camera movements, and videogames, or that present themselves as autonomous agents that mimic emotional expressions. Through our papers we show how these moments and objects think through technologies and forms of visualization that bring together human and computational intelligences in ways that both give access to and reflect upon a new technical arrangement.
Multimodal Touch Imagery in Reading Minds: Touching with Every Sense in Junot Díaz’s The Brief Wondrous Life of Oscar Wao
ABSTRACT. Although popularly understood as a single sense, touch includes a rich range of submodalities: fine surface touch; proprioception (which reports bodily positions and movements); pain; temperature sense; and interoception (which reports the condition of the inner organs). Together, these submodalities of touch register everything happening in human bodies from their surfaces to their cores.
In fiction that aims to draw readers into characters’ lives, references to touch can help readers to imagine inhabiting characters’ bodies and to feel the emotions that emerge from them. Human sensory systems evolved to work together, and descriptions that involve touch almost always suggest other sensory modalities. By crafting language that speaks to several senses at once, fiction-writers encourage readers to form their own, unique multisensory images in which sensory systems interact as they might in lived experience.
In an interview with Lu Sun in 2020, Junot Díaz proposed that the novel has survived as an art form because it is “the closest we’ve ever come to telepathy.” Novels make it possible to experience “actual cohabitation, to live inside another human being.” Díaz’s The Brief Wondrous Life of Oscar Wao enables this cohabitation partly because of his skilled evocation of touch imagery. By choosing words that call on several senses simultaneously, Díaz not only helps readers to imagine the lives of characters radically different from themselves; he may also help neuroscientists understand how human nervous systems combine sensory information.
“Reflexivity’s Ontological Turn: From Cognition to Embodiment in Jorge Luis Borges’ ‘The Circular Ruins’ and Salvador Plascencia’s The People of Paper”
ABSTRACT. While scholars typically view postmodern metafiction, or “reflexive fiction,” as a staunchly epistemological literary mode, I argue that metafiction exhibits ontological preoccupations that result from postmodernism’s co-constitutive relationship with theories of cognition and reflexivity from mid-twentieth-century cybernetics; in particular, I argue that as cybernetics was gradually adapted to the materialist preoccupations of theoretical biology, so too did metafiction’s reflexivity transition from a cybernetic aesthetics of “world as text” into an autopoietic aesthetics of “body as text.” To trace metafiction’s ontological development, the essay examines Jorge Luis Borges’ short story “The Circular Ruins” alongside synchronous cybernetic theories of cognition before then examining Salvador Plascencia’s The People of Paper alongside cybernetics-inspired theories of biological autopoiesis. While both authors ultimately demonstrate an intense interest in the world-building powers of cognition, I argue that Borges’ reflexivity subordinates the body’s agency to that of language and the mind, whereas Plascencia employs reflexivity to posit the interstitial enjambment of cognition, embodiment, and textuality.
Musical Instruments, Embodied Minds, and Sounding Human
ABSTRACT. Speculation about how the brain functions and how to design a machine that simulates it “usually reflects in any period the characteristics of machines then in use,” observed John McCarthy and Claude Shannon in a 1956 text foundational for AI. The genealogy of machines recognized by these cybernetic researchers and their followers culminated in brain-computer analogies. Notably absent from the resulting historical lineage are analogies to musical instruments, despite their use by philosophers, anatomists, and engineers since Descartes. The significance of analogies for scientific thinking has received substantial attention from literary scholars and historians of science, particularly where they concern the operations of the mind. This paper examines the use of musical instrument analogies by the electronic composer Daphne Oram. Steeped in cybernetic discourses of brain-computer analogy and emerging computer music practices that required rendering musical information numerically, Oram designed her own “sound wave instrument” for converting hand-drawings into electronic sound. Beyond her aesthetic argument for preserving the centrality of the hand to musical expression, Oram developed a theory of the human as a wavepattern that interacts with frequencies from the physical and spiritual worlds, and proposed that her instrument would model how the brain functions. Attending to musical instrument analogies destabilizes the givenness of the category “machine” and reveals a historical lineage of using the materiality of vibrations to theorize an embodied mind. The case of Oram demonstrates what the erasure of instrument analogies has done, and what their recovery can do, for conceptions of “the human” and its simulations.
Radio Play: Live Participatory Worldbuilding with GPT-3
ABSTRACT. With recent advancements in machine learning techniques, researchers have demonstrated remarkable achievements in text synthesis, understanding, and generation. This hands-on workshop introduces a state-of-the-art transformer model (GPT-3) through an interactive event culminating in a live radio show and internet transmission. Participants will gain experience with OpenAI's GPT-3 as members of an AI writers’ room. We will discuss issues of liveness and serendipity and possibilities for human/non-human co-authorship, and we will relate computational processes to human language and perception, asking ultimately how AI can be a tool for creativity and co-creation.
The half-day workshop begins with a brief introduction to large language models and generative text; it then proceeds through a series of AI writers’ rooms in which participants gain experience with prompt authorship and interaction with GPT-3. We will build towards human rehearsals of AI-generated text, culminating in a live radio show performed for an in-person audience (30-40) and streamed to a large online audience. This final performance will be porous, inviting audience participation and involvement in shaping the development of the human/non-human radio drama.
We are interested not just in what it means to co-author, or rather to collectivize or share authorship beyond the single human author, but in what the ingredients of liveness, audience participation, live sound scoring, and working with actors or performers bring to the context of AI (GPT-3). This project will particularly engage the SLSA audience in questions of authorship, writing with AI, and the variation induced into a form through the unknowable contributions of improvisatory humans and large language models. This is the second in a series of related workshops, and it will produce the second in our series of radio play episodes.
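For the curious, a minimal sketch of the kind of machine-co-author exchange the writers' rooms stage, using the completions API OpenAI offered in 2022; the prompt, engine choice, and parameters here are illustrative assumptions, not the workshop's actual materials.

```python
# Sketch: one turn of a human/GPT-3 writers'-room exchange.
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "RADIO PLAY, SCENE 1.\n"
    "ANNOUNCER: Tonight's episode is written live with our machine co-author.\n"
    "MACHINE CO-AUTHOR:"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available in 2022
    prompt=prompt,
    max_tokens=120,
    temperature=0.9,            # a high temperature favors serendipity
)
print(response.choices[0].text)
```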
ABSTRACT. This panel takes up the concept of “reading minds” in all the complexity of its many valences. Together we meditate on the vagaries and intensities of perception with particular attention to its extrasensory profiles, considered through such varied examples as psychic dreams, disembodied voices, and psychic photography. The panel shares an emphasis on the technical conceptualization of these extrasensory moments, not in order to explain how strange things work sensibly, but rather to leverage the ways that technics work out the irreducible strangenesses that come to be in and as perception.
Machines and texts as screens for the reading mind
ABSTRACT. Panel abstract
This set of panels proposes a cross-disciplinary investigation of what reading with technology might entail. We dwell on different types of reading experiences, from the way machines evaluate, read, learn or translate, to the way the human mind navigates or drifts in streams of data and text. In doing so we tackle a series of philosophical and literary questions pertaining to natural language processing, flux, introspection, understanding and the reading subject. The first panel can be seen as a quest to find the reading subject, while the second panel explores how machines and text become mediating elements in the emergence of the subject.
Panel 1: In search of the reading subject
Panel 2: Machines and texts as mediating elements in the emergence of the subject
Reading and Writing Intelligence Across the Organ(ism)ic/Artificial Divide
ABSTRACT. Artificial Intelligence research, despite all of its technological trappings (algorithms, software programs, computer hardware, human-machine interfaces etc.), engages terrain that has historically been the purview of the humanities and the creative arts, creative writing in particular. While language is the most obvious shared concern, both AI researchers and creative writers are profoundly interested in forming believable characters, crafting plausible narrative sequences, developing and recognizing patterns, and considering larger phenomenological questions of perception, consciousness, and meaning. Given these common concerns, AI has, unsurprisingly, often been framed as either a competitor with or tool for human-produced creative activities. In its early days, poetry was assumed to be the impermeable differentiator between AI and human authors—AI may do many things, it was believed, but it will never be able to write a poem.[1] As AI composition has become more advanced, this boundary has become much less stable, leaving human authors to grapple with the question of whether cultural work can be replaced by the labor of computers.[2] In response to these changes, some writers have even begun to instrumentalize AI, and there is a growing body of craft articles explaining how authors can use AI in their latest fiction projects.[3]
An alternative approach to exploring this relationship would be to consider how developments in AI might be goading or provoking creative writers and artists to reflect on their own practices. Adopting this approach, our roundtable brings together three critics and two practicing author/artist-critics to share and discuss various ways in which humans read and interpret artificial and not-so artificial intelligence. Exploring the ways that creative authors use both fiction and nonfiction to interrogate the past, present, and future of technology, we raise questions about the boundary between artifice and organism by positing that human language is, perhaps, no less artificial than machine language.
More specifically, each panelist will first share reflections on an author who addresses AI’s provocations in both creative and non-fiction genres—or in projects that blur those distinctions; the contributors will then engage in a moderated discussion about how creative writers engage with discourses of technology, automation, and AI as adjacent to their own imaginative practice, and about the potential vistas for fiction, poetry, and performance that the concerns of AI research make newly available or legible (and vice versa). The roundtable’s critical perspectives (Jaussen/Robertson/Love) focus on how authors Cormac McCarthy, Jeanette Winterson, and Mary Shelley not only theorize “artificial” technologies in their nonfiction work but also enact those theories in their novels. Its creative writers (Pearl/Smailbegović) bring the session’s concerns to bear on two additional forms: fictocriticism, a hybrid genre that enables authors to interrogate the ostensible difference between human and machine authored texts; and performance art that explores nonhuman perception and asks how nonhuman intelligences prompt us to refine our definitions of reality (see attachment for full abstracts and notes).
Empathy Development in the L2 Literature Classroom
ABSTRACT. A growing body of research indicates that prolonged exposure to fiction is correlated with readers holding more pro-social attitudes and displaying greater empathy. However, this research is often limited to surveys of a lifetime of exposure to literature, or to traditional novels and other long works of literary fiction. There is a dearth of research on these effects when the literature is in a second language or consists of shorter works. In this study, I analyze a simple writing activity from a single semester of engaging with Spanish-language literature in an undergraduate classroom in order to determine whether these alleged effects of exposure to literature are limited to long-term exposure, or whether a single semester can have any effect. I use the microrrelato—an extremely short work of narrative fiction—as a case study. I demonstrate that students display an increase in metacognitive vocabulary at the end of the semester, possibly indicating an increase in awareness of underlying mental states, which correlates positively with an increase in empathy in real-world situations. This sets the stage for further research to be conducted in Fall 2022 accounting for different variables, including language, undergraduate major, and interests.
On Banned Books and Measuring Readers’ Empathy in the Digital Era
ABSTRACT. In 2006, novelist and cognitive psychologist Keith Oatley and his colleagues published a study that connected fiction reading with better performance on empathy skills tests. Literary scholar and poet Suzanne Keen, in her 2007 monograph Empathy and the Novel, examined narrative techniques through the lenses of neuroscience, discourse processing, and psychology to argue that fiction is more likely than non-fiction to evoke empathy in readers. Then in 2013, the now-famous study published in the journal Science by psychologists Emanuele Castano and David Comer Kidd indicated that reading literary fiction improves Theory of Mind, a component of empathy. This study hasn’t been replicated in the past decade, but the question of whether reading fiction contributes to enhanced empathy skills continues to emerge periodically in the media and in scholarship.
In 2021, the American Library Association reported 729 “challenged books” in public libraries and schools across the United States, the largest number since the ALA started tracking controversial books twenty years ago. Even if the trend of increased book censorship can’t be proven as a cause for the measured decline in empathy during the same time period, this correlation calls for further consideration of the current reading landscape.
This paper reviews relevant research published since the Castano and Kidd study and considers its implications for the Digital Era, in which people are reading fewer books overall and in which particularly young people’s access to high-quality fiction is becoming more limited, while their access to misinformation on the internet is often unlimited.
Does virtual reality increase empathy in users? Results from a meta-analysis
ABSTRACT. Virtual Reality (VR) has been touted as an effective empathy intervention, with its most ardent supporters claiming it is “the ultimate empathy machine.” We aimed to determine whether VR deserves this reputation, using a random-effects meta-analysis of all known studies that examined the effect of virtual reality experiences on users’ empathy (k = 43 studies, with 5,644 participants). The results indicated that many different kinds of VR experiences can increase empathy; however, there are important boundary conditions to this effect. Subgroup analyses revealed that VR improved emotional empathy, but not cognitive empathy. In other words, VR can arouse compassionate feelings but does not appear to encourage users to imagine other people’s perspectives. Further subgroup analyses revealed that VR was no more effective at increasing empathy than less technologically advanced empathy interventions such as reading about others and imagining their experiences. Finally, more immersive and interactive VR experiences were no more effective at arousing empathy than less expensive VR experiences such as cardboard headsets. Our results converge with existing research suggesting that different mechanisms underlie cognitive versus emotional empathy. It appears that emotional empathy can be aroused automatically when witnessing evocative stimuli in VR, but cognitive empathy may require more effortful engagement, such as using one’s own imagination to construct others’ experiences. Our results have important practical implications for nonprofits, policymakers, and practitioners who are considering using VR for prosocial purposes. In addition, we recommend that VR designers develop experiences that challenge people to engage in empathic effort.
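For readers who want the mechanics behind the pooled estimate, a hedged sketch of a standard random-effects procedure (DerSimonian-Laird estimation) follows; the three effect sizes and variances below are invented for illustration, whereas the reported analysis pooled k = 43 studies.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study effect sizes.
import numpy as np

d = np.array([0.40, 0.15, 0.55])   # per-study effect sizes (hypothetical)
v = np.array([0.02, 0.03, 0.05])   # per-study sampling variances (hypothetical)

w = 1.0 / v                                    # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)        # between-study variance estimate

w_star = 1.0 / (v + tau2)                      # random-effects weights
d_re = np.sum(w_star * d) / np.sum(w_star)     # pooled effect
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled d = {d_re:.3f}, 95% CI [{d_re - 1.96*se:.3f}, {d_re + 1.96*se:.3f}]")
```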
Design for Healing and Recovery from Eating Disorders
ABSTRACT. This paper proposes a newly researched design approach to support recovery from Eating Disorders (EDs). The approach is grounded in a new materialist and relational onto-epistemological framework and is situated within the emerging field of Design for Mental Health. The paper will begin with a brief introduction to the history and genealogy of the models constructed to understand and treat mental disorders, highlighting the rationale behind approaches and treatments as well as historical exclusions and gaps (Foucault, 2003). This premise will create the space to rethink the definition of Eating Disorders, the concept of the body, and healing, so as to incorporate designerly approaches to Eating Disorders recovery (Neretti, 2020). The paper will provide a transdisciplinary review of the research around Eating Disorders and their relations, combined with design-based and art-based field research carried out through participatory and design-fiction approaches with the participants (human and non-human) that compose the network of an Eating Disorder. The talk will describe the characteristics and functioning of this design model. The model summarizes Eating Disorders' dynamics of disconnection and discomfort between the sense of selves, one’s emotions, and the body. It is developed in such a way as to be un-disciplinary, ever-changing, and participatory. The model should furthermore inspire creative diagnoses and pragmatic, evocative design propositions that allow healing to be approached in unconventional ways.
Interoceptive technologies: Media of de/sensitization
ABSTRACT. This talk explores 1) how internal processes of our bodies impact the way we perceive and act in the world, and 2) how haptic interfaces that mediate internal processes of our bodies (the response to temperature change, heartbeat, and so on) can potentially make us more sensitive to our entanglements with the world – or desensitize us. The focus lies on our interoceptive sense (the sense for internal bodily processes in relation to changing environmental conditions) and the role interoceptive technologies might play in affecting our appreciation of our surroundings. To this end, different haptic interfaces that intensify or mediate certain internal processes in relation to changing environmental conditions will be explored. In addition to the critical consideration of selected interfaces, new perspectives on anticipatory experience are formulated, which are made possible by the specific milieus created by the interfaces. Anticipatory experience will be understood here as analogous to Alfred N. Whitehead’s notion of prehension.
ABSTRACT. “DATA BANK TRANSMISSION COMPLETED
INTERSYSTEM LANGUAGE DEVELOPED
COLOSSUS DIALOGUE WITH GUARDIAN TO BEGIN NOW”
“Well, there it is... There's the common basis for communication. A new language. An inter-system language.”
“But a language only those machines can understand.”
Colossus: The Forbin Project (1970)
We often think of language as a uniquely human capacity and privilege. However, this is because of our emphasis on speech and writing as the defining forms of language. Computer code shows a way of breaking us out of this human-centric emphasis, because it is a language that has material impact and is determined by the material constraints of the machine. The material constraints of the machine determine the world-view of the machine’s language, but this language also in turn shapes the world that we live in. Language is essentially an expression of difference, from which we can create meaning. More specifically, language is a world-making activity. If we think of language as a technology, then language is what gives form to the world we live in, calling forth things and structuring our relation to these things. We are in a co-constitutive relation with language, names, and judgment. We can only understand the world as a space of things when we have language. Before that, it is just potentialities and flow. The word brings a relation of flows into space and time. The word is the structure for our experience. In this paper, I begin to imagine a world that takes shape when we listen to the language of the computer.
Digital Literacy is DOOM'd: Examining How Software "Modders" Understand Their Creative Practices
ABSTRACT. When considered together, two recent historical examinations of digital literacy, Vee (2017) and Black (2022), present a paradox: over the past 40 years, software design norms have simultaneously encouraged and discouraged users from learning about computational concepts. This presentation will examine how that conflict plays out in software modification, a creative media practice in which users transform the built-in functions of an application toward new purposes. In this context, “modders” both develop and share new computational practices while contending with the fact that core features of applications will remain inaccessible to them.
While there has been significant research on the economics and labor of software modification (Hong 2013; Joseph 2018), there has been little consideration of how this practice complicates our understanding of digital literacy. In this presentation, I will draw on theories from rhetorical code studies (Brock 2019) and writing studies (Selfe & Hawisher 2008) to document how software modders understand and articulate their digital literacy practices. My case study for this presentation will be the popular 1993 video game, DOOM.
After briefly introducing the conflict between Vee and Black, I will examine popular forums for DOOM mods, including an analysis of both popular mods and discussions of modding practices, particularly around the community's annual "best mod" competition. I will then argue that modders have developed a unique form of digital literacy that synthesizes the trajectories outlined by Vee and Black while also representing a break from scholarship on video games and literacy (Bogost 2008; Gee 2014).
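As background for readers unfamiliar with the artifact at the center of this practice, a hedged sketch follows: DOOM content ships in WAD ("Where's All the Data?") containers, and mods circulate as PWADs that overlay the base IWAD. The 12-byte header layout below is the documented format; the file path is a placeholder.

```python
# Sketch: read the 12-byte header of a DOOM WAD file.
import struct

with open("DOOM.WAD", "rb") as f:      # placeholder path
    header = f.read(12)

# Layout: 4-byte type tag, then two little-endian 32-bit integers.
wad_type, num_lumps, dir_offset = struct.unpack("<4sii", header)
print(wad_type.decode("ascii"))        # "IWAD" (base game) or "PWAD" (mod)
print(f"{num_lumps} lumps, directory at byte offset {dir_offset}")
```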
'The True Survey of the Mind': Reading Marvell Cognitively
ABSTRACT. In several of his keenest pastorals, Renaissance poet Andrew Marvell probed the nature of the mind. Essential to his depictions of thinking were the lyrical metaphors he produced—his “Roman-cast similitudes” as he described them. Traditional, mainstream critics have made invaluable contributions to our comprehension of his work, for instance how classical and contemporary authors influenced him. However, the twenty-first-century cognitive revolution offers the possibility for deeper, more fruitful explorations. The recent development of neurolinguistic approaches to the verbal arts, such as work on conceptual metaphors, may then allow for more insightful illumination of Marvell’s thoughtful, lapidary verses.
Specifically, my talk explores “The Mower’s Song,” in which Damon wittily laments his unrequited love for Juliana by comparing and contrasting his emotional and mental states with the “unthankful meadows” he cuts down with his scythe. The meadows were once “the true survey” of his “mind,” but now their “greenness” ironically mocks his blues, and so he “slays” the grass just like Juliana’s rejection slays him and his thoughts.
I also treat “The Garden,” focusing on its exploration of the speaker’s psyche as it ecstatically revels in the solitude of Nature, philosophically contemplating “a green thought in a green shade” (line 48). Such an investigation is a case of life imitating art: the academic satire/novel of ideas Thinks by David Lodge (2001) has already presented such an approach. The novel, set at a British university, concerns the relationship between an English professor who writes novels and a neuroscientist who works on A.I. (artificial intelligence) and neural networks; the pair often discuss connections between their respective fields. Towards the end, the English professor gives the keynote address at “Con-Con,” the (fictional) Consciousness Conference, where she talks about metaphor, Theory of Mind, phenomenology, and qualia in “The Garden.” In light of this imagined reading, I conclude with some thoughts about the relationship between fiction and “reality,” the implications for pedagogy and scholarship of such fertile cross-disciplinary grafting, and how literary cognitivism has ripened in recent decades, roughly four centuries since Marvell and the scientific revolution catalyzed our postlapsarian modern Information Age, for better or worse.
ABSTRACT. The study of emotions in Cervantes has been a surging theme in the past several decades (Jaén; Gretter; Wagschal). Emotions form a vital part of Don Quixote, masterfully wrought by Cervantes, who maneuvers through Quixote’s seeming alexithymia and unreason and Sancho’s practicality and emotional intelligence. In this project, I look into the six basic emotions in Cervantes's novel--happiness, sadness, anger, disgust, surprise, and fear--in the contexts in which they are presented. Based on distant reading using digital text analysis tools such as Voyant, NVivo, and R, I provide a quantitative and qualitative analysis of the Cervantine presentation of basic emotions in Don Quixote. I argue that the Spanish author experimented with emotion-driven decision-making in his novel and challenged the popular belief in the antagonism of reason and emotion, anticipating philosophers and cognitive scientists like Spinoza, Rousseau, Damasio, and Pinker.
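A toy version of the distant-reading step described above, for illustration only: tallying basic-emotion vocabulary across a text. The six-word Spanish lexicon and the sample sentence are hypothetical stand-ins for the project's full lexicons and corpus.

```python
# Sketch: counting basic-emotion terms in a Spanish text.
import re
from collections import Counter

lexicon = {
    "alegría": "happiness", "tristeza": "sadness", "ira": "anger",
    "asco": "disgust", "sorpresa": "surprise", "miedo": "fear",
}

text = "La tristeza de Don Quijote se volvió ira, y luego sorpresa."
tokens = re.findall(r"\w+", text.lower())
print(Counter(lexicon[t] for t in tokens if t in lexicon))
# -> Counter({'sadness': 1, 'anger': 1, 'surprise': 1})
```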
Panel: Reading Colors: Reflections from Poetics to Science
ABSTRACT. Moderator: Birgit Kaufmann
Panel abstract
The use of colors as a medium of communication has a long tradition. Coloring is a powerful means to illustrate and to express information, moods, shadings, intentionality, and extensionality. As such, it appears in science, art, and literature. The reading of colors is the ability and action of decoding this information, involving perception, logic, and psychology. Modern color theory goes back to Newton, Boutet, Goethe, and Chevreul. One aspect is the juxtaposition of the linear aspect of wavelength with the circular color wheel, and of additive versus subtractive colors. The relation between colors, their names, and their scientific and psychological aspects also enters into the debates of the time. More contemporary continuations of this topic involve Itten's color theory and the RGB cube.
Speaker 1: Ralph M. Kaufmann
Title: Colors as a paradigm.
There are several approaches to color, which exhibit a complex system of interrelations and compatibilities. The first basic dichotomies in the description of colors are continuous versus discrete and linear versus circular. This juxtaposes the physically correct linear and continuous scale of wavelength and superposition of light with a color wheel or circle that adds the possibility of opposition and intrinsic balancing. In this way, colors serve as an ideal model for complex phenomena. The possibility of choosing a basis of primary colors provides the means to model how basic features map out a full space of possibilities through superposition and mixing. The color space itself is multifold, and its coordinatization yields different notions of color dimensionality, which can serve as a model for higher dimensions. This vastness is what lends such immense and complex power to color as a paradigm. An example from physics is quantum chromodynamics, the theory of the strong interactions at work in nuclei.
Discretization deserves special attention, as this primary step fixes the system as a model. The most common choice takes the primary colors and their first mixed intermediates, yielding six representatives. Newton's seven-fold subdivision, linked to the Dorian musical scale, is a curious historical model connecting to classical thought. Color democracy increases with the number of subdivisions, as in Chevreul's division into 72 colors. Richter's “256 Colors” provides a thought-provoking artwork that naturally draws the observer into this discursive realm. In narration, colorfulness, which can be understood as a multitude of primaries, refers to the illustrative potential and complexity of the mix.
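As a minimal illustration of such discretization (in the additive RGB setting, one convention among several), a short Python sketch can divide the continuous color wheel into n equally spaced hues:

```python
import colorsys

def color_wheel(n):
    """Return n evenly spaced, fully saturated hues as 8-bit RGB triples."""
    return [
        tuple(round(c * 255) for c in colorsys.hsv_to_rgb(i / n, 1.0, 1.0))
        for i in range(n)
    ]

# Six representatives: the additive primaries and their first intermediates.
print(color_wheel(6))
# Chevreul-style fineness: 72 subdivisions.
print(len(color_wheel(72)))
```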
In our digital age, coloring schemes are now ubiquitous for encoding information. One reason is to achieve higher density and faster recognition, for instance by coding like pieces of information with the same color to provide syntax. Psychological aspects, as pioneered by Goethe, are often also brought to bear in this encoding to allow a more primal decoding. For instance, “red” is important or aggressive, while “green” is good or soothing. Going beyond the primaries, mixing is used on the one hand to illustrate intermediate positions and on the other to provide a plethora of individualized yet democratic possibilities. Indeed, the whole gamut of hues is utilized to furnish unique identifiers. This is not only recent but has a long tradition, as vexillology and heraldry teach us, and it is as relevant today as in the Middle Ages. This is in the same arena as the naming of colors. Not every wavelength in nanometers can be named, and RAL numbers take over even in the discretized form. The mere numerics are devoid of the evocations and associations of vermilion, crimson, cobalt, azure, or aquamarine, but they provide consistent, reproducible coordinatizing naming schemes. The color specified as #002FA7, (0, 47, 167), or (223°, 100%, 65%), which is the unmistakable Yves Klein blue, is an apposite example.
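To make these equivalent coordinatizations concrete, a minimal Python sketch converts the hex specification into the other two notations named above:

```python
import colorsys

# The hex code for Yves Klein blue, as given in the text.
hex_code = "#002FA7"

# Hex to 8-bit RGB triple.
r, g, b = (int(hex_code[i:i + 2], 16) for i in (1, 3, 5))
print((r, g, b))  # (0, 47, 167)

# RGB to HSB/HSV coordinates (hue in degrees, saturation and brightness in %).
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(f"({h * 360:.0f}°, {s:.0%}, {v:.0%})")  # (223°, 100%, 65%)
```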
Speaker 2: Beate I. Allert
Title: Horace, Goethe, and Klee: On the Move from Optics to Chromatics
Since antiquity, Horace's formula “ut pictura poesis” (as a picture, so a poem) has been much debated. A similarity between images and words, or even an indebtedness of poetic language to color, may exist, yet their differences may be even more significant. When Goethe wrote his Farbenlehre, he noted in his introduction that he was skeptical of theorists because he felt they tend to do away with phenomena, sacrificing specifics for the sake of a whole. He advocated the most precise and careful scientific observation rather than the application of preconceived ideas. He was critical of words, although he was personally a path-breaking poet and theorist of color. In response to Newton's approach, he proposed a famous color wheel to conceptualize the dynamic interactions among diverse colors and presented them as inclusive and complementary. In his later works, Goethe drew attention to the medium, which he called “Trübe” (turbidity, opacity), a concept that challenged all previous optics for the sake of what he then newly called chromatics. He investigated the source of color formation, which he termed “Chroagenesis,” and he amended parts of his earlier color theory by shifting attention to entoptic colors and colored shadows. He introduced the notion of the evolution of new colors and the importance of the environment, based on specific experiences in nature. I shall outline Goethe's important contributions to debates on color in literary studies. Recently, scholars of ekphrasis have also challenged the supposedly seamless ability to translate images into words, to describe pictures, or to name colors; these depend on readings, not only on modes of production. Paul Klee, in his Schöpferische Konfession, argued that reading images or colors differs from reading texts or charts. It remains to be explored what the consequences of these discoveries are, and whether, as Goethe believed, when we close our eyes we can see with our minds something that reflects not only what we have seen but also the environment.
Speaker 3: Oswald Egger
Title: From Aristotle and Leibniz to Jean Paul: Poetic Coloring
Aristotle did not accidentally avoid the treatment of “poikilia” (diversity) in his Poetics, and Horace also had good reasons for opening his Ars Poetica with the momentous example of a monstrum ridiculum, a monster and monstrosity: A poem must be an organism, a unity and a whole, so that nothing can be taken away from it, added to it, or rearranged within it without destroying its unity and wholeness. The opposite of unity seems to be “multiplicity”, just as the opposite of simplicity is multiplicity or colorfulness: this is what Goethe had in mind when he associated Jean Paul's poetics with a “comparison”: He “looks around in his world, in the most oriental way, cheerfully and boldly, creates the strangest references, connects the incompatible, but in such a way that a secret ethical thread loops along, leading the whole to a certain unity.” The word poikilos now conveys not only the general idea of variation. It also conveys the specific idea of a static or moving image: related to the Latin word pictura and thus to the famous formulation in Horace's Ars Poetica: ut pictura poesis: “Every character, be it as chameleonically and variegatedly painted together as one wishes, must show a basic color as the unity which animatingly links everything; a Leibnizian vinculum substantiale which holds the monads together with force. Around this bouncing point, the other spiritual forces attach themselves as limbs and nourishment.” (Jean Paul) - Word by word is by and by all in all a picture.
Speaker 4: Petronio Bendito
Title: Communicating with RGB colors: Nuances, Properties, and Language
Computer technology has introduced new tools for selecting and working with colors in expressive and functional applications. Expressive approaches involve art and design practices that convey a mood or feeling; functional aspects include color as a critical element of information visualization and usability. The ability to perceive and describe the elements of color is fundamental. The Munsell color system provides a perceptually balanced model for visualizing and reproducing color based on the triad of Hue, Value, and Chroma attributes. However, digital artists and designers use RGB colors to produce device-dependent color output on digital platforms.
In digital environments, colors are derived from the RGB color model and translated into different systems, such as the Hue, Saturation, and Brightness (HSB) system developed by Alvy Ray Smith. However, HSB colors are generated from Red, Green, and Blue light output and lack an even distribution of color property relationships. While systems such as Munsell and even Pantone ensure the accuracy of color reproduction, in the 21st century, given the ubiquitous adoption of devices that use the RGB color model to generate and display colors, it is vital to embrace color generated by additive mixing as a dynamic chromatic expression that mutates as it is translated from one device to another. The only way to control it is to produce and apply it contextually. By understanding RGB color limitations and potential, digital artists and designers make informed decisions to create color palettes in digital environments for digital dissemination. One aspect of learning to work with color in such a paradigm involves mapping the HSB color system in search of topological color patterns based on known color expressions such as vibrant, vivid, subdued, grayish, rich, deep, pale, etc., as proposed by Shigenobu Kobayashi. This understanding builds color literacy as one learns to negotiate the perceptual nuances and volatility of color expressions in digital environments.
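A minimal Python sketch of such a mapping from HSB coordinates to named color expressions follows; the thresholds are illustrative assumptions, not Kobayashi's published tone boundaries:

```python
import colorsys

def tone(r, g, b):
    """Classify an 8-bit RGB color into a named expression via HSB (toy rules)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s > 0.8 and v > 0.8:
        return "vivid"
    if s < 0.25 and v > 0.8:
        return "pale"
    if s < 0.25 and v < 0.4:
        return "grayish"
    if v < 0.4:
        return "deep"
    return "subdued"

print(tone(255, 20, 20))  # vivid
print(tone(0, 47, 167))   # subdued (under these toy thresholds)
```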
Recognizing Futures in the Times of Surveillant Technologies
ABSTRACT. AI-based technologies promise a happier, healthier, and safer tomorrow. By monitoring our vitals, body language, or medical histories, these technologies seek to recognize and care for mental and physical complications before they arise. In the leap they make between “recognizing” and “caring,” these technologies become indistinguishable from surveillance. Rather than helping us recognize our problems and giving us the tools to care for ourselves, they regurgitate and reinforce biased perceptions that keep us captured in rigid categories.
This panel brings together three papers that think through the consequences of unquestioned “recognition” of the human in AI-based technologies. Each paper takes on a different technology interested in recognizing patterns of individual behaviors or predilections: recidivism algorithms, healthcare information technologies, and facial recognition. Macy McDonald’s “Data Doubles and Proxy Troubles” reads Bamboo Health’s ORS, a system already widely in use, through Philip K. Dick’s “The Minority Report”: racist, ableist, and classist biases coded into a medical prescription tool meant to recognize and treat recidivism only perpetuate further discrimination. Combining approaches from philosophy of technology, media theory, and science fiction studies, “The Technics of Care in Urobuchi Gen’s Psycho-Pass” by Himali Thakur theorizes the missing grammar of care between users that such surveillant healthcare technologies fail to conceptualize. Avital Meshi’s “The Tension Between Identification and Misidentification,” an intervention from performance studies, explores off-the-shelf facial recognition algorithms that aim to categorize human emotions. Meshi highlights the corporeal adjustments we are forced to make to honor these surveillance systems and how we may resist their imperatives.
Our interdisciplinary panel combines approaches from critical code studies, performance studies, and science fiction studies to highlight the ontological, ethical, and practical dimensions of being individuals in mass surveillance systems. In doing so, we hope to draw a line between “care” and “surveillance”. Together, we ask: what logics constitute systems that aim to recognize our futures? And, more importantly, how can we reclaim these logics to articulate our individual struggles? In answer, we outline potential strategies for resisting surveillance and speculate on how individuals can use “care” technologies to interface with each other and care for themselves beyond the categorical capture of surveillant machines.
Narrating the Fragmenting Brain: Digital Neuronarratives and Alzheimer’s Disease
ABSTRACT. The practice of narrative medicine is increasingly common in the medical field. From it has emerged a genre called neuronarratives: narratives that focus on a neurological condition or disease and provide another way for the public to learn about the brain and mental health. These works also provide a means of self-expression and agency for patients, their family members, and carers. This paper focuses on digital neuronarratives of Alzheimer’s Disease (AD) through the lens of hypertext. This approach allows the fragmentary nature of the illness to be expressed alongside the main narrative and scientific information. Reading these AD neuronarratives this way complicates perceptions of selfhood as a linear process and instead presents how the brain, environment, and social groups come together to create a sense of self. Sutu’s “These Memories Won’t Last” is a digital comic about his memories of his grandfather that requires the reader to scroll in order to interact with the text. “Before I Forget” is a game told from the perspective of a person with AD; the player investigates the environment, and her history is revealed through mini-games within the narrative. Yue’s “Alzheimer’s: Memories” is also a game from the perspective of a patient with AD, relying on a series of mini-games for the player to piece the narrative together. These works represent a wide range of interactivity and show how interactivity and images can enhance the understanding of AD and selfhood.
Unnatural Natural Histories: Lyell, Freud, and Narratives of Empirical Witness
ABSTRACT. This essay argues that for both the geologist Charles Lyell and the psychoanalyst Sigmund Freud, the fictionalizing work of what Lyell calls restoring in imagination—that is, of imaginatively fording the gaps left by what is unavailable to human witness—is essential to the realization of scientific aims. In the case of Lyell, such acts of restoration enable him to transform discontinuous geological processes into evental narrative phenomena, imageable arcs of motion that more fully render the natural world for the reader. In the case of Freud (for whom, indeed, the mind is a geological artifact), fictionalization and narrativization lend dreams dimensionalities that they lack, imposing forms of solidity that are necessary if psychic phenomena are to be treated as viable objects for empirical study and dissemination. For both authors, I argue, these moves to transcend the limits of empirical witness are necessary because the phenomena they attempt to narrate—namely, geological time and the unconscious—are what Timothy Morton has termed hyperobjects: that is, they both temporally and spatially outscale the human and exist in a perpetual state of withdrawal, inappreciable in their entirety. Restoring in imagination and the fictive discourse it enables thus function in Lyell’s Principles of Geology and Freud’s The Interpretation of Dreams to enlarge access to the withdrawing entities of geological time and the unconscious, serving to exemplify how with the dawn of hyperobjects, fictionalization becomes integral to nonfiction in protecting reader and author from the potentially annihilating sense of proximity to humiliating entities.
ABSTRACT. Life is experienced more intensely when we are enmeshed in stories – I narrate, therefore I am. But it is not only our own lives that are heightened by narratives; through narratives we are also able to transform individual experience into shared experience. To achieve this, our brains and the ways in which we tell stories must be attuned to each other. But how exactly does this happen? And what makes a story a good story?
In this talk, I will present data from serial reproduction studies (also known as telephone games) to examine which aspects of narratives are dropped or transformed and which features remain at the core of narratives in transmission chains. The goal of the talk is to present a model of narrative thinking that combines three aspects of narrative processing: narrative emotions, multiversional processing, and thinking in small episodes. The talk will summarize findings from my new book (German edition, June 2022; English and other editions forthcoming).
Volitional unknowing: reanimating the queer potential of AI to resist an algorithmic determination of thoughtfulness on Bumble
ABSTRACT. By unknowing the determination of “man” proposed by the “standard interpretation” of the Turing test, this paper reveals a queer future for AI that is being thwarted by today’s dominant mode of AI-driven capitalism. This thwarted potential is exemplified by an algorithm Bumble claims can determine the thoughtfulness of its users, even while interviews suggest Bumble fosters engagement users feel is superficial and absentminded. Because this “thoughtfulness” algorithm portends a dystopian future where non-profitable thoughts are increasingly hard to fathom, I propose “volitional unknowing” could help resist this dystopian future. While AI-governed capitalism probes for queer, unknown correlations from past data to speculate about a future optimized for maximum profits, volitional unknowing cultivates an unknown ripe for fruitful speculation about a more just future, where it is still possible to think differently, non-standardly, queerly. On dating platforms, volitional unknowing allows historically embodied users to encounter unknown others worthy of entanglement.
Teaching Algorithmic Bias in an Ethics and Social Justice Course
ABSTRACT. The ubiquity and impact of computing technology in our time point to an urgent need for college students to consider ethical aspects of its design and use in both technical and humanistic courses. According to Select USA, the software and IT “industry draws on a highly educated and skilled U.S. workforce of nearly two million people, a number which has continued to grow during the past decade.”
Computing professionals must be able to understand the ethical and societal impacts of the technologies they create and deploy, and to recognize and analyze the complex, powerful, and sometimes negative ways computing influences society. Recognizing that many Georgia Tech students aspire to work in computing, PhD student Kera Allen and I collaborated on a teaching module aiming to enhance students’ sense of their ethical responsibilities in designing, managing, and using computing technology. We incorporated the six-week module into a semester-long course, Science, Technology, and Gender, to alert students to ethical dilemmas and to encourage them to implement ways of avoiding bias in the development and use of computing algorithms.
The course also included readings and assignments promoting student learning about historical, social, and ethical dimensions of science and technology, including computer technology, with particular emphasis on gender and race issues. Students examined the historical exclusion of women from computational fields and looked at how gender and race affect the design and use of computational and digital technology.
The module culminated in assignments that raised awareness of ethical social justice issues such as the disparate impacts of algorithms, the digital divide, and social media disinformation. While the readings and the collaborative digital project in the new module highlighted contemporary ethical issues, students began the fall 2021 term by learning about historical cases and issues concerning social inequality in medicine, science, and technology. Considering women’s contributions to and inclusion in computer history illustrates that women were integral to the development of computing technology; such readings and discussions developed students’ understanding of how women’s roles are influenced by cultural values and social contexts, as well as their reflections on how inequities raise ethical concerns.
The final assignment of the course called for students to work in teams on a collaborative paper analyzing a film about ethics, science, and technology, such as The Net (Dir. Irwin Winkler, 1995) or Kimi (Dir. Steven Soderbergh, 2022). In future iterations of the class, students will also discuss The Circle by Dave Eggers. These texts depict the dangers of intrusive AI producing extensive surveillance, diminished privacy, and psychological harm, offering examples of real and hypothetical technologies.
Reading the readers: “dark sousveillance” and algorithmic literacies on social media
ABSTRACT. As social media users become increasingly aware of and literate about the data analytics that permeate online activity, many engage in labor-intensive negotiations of their social media experiences and identities in active dialogue with platform algorithms (e.g., the Facebook or Instagram feed). A core feature of these interactions is users’ cognizance of how algorithms “read” and then respond to their actions and affiliations. As algorithms iteratively respond to users’ online habits, those users in turn learn to read their platform feeds as expressions or oppressions of identity negotiations. Such negotiations are key to the way many individuals find representation, community, and empowering experiences on platforms that could otherwise feel “toxic” and that often perpetuate racism, ableism, sexism, sizeism, heterosexism, etc. in their basic architecture.
This talk makes a case for approaching what Simone Browne terms “dark sousveillance” through the lens of algorithmic imaginaries and algorithmic literacies in our current social media environment. How do individuals with marginalized identities generate new possibilities that “facilitate survival and escape” (Dark Matters, p. 21) by “reading” the surveillance practices of platform algorithms and reflectively “retraining” those algorithms? Seen in this light, we suggest “dark sousveillance” is a tactic through which individuals not only evade the surveillant gaze, but also make themselves readable on their own terms to algorithms within data infrastructures where their bodies are usually, in the words of Ruha Benjamin, “watched (but not seen)” (Race After Technology, p. 47).
Superreaders: Recognizing The Literary Value Of Language Models
ABSTRACT. The last five years have seen a deep learning revolution take the field of natural language processing by storm. Large language models capable of demonstrating complex emergent phenomena previously thought computationally impossible have arisen, spurring innovation across the sciences. Because these models can infer semantic meaning from natural language, researchers have used Transformer-based models like GPT-3 and BERT to develop new and innovative techniques in domains like translation and question answering, tasks necessarily dependent on models exhibiting some level of cognitive ability. Digital humanities methodologies have not taken notice of this development: formal digital humanities methods have changed little since the mid-2000s, with digital humanists continuing primarily to conduct close readings of machine-extracted patterns. David Herman described this mixed-form methodology in 2005, and in 2012 N. Katherine Hayles traced a similar approach in her monograph How We Think. Although their methodologies are fitting for a field straddling the intersection of computation and art, advances in natural language processing beckon from beyond the frontier. We argue that a closer inspection of the postcognitive theories supporting Herman’s and Hayles’s methodologies reveals a common thread through which the digital humanities may admit neural language models: their efficient modeling of the discursive sociolinguistic environments from which postcognitive scholars posit human cognition itself manifests. Their ability to comprehensively model culture at scale affords these models a hermeneutical mind. The opportunities offered by machine reading via language models present one step forward for the integration of new and advanced machine learning techniques into the digital humanities.
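As a concrete, minimal instance of such machine reading (the model choice and prompt here are illustrative assumptions), a masked language model can be queried in a few lines of Python:

```python
from transformers import pipeline

# BERT's masked-language head "reads" the discursive context and ranks
# plausible tokens for the gap.
reader = pipeline("fill-mask", model="bert-base-uncased")
for guess in reader("Literature helps us understand the [MASK] mind.")[:3]:
    print(guess["token_str"], round(guess["score"], 3))
```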
Utopian Vectors: Word Embeddings and Semantic Change in Speculative Fiction
ABSTRACT. This paper discusses the results of a machine-learning technique called word embedding, performed on a large corpus of speculative fiction. Word embeddings represent each word in a text corpus with a high-dimensional vector; these vectors are constructed such that the angle between them describes the semantic relationship between the corresponding words. The model is trained by attempting to predict word co-occurrences and shared contexts, observing language patterns across large corpora. The last decade has seen an explosion of Artificial Intelligence (AI) and Machine Learning (ML) research, applications, and deployment, ranging from natural language analysis to protein structure prediction and drug discovery. However, when AI is trained on mainstream human language and semantics, it learns social biases, and its use in large-scale systems amplifies these biases. How might we artificially imagine worlds that do not perpetuate our current cycles of exploitation and inequality? Where might artificial intelligence fit into these worlds?
This paper discusses the assembly, scrubbing, and analysis of a new digital corpus of utopian speculative fiction, based on Lyman Tower Sargent’s bibliography, “Utopian Literature in English.” Word embeddings excelled at tracking semantic change, noting the associational shifts in meaning around words like “planet,” “future,” and “virtual” over time. We discuss the differences between the word embeddings created with our corpus and those created with other contemporary English language datasets. By creating a biased corpus, one that draws only from utopian speculative fiction, we speculate on how AI and ML techniques might be used in the future.
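A minimal Python sketch of the embedding method described above, where `corpus` stands in for the tokenized utopian-fiction corpus:

```python
from gensim.models import Word2Vec

# Placeholder sentences; in practice, each item would be a tokenized
# sentence from the assembled corpus.
corpus = [
    ["the", "planet", "holds", "a", "virtual", "future"],
    ["a", "future", "planet", "beyond", "scarcity"],
]
model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=100)

# Cosine similarity between vectors encodes semantic relatedness;
# comparing models trained on time-sliced corpora tracks semantic change.
print(model.wv.similarity("planet", "future"))
print(model.wv.most_similar("planet", topn=3))
```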
Human Processing of Machine Translation: On Warren Weaver’s Implicit Reader of Mechanical Translations
ABSTRACT. In his 1949 memorandum on machine translation, Warren Weaver famously outlined the idea that computers could be used to translate. While Weaver himself focuses on features of language that can or cannot be mechanized, in this paper I argue that his early ideas on machine translation cannot be properly understood without reconstructing the implicit reader of machine translations: a “technical” expert who can exercise “trained judgment” (Daston/Galison), someone who is aware of the limitations of the system that produced the translation and can rely on their knowledge to estimate how likely it is that the translation is correct. Drawing on Lorraine Daston’s and Peter Galison’s research on scientific reasoning on the one hand, and Schleiermacher’s and Frege’s ideas of translation on the other, I show that, far from being a product of linguistic naïveté, the idea of machine translation stems from a notion of reading that is operative rather than hermeneutic, a reading that generates meaning through interpolation rather than interpretation and requires a highly active expert reader who is not intimidated by textual authority. Finally, I argue that the failures of contemporary neural machine translation, especially in high-stakes scenarios like health care and law enforcement, stem from the obfuscation of the fact that while machine translation seemingly reduces the task of the translator to a click on the “translate” button, it turns the act of reading in machine translation into a continuous evaluation of the reliability of the machine-generated text.
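To make the contemporary end of this story concrete, a minimal sketch (with an illustrative, publicly available model) of the one-click translation the paper interrogates:

```python
from transformers import pipeline

# A neural MT model produces fluent output with no reliability estimate
# attached; judging whether it can be trusted remains the task of the
# expert reader reconstructed above.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
out = translate("Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt.")
print(out[0]["translation_text"])
```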
Are You R(obotic)? Can Visiting an AI Mind Tell Us Anything About Our Own?
ABSTRACT. A recent essay (1) suggests that literary exploration of how machines think can offer insights into the workings of the human brain. On the other hand, AI researcher Rodney Brooks, echoing Thomas Nagel’s well-known paper “What Is It Like to Be a Bat?”, proposes that in fact we cannot even imagine what it would be like to be a conscious robot (2). Nonetheless, many authors have tried to do so, some going so far as to feature first-“person” robotic narrators. We will explore this issue in a roundtable discussion of literature extending over the last century. Works to be addressed may include Čapek’s R.U.R. (1921), Asimov’s I, Robot (1940-50), Dick’s Do Androids Dream of Electric Sheep?/Blade Runner (1968), Gibson’s Neuromancer (1984), Leckie’s Ancillary Justice (2013), Wells’s The Murderbot Diaries (2017-18), Newitz’s Autonomous (2017), Martine’s Teixcalaan books (2019/2021), Ishiguro’s Klara and the Sun (2021), and others.
(1) Vint, Sheryl. 2021. “The Science Fiction of Technological Modernity: Images of Science in Recent Science Fiction.” In Farzin, Sina; Gaines, Susan M., and Haynes, Roslynn (Eds.), Under the Literary Microscope: Science and Society in the Contemporary Novel. University Park (PA): The Pennsylvania State University Press.
(2) Brooks, Rodney. 2017. “What Is It Like to Be a Robot?” rodneybrooks.com/what-is-it-like-to-be-a-robot/