RESAW19: THE WEB THAT WAS: ARCHIVES, TRACES, REFLECTIONS
PROGRAM FOR THURSDAY, JUNE 20TH

09:30-10:45 Session 11A: National Networks
Location: OMHP C2.17 (48)
09:30
The parable of IPerBOLE, the first Italian civic network (1993-1997)

ABSTRACT. Relying on primary and secondary sources such as private and institutional documents, newspaper and specialized magazine articles, and in-depth interviews with key actors, this paper investigates the history of the first Italian experiment in public internet use during the early 1990s: the civic network Iperbole, an acronym which stands for 'Internet "For" Bologna and Emilia Romagna'. Launched in 1993, Iperbole was the idea of the Bolognese council member Stefano Bonaga and the philosopher of language Maurizio Matteuzzi, who created, with the support of the mayor Walter Vitali, the second civic network in Europe and the first in the world to provide free Internet access to citizens. Thanks to intense cooperation between public and academic institutions (e.g., the main access node of the network was provided by the academic supercomputing center CINECA of Bologna), Iperbole achieved outstanding success in its early stage and received the attention of international organizations and institutions such as the European Union and the G7 meeting of 1995 (McDonough, 1995). By 1994, thanks to a free platform accessible through a personal username and password, every citizen could access the Internet from a set of computers placed in nearby public buildings. Moreover, thanks to a user-friendly interactive interface, Bolognese citizens could participate in and debate on specific forums, or directly interact with municipal offices to ask for information about public services, traffic, municipal laws, etc. By 1996 more than 15% of the citizens of Bologna had personal Internet access and an email account, a very high percentage compared, for instance, with Internet access in other G7 countries at the time. According to its originator Stefano Bonaga, more than a simple public service, Iperbole was part of a wider political project aimed at using networking technologies such as the Internet and the Web for the active political participation of citizens, in line with the political tradition of one of the most active Italian cities in terms of political participation and bottom-up movements. Notably, through the Iperbole network, the Bolognese administration acted as an intermediary, we could say a 'medium', to realize the longstanding dream of electronic democracy. Born at the very beginning of the Web's spread, the civic network enhanced social and political agency, contributing to the imaginary construction of a networked citizenship that would re-emerge in socio-technical and political discourses 20 years later, especially with the emergence of the '5 Star Movement' led by the comedian Beppe Grillo. The core idea of Iperbole was to use the Internet as an instrument to realize a political transition from representative democracy to direct democracy; from delegation to first-person action; from verticality to horizontality; from entertainment media to participative networks. Although the Internet, and a few months later the Web, were the chosen instruments for the actual realization of this process, these technologies were not seen as primary causes of social change; rather, they were seen as pragmatic tools for the realisation of a political and cultural process that had long been at the centre of political debates.
Indeed, throughout history and starting several decades before the birth of IPerBOLE, the leftist movements of Bologna, which is also called 'La Rossa - The Red One', made use of and incorporated mass and telecommunications media such as radio, press and telephony into political activism, as in the case of the pirate radio Radio Alice which, by combining radio and telephony, aimed at realizing the distributed model of communication particular to both the Internet and Web imaginaries. Overall, the story of the initial success, and eventual political failure, of the IPerBOLE network emphasizes the historical continuity and the intertwining of the Web imaginary with a wider 'network imaginary' that characterizes not only a series of technological experimentations and visions, but also some key political projects at the local and national level in the Italian context.

09:50
"The Web as my Friend":Social history of BBS and BBS culture in Chinese netizens' memory
PRESENTER: Shiwen Wu

ABSTRACT. Although a great deal of Internet research is being carried out in global academic circles, the attention paid to the Internet’s history is still far from sufficient. Niels Brügger argued that internet history needs to attract the attention of network researchers, and that for future historians who want to understand the current era, studying internet history is a must (Brügger 2009, 2012). Current research on internet histories is concentrated on the United States and Europe, ignoring the rest of the world. What do we know about Chinese internet history and how can we know it? This study focuses on Chinese BBS and its history as recorded in netizens' memory. Although the BBS era is long gone, the use of BBS was an important network experience of early Chinese netizens, one deeply imprinted in their memory. Netizens' memory is one of the feasible ways to study websites in the past and the past of websites. These memories are based on the users' experience with a top-down perspective. Based on 213 netizens' memory articles retrieved through the Chinese search engines Baidu and Sogou, this paper discusses the history of BBS in netizens' memory, BBS culture and the BBS era through the lens of media memory theory. The study found that netizens wrote the development history of BBS in China from the perspective of use, that is, a social history of Chinese BBS, which differs from other dimensions of BBS history (such as the technological and commercial approaches). Using J. Derrida's theory of "mourning politics", it can be seen that netizens regarded BBS as a "friend" in their memory. As an important "spiritual friend" of netizens, BBS is remembered for its open, free and inclusive Internet culture, and for its space for discussing and criticizing public affairs. It has an inherent spirit and a far-reaching influence on the Chinese Internet. Through the memory of this "spiritual friend", netizens recalled the Internet era of BBS through the dimension of nostalgia, and expressed their expectations and criticisms of the current Internet. BBS is of great significance to the development of the Chinese internet. The study can help us understand the social history of Chinese BBS and its social effects.

10:10
Internetworking Europe: Euronet and the rise and demise of a Europe-Wide-Web (1971-1993)

ABSTRACT. Internet history is still shaped predominantly by narratives tracing the origins of the Internet to the 1970s development of ARPANET and TCP/IP within US-funded research initiatives (Abbate, 2003; Hafner & Lyon, 1996). This paper sets itself against this search for the Internet’s origins, instead exploring the internetworking disparity that marks the historical beginning of the Internet’s development. Not just the Internet, but many and often competing internetwork initiatives emerged out of the convergence of communications and computing between the 1970s and 1980s in Europe, Asia, and North and South America (Haigh, Russell, & Dutton, 2015). This paper will start by mapping the diverse landscape of European networking developments, drawing upon news articles from trade magazines aimed at computer industry professionals, such as the Dutch Automation Guide (1968).

Reporting in these magazines conveys an indeterminate vision of a networked future in which it is by no means a foregone conclusion that the Internet in its current form—with TCP/IP as the standard networking protocol—will dominate world-wide computer networking. Quite the opposite: TCP/IP and the Internet are conspicuous by their absence in 1980s reporting on developments in computer networking in European trade journals. While the packet-switched Euronet is discussed extensively ("Euronet / Diane put into operation," 1980; "Euronet starts in 1979," 1978), it is striking that the first feature article reporting on something called TCP/IP appears only in April 1990 (Stiller, 1990), and equally remarkable that the first article devoted to the Internet is published only in June 1993 (Vanheste, 1993). Why did the European trade press ignore Internet technology for so long, even though it had already experienced enormous growth in the US in the 1980s?

The paper will proceed by answering this question, employing a form of critical historiography to reveal unacknowledged power relationships in opposing approaches to internetworking in the US and Europe. In her influential book Inventing the Internet (Abbate, 2003) Janet Abbate uncovered such power relationships showing how American TCP/IP and European X.25 protocols following the Open Systems Interconnection (OSI) model became symbols of opposing approaches to internetworking within a fierce battle over setting a global internet standard. Tracing the history of the standards debate to the 1980s, Abbate highlights how various rival groups—Europeans and Americans, computer manufacturers and PTT carriers—shaped computer networking and the economic, political, and cultural issues underlying the standards they promoted. But Abbate's research focuses on the invention of the Internet, interpreting the standards debate, and European networking efforts, in the light of TCP/IP’s eventual domination. As such it fails to capture the specific interests of Europe’s standardization politics and the nature of the belief in the possibility of an internet economically, politically and culturally grounded in Europe.

To investigate the unique economic, political and cultural underpinnings of Europe’s internetworking developments, this paper studies the European Commission’s (EC) first initiatives towards developing a public European internet, with the Commission’s archives as principal sources (e.g. EC, 1974). In 1971 the EC instituted the Committee for Information and Documentation in Science and Technology (CIDST) to work out a plan to set up a communications network in Europe to be known as Euronet (Giles & Gray, 1975; Voigt, 1976). Euronet’s development was shaped by European politics. So, too, was the intention of Professor George Anderla—committee member and prominent advocate of a European information policy—to initiate European internet standardization as a counter to the dominance of American developments (Anderla, 1977). Euronet became operational in 1979 employing the X.25 packet-switching protocol, interconnecting datanets set up by the PTTs of the nine member countries of the EC (such as the Dutch Datanet-1 and the French Transpac). That effort proved short-lived, as the EC-supported X.25 protocol was swept aside in the 1990s by the American TCP/IP standard for networked communication, and applications based on it, such as the World-Wide-Web, forcing the EC to support TCP/IP for its networks as well (Olsthoorn, 1993).

Narrating this missing net history of the ‘failed’ attempt to internetwork Europe, this paper contributes to a growing body of critical scholarship in the emerging field of Internet Histories emphasizing that the invention of the Internet is a much more diverse and pluralistic story than typically portrayed in the dominant US-centered teleological narratives (Campbell-Kelly & Garcia-Swartz, 2013; Haigh et al., 2015). The 1980s, as Driscoll and Paloque-Berges describe, ‘reflects the messiness of inter-networking’ (Driscoll & Paloque-Berges, 2017, p. 51), and it is this paper’s objective to clear some of this mess by writing the history of the development of a Europe-Wide-Web.

09:30-10:45 Session 11B: Shock, Meme, Desire
Location: OMHP C1.17 (48)
09:30
Discordant principles: memes, memetics, and epistemological disruption

ABSTRACT. This paper will pay attention to the ways in which memes – internet or otherwise – can function as agents of discord.

Subject to semantic drift since its coinage in 1976, the word ‘meme’ itself is at risk of misinterpretation. As such, this paper will begin by defining a meme as a communicative artefact that is encountered by two or more ‘readers’. Subtending this, a meme qua artefact is produced by processes of proliferation wherein the meme is variously replicated and remixed, forming discrete meanings between interpretive communities of varying scales. Consequently, this paper distinguishes between memetics – the cultural practice – and meme instances – the artefacts produced.

In this context, this paper will examine how a little-read text authored by a pair of American teenagers in 1963 came to have significant effects on nascent computer cultures, serving as a blueprint for discordant internet memetic practices, before considering what the epistemological consequences of discordant messaging in the contemporary networked environment may be.

***

Unsurprisingly, internet memes have been scrutinised as agents of both communicative consensus and of discord. This is in large part because internet memes function as sociolects; they cultivate accords across and between communities who share understandings of the cultural knowledge coded within the artefact’s form and/or content. Inversely, it is this social function that proliferates discord, since those not privy to the cultural knowledge necessary for interpretation will find that the meaning encoded within them eludes interpretation. This dialectic is the defining quality of an internet meme, and has been duly exploited by good- and bad-faith actors alike.

To better understand this dynamic, I will offer a historical reading of memes, meaning, and discord. This will be developed in reference to the Principia Discordia, a text written by the then-teenage Kerry Thornley and Greg Hill in 1963 which describes the tenets of their invented parody religion ‘Discordianism’. Per the Principia, Discordianism holds that no forms of order (including apparent disorder, which speaks to a potential for order to emerge) exist. The text also describes a range of practices designed to perpetrate social and communicative discord, with the intention of exposing and aligning others to the thinking espoused in the document.

Possessing a cut-and-paste aesthetic, the first edition of the Principia was self-published by Hill on a mimeograph in New Orleans, USA. It subsequently proliferated within certain countercultural communities in the American south and west, via reprographic technologies, post, and word of mouth. (Notably, the Principia was the first explicit example of KopyLeft, encouraging others to replicate and remix its form and contents.) In line with its limited means of circulation and esoteric subject matter, the Principia remained relatively unknown for just over a decade. However, it impinged more widely on public consciousness in 1975, when Discordianism was reversioned as the principal plotline in ‘The Illuminatus! Trilogy’ – the seminal and internationally successful series of conspiracy fiction novels written by Robert Anton Wilson and Bob Shea: a move which both showcased and consolidated Discordianism’s ludic philosophical position.

Subsequently, Illuminatus! was adapted by Ken Campbell into a 9-hour stage play performed at the Science Fiction Theatre of Liverpool, UK and the National Theatre in London in 1976, the sets for which were built and designed by Bill Drummond. Drummond went on to form the art and music collaboration the KLF (also known as the JAMs, or the Justified Ancients of Mu Mu, a name taken from Illuminatus!). Simultaneously, Illuminatus! was a significant point of influence for nascent hacker communities and early adopters of internet culture, inspiring Karl Koch (whose pseudonym Hagbard Celine was lifted from Illuminatus!) and groups such as the Chaos Computer Club alike. A network of nerds and geeks extolled the Discordian virtue of prankish, reality-challenging disruption, and encoded practices of remix and riff into the social and technological structures they populated. Discordianism has therefore travelled from the mimeograph to the virtual, percolating through music, art and theatre in the process.

Unsurprisingly, then, Discordian traces are detectable in contemporary (internet) memetic practices, and are perhaps most successfully leveraged by trolls and shitposters implicated in the constellation of epistemological crises variously and imprecisely labelled as ‘post-truth’, ‘fake news’ and ‘irony poisoning’. In cognisance of novel factors – such as recommendation engines and visibility-boosting bots – that contour the information environment, this paper will conclude by referencing Jodi Dean’s notion of ‘communicative capitalism’, proposing that if the current rubric derives value from mere exchange, the proliferation of disruptive memetic practices lends itself to reifying rather than opposing the dominant ideology.

09:50
Operationalizing (un)desirability: What Images Want?

ABSTRACT. Images are performed through spatial and temporal arrangements by search engines. Organised through grids, they give form to infrastructural operations that respond to every search query, in which the search is managed by sets of operations that include ranking algorithms, crawling operations, database indexes, geographical locations, browser histories, real-time processing and many others. As argued by Hoelzl and Marie (2015), an image is embedded in (and embeds) a range of algorithmic operations and networked processes that constitute the temporal circulation and the presentation of an image. Their concept of the 'operative image' alludes to an unstable algorithmic configuration, in which the status of an image, and the experience of the world as mediated through the image, are constantly negotiated. Within the context of image search, the grid of images is the result of data management systems that influence how subjects are presented, and how selected images are operationalized and prioritized. The concept of the operative image challenges us to consider how images operate to attract our attention through the algorithmic gaze across time and space. Or, to put it differently, we ask: what do images want?

Paraphrasing the question that has been asked before – “what do pictures want?” (Mitchell 2006) – we return to desire and reflect on it as operationalized in the image. This manoeuvre should not be equated with some form of psychoanalysis performed on the picture in order to read what the artist wants with it, or to interpret an audience's response. Rather it is, as W. J. T. Mitchell argues, about recognising the image as a subaltern other invited to speak. Starting with the question of what the operative image desires, rather than what it does, shifts its position from object to subject. The image as a subject appears deeply entangled with a number of forces that include different scales of censorship, computational infrastructures and user interactions across geopolitical sites. As a result, the image as a subject is ephemeral and constantly shifting its visible status. Our aim in this presentation is to speculate on the (un)desirable forces that operate the image. But what is the image that we wish to speculate with?

The image is of a white web page. At times empty, otherwise with one thumbnail picture in it. One browser tab open. A Google Image Search page. The Google logo in the upper left corner of the page. “六四” as the keyword in the search bar. The usual grid that organises resulting images on the page is erased. The only visible content is one image that moves across the page with each screenshot. Each screenshot frame was recorded between January and December 2017 and replayed as a moving image. The screenshot would normally display a grid of images called up by the search query “六四”. These are the Chinese characters for the numerals 6 and 4, and the query refers to the student-led Tiananmen Square protest of 1989, around the date of June Fourth, in Beijing. All of the images are erased in the screenshots but one. The artwork Unerasable Images (2018) by Winnie Soon is organised in tabular, regular rows that contain mostly empty spaces, except for a Lego reconstruction of the Tank Man facing a cordon of tanks. This peculiar version of the image was uploaded by a Chinese citizen in 2013, but it was quickly censored through political and algorithmic operations in China. Four years later, this image was still searchable through the Google search engine, and it occasionally appeared on the first page of the search results on some days in 2017. Unerasable Images is presented in the form of a moving image through the material of screenshots: a poetic reconstruction of a new subject that examines and queers this complex configuration of what we refer to as (un)desirability.

This presentation will address the question of what images want by analyzing the temporal and spatial configuration of (un)desirable operations in the work Unerasable Images. By introducing the notion of (un)desirability (Soon, 2018) we consider the dynamic architectural compositions and networked databases that manifest in the seemingly stable structure of a spatial grid, while recognising that a networked image rarely settles into any defined and binary status of (un)desirability across platforms, users, geographical locations, databases and algorithmic processes. The image, then, continuously constructs subjects while simultaneously being constructed.

10:10
The Legacy of the Shock Site: Abject Images After Rotten.com

ABSTRACT. In the 1990s and early 2000s, Rotten.com was considered one of the main harbingers of shocking online content. From the aftermath of fatal accidents to depictions of extreme sexual obscenities, the visual offerings of Rotten.com did not shy away from the human body’s grotesque potential. Rotten.com is often considered to be the web’s first and foremost “shock site”: a website that “juxtaposes the titillating with the abject” for the sake of “repulsion and amusement” (Paasonen, 2011, p. 229). But despite its notorious reputation, Rotten.com is now mostly known as a relic of the pre-social media era. To many users, Rotten.com represents a version of the web in which extreme images were readily encountered through forums, search engines, and the sheer act of “surfing.” Depictions of the debased, the degraded, and the dead were not yet filtered by content moderators, at least not to the extent currently practiced by platforms like Facebook (a form of labor that, in turn, has recently been criticized as emotionally violent to its employees). The visual cesspool of Rotten.com, in short, represents an era of online culture in which abjection was still part and parcel of everyday virtual encounters.

In the social media age, however, shocking images are far less often encountered by way of ludic activity. Rather, users are confronted with extreme images when there is a clear political goal in mind, for example, the video of the killing of Iranian activist Neda Agha-Soltan (2011) that was uploaded onto YouTube as a way of raising global awareness of the political situation in Tehran that year. Such ethical and political intentions are a world apart from the original purpose of shock sites which, Steven Jones has argued (2010), “is to amuse, or to offend” (129). What does this altered approach to the abject image mean for the legacy of the shock site, as a cultural object of the pre-social media age?

The proposed paper explores the stake of abjection in the contemporary online imagination, and what this new understanding of the extreme image (as a political instigator rather than as mere shock fodder) means for the way that shock sites like Rotten.com, as former venues of playful encounter, are remembered in digital media history. Through a brief analysis of two case studies, Tub Girl (2001) and the aforementioned Neda Agha-Soltan (2011), I consider the chronology of the abject image in online content circulation, and how the abject image has shifted from being a source of spectacle for amusement purposes to becoming a tool for increasing moral engagement.

09:30-10:45 Session 11C: Material Semiotics
09:30
Windows, mirrors and prisms: Tracing interactive design from usability to UX

ABSTRACT. This paper explores the emergence of ‘User Experience’ (UX) as a form of discourse. It positions UX within the rising paradigms of post-desktop computing, ludic interfaces and the advent of intangible digital labour facilitated through social media platforms. UX is shown to be a way of seeing the work computation does in the world, moving beyond the modes previously associated with computers as an *addition to* and instead positioning itself as *the basis of*. In particular, interaction designers’ traditional ‘transparency myth’ (Bolter and Gromala 2005) is reversed: no longer should the interface recede into the background to allow for flawless execution of computing tasks. Instead, the interface must enhance life and make itself present as an active participant in the way users’ experiences are augmented by computation. The paper does so in two rhetorical moves. First, I present a historiography of the term and its associations. Beginning from the early introduction of graphical user interfaces, I examine the emergence of interactive design as a notion (Johnson et al. 1993) and its subsequent popularization by Don Norman (1988, 2002). Then, I trace the shifting focus from usability to experience, looking primarily into the then-rising business paradigm of the experience economy (Pine and Gilmore 1998, 2011) and contemporary HCI literature (Forlizzi and Ford 2000; Forlizzi and Battarbee 2004; Law et al. 2007; Hassenzahl 2008; Law et al. 2009; Law, van Schaik, and Roto 2014). I show how, in moving beyond the notion of usability attached to the late 1990s and early 2000s, web interfaces have become embedded in the production of experiential ‘interface envelopes’ (Ash 2015; Ash et al. 2018) which serve to modulate users’ spatio-temporal perceptions in order to generate economic value. In the second part, to substantiate UX as discourse, I perform an empirical study of it, and of its related discursive network, as they appear via YouTube videos. Using a combination of digital methods (Rogers 2013; Marres 2017) and close reading of selected texts, I explore YouTube, the second most popular search engine and arguably the modern web’s central polemical space. By looking at the way YouTube’s recommendation algorithms prioritise the platform’s native content and idiosyncratic culture (Rieder, Matamoros-Fernández, and Coromina 2018), UX can be related to other YouTube cultures, or ‘spheres’. What emerges from this analysis is the UX-subject, who is predominantly western, male, solutionist, leans strongly on the side of science and technology in the culture wars, and is precariously employed. The paper concludes by considering what the change from usability to UX means for computer work and offers potential interventions for media scholars and web historians to remedy the situation.

09:50
Into the Unknown – the age of the uncharted web, the sitemap and the advent of search. A reflection on the beauty of navigation before the integrated search

ABSTRACT. It is a given today to type a keyword into a web browser and land on the corresponding page in a fraction of a second. Navigation on the internet has changed immensely over the past 20 years, as has everything else. As soon as it became feasible to implement graphics on pages, interfaces became user-friendly, sometimes challenging and complex – but definitely more visually appealing. When easily applicable vector animation software came along, the internet was suddenly swept up in colourful motion. In my research, I will focus on visual sitemaps as they were implemented between 1998 and 2003 – a period of time in which the animated GIF (still around, thirty years later) was superseded by dynamic animations based on ActionScript (long gone), the programming language of Macromedia Flash (derelict). Sitemaps still exist today, as structural representations to visualise the nodal connections between individual pages and as plots to plan the framework of a site – but in particular to optimise results for the crawling robots dispatched by search engines. Once, sitemaps were much more: an attempt to quell the chaos of the world wide web one corner at a time, and to shine a torch into uncharted territory. As a former web designer in the heyday of the post-text internet myself, I fondly remember the time before tab browsing, when navigational aids were integrated into the design and visually appealing devices directed the user to the desired pages. From graphic maps with hyperlinked GIFs to optically exciting static or ever-changing generative visualisations in Flash, I will provide an overview of my personal collection of screen grabs, which I took during those days. I will also attempt to classify both the static and the generative approach, since the map as metaphor could be used only to a certain extent. Two-dimensional representation of a multi-dimensional network of informational chunks suffered from over-simplification back then, but seems impossible today. A diagram registering every connection concerning a certain address was rooted in a physical understanding of the outside world, which never applied – the concept of visually organising entities on the net as trees, plans, maps and creative metaphors of content and context did not necessarily facilitate browsing – but it was a pleasurable medium to design and to look at.

10:10
MORAL MAGIC COOKIES

ABSTRACT. The early web described in the conference call -- gone, clunky, idealistic, and naïve -- is not exactly how I would describe the aspect of web history I am currently pursuing: cookies. After closer examination, cookies seem perhaps the opposite. Far from gone, web cookies are arguably fundamental to the contemporary web and are experienced in various ways many times a day, every day, by users around the world. Cookies were not the technical product of an idealistic utopian goal or the naïveté of young designers. Their use and specifications included a great deal of participation, discussion, and foresight, and were explicitly an ethical compromise. And cookies are not clunky... exactly. Cookies can slow down systems, and banners and pop-ups have imparted and do impart a clunkiness onto the web, but cookies are considered efficiency techniques to help mitigate the sometimes-clunkiness of the web’s original stateless design.

The lore of cookies is that Lou Montulli, a Netscape employee plucked from graduate school at the University of Kansas where he worked on the Lynx browser, created cookies so that another group at Netscape could create a shopping cart. Cookies, named and developed by Montulli (possibly working with John Giannandrea on the specification), were certainly part of the early versions of Netscape Navigator browsers. But solving statelessness, or “maintaining state,” had been a discussion (at least) on WWW-talk email lists and Usenet, and continued to be up for debate after Netscape cookies were introduced. These were not simply technical discussions. There was rich debate about identification, tracking, surveillance, and potential users and uses. Cookies were anonymous but not consensual -- downloaded to a user’s computer without mention or control features. This would change shortly after a dozen or so newspaper outlets raised the controversy in 1996, and later versions of Netscape browsers, as well as its nemesis Microsoft’s Internet Explorer, played with various default settings, notifications, click-thrus, and banners.
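To make “maintaining state” concrete: a server labels the browser in one response, and the browser volunteers that label with every subsequent request to the same site, layering persistence onto otherwise stateless HTTP. A schematic exchange, loosely modeled on the shopping-cart example in Netscape’s preliminary cookie specification (headers abridged):

    GET /shopping-cart HTTP/1.0          <- first visit, no cookie yet

    HTTP/1.0 200 OK
    Set-Cookie: CUSTOMER=WILE_E_COYOTE; path=/; expires=Wednesday, 09-Nov-99 23:12:40 GMT

    GET /checkout HTTP/1.0               <- a later request to the same server
    Cookie: CUSTOMER=WILE_E_COYOTE       <- the browser returns the state unprompted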

For this project, I am particularly interested in normative cultural constructions of cookies, meaning I am interested in how some political cultures determined that cookies required the morally transformative power of consent, while others found consent’s moral magic unnecessary. Users move in and out of these different normative sociotechnical constructions, frequently inhabiting many at once. For instance, when submitting my abstract on EasyChair, I read:

[IMAGE ("EasyChair uses cookies for user authentication. To use EasyChair, you should allow your browser to save cookies from easychair.org")]

This is quite different from the banner that rests atop the content on the European Council homepage:

[IMAGE ("We use cookies to ensure we give you the best browsing experience on our website. Find out more on how we use cookies an how you can change your settings." Two large buttons offer "I accept cookies" or "I refuse cookies")]

The cookie notification is almost as old as the web cookie itself and constitutes an art form that comes in and out of web style with regulatory changes, public demand, and technical maneuvering.

While much of the web has moved on from the mid-1990s, cookies remain. In this way, investigating cookies can provide unique insights into the web over time, phases, trends, attitudes, and politics. An investigation into cookies continues to be possible even as platforms eat the world. Like early technical standards, cookies transcend moments of centralization and decentralization, while affording both.

But cookies are not easily captured. Although the HTTP Working Group at the Internet Engineering Task Force eventually developed a Proposed Standard for HTTP State Management, it was largely ignored. The actual cookies themselves are short strands of characters -- personalized, illegible, and ephemeral -- which are recognized and accounted for by an inaccessible computer somewhere else. In this way, they are what Finn Brunton calls “digital middens,” or “the accumulations of by-products and junk and trash and bits and pieces of the working life of computers and communities.” Cookies are not easily studied, either. Beyond their technical varieties and designs, they are also international, cultural, and political, having become an odd lightning rod for global privacy debates, criticism of Silicon Valley power, and in-fighting among tech companies and national interests.

This paper will detail a work in progress. It will introduce the cookie as an object of study for web histories, describe the strategies, successes, and failures at gathering and organizing evidence, as well as methods for the presentation of web histories using RealTimeBoard. In doing so it will comment on a number of suggested themes including past futures and paths not taken, the changing structure of the web, periodization as well as computer ethics in web histories, legal histories of the web, and international versus local histories of the web.

09:30-10:45 Session 11D: Building and Maintaining Archives
09:30
Web Data Engineering: A Technical Perspective on Web Archives

ABSTRACT. Engineering has traditionally been the systematic application and combination of existing methods to build a desired system or thing. Data engineering is different in that engineering here does not refer to creating something, but to transforming data so that it becomes more useful for what should be achieved. As part of this, new tools and processes are developed to accomplish this transformation more effectively, as well as more efficiently in terms of resources and time.

Web archives are enormous collections of web resources and hence offer huge potential for data engineers. Web data in particular features some very specific traits that raise new challenges when providing services based exclusively on the information contained. Due to the vibrant and ever-changing nature of the web, web archives also feature a temporal dimension, which enables even more possibilities to engineer, analyze and study their content. However, the challenge is to discover, identify, extract and transform archival web data into meaningful information. Therefore, the Internet Archive, as the provider of the world's largest web archive, has recently invested new, additional resources in its web data engineering efforts to provide added value from its web and digitized data to its customers and partners.

This talk will give some insights into the variety of web data engineering tasks performed at the Internet Archive on a regular basis. These cover many different areas, from search through data extraction to data derivation and more. Some engineering efforts have resulted in automated workflows that are chained into pipelines to power mostly public services and APIs. Others are rather individual endeavors, such as special projects requested by single collaborators.

Information retrieval on temporal collections such as web archives is an active research area, and no satisfying out-of-the-box approach for web archive search is available yet. As an archive, we are interested in providing not only the most recent and prominent results, but also forgotten or vanished information that is no longer present on the current web. The temporal aspect adds an additional level of complexity to the mere application of existing retrieval models and requires the adaptation of existing tools, such as Elasticsearch, to run as needed. In our data processing efforts we make extensive use of additional exchange and intermediate formats to accomplish more efficient access to the data of interest. While specialized indexes allow for fast lookups and efficient data extractions such as sub-collection building, tailored container formats support storage of derived information that can be efficiently loaded and read along with the actual data records.
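As one concrete illustration of such a specialized index lookup (a sketch only, using the Wayback Machine's public CDX server API rather than the internal tooling described in this talk), the captures of a URL within a time range can be enumerated for sub-collection building:

    # Sketch: enumerate archived captures of a URL via the public CDX API.
    # The endpoint and parameters are the documented public ones; internal
    # Internet Archive pipelines may use different interfaces.
    import json
    import urllib.request

    query = ("http://web.archive.org/cdx/search/cdx"
             "?url=example.com&from=2010&to=2012&output=json&limit=5")
    with urllib.request.urlopen(query) as response:
        rows = json.load(response)

    header, *captures = rows  # the first row names the index fields
    for capture in captures:
        record = dict(zip(header, capture))
        print(record["timestamp"], record["original"], record["statuscode"])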

This overview is meant to showcase the latest achievements and upcoming data services from the Internet Archive's web archiving and data services group. Details about the way we and our systems work will be presented, together with ready-to-use APIs and programming libraries, as well as new features that can be expected soon.

Beyond the tasks that are very specific to us at the Internet Archive, we will outline web data engineering as a discipline that is needed, or already performed, in many new-economy businesses today as well, but has widely been neglected as a separate task, independent of the analytical questions that are nowadays raised across disciplines. Thinking these tasks through in a more systematic manner, with a focus on efficiency, can help to deliver the wanted answers in a faster and more effective way. Specialists in distributed systems with deep software engineering skills, supporting analysts by providing better, more specialized interfaces and processes, are an important part of the goal-oriented work with big web data that is prevalent everywhere, not only at the Internet Archive.

09:50
Memento Tracer - An Innovative Approach Towards Balancing Scale and Fidelity for Web Archiving

ABSTRACT. Current web archiving approaches excel either at capturing at scale or at high fidelity. Despite various attempts [1], approaches that combine scale and fidelity remain elusive. For example, the Internet Archive’s crawler is optimized for scale and hence enables an archive of more than 655 billion web resources [2]. However, the fidelity and quality of the captures vary and are often hindered by dynamic elements and interactive features contained in the captured resources. For instance, the CNN.com homepage has not been properly archived (and hence cannot be replayed correctly) in the Internet Archive since November 2016 [3]. An example on the other end of the spectrum is Webrecorder [4]. While browsing a web page, this tool archives the page and captures all the elements the user interacts with. With this approach, Webrecorder provides high-fidelity captures but lacks the ability to archive resources at web scale, as the archiving process is triggered only by individual user interactions with a single web resource, similar to a screen recording session.

As part of our “Scholarly Orphans” project we devised a pipeline to track, capture, and archive scholarly artifacts. These are web resources that researchers created in or uploaded to productivity portals such as SlideShare, Figshare, GitHub, or Publons. Our newly developed Memento Tracer framework [5] plays a crucial role in this pipeline and strikes a new balance between operating at web scale and providing high-fidelity captures.

Memento Tracer consists of three main components:

1) A web browser extension: A user navigates to a web page, for example to a SlideShare presentation, and activates the extension. By interacting with the web page (clicking through the slide deck, downloading the slides, etc.), the user creates a trace that, in an abstract manner, describes the artifact to be archived. The extension does not record the actual resources or URLs traversed by the user but rather captures user interactions in terms that uniquely identify the page elements being interacted with, e.g. by means of their class ID or XPath. If the trace is created for a web resource that is representative of a class of resources, the trace can be applied across all resources of that class and not only to this specific resource, since all resources of the same class (in the same portal) are typically rendered using the same template. In terms of the above example, this means that the created trace can be applied to all slide decks in SlideShare and not only to the one slide deck that the user interacted with in order to create the trace. This is a major advantage in terms of scalability over the Webrecorder approach, where all interactions are required over and over again for all resources, even if they are in the same class and portal.
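Purely as a hypothetical illustration (the abstract does not specify the actual trace format, so every key and selector below is invented), a trace for the SlideShare example might abstractly record interactions along these lines:

    # Invented sketch of a trace: element selectors instead of URLs, so the
    # same recipe applies to every slide deck rendered from the same template.
    slideshare_trace = {
        "portal": "slideshare.net",
        "resource_class": "slide deck",
        "actions": [
            {"event": "click", "selector": "//button[@class='next-slide']",
             "repeat": "until-exhausted"},   # click through the whole deck
            {"event": "click", "selector": "//a[@class='download-link']"},
        ],
    }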

2) A shared repository: After a user has created a trace, she can share it with the community via a publicly accessible repository, thereby crowdsourcing a web curator task. The shared repository allows for reuse and refinement of existing traces. Hence, anyone in the community can utilize a trace created by another curator (e.g., use the aforementioned SlideShare trace to capture other slide decks), which further increases the scalability of the approach. In addition, the repository can also offer multiple versions of traces for the same class of resources created by different users, since curators may disagree on the essence of an artifact [6].

3) A headless browser capture setup: To generate web captures, the Memento Tracer framework assumes a setup consisting of a WebDriver (e.g., Selenium [7]) that allows automating the actions of a headless browser (e.g., PhantomJS [8]), combined with a capturing tool (e.g., WarcProxy [9]) that writes resources navigated by the headless browser to a WARC file. When this fully automated capture setup comes across a web resource of a class for which a trace is available, the trace is invoked to guide the capturing of the resource. This functionality guarantees continuous, consistently high-fidelity archived resources, which is a major advantage over, for example, the Internet Archive's automated crawling approaches.
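A minimal sketch of such a setup, assuming a warcprox instance is already running locally on its default port 8000 and substituting headless Chrome for PhantomJS; the URL and XPath continue the invented SlideShare trace above and are not the project's actual code:

    # Sketch: drive a headless browser through a capturing proxy so that
    # every resource it fetches is written to a WARC file by warcprox.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.add_argument("--proxy-server=http://localhost:8000")  # warcprox
    options.add_argument("--ignore-certificate-errors")  # warcprox re-signs TLS

    driver = webdriver.Chrome(options=options)
    driver.get("https://www.slideshare.net/example/some-deck")  # hypothetical URL
    # Replay the trace: clicking triggers the dynamic requests that a
    # template-blind crawler would never issue.
    for _ in range(40):  # hypothetical slide count; a real run would detect the end
        driver.find_element(By.XPATH, "//button[@class='next-slide']").click()
    driver.quit()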

With Memento Tracer a curator defines the essence of a web resource by creating a trace based on her interactions with the resource. Traces can be shared with the community for reuse and refinement and are used to guide an automated crawling framework to generate high-fidelity captures. With this functionality, Memento Tracer has the potential for a true paradigm shift in web archiving. However, challenges remain, such as the standardization of the language used to express traces and addressing limitations of browser event listeners for recording traces.

[1] https://github.com/N0taN3rd/Squidwarc
[2] https://twitter.com/brewster_kahle/status/1016003169589981184
[3] http://ws-dl.blogspot.com/2017/01/2017-01-20-cnncom-has-been-unarchivable.html
[4] https://webrecorder.io/
[5] http://tracer.mementoweb.org/
[6] https://doi.org/10.1109/IRI.2018.00026
[7] https://www.seleniumhq.org/
[8] http://phantomjs.org/
[9] https://github.com/internetarchive/warcprox

10:10
Arquivo.pt: a free open-access service to research the past Web

ABSTRACT. The web is the primary means of communication in developed societies. It contains descriptions of recent events generated through distinct perspectives. Web pages are the original artifacts that document the Digital Era. Thus, the web is a crucial resource of information for contemporary social and historical research. It is difficult to perform accurate scientific studies about societies during the past 20 years without analyzing born-digital information that has been published online.

However, every year, 80% of the pages available online disappear or are updated with different content. On the other hand, there is a naive common belief that everything is online, and that if it is not online, it does not exist. Or, even more worryingly, that it never happened. The tremendously fast pace at which the Internet has penetrated modern societies, without proper digital preservation, jeopardizes scientific research about recent events. This problem will become even more concerning in the future, when digital humanists will look into the past to try to produce a common memory of our present societies and extract knowledge from it.

Arquivo.pt is a research infrastructure that preserves information gathered from the web since 1996 and provides a public search service over this collection. Arquivo.pt contains billions of pages collected from 14 million websites in several languages and provides user interfaces in English. In 2017, Arquivo.pt had over 100 000 users, 53% of whom were located outside of Portugal. The archived data can be automatically processed to perform Big Data research through a distributed processing platform or through Application Programming Interfaces that facilitate the development of added-value applications (arquivo.pt/api). The Arquivo.pt team has also contributed over 40 scientific and technical articles related to web archiving, published in open access.
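For instance (a sketch only: the endpoint and parameter names follow the public documentation at arquivo.pt/api, but the exact response fields should be verified there), the full-text search API can be queried as follows:

    # Sketch: query Arquivo.pt's full-text search API for archived pages
    # about a topic within a date range.
    import json
    import urllib.request

    query = ("https://arquivo.pt/textsearch"
             "?q=expo98&from=19960101000000&to=20001231235959&maxItems=5")
    with urllib.request.urlopen(query) as response:
        results = json.load(response)

    # Field names as given in the public documentation; treat as assumptions.
    for item in results.get("response_items", []):
        print(item.get("tstamp"), item.get("title"), item.get("linkToArchive"))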

Despite focusing on the preservation of the Portuguese web, Arquivo.pt has the inherent mission of serving the scientific community. Research and Development (R&D) projects use the web to aggregate and publish complementary scientific outputs such as experimental data sets, presentations or demonstrations. However, this precious, and sometimes unique, information quickly disappears after the projects' funding ends. Arquivo.pt has preserved 52 million web files (7 TB) related to national and European R&D projects. Arquivo.pt also accepts suggestions from the community for preserving websites worldwide related to research and education.

It is important to train researchers to explore the preserved data and to raise awareness of the need to preserve born-digital web data. Therefore, the Investiga XXI bursaries program was launched to promote the usage of Arquivo.pt as a tool and resource for scientific research (arquivo.pt/research). Three projects in the area of Digital Humanities were developed, addressing issues related to Communication Studies, Social Sciences and Information Science. A training program about web preservation was put in place afterwards (arquivo.pt/training), with the objective of raising awareness among web authors of the importance of preserving their digital heritage.

In 2018, the Arquivo.pt Award was launched for the first time with the objective of raising general awareness about the importance of preserving the Web of the Past and disseminating the usage of the Arquivo.pt research infrastructure. Contestants had to take advantage of the search engine made openly accessible by Arquivo.pt to formulate their proposals. The project “Conta-me Histórias” (Tell Me Stories) won the first prize (10 000 euros). It is an online service that provides a temporal narrative about any subject using 24 electronic news platforms. The second prize of 3 000 euros was given to “Framing the concept of ‘homosexuality’ in 20 years of publication of the Expresso newspaper”. The third prize was given to “Arquivo de Opinião” (Opinion Archive), a web application that offers the user a digital repository of opinion articles published online between 2008 and 2016 in the most prominent Portuguese media outlets. After the success of its first edition, the Arquivo.pt Award was established as a yearly event. Thus, the Arquivo.pt Award 2019 will once again distinguish innovative works that use materials and information from the free open-access service offered by Arquivo.pt.

In 2019, Arquivo.pt will celebrate 12 years since the beginning of the project. This presentation will introduce the Arquivo.pt research infrastructure and provide an overview of its past, present and future activities.

11:00-11:30 Coffee Break
11:30-12:45 Session 12A: Networked Cultural Production
Location: OMHP C1.17 (48)
11:30
“We didn’t really get the internet”: The early history of Hungarian digital media and online journalism

ABSTRACT. Inequality between the global centers and peripheries of digital cultural industries can be observed not only in the networks of business, but in academic research on the history of digital media as well. The history of digital media – from the internet to the web and social media – is a well-documented and well-researched field in the Western world, particularly in the United States, with numerous academic articles and monographs available on various aspects of the subject (e.g. Boczkowski 2005, Fortunati 2009, Grueskin-Seave-Graves 2011). But the further we move from the central areas (the US, UK and Western Europe) towards the peripheries, the fewer interpretations and analyses we have of the developments of local scenes, markets and industries. The history of the internet and digital media in the Central and Eastern European (CEE) region – with some notable exceptions (Peters 2016) – is virtually unrepresented in global social sciences and humanities and in public discourse, although research in this direction is much needed (Turner 2017).

In my talk, I attempt to take the first steps towards filling this gap by presenting and interpreting the social and cultural aspects of the first eleven years of Hungarian digital media, and the birth of the profession of online journalism, between 1994 and 2005. The opening of this roughly decade-long period is marked by the publication of the first Hungarian digital-only medium, the ‘ABCD’ CD-ROM magazine, while the latter date marks the first year of the largest-ever Hungarian online social network after its rebranding and rapid expansion.

In this time window, the Hungarian (and regional) digital industry, media, and culture interacted and co-evolved with other branches of media and technology, and were influenced by innovations and trends coming mainly from the US, but with no significant structural connections to the North American networks. In the hope that this particular case study may provide a better understanding of the center-periphery dynamics in digital media history, I wish to give a concise introduction to the sociocultural history of early Hungarian digital media, using conceptual tools borrowed from media history and STS. Considering the interactions between the domains of technology, journalism, business and culture in creating the then-new segment of the content industry, I focus on the process of professionalization in digital journalism, which happened in parallel with the actual formation of the professional field in the country. In doing so, I touch upon the evolution of digital business models; the role of journalistic ethics and culture in framing the status of digital media; and the interaction between different professional ideologies and traditions converging in the field.

The research is based on, and is part of the Hungarian Online and Digital Media History (MODEM) project, which, following the initiative of the Digital Riptide (2013-2015), is an oral history archive dedicated to the early history of the Hungarian web and digital ecosystem.

11:50
Net Art and the Performance of Images

ABSTRACT. Artists have been using the very processes of the internet as media of expression since its early days, to create dynamic, often interactive, works. Emphasising temporality, chance and intervention by users, such artworks are dependent on the enactment of a system to convey their conceptual and aesthetic content. Thus, the relation between the net and net art works at the time of their creation is highly contingent, as they are linked to the structures and processes through which they are performed. For this reason, traditional frameworks for understanding artworks as static, discrete objects fail to address critical aspects which defy those boundaries by being neither fixed, nor separable from their network milieu. This examination will focus on early net-based artworks exemplary of second-wave digital humanities as described by N. Katherine Hayles, namely artworks which prioritise the temporal, the spatial and the curatorial over the textual, the narrative and the quantitative, and which offer insight into the procedural emphasis of such practices. Due to their computational approaches, and to the medium whereby they were developed, these artworks fit the criteria of Timothy Morton’s hyperobjects: they are viscous, nonlocal, temporally undulating, phasing and interobjective. Exploring relations between net art works which employ spatial and visual procedures through the concept of hyperobjects, this paper will address some of the aspects that differentiate procedural artworks from other artforms.

Artworks which explore the process-oriented potential of networked media, employing the mechanics of web-based elements such as loading, refreshing, clicking on or navigating between pages, actively engage the procedural aspects of the internet. For example, JODI’s Automatic Rain (1995) utilised the slowness of the mid-1990s internet to animate the “rain” as it progressively loaded, revealing temporality and procedurality. Current connection speeds are too fast for this work to display correctly, and the fact that it only functioned as intended for a brief amount of time acts as a reminder that data transfers are not instantaneous, as they may now appear to be. Similarly, the link rot of many net art works which have endured, or at least stayed online, for some time exposes their existence as parts of larger media ecologies.

Though it may at first seem an unlikely connection, the characteristics of hyperobjects are also descriptive of those of the net-based artworks we are looking at. Thinking of these in terms of viscosity, nonlocality, temporal undulation, phasing and interobjectivity allows for a new approach through which to address their active and ephemeral features. A networked image may be “produced by thousands and thousands of end users on their laptops” (Galloway, 2015, 94), defying notions of artistic productions as singular, instead being as pervasive as they are dispersed. The temporality of net art works only further complicates things, vacillating between fleeting and enduring qualities. These works create phase-spaces that are often too complex to be perceived with a single instantiation or contact. They are algorithmic entities that reflect causality and process, and that as such flow into the future. These aspects are in accord with Hito Steyerl’s concept of the “poor” image, which she describes as prioritising performance, particularly transmissibility, over sharpness, resolution, mimesis and other factors considered constitutive of “rich” images. As previous technologies have altered artistic modes of production, and with them, systems for the evaluation of such productions, the internet, too, has radically changed what kinds of art may be produced, how they are communicated and the principles through which they are understood. With emphasis on the active, procedural nature of web-based artworks, this investigation delves into the properties which set these works apart from other kinds of artistic productions and which are becoming the dominant norms due to the pervasiveness of algorithmic processes in many artistic practices.

12:10
Networks of Sound: PLATO, Electronic Music and the Networked Construction of Computer Music

ABSTRACT. I will examine and analyse the development of music and sound software originating with the PLATO educational and social computer platform developed at the Computer-based Education Research Laboratory (CERL) at the University of Illinois at Urbana-Champaign (UIUC) in the 1960s and 1970s. Much of the planning, testing and implementation of these developments in sound synthesis and electronic music took place via PLATO’s online discussion forums. This research thereby contributes a little-known chapter to the history and development of electronic music and its networked roots. It also provides an examination of the sociotechnical environment in which that development occurred. It illuminates a pathway different from the usual one and highlights the work of a technology (PLATO) and a networked community of composers, musicians, and audio engineers largely neglected in the usual histories of both digital culture and electronic music.

In the late 1950s UIUC began building the Experimental Music Studio, the second electronic music studio in the U.S. after the one built at Columbia University. The studio was begun by Lejaren Hiller, a student of Milton Babbitt and Roger Sessions who earned a Ph.D. in Chemistry and an M.M. in Composition. Hiller also wrote a number of articles on information theory, and his compositional technique was indebted to it. With Leonard Isaacson, Hiller wrote the ILLIAC Suite in 1957, the first use of a computer to compose music. Hiller’s 1959 article in Scientific American, which opened with the question “Can a computer be used to compose a symphony?”, generated enormous controversy, to the degree that Hiller was excluded from the New Grove Dictionary of Music and Musicians and Baker’s Biographical Dictionary of Musicians until just before his death in 1994.

One year after the Scientific American article was published, in 1960, the first version of PLATO (Programmed Logic for Automated Teaching Operations) was introduced at UIUC as a computer-assisted instruction system and ran on the university’s ILLIAC I computer. While PLATO was not initially developed as a platform for computer-mediated communication (CMC), its programmers quickly built a message board and live chat system (Jones & Latzko-Toth, 2017). Largely used at first by system operators, these tools were available to all users, and discussions quickly sprang up. Among the topics discussed were ones related to the audio capabilities of the PLATO system, some of which connected to Hiller’s ideas; these were later to inform the development of some of the sound and compositional possibilities on the PLATO system. Subsequent developments included the Gooch Synthetic Woodwind (developed in 1972, and a precursor to MIDI and modern-day computer sound cards), the Gooch Cybernetic Synthesizer, music text editors, compositional aids and interactive compositional programs. Instruction in music theory and aural skills was also available early on among the PLATO instructional materials.

By the 1980s some of the PLATO developers at CERL with an interest in sound and music began working at UIUC’s Experimental Music Studio as PLATO was supplanted by the IBM PC and, later, Apple Macintosh. Elements of their work with PLATO found their way to Kyma, a visual programming language for sound design that gained widespread use among composers and sound designers in the 1990s and later.

This presentation will explore the history of these developments and focus on their use of the PLATO system’s communication network as both a platform for discussing electronic music and for brainstorming the development of audio in networked environments. The research is based on interviews with PLATO developers and users as well as on a unique archive consisting of print-outs of a general PLATO user and developer forum from the 1970s that was recently digitized. Rhetorical analysis of the archive allowed examination of the role sound and music played in the system’s evolution during a critical period in its history. It also provided insight into the multidisciplinary and social nature of the environment in which these developments occurred, and into the rhetorical dimensions surrounding music and sound in the social construction of a social computing platform.

11:30-12:45 Session 12B: Historicizing 4chan's Subcultures
Location: OMHP C2.17 (48)
11:30
Teh Internet Is Serious Business: On Historicizing 4chan’s Subcultures

ABSTRACT. Panel description

In terms of their unique technological infrastructure and design, community ethos and subcultural aesthetic, image boards like 4chan continue to flourish at the fringes of an increasingly hegemonic platform economy. In stark contrast to other popular websites, 4chan has remained relatively untouched by the larger shift toward social media. We conceptualize this as an opposition between the “deep vernacular web” and the “mainstream surface web”: in distinction to the hyper-individualizing and personalizing “face culture” of social media, the media practices that characterize the “mask culture” of the deep vernacular web are anti- and im-personal rather than personal; ephemeral and aleatory rather than persistent and predictable; collective rather than individual; stranger- rather than friend-oriented; radically public and contagious rather than privatized, filtered and contained. These deep vernacular media practices remain committed to the distinct cyber-separationist play ethos of the early .net culture of the nineties and early noughts, whose motto “teh internet is serious business” was meant to convey the opposite, i.e. that nothing said online is to be taken at face value, or taken seriously at all.

However, as the internet penetrates more and more aspects of our professional, social, and personal lives, online hate speech gains more political momentum, trolling and fake news become omnipresent and toxic, and edgy pop culture is appropriated for reactionary ends, what is the fate of this ethos of online play? How to understand the emergence on 4chan of new right-wing movements such as the alt-right, which seem at odds with the former’s postmodern, nihilistic and ironic sensibilities? How to situate 4chan in the larger structural transformations of the internet from a space of ludic and anarchic identity play to the walled gardens of Facebook? How to capture, analyze and interpret the ever-changing and highly ephemeral multiplicity of images and rhetorical tropes that form 4chan, using both qualitative and quantitative methods? It is these questions that our panel aims to engage with and discuss.

 

On the Deep Vernacular Web Imaginary

In this position paper we propose a heuristic distinction between what we call the deep vernacular web of forums and imageboards and the mainstream surface web of social media platforms and apps. Apart from key differences in technological infrastructure and GUI, as well as in policy and business model, we argue that these deep and surface webs represent two fundamentally incompatible social and cultural paradigms. Maintaining itself on the fringes of the new platform and big data economy that reached maturity in the first decade of the 21st century, the deep vernacular web remains committed to the “cyberspace separationism” of early .net culture as captured in the joke that “On the Internet, nobody knows you're a dog”, or what we refer to as its unique “mask culture”. Treated as a web imaginary whose essence the mask symbolically encapsulates, the deep vernacular web conceives of the web as a space of (identity) play separate from the “serious”, “official life” of work, school, or family. The increasingly hegemonic “face culture” of social media platforms, on the contrary, aims to align and persistently link offline and online activities and identities as part of a single personal profile and corresponding data double.

There is, as we hope to show, a cultural politics to these competing visions of the internet that manifests itself on different levels. The web that Facebook wants, for example, must be actively created, and is necessarily exclusive of the alternative configurations of digital sociality and cultural production represented by the deep vernacular web, which insists on the gap between IRL (in real life) and OTI (on the internets). Related notions resurface in a variety of tropes that act as a kind of vernacular user manual for navigating these digital media networks: 1) the inauthenticity of online expression, as expressed by Poe’s Law, which states that parody, sarcasm and the genuine expression of belief can never be sufficiently distinguished in a digital context; 2) the lack of stable identity cues of digital interlocutors, who are often anonymous and with whom contact is ephemeral, as expressed by Rule 29, which states that ‘In the internet all girls are men and all kids are undercover FBI agents’; 3) the very feasibility of the internet as a vehicle for dialogue and the ‘rational exchange of ideas’, as expressed by Godwin’s Law, which states that all arguments inevitably devolve into comparisons involving Nazis or Hitler; and 4) the peculiar mix of humor and play, raunch, avarice and satirical invective that has long characterized discussions on web forums, summarized in Rule 20, which states that ‘Nothing is to be taken seriously’. These laws and rules form the basis of the deep vernacular web's collective self-understanding, which in turn shapes its arguably idiosyncratic subcultural praxis, and they constitute a reservoir of web-folkloric wisdom about the social dynamics inherent to the weird and shadowy global mediascapes these self-styled “anons” inhabit.

Understanding these different paradigms and where they come from is crucial and may serve as a much-required heuristic device for current debates on meme culture as a vehicle for hate speech, trolling and political radicalism. Whereas instances of taboo speech or imagery are often interpreted as hate speech or as signs of political radicalization or moral depravity, we seek to comprehend an argument that views such extreme forms of speech through a radically different epistemological perspective. Axiomatic to this latter radical perspective is a profound suspicion of sincere expressions of belief or politics of any kind. Given the seeming absence of genuine belief, the deep vernacular web imaginary thus apparently renders "true" expressions of hatred and bigotry impossible. What may thus appear as (and have the consequence of) hate speech in another context is authorized here in the name of "play". Beyond reiterating the morally dubious stature of this ludic conception of the web, our objective here is also to consider how the laws and rules of the deep web may be interpreted as a kind of vernacular media critique of the surface web. For many, perhaps most, the transgression of moral and aesthetic boundaries is imagined as a vernacular form of critique of the very aesthetic criteria that demarcate "real life." While this perspective does not in any way excuse the impact of the forms of extreme expression so common in this culture, it does seek to provide an explanation. Understood in these terms, we might thus consider the vulgar expressions of internet trolls, for example, as aesthetic attempts (however misguided) to challenge the presumption — at the basis of the cyber-utopian worldview as well as of the data-extractionist business model of social media platforms — that the web is a legitimate forum for the expression of authentic identity or of serious belief.

 

4chan as an Archived Object: Extracting and Analysing the Sewer of the Internet with 4CAT

This paper explores the idiosyncratic challenges of archiving the ephemeral and anonymous imageboard 4chan and presents possible solutions to these challenges in the form of an archiving and analysis tool: 4CAT. 4chan can be considered a remnant of the past web due to its lack of walled-garden commercialization, absence of user profiles, and vernacular oddities akin to early 2000s net culture. Nonetheless, it remains a notorious and arguably influential sphere on the current Web, prompting 4chan historians to underline the importance of safeguarding the imageboard’s record, both to prevent inaccurate depictions of its culture and to correctly estimate its users' political agency (Phillips et al., 2017). Archiving and empirically analysing the data on 4chan is one way of constructing and handling such a record. However, as a corollary of its old-fashioned, anonymous, and ephemeral setup, employing the imageboard for empirical research presents considerably different challenges in data archiving and analysis compared to those of the platformized web.

Our paper will first delineate these challenges by departing from a material approach to argue that 4chan as a data object is subject to a dichotomy of accessibility and inaccessibility. 4chan is inaccessible most notably due to its ephemerality: posts are deleted and lost forever unless they are archived in time, usually within a margin of just a few days. Thus, in opposition to the vast data reservoirs of e.g. Facebook and Twitter, archiving 4chan is a task of archiving the non-natively archived. Further, when such archiving does occur, subsequent analyses cannot rely on repurposing the usual suspects amongst “natively digital” objects: likes, votes, shares, profiles, friends, and followers are all absent. As a result, 4chan and its users have been presented as a difficult-to-study “amorphous entity” (Coleman 2014, 113). In spite of this reputation, 4chan’s free, extensive, and easy-to-use API renders it paradoxically more accessible and open to analysis than many other social media platforms. Additionally, despite 4chan’s popularity (its most popular board, /pol/, usually receives three million posts per month), it is niche enough that all data can be stored and analysed with relatively modest hardware. In these respects, 4chan radically opposes the limited and “walled garden” nature of the platformized Web.

To present possible solutions on how to handle 4chan’s dichotomy of accessibility and inaccessibility, we present the 4CAT Capturing and Analysis Tool. 4CAT continually queries any desired boards (4chan’s subforums) to store their post data before the usual purging. This live data capture can then be combined with imported partial archives of historical 4chan data (many of which are available online), thus offering a particularly complete collection of content posted on the platform and affording comprehensive analyses. Although 4CAT does not store web pages as they originally appeared (like the Wayback Machine), the saved (meta)data makes it possible to reconstruct these pages and the historical narratives contained therein, an archival strategy that can be seen as an addition to the “four dominant traditions in Web archive collection and usage” (Rogers 2017, 5-6).
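
For illustration, a minimal sketch of the kind of live capture loop described above, assuming 4chan's public read-only JSON API at a.4cdn.org; the polling interval and storage hook are our own placeholders, not 4CAT's actual implementation:

    import time
    import requests

    API = "https://a.4cdn.org"   # 4chan's public read-only JSON API (assumed)
    seen = {}                    # thread number -> last_modified timestamp

    def capture_board(board, store):
        """One polling pass: fetch the thread index, then any new or updated
        threads, and hand their posts to `store` before 4chan purges them."""
        pages = requests.get(f"{API}/{board}/threads.json").json()
        for page in pages:
            for t in page["threads"]:
                no, mod = t["no"], t["last_modified"]
                if seen.get(no) == mod:
                    continue                     # unchanged since the last pass
                seen[no] = mod
                r = requests.get(f"{API}/{board}/thread/{no}.json")
                if r.status_code == 200:         # a 404 means already purged
                    store(r.json()["posts"])

    while True:
        capture_board("pol", store=print)        # swap print for a database writer
        time.sleep(60)                           # stay well within rate limits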

However, exactly how to reconstruct and analyse these narratives forms a challenge in itself, given the vastness and structure of the data. One could perhaps trace the microhistory of a particular narrative thread (Tuters et al., 2018), but quantitative approaches will have to resort to repurposing the metrics 4chan does offer, such as post counts, thread positions, keyword frequencies, and image hashes. To that end, 4CAT offers several filtering techniques. Apart from the usual keyword search, which takes the post as the unit of distinction, 4CAT also filters on “keyword-dense threads” by returning threads containing a high occurrence of a query string.
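
A minimal sketch of what such a keyword-dense filter could look like; the density threshold and minimum thread length are hypothetical parameters, not 4CAT's actual defaults:

    def keyword_dense_threads(threads, query, min_density=0.15, min_posts=30):
        """Return thread ids whose posts mention `query` unusually often.

        `threads` maps a thread id to a list of post bodies (plain text)."""
        query = query.lower()
        dense = {}
        for tid, posts in threads.items():
            if len(posts) < min_posts:
                continue                         # too short to estimate density
            hits = sum(query in p.lower() for p in posts)
            density = hits / len(posts)
            if density >= min_density:
                dense[tid] = density
        return dense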

Beyond this, 4CAT offers filtering techniques that repurpose strategies employed by 4chan’s users to construct a “locus of memory” working against the quickly moving “volume of posts and responses” (Coleman, 2009). These strategies usually include reposting content, be it in the form of image-based memes or topics with a similar opening post (i.e. “general threads”; Hagen, 2018). By allowing filtering based on such patterns of use, 4CAT makes it possible to create data sets that thoroughly capture discourse on the platform and go beyond the imagination of 4chan as an amorphous entity. Further, 4CAT offers general and query-specific visualisations of the website’s metrics, such as post counts, thread counts, and geolocation appearances.

Concluding, we position 4CAT not only as a practically useful archiving and analysis tool, but also as a generalizable example of how to handle the material archiving challenges presented by old-fashioned ‘forum culture’ that is marginalised yet ever-present, and of particular historical importance, on the platformized Web.

 

The Trees for the Forest: Mapping 4chan’s Publics by its General Threads 

The objective of this paper is to make out the proverbial trees in the forest that is 4chan. Since 4chan is an anonymous discussion board, posts to the site tend to appear as though emerging from an amorphous mass. As opposed to social media users, posters to 4chan, known as "anons", have no persistent and portable reputational capital. As such, 4chan has been characterized as exceptional insofar as it appears to efface individuality into a broader and singular online collective, which has sometimes gone under the multiple-use nom de plume "Anonymous" (Coleman, 2014). In practice, however, a number of affordances and site practices may be understood to "grammatize collective action" on 4chan (Tuters et al., 2018). Of particular significance for our analysis is the practice of the so-called "general thread".

In addition to anonymity, a key affordance that makes 4chan unique is its ephemerality. The design of 4chan is quite simple — anons post comments to ongoing threaded discussions, which start with an "original post". The thread with the most recent comment appears first at the top of a given board, which results in the previous ones getting pushed down. The effect is like a torrent constantly washing everything away. While a thread is thus bumped to the top of the board by receiving a constant stream of comments, boards are designed so that after a thread reaches a certain number of comments (usually three hundred) it can no longer stay at the top of the front page, no matter how active the discussion is. As a workaround, the general thread involves an anon copying and pasting old content into a new thread in order to keep discussions going. Each general thread thus represents an ongoing topic of conversation. The proposition of this paper is that the discussions going on in these general threads each represent a discrete "issue public" (Marres, 2007).
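
As a rough sketch of how such general threads could be identified computationally (assuming 4chan's post JSON, where 'sub' holds a post's subject line; the matching heuristic is ours, not necessarily the authors'):

    import re

    # General threads conventionally flag themselves in the opening post's
    # subject, e.g. "Brexit General #124" (a hypothetical example); the edition
    # counter changes with every repost, so we strip it to group editions.
    GENERAL_RE = re.compile(r"\bgeneral\b", re.IGNORECASE)

    def group_generals(threads):
        """Group threads that announce themselves as generals, keyed by a
        normalised subject so that successive editions fall together."""
        generals = {}
        for thread in threads:
            subject = thread["posts"][0].get("sub", "")   # opening post's subject
            if GENERAL_RE.search(subject):
                key = re.sub(r"#\d+", "", subject).strip().lower()
                generals.setdefault(key, []).append(thread)
        return generals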

As a discussion forum with one million daily posts, 4chan has subsections or "boards" dedicated to specific topics. While initially known mostly for its /b/ (‘random’) board, birthplace of many of the Web’s most successful Internet memes (Rickroll, LOLcats, etc.), this paper focusses primarily on /pol/, a board devoted to “politically incorrect” discussions. Although initially started to siphon politics away from the rest of 4chan, /pol/ has lately become its most popular board. 4chan/pol/ is deeply intertwined with Internet subculture and, crucially, it is also characterized by the aforementioned technical affordances: anonymity and ephemerality. In combination, these factors afford an "anything goes" attitude which has, in recent years, led 4chan to be identified with the rise of the so-called "Alt-Right" (Beran, 2017; Hawley, 2017; Heikkilä, 2017; Marwick & Lewis, 2017; Nagle 2017). While the tenor of discussion on /pol/ certainly appears puerile, there has been very little systematic empirical research conducted to support these arguments. This lack of empirically grounded studies is due in part to the site’s ephemerality — a shortcoming which this paper seeks to address by working with an archive of /pol/ through a combination of close and distant empirical methods.

Of the various approaches to empirically studying 4chan, we consider the general thread to be the best showcase of the site’s unique features, in terms of offering potential insights into the various publics that make up an anonymous board. Methodologically, then, our aim will be to periodize discussions on /pol/ by categorizing general threads historically. Using 4CAT, an archival tool for 4chan, to first identify the presence of general threads over time, we will proceed with a grounded theory approach in order to label the general threads as types of issue publics. The objective here will thus be to show the changing ratios of publics in the composition of the board over time. As the available archive of /pol/ dates back to 2013, prior to the rise of the alt-right, we aim to provide empirical evidence to support (or else, to refute) those aforementioned claims that identify the board with the rise of this movement. In our approach we focus on the general threads and thus frame a portrait of 4chan, and /pol/ specifically, as a dynamic space in a constant state of flux, acknowledging that with 4chan “nobody can step in the same river twice” (Phillips et al., 2018). As opposed to the popular misconception of 4chan as an undifferentiated mass, portraying these general threads over time will represent this anonymous image board as a variety of different — and, perhaps, at times competing — issue publics.

11:30-12:45 Session 12C: Platform and App Histories 2
11:30
Geotagged Tweets: Yesterday, Today and Tomorrow

ABSTRACT. Studies using the geotagged data available on Twitter have been very popular in the last ten years. Scholars studying disasters, social movements, and urban phenomena have found such data very useful for following mobility and trajectories and for observing the relationship of people with places and space. Yet neither the ways people can geolocate themselves on Twitter nor the format of the data extracted by the researcher has always been the same. On the one hand, the interface for declaring one's position has evolved. On the other hand, the geographic information that can be collected varies with the source of the data (API, scraping, etc.) and the date of extraction. Focusing on this case study, this paper aims to show that studies founded on natively digital data generated by platforms or apps cannot ignore the histories of such data and of the related device, interface and API. Especially when the researcher wants to carry out historical studies by analysing long-term data corpora (e.g. tweets geotagged in Paris between January 2015 and December 2017) or by comparing different moments distant in time (e.g. tweets geotagged in Paris in January 2014 and in January 2017), he/she also has to carry out a parallel study of the evolution of the digital environment generating such data. This task is by no means trivial. If we consider the case of Twitter, the user interface changes at each platform update and the new interface replaces the old one without leaving any trace. Concurrently, the APIs for obtaining data evolve by imposing new rules and limitations. As a result, geotagged tweets collected ten years ago do not have the same structure and origin as current ones. In particular, they were more numerous and more varied. What is the reason for such a difference? Are Twitter users today less interested in geolocation and less mobile in space?

The best way to answer these questions is to carry out a history of the platform and of the API through web archives. While it is not possible to retrieve all previous interfaces of the mobile app and of the desktop platform, web archives allow us to read old “help” sections of Twitter. In doing so, the researcher can reconstruct the main functionalities of the interface over time. Similarly, while it is not possible to test old versions of the APIs, the documentation available in web archives allows the researcher to identify the main changes in data and metadata. In our case study, this methodology of investigation based on web archives helped us to identify two main changes related to geotagged tweets.
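
As an illustration of this way of working, a sketch that lists archived captures of a documentation page via the Wayback Machine's CDX API (the example URL is a placeholder; the query parameters are standard CDX options):

    import requests

    CDX = "http://web.archive.org/cdx/search/cdx"

    def list_captures(url, year_from, year_to):
        """List archived captures of a page, e.g. an old Twitter 'help' or
        API documentation page, at most one per month."""
        params = {
            "url": url,
            "from": str(year_from),
            "to": str(year_to),
            "output": "json",
            "fl": "timestamp,original",
            "filter": "statuscode:200",
            "collapse": "timestamp:6",   # collapse to one capture per month
        }
        rows = requests.get(CDX, params=params).json()
        return [f"https://web.archive.org/web/{ts}/{orig}"
                for ts, orig in rows[1:]]        # rows[0] is the header row

    # e.g. list_captures("https://support.twitter.com/articles/78525", 2012, 2017)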

First, as regards the interface, in November 2015 Twitter changed the way a user can declare his/her location. Before that date, once the GPS sensor was turned on, the interface offered by default the possibility of communicating an exact location, with the precise geographical coordinates (latitude, longitude) of the point from which the tweet was sent. Today the default choice is the place-name, for example “Paris”, and this name is automatically converted into precise coordinates chosen by the platform, in this case those corresponding to the Hôtel de Ville (the seat of the Paris city council). As a consequence, if we compare a corpus of tweets geotagged in 2015 and in 2017, it is normal that in the second case we find an over-representation of tweets in the city centre around the Hôtel de Ville.

Second, as regards the APIs for collecting geotagged Twitter data, their use and potential have been restricted year by year. In the first versions of the free APIs (2009), there was also a Geotagging API that could extract geolocation data, but it was discontinued in 2012. In addition, it should be noted that between 2010 and 2011 Twitter made available the GeoAPI, a very powerful service that not only allowed extracting the coordinates but also converting them into place-names. Since April 2011, this tool has been used exclusively by the internal services of Twitter. As a result, a researcher who uses the free APIs today can only retrieve geographic coordinates for tweets where the user has decided to declare his/her position, while a person with paid access will also be able to obtain coordinates for enriched tweets (“Geo Profile”). For example, when a user uses the word “home”, the platform will attribute the location of the user’s profile to the tweet. This technical excursus can explain the difference between analyses dating back 7-8 years, when the researcher could more easily conduct territorial analyses, and more recent studies (after the restriction of the APIs) that have to deal with bigger problems of representativeness and quality of geographic data.

11:50
Platform historiography: Materials for platform and app histories

ABSTRACT. Recent work on web archiving foregrounds challenges that social media and apps pose in terms of dealing with their status as ephemeral objects, dynamic and personalised user content, third-party content and functionality, robots exclusion standards, and pages behind logins. While these may lead to partial or incomplete platform and app views, there are other valuable platform and app materials that have in fact been archived but which are underutilised. In this paper, we advance the case for platform historiography with a variety of publicly available, archived platform and app materials and reflect on their utility for developing multiple kinds of platform and app histories. Prior historical approaches have typically focused on end-users, their content, and their graphical user interfaces, which is problematic insofar as platforms and apps now routinely personalise the user experience. While websites are core archival units, there are three essential differences between websites, platforms, and apps, with implications for web archiving and histories. First, platforms operate multiple interdependent sides and facilitate interactions between multiple stakeholder groups, including end-users, developers, advertisers, marketing partners, and investors. These stakeholder groups each have their own interfaces and pages, which are available through existing archives. Second, platforms offer resources for application development and data retrieval, representing the infrastructural and programmable dimensions of platforms. Finally, apps are ephemeral objects that continuously overwrite their own histories when app stores release a new update.

Based on this multi-sidedness and the programmability of platforms, we develop an argument for platform and app histories beyond end-users and their content. We employ a variety of unique, archived but underutilised, platform and app materials: developer pages, API reference documentation, product pages, earnings releases, partner programmes, and app directories. We then offer multiple approaches for developing platform and app histories with these materials to account for platforms’ multi-sidedness and programmability: platforms as technical architectures, data sources, revenue sources, companies, ecosystems, and infrastructures. Moreover, these unique materials offer entry points for tracing changes in platforms’ technical architectures, data strategies, and larger infrastructural and industrial changes that can historicise contemporary debates about social media, political advertising, data access, and external apps. Finally, we demonstrate that despite their ephemeral status there is a breadth of materials about apps that have been archived, which enable app historians to recount stories of their development and functionality as well as their data collection and sharing practices.

12:10
Tracing key moments on Tinder, Instagram, and Vine: An argument for app archives

ABSTRACT. Today’s platforms and apps are continuously being developed, updated, maintained, and discontinued without an accessible public record. Helmond (2015) has identified the progressive “platformization” of the web, as platform infrastructures and economic models have become dominant. This shift affects not only individuals’ social activity but also the production, distribution and circulation of cultural content, from news to games, music, and other media (Nieborg & Poell, 2018). Platforms are commonly accessible as apps: software programs on smartphones that are relatively closed systems so as to facilitate ease of use (Burgess, 2012). Apps are often mundane and embedded in everyday practices, from exercise tracking to grocery shopping (Morris & Elkins, 2015). Rather than rendering them inconsequential, the ubiquity of apps and their role in daily life underscore the need for understanding their development over time.

Using examples gathered through the walkthrough method (Light, Burgess, & Duguay, 2018), this paper identifies key instances in the history of three apps to illustrate the importance of app archives. The walkthrough method is an approach for interrogating apps that involves two phases: first, the environment of expected use is established through an examination of an app’s vision, operating model, and governance. Then a technical walkthrough is conducted to examine the app’s features, functions, and interface, often involving screenshots and detailed field notes. I conducted walkthroughs of Tinder, Instagram, and Vine, examining materials from their initial launch up until 2016 and conducting multiple technical walkthroughs between 2014 and 2016. Cumulatively, this research resulted in an archive for each app, allowing for the identification of key moments that altered the app’s development trajectory and re-positioned users in relation to the app. These include:

Tinder Social’s introduction: This feature was intended to allow groups of Tinder users to swipe on and interact with other groups. In its initial 2016 pilot in Australia, an update automatically activated Tinder Social and revealed users to each other if they were Facebook friends. Through a rushed, subsequent update and a defensive blog post, Tinder asserted that the company had not jeopardized their users’ privacy but made the feature inactive by default, just in case. Tinder Social was quietly discontinued the following year as the app’s promotional materials asserted its ‘social’ uses in other ways.

Instagram’s photo ownership outcry: Shortly after Facebook purchased Instagram in 2012, the company made several updates to the Terms of Service to allow information sharing across the platforms and integration of advertising into Instagram. Users and the media interpreted a vague statement in the update as implying that Instagram would be able to use their photos in advertisements to generate revenue without sharing the profits. Uproar ensued and Instagram backtracked with wording about content “ownership” that persists in its official documentation today.

Vine TV and “Watch” integration: One year after its launch, Vine was re-branded as an entertainment platform. A desktop version was introduced to complement the app, but without any of its posting functionality. Instead, the website featured “TV Mode,” allowing for continuous viewing of Vines. This functionality was later integrated into the app as a “Watch” button, re-positioning produsers (Bruns, 2013) as passive viewers and audiences for a handful of aspiring influencers.

These moments were significant in multiple ways: they involved substantial changes to platform features and/or operating models; they affected users’ social interactions and cultural production; and they had an enduring effect on each platform’s development. However, without the archives constructed during my walkthroughs, it would be difficult to trace this history today. Indeed, for discontinued apps like Vine, it is nearly impossible to retrieve adequate images or descriptions of its features and interface. For these reasons, I argue that we must develop a movement toward app archives: consistent, detailed and in-depth accounts of mundane apps across time. While the Internet Archive can crawl the open web and preserve a history of webpages, the closed and proprietary nature of apps impedes such archiving. Since many apps adopt policies prohibiting the re-creation and distribution of images displaying their interfaces, it could be legally hazardous to share archives generated through a walkthrough or similar approach. This paper raises these issues alongside examples that demonstrate the importance of tracing these changes over time. This highlights the need for open, accessible, and accurate app archives.

11:30-12:45 Session 12D: Digging/Archaeology 2
11:30
Scoops and brushes for web archaeology: Metadata dating

ABSTRACT. Reconstructing the early Amsterdam internet, De Digitale Stad (DDS), has largely been bespoke handwork. Starting from a full 1996 backup of the system, both a replica and an emulated reconstruction were brought into operation. One might say that DDS revived. Following the handwork of DDS archaeology, can more systematic, general tools be developed? Where are the “brushes” and “scoops” for web archaeology, or more generally for software archaeology? Does a rather down-to-earth interpretation of the “archaeology” metaphor make sense at all?

It is not unusual for the archaeologist, in dialogue with a historical researcher, to be presented with a storage device containing a digital artefact claimed to be of interest. Knowledge is required to do the obvious thing: plug it in and start looking around. The more complex and rare the files are, the more knowledge is required to know what one is dealing with and how to meaningfully perform research on the data. Oftentimes this tacit knowledge is present only among people who have worked with comparable systems, and sometimes only in the minds of the developers themselves. The more the access to a system relies on tacit knowledge, the less general tools will be of help.

What to do in the absence of system knowledge?

By looking at the files in a file browser one may extract some basic information: the file names, the extensions, the size, the date of creation and some other metadata. In some cases the operating system in use may recognise the type of file, allowing it to be opened. However, opening individual files is a manual process, simply infeasible when dealing with thousands of files. What is needed are tools that aggregate the (meta)data and display them in an understandable format without requiring system knowledge.

Software engineering offers a number of packages and general tools, but these often require knowledge of the system to know whether their use is appropriate (e.g. WinDirStat https://windirstat.net/, “a disk usage statistics viewer and cleanup tool for various versions of Microsoft Windows”; or, for counting lines of code, https://github.com/AlDanial/cloc). For archaeology, by contrast, more subtle, and in a way more general, tools are required. In order to help researchers reach an easier understanding of the type of data at hand, simple tools were created that are applicable to any type of system without the need for any system knowledge. A host of data is up for grabs even without system knowledge, and the grabbing may be automated so as to facilitate research into digital heritage. Ideally, digital resources become accessible to researchers without requiring software skills and experience, and without presupposing the tacit knowledge of recognising the types of datasets and the tools that go with them.

In this paper the example of metadata dating will be presented. One data aggregate of particular interest comprises the time-related metadata. Metadata dating is carbon dating for files: the metadata contain timestamps that show when a file was created and when it was last edited. Systems that utilise version control hold a wealth of time-related data about their files, and there are tools that help analyse that data. However, when no version control is present, or its existence is not known to the researcher, there is a distinct lack of tools that serve the same purpose as version control analysis tools. Metadata dating is a technique allowing researchers to gain more insight into the age and development period of their datasets.
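
A minimal sketch of metadata dating over a directory tree, assuming nothing beyond standard filesystem metadata (note that modification times survive many migrations but can be reset by careless copying):

    import os
    import datetime
    from collections import Counter

    def metadata_dating(root):
        """Histogram of last-modification years for all files under `root`."""
        years = Counter()
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                st = os.stat(os.path.join(dirpath, name))
                when = datetime.datetime.fromtimestamp(st.st_mtime)
                years[when.year] += 1
        for year in sorted(years):
            print(f"{year}  {'#' * max(1, years[year] // 10)}  ({years[year]} files)")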

Being a very elementary technique, metadata dating qualifies for the metaphoric characterization of the “scoop” or the “brush” of software archaeology.

11:50
Using web archives to study web tracking: Tracking technologies on the Danish web from 2006 to 2015

ABSTRACT. This paper presents a study on the development of tracking technologies on the Danish web from 2006 to 2015. Tracking technologies (cookies, beacons, use of local storage, fingerprinting etc.) are virtually ubiquitous on the web today, and they are used for many different purposes, including authorisation, personalisation, analytics, advertising, and social profiling. Tracking technologies contribute to the shaping of the web and the experiences of web users, and they play an important role in the online economy. Several studies of tracking technologies ‘in the wild’ have shown a prolific use of different types of trackers (e.g. Altaweel, Good & Hoofnagle, 2015; Roesner, Kohno & Wetherall, 2012; Ayenson, Wambach, Soltani, Good & Hoofnagle, 2011; see also the review of existing tracking methods in Bujlow, Carela-Espanol, Lee & Barlet-Ros, 2017) as well as the vast reach of powerful companies like Facebook and Google with tracking technologies present on a large proportion of popular websites (e.g. Altaweel et al., 2015). But when and where did these technologies start appearing on the web? And how fast did they spread?

To study the historical development of tracking technologies, we must turn to web archives, where the web of the past has been collected and preserved. We know of only one study that has examined tracking technologies historically by using web archives, namely Anne Helmond’s study of the tracking technologies in use on the New York Times’ website (Helmond, 2017). The study presented in this paper uses the Danish web as a case and aims to answer the above-mentioned questions by studying the Danish web from 2006 to 2015 as it has been archived in the Danish national web archive Netarkivet. A historical study of tracking technologies on the Danish web offers new insights into the number and types of trackers used, the uptake on different types of websites (e.g. from specific sectors) and the extent of tracking by companies like Facebook and Google at specific points in time.

However, the intent is not just to map the Danish development but also to develop a methodology for studying tracking technologies in web archives. Working with archived web materials always requires knowledge of the characteristics of the archived web and the methodological issues related to the archived web as an object of study (Brügger, 2018; Masanes, 2006; Schneider & Foot, 2004), but a study of tracking technologies will probably pose some new – and possibly significant – methodological challenges. As Richard Rogers points out in his book Digital Methods (Rogers, 2013), the individual website has often been privileged in web research, so we try to archive websites, because that is where what we traditionally understand as the ‘content’ is. But a lot of what is embedded in, attaches to, or surrounds the website is left out (ibid.) – intentionally, unknowingly or simply because it is difficult to archive. Because this project is searching for technologies which are traditionally not in focus in web archiving, and which are often difficult to archive in full (e.g. content based on JavaScript, Flash and the like), some of them might not be in the web archive. Thus, the study also aims to contribute to a discussion of how we think about what the “content” of the web is in relation to what is captured in web archives, and to examine whether web archives allow us to answer these new types of questions.

The approach applied in the study is based on the following steps: 1) Identify different types of tracking technologies, their characteristics, and their ‘signatures’ (the code signifying that a tracking technology is/was in use on a website), 2) Identify where in the archived data we might be able to locate specific signatures, and thus the data sources we need (e.g. Heritrix crawl.log, seeds.log, hosts-report.txt, order.xml, WARC metadata, WARC files), 3) Create a corpus from each year, which comes as close as possible to a ‘snapshot’ of the Danish web at a given point in time, 4) Extract the files matching the corpus from the relevant data sources, 5) Create scripts to search for and extract signatures, 6) Analysis. The paper will discuss the main methodological challenges of the study, relating to the steps described as well as to issues like differentiating between tracking and other (benign) uses of similar technologies and assessing the impact of changes in archiving strategies and settings.
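
As an illustration of step 5, a sketch that scans the HTML responses in one WARC file for tracker signatures; the signature list is a toy example, and warcio is one Python library for reading WARC files (not necessarily the one used in this study):

    import re
    from warcio.archiveiterator import ArchiveIterator

    # Hypothetical signatures: URL patterns whose presence in an archived page
    # indicates that a given tracker was in use at crawl time.
    SIGNATURES = {
        "google_analytics": re.compile(rb"google-analytics\.com/(ga|urchin|analytics)\.js"),
        "facebook_connect": re.compile(rb"connect\.facebook\.net"),
        "doubleclick":      re.compile(rb"doubleclick\.net"),
    }

    def scan_warc(path):
        """Count tracker signatures in the HTML responses of one WARC file."""
        counts = {name: 0 for name in SIGNATURES}
        with open(path, "rb") as fh:
            for record in ArchiveIterator(fh):
                if record.rec_type != "response" or record.http_headers is None:
                    continue
                ctype = record.http_headers.get_header("Content-Type") or ""
                if "html" not in ctype:
                    continue
                body = record.content_stream().read()
                for name, sig in SIGNATURES.items():
                    if sig.search(body):
                        counts[name] += 1
        return counts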

12:10
The language of “sharing” on Chinese social media: A historical and cultural analysis

ABSTRACT. In the context of the internet, the concept of ‘sharing’ has become a central and powerful metaphor: it is the term for the talk that defines therapy culture, denoting a type of communication with emotional valence; it is a technological term with longstanding usage in the field of computing; and it is also a model of resource distribution that is both taught to American kindergarten children and marketed by Silicon Valley. Unpacking the operation of ‘sharing’ as a complex metaphor offers a mode of analysis that places social media in a broad historical, social and cultural context.

However, the degree to which arguments constructed on the specific meanings of the English word ‘sharing’ apply to its equivalents in other languages is a matter for empirical inquiry. The concept of ‘sharing’ as used in Western SNSs appeals to a therapeutic self and makes assumptions about how that self knows itself and maintains ties with others. Yet this might not necessarily be the case outside Western cultures, raising the possibility that work on ‘sharing’ to date is Western-centric.

In this paper, therefore, we seek to examine the concept of online ‘sharing’ in a non-Western context: China. Given quite separate processes of language development, along with different construals of selfhood in the West and China, we ask: what connotations do the Mandarin equivalents of ‘sharing’ bring with them to the sphere of social media? How do they reflect the Chinese context, and how, historically, have they come to discursively construct social media in China?

Fenxiang and gongxiang—the Mandarin words for ‘sharing’—are central words in the context of Chinese social media and have profound socio-cultural connotations. Based on a historical corpus analysis, we present the meanings associated with fenxiang and gongxiang when they appear in the context of social media. With over 2,000 years of history, gongxiang is closely related to Confucian thought and has been employed continuously by Chinese rulers as the term for a political ideal of harmony, while at the same time it has more recently become the technical word for sharing in computing fields (time-sharing, file-sharing, etc.). Fenxiang, on the other hand, refers to sharing on an interpersonal level, its meaning having shifted from physical division to communication. It has also come to refer to the communication of one’s feelings in the therapeutic mode, starting from when China began to import therapeutic practices from the West in the 1990s.

This study uses web archive analysis to help understand the historical transformation and the current role and rhetoric of fenxiang and gongxiang in the context of the Chinese Internet. The Wayback Machine was used to gather historical data from 32 Chinese social network sites to track changes to the deployment of fenxiang and gongxiang. We first located the earliest archived front page of each SNS, then downloaded screenshots through to May 2018. Over 5,000 screenshots of those archived web pages were analyzed and coded.
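
For instance, the earliest archived capture of a front page can be located with a single query to the Wayback Machine's CDX API (a sketch; the example site is a placeholder):

    import requests

    def earliest_capture(site):
        """Return the URL of the earliest Wayback capture of a front page."""
        rows = requests.get(
            "http://web.archive.org/cdx/search/cdx",
            params={"url": site, "output": "json", "limit": "1",
                    "fl": "timestamp,original", "filter": "statuscode:200"},
        ).json()
        ts, orig = rows[1]                       # rows[0] is the header row
        return f"https://web.archive.org/web/{ts}/{orig}"

    # e.g. earliest_capture("weibo.com") -> URL of the first archived front page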

Based on our historical analysis, we show that the Chinese translations for sharing—fenxiang and gongxiang—are indeed both keywords in Chinese SNSs. Fenxiang is the closest word to sharing in Western social media, while gongxiang is deployed in SNSs’ mission statements regarding their ultimate goal. Despite some similarities with the Western ‘sharing’, fenxiang and gongxiang actually draw on Chinese versions of selfhood and relations between the self, others and society, as well as reflecting the interplay between the individual, social media, and the state. In both cases, our observations of ‘sharing’ on Chinese SNSs resonate with the notion of the divided self, and relate to the deep roots of the words in Mandarin, from which they attain their rhetorical force in the context of the Chinese internet.

By focusing on fenxiang and gongxiang, this study sheds light on the discursive construction of the internet in China, while also offering a prism through which to analyze and understand Chinese SNSs over time. Following a comparative analysis of the context of sharing in American and in Chinese social media, we conclude by stressing the need to take language into account as we study the internet and its multilingual histories.

12:45-14:00 Lunch break

Brown bag lunches will be available in the Allard Pierson Museum Café (Google Maps link). There is limited seating in the café itself, but plenty of nice outdoor spaces in the surrounding area. 

From 13:30-13:50, Lionel Broye and Marie Molins will demo their Minitel installation 3615 LOVE at the conference exhibition in BG2 (Google maps link). Space is limited, so please arrive early and follow the instructions of conference volunteers.

14:00-14:45 Session 13A: On Repeat
Location: OMHP C0.23 (26)
14:00
The Web Now

ABSTRACT. “A mind that thinks in terms of the future is incapable of acting in the present. It doesn’t seek transformation; it avoids it. The current disaster is like a monstrous accumulation of all the deferrals of the past, to which are added those of each day and each moment, in a continuous time slide. But life is always decided now, and now, and now.”  — The Invisible Committee (2017, 17)

By referring to the web “now”, the paper makes reference to Walter Benjamin’s essay “On the Concept of History” and the concept of “now-time” (or the presence of the now), which describes time at a standstill where the past, in a flash, enters into a constellation with the present, objectively interrupting the mechanical temporal process of historicism (Benjamin 1992, 398–400). So in rejecting a historicist position (“the web that was”), the idea is to explore the temporal dynamics of the web “now”—the web as a convergence and divergence of temporalities and concepts that demonstrate the potential to both explode and implode possibilities for change.

The overall context for this is that the present has become “thickened” and omnipresent, and the temporalities of past and future have collapsed; the production of historical time — and its role in conceiving social transformation, once imagined by Benjamin — has collapsed with them. In other words, “now-time” now seems to collapse the transition from past to future into an inert “presentism” that the experience of the contemporary web seems to capture; our inability to act in the present is somewhat mirrored in the way the web acts for us through personalisation and so on. Ultimately, what is evidenced is “an avoidance of the now”, as the Invisible Committee claim (2017).

So how do we conceive of the perceived non-delayed correspondence between actions and their effects and communication on the web, between incoming data and its computation? What is experienced as present is actually defined by algorithmic calculations in the immediate past, meaning that the future has always already been “pre-empted” (Avanessian and Malik 2016). Moreover, the fact that computations seem to act in real time, as with streaming (often experienced as buffering), indicates the illusionary forces that serve to conceal the multiple renderings of now-time and distract us from knowledge of contemporary conditions. Evidently, subjects and objects of the web operate within “mutated time-space regimes” that are no longer developmental in their temporality but are rather caught in an “implosion of forces” — an explosion in reverse — and as such are limiting possibilities for change (Barad 2007, 245–46).

The paper then asks whether the memory of what came before (the web that was) and the anticipation of what comes next (the future web) indicates the deferral of politics altogether.

14:20
Listening to the rhythm of the web – Examining the web’s history through sonic epistemologies

ABSTRACT. Where do we put the needle on the web’s record to understand its history? Is it the content, is it the protocols, is it the users, or is it a different soundtrack altogether? This paper argues that to examine the web’s history we need to take a step back and listen to the whole orchestra. Instead of focusing on one procedure, we need to examine the web as part of a larger project of media’s knowledge production. This paper proposes to examine the web’s history through a new theoretical approach focusing on media power. Specifically, the paper focuses on how power relations are enacted to reproduce people and territories through media, in this case the web.

But unlike most media scholars (and their subfields), who use visual conceptualizations, such as (in)visibility, seeing and the black-box, to examine knowledge production and media power, this paper proposes a different approach. This is mainly because vision cannot capture the multiplicities within networked ecosystems. Contrary to such views, this paper argues that sonic epistemologies are better suited to multiplicities of actors (users, workers, and nonhumans), spaces, channels and temporalities. Synthesizing science and technology studies and sound studies, sonic epistemologies are practices that redraw the boundaries of spaces and bodies. This ability is especially productive for multi-layered spaces like the web, where bodies, time and space are more fluid and flexible.

The (re)production of the web involves seven strategies that are linked to two main practices which I call processed listening and rhythmedia. Processed listening is the way media practitioners selectively tune into different sources through the media apparatus, using several tools (which can be automatic or manual), in different temporalities, to produce different kinds of knowledge (mainly profiles) for various purposes (mostly economic and political). This process involves monitoring, detection, measurement, categorization and recording, the results of which are stored in a dynamic archive/database.

Rhythmedia describes the ways media practitioners use the knowledge in the archive produced by processed listening to (re)order people (bodies and behaviours) and the relations between them through media territories (analogue or digital). It is the way media practitioners conduct repetitious training on people through orchestrating the way they live in mediated spaces. These practitioners conduct the way architectures change according to the knowledge they gain from (processed) listening to people’s behaviour. This process involves (re)organization, exclusion, removal, deletion, filtering, and adaptation of noise.

The main argument is that media practitioners have been using processed listening and rhythmedia as part of seven sonic epistemological strategies to (re)produce subjects and territories. The first three strategies are associated with processed listening: new experts, licensing, and measurement; the next four strategies are related to rhythmedia: training of the body, restructuring territories, filtering, and de-politicizing. The outcome of these strategies is the production of subjects who behave in an efficient and economically desired way through media.

As a case study I show how these strategies were deployed in the web standardization process in the European Union (EU) in the early 2000s. By lobbying EU legislators and the Internet Engineering Task Force (IETF), the digital advertising industry and tech companies standardized different categories of behaviour, catering to their business models. This required configuring spaces and people on the web. To examine this, I have undertaken multiple qualitative methods: analysing legal documents from the European Union, analysing advertising association texts, and analysing Internet Engineering Task Force (IETF) standards. I also conducted semi-structured interviews with European Commission representatives and digital rights NGO practitioners.

This paper is part of a larger project that shows how these seven strategies were deployed by different media practitioners, in different time periods and different media. Although each of the strategies is conducted differently in each time and medium, this approach’s strength is in drawing on links and connections between different types of strategies. It also amplifies the usefulness of using sound, rather than vision, when examining multi-layered media. Shifting the attention from theories of vision allows media researchers (and others) to have a better understanding of practitioners who work in multi-layered digital spaces, tuning in and out to continuously measure and record people’s behaviours to produce a dynamic archive. This knowledge is then being fed back in a recursive feedback-loop conducted by a particular rhythmedia, constantly processing, ordering, shaping and regulating people, objects and spaces. Such strategies (re)configure the boundaries of what it means to be human, worker and social.

14:00-14:45 Session 13B: Infrastructure and Sociability
Location: OMHP C2.17 (48)
14:00
Never-ending inbox: a comparative study of media arts mailing lists

ABSTRACT. While contemporary social media have been critiqued for their ephemeral effects on media arts, curatorial practices, and activist politics, the mailing list has proven an enduring venue for geographically dispersed communities and individuals to participate and remain in dialogue over the course of decades. Lists such as Nettime, -empyre-, SPECTRE and CRUMB have played host to a community of artists, critics, curators, activists, and academics, helping to launch or establish the careers of numerous prominent figures in related fields. In spite of their historical significance, however, the currency of these mailing lists (and mailing lists in general) seems to have substantially diminished over the last decade with the advent of corporate social media. Based on empirical studies we conducted on these lists, this paper argues that mailing lists as discursive infrastructures, although understudied, are objects worthy of inquiry and of particular interest for the field of media art history. Specifically, we will present four lists, all of which continue to operate, as representative of early and 'mature' list cultures: Nettime as an early list, SPECTRE as a kind of threshold between periods, and -empyre- and CRUMB as later lists from the early 2000s that deploy different strategies of moderation and engagement.

In the paper, we present our analysis of these lists’ respective archives, which span almost two decades, periodising the discourses, cohorts, and events that have taken place on these lists with the aim of contributing original historiographical research into debates and issues in contemporary media art. Epistemologically and methodologically, the study addresses the question of how to write mailing list historiographies, and argues that, from the perspective of the future, legacy systems such as open, crawlable mailing lists (GNU Mailman, Pipermail, Listserv, MHonArc, etc.) may retrospectively provide a more lasting historical record of digital culture than today's all-enveloping, corporate-guarded social media. Comparative mailing list studies are important due to the overlapping cultural dynamics or resonance of lists. Topics, participants, moderation practices, etiquette and infrastructure often intersect, and certain recurring themes or continuities can be clearly identified. Taking up an algorithmic approach is especially useful for tracing cross-list continuities and disjunctures, along with identifying, more generally, the patterns of participation that have existed over time. It also raises questions about the relationships and dynamics between lists, sometimes in direct ways that might include collapse and rebirth (e.g., Syndicate to SPECTRE), but also in terms of splintering and perhaps even 'forking' (e.g., Nettime to the 7-11 net.art list or Nettime to the FACES cyberfeminist list).

Methodologically, however, studying mailing lists is not a straightforward task. While we pursue a highly formal analysis, in the vein of computational bibliography (Kirschenbaum 2002), it is equally important to recognise the cultural and historical specificity of these materials beyond their machine-readability. This means acknowledging how they are attached to broader human-technical assemblages that often include off-list email discussions or chat, IRL ('in real life') events and workshops, paper print publications, and other modes of communication. To acknowledge these interlinked settings also means grappling with the methodological limits of computational methods, especially as discussed in debates over research programs like cultural analytics (Manovich 2008) and digital methods (Rogers 2013), or throughout the digital humanities as distant reading (Moretti 2000), algorithmic criticism (Ramsay 2011) and macroanalysis (Jockers 2013). We work with these debates in mind, yet still aim to develop an additional level of engagement alongside hermeneutic, performative, archival or media-art renditions of list cultures. To formulate our algorithmic apparatuses for periodising these lists, we devised two types of reading methods, namely (1) a “quantitative/formal” reading of the dynamics of each list over time and (2) a “comparative/qualitative/discursive” reading of the content of the lists, both of which we discuss in the paper.
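The abstract does not specify the tooling behind the “quantitative/formal” reading. As a minimal illustrative sketch only (the archive file name is hypothetical, and real Pipermail archives would first need to be downloaded and concatenated into a single mbox file), the following shows one way such list dynamics could be measured, counting monthly post volume and distinct senders:

```python
# Minimal sketch of a "quantitative/formal" list reading: monthly post
# volume and distinct senders from a Pipermail-style mbox archive.
# The archive path below is hypothetical; Pipermail typically exposes
# one gzipped mbox per month (e.g. 2001-March.txt.gz) to concatenate.
import mailbox
from collections import Counter, defaultdict
from email.utils import parsedate_to_datetime, parseaddr

def monthly_dynamics(mbox_path):
    posts_per_month = Counter()
    senders_per_month = defaultdict(set)
    for msg in mailbox.mbox(mbox_path):
        try:
            dt = parsedate_to_datetime(msg["Date"])
        except (TypeError, ValueError):
            continue  # skip messages with missing or malformed dates
        month = (dt.year, dt.month)
        posts_per_month[month] += 1
        senders_per_month[month].add(parseaddr(msg.get("From", ""))[1].lower())
    return posts_per_month, senders_per_month

if __name__ == "__main__":
    posts, senders = monthly_dynamics("nettime-l.mbox")  # hypothetical file
    for month in sorted(posts):
        print(f"{month[0]}-{month[1]:02d}: {posts[month]:4d} posts, "
              f"{len(senders[month]):3d} distinct senders")
```

Plotting such counts over two decades is one plausible way the periodisation of cohorts and discourses described above could be anchored, before the qualitative reading of list content begins.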

14:20
COMMUNITIES AT A CROSSROADS. Material Semiotics for Online Sociability in the Fade of Cyberculture

ABSTRACT. Under what conditions is it possible to conceptualize online sociability in the 21st century? The presenter asked this question while writing her latest book, on which this presentation draws. With the burst of the creative-entrepreneur alliance, the territorialisation of the internet and the commercialization of interpersonal ties, the mid-2000s constituted a turning point for digital communitarian cultures. In that period, many of the techno-libertarian culture’s utopias underpinning possibilities for online sociability came to a crossroads. Foundational myths about the internet as an intrinsically ungovernable machine, about the creative coalition between knowledge workers and internet companies, and about the spontaneous online interactions of individuals producing wealth, democratic processes and social equality faced empirical counter-evidence.

Avoiding both empty invocations of community and hasty doom-laden conclusions, the presentation answers the above question by investigating the theories of action that have underpinned the development of techno-social digital assemblages after the fading of the ‘golden age’ of online communities. It draws upon an empirical analysis of what is probably the largest archive on digital communities worldwide, Ars Electronica’s Digital Communities archive, and in doing so returns a multi-faceted picture of internet sociability at the turn of the century.

Epistemologically, Communities at a Crossroads proposes a radical turn, privileging an anti-essentialist, performative approach over traditional understandings of online communities. In order to deal with the transient nature of online sociability, it focuses on how heterogeneous entities are woven together – and the means whereby they are kept assembled – instead of postulating the essence of community. This anti-essentialist approach avoids defining beforehand what communitarian ties are, and rather draws on what they are said to be.

The presentation argues that three conditions are necessary in order to conceptualize contemporary online sociability. First, we need to abandon the techno-libertarian communalist rhetoric. Second, it is necessary to move beyond the foundational distinction between Gemeinschaft and Gesellschaft and adopt a material semiotic approach. Only when the foundations of 21st-century social theory are called into question does it become possible to grasp and theorize contemporary techno-social assemblages. Third, it is time to relinquish the effort to devise definitions of online/digital communities and instead engage in more encompassing mapping exercises.

Finally, Communities at a Crossroads engages in a double archaeology. From the vantage point of 2009, it looks back to the genesis of network cultures in the 1980s and 1990s and compares those early discourses and practices to mid-2000s communitarian developments; the audience can thus follow communitarian accounts as they unfold. Alternatively, from the standpoint of 2018, it traces contemporary developments back to a period of profound internet transformations. The result is an experiment in diving into digital communalism at different depths.

14:00-14:45 Session 13C: Digital Activism
Location: OMHP C1.17 (48)
14:00
Histories of digital activism - narrative formation in historical accounts and references to digital activism

ABSTRACT. The past decade has produced a wealth of literature on digital activism. Even so, the phenomenon has been little historicised, although history constitutes a significant way of understanding and defining prominent societal phenomena. This paper presents a critical history of digital activism as derived from extant scholarship (a literature review). It argues that the collective (brief) historical references in digital activism articles reflect, and through narrative framing shape, how the concept is described and understood. Historical narratives are particularly relevant for a phenomenon such as digital activism, for which a large literature already exists but where conceptual work remains relatively young. This is mirrored in the various descriptions and synonyms of the term, including cyberactivism (e.g. Carty & Onyett, 2006), online activism (e.g. Uldam, 2013), internet activism (e.g. Tatarchevskiy, 2011), web activism (e.g. Dartnell, 2011), net activism (e.g. Meikle, 2010), networked activism (e.g. Tufekci, 2013), e-activism (e.g. Carty, 2010), mobile activism (e.g. Cullum, 2010), social media activism (e.g. Miller, 2015), hashtag activism (e.g. Briones et al., 2016), interchangeable uses (e.g. Earl et al., 2010), and, as applied here, digital activism (e.g. Hands, 2011).

This paper addresses these ambiguities through a review of historical accounts in the digital activism literature. It presents two types of historical references: (1) digital activism on the timeline of internet history, and (2) digital activism through prominent historic examples or events. The paper argues that these two types of historical references highlight a range of issues in the study of digital activism. The first type demonstrates that digital activism scholarship is strongly tied to and based on technological development, suggesting that digital activism as a concept and practice is derived from digital innovation. On the other hand, many of the listed examples have been praised for their integrated uses of digital and traditional activism, which raises the question of whether digital activism actually is, or even should be, explored as an immanently digital practice. While both historical timelines place a strong focus on technological affordances, there is a relevant difference between them, concerning the understanding of digital activism as either a purely or a predominantly digital phenomenon. This offers two potential avenues for the exploration of digital activism. If digital activism is to be explored as a primarily technological phenomenon, the question remains which technology it represents (e.g. Web 1.0 or Web 2.0), as the historical milestones of digital activism, such as the Battle of Seattle in 1999 or the Arab Spring in 2010-2011, do not align neatly with the major technological changes of 1995 (the commercial web) and 2005 (the interactive web). If digital activism is not primarily about the use of digital technologies, then existing narratives of digital activism are problematic, and it remains to be explored what is actually meant by what is commonly called digital activism.

These conceptual issues are visualised via four narratives identified in the historical descriptions: (1) the technology narrative [digital activism on the basis of technological development], (2) the online-offline narrative [digital activism history based on its relationship to traditional activism], (3) the communication-type narrative [digital activism history based on the type and direction of communication], and (4) the engagement narrative [digital activism history based on its affordances for engagement]. Taken together, the two types of accounts and the four identified narratives suggest a range of underlying assumptions about digital activism: that digital activism is a distinctly technological phenomenon; that the relevance of digital activism activities is strongly tied to their coverage (and therefore success) online and on traditional media channels; that digital activism is understood through its communicative potential, i.e. the one-way potential of Web 1.0 and the two-way model of Web 2.0 (or Barnig’s 2014 more detailed model); and, building on that, that digital activism is understood through the potential it offers individuals (e.g. Scholz’s 2010 participatory turn). Through such narrative formation, brief historical overviews of digital activism contribute to a distinct but also paradoxical understanding of digital activism as a phenomenon that is different from and inferior to traditional activism, yet whose differentiation from traditional activism remains blurred and analytically obscure.

14:20
Taking to the digital streets: A case study of hacktivism on the early web

ABSTRACT. As more and more of our daily communication happens digitally, marginalized and counter-public groups have often used new media to overcome real-world limitations. This phenomenon can be traced back to the early days of the Web. This paper provides insight into hacktivism on the early web, using the defunct GeoCities service as a case study.

Yahoo GeoCities started as a service called Beverly Hills Internet; like projects such as the Amsterdam Digital City, it owed its name to its organizational structure, which was modeled after a city with streets, squares, lots and houses. While the Amsterdam Digital City was originally based in and around the city of Amsterdam, GeoCities operated at a global scale and hosted over 100,000 websites, making it the fifth most popular site on the web in 1997. When acquired by Yahoo! for $4.6 billion in 1999, GeoCities had upwards of one million users (Milligan, 2017: 139). The service was eventually taken offline in 2009, after its static content was considered not profitable enough for the newly dubbed web 2.0.

The history of the preservation of GeoCities shows the immense difficulty of identifying and preserving a project of that scale. Even though the shutdown was announced in advance and efforts to archive GeoCities existed before then, it proved difficult to index everything, let alone to archive it (Milligan, 2018). As a result, most of the available archives estimate that their crawlers captured the majority of the content, but there is no general index, checksum or catalog to refer to. This places the available archives closer to media-archaeological investigations than to plain storage of a defunct service. Snapshots were taken throughout GeoCities’ existence, but owing to the size of the project this produced more fragments than histories. We may know that in 1997 GeoCities was the fifth-busiest website, but we do not know in detail where that traffic went.
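To make concrete what the missing “checksum or catalog” would amount to, here is a purely illustrative sketch (the directory layout and file names are hypothetical, not part of any actual GeoCities archive) of the kind of minimal reference catalog, file path plus SHA-256 digest, against which a crawl could later be verified:

```python
# Illustrative sketch: the kind of minimal catalog (relative path +
# SHA-256 digest) whose absence the abstract notes for GeoCities.
# The archive directory is hypothetical.
import hashlib
import json
from pathlib import Path

def build_catalog(archive_root, catalog_path="catalog.json"):
    catalog = {}
    for f in sorted(Path(archive_root).rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            catalog[str(f.relative_to(archive_root))] = digest
    Path(catalog_path).write_text(json.dumps(catalog, indent=2))
    return catalog

# A second crawl, or another team's copy of the same neighbourhood,
# could then be checked against the catalog instead of byte by byte.
build_catalog("geocities-crawl/")  # hypothetical directory
```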

The increase in hardware capability did not come with an increase in users' emancipation. The amateur home page was a form that required, rather than merely allowed for, creation from scratch; its structure was loosely defined by tags and connection speed. In what followed the static web 1.0, user interaction became more restricted, taking the form of content filling predefined structures. As a result:

“... new amateur pages don't appear at such amounts as ten years ago because the WWW of today is a developed and highly regulated space. You wouldn't get on the web just to tell the world, “Welcome to my home page.” The web has diversified, the conditions have changed and there's no need for this sort of old fashioned behavior. Your CV is posted on the company website or on a job search portal. Your diary will be organized on a blog and your vacation photos are published on iphoto. There's a community for every hobby and question” (Lialina, 2005).

This highly regulated space standardizes user expression according to the specific protocol of the platform. Only appropriate content can be published in the specific community, whereby the content is simply the vehicle to mine user data or collect ad revenue. As a glimpse of a future that could have been, GeoCities offers rare insight into development and community building on the early web.

Digital activism, and more specifically hacktivism, is at the core of my doctoral research. This paper presents examples of hacktivist organization and activity as digital activists took to the digital streets of GeoCities. Analyzing their footprints, the paper also documents the challenges and opportunities arising from archived web content, including technical, ethical and archival considerations. It concludes with an outlook on how platformization and the changing structure of the web have affected digital activism, and how this change affects current and future research engagement with the topic.

14:00-14:45 Session 13D: Don't be Eviler
Location: OMHP C0.17 (84)
14:00
The changing face of Facebook. Confessions of former Facebook employees

ABSTRACT. “The idea of bringing the world closer together has animated and driven Zuckerberg from the beginning,” Vaidhyanathan (2018: 1) writes in the opening of his recent book, in which he offers a blistering critique of Facebook’s devastating impact on rational online discourse and, ultimately, democracy. After a decade of overwhelmingly favorable press coverage, the 2016 US presidential elections seem to signal a shift in how journalists and scholars understand the company and the actions and utterances of its executives. The alleged influence of outside state actors on political decision-making, the emergence of filter bubbles, and the spread of “fake news” have become key tropes for critics to rally around. In Vaidhyanathan’s reading, Facebook’s socio-technical affordances are a result of Zuckerberg’s naïve optimism and his belief in the inherent benefits of connectivity. Similarly, Hoffman et al. (2018) draw on interviews, press releases, and Facebook status updates by Zuckerberg to survey how he discursively constructs the platform. Recently, Rider and Murakami Wood (2018) offered an in-depth reading of Zuckerberg’s 2017 manifesto “Building Global Community” to similar effect. This body of work shows that, in order to gain a better understanding of the platform’s history and evolution, we should pay close attention to the words of Zuckerberg as a key decision and opinion maker.

In this paper, we build on this scholarship but focus on those who stray from the discursive path outlined by Zuckerberg. By doing so we, first, offer an alternative discursive construction of the platform that challenges Zuckerberg’s utopian and naïve creeds and, second, offer an alternative history of the platform. In our understanding of history, we follow Foucault, who, as Poster (1982) notes, “offers historians a new framework for studying the past (knowledge/power), a new set of methods for doing so (archeology and genealogy), and a new notion of temporality (discontinuity).” Specifically, we focus on discourses that challenge the positive notions of connectivity, voiced by those who worked at Facebook but decided to leave. Our corpus consists of a small but important collection of public utterances by former Facebook executives and employees (Palihapitiya; Smith; McNamee; Parakilas; and an unnamed early ex-Facebook employee). The discourse that emerges from this collection points towards the discontinuities, breaks, and ruptures in Facebook’s business, which offer room for a very specific kind of criticism.

Two particular ruptures become apparent when former employees retell Facebook’s history. The first concerns the aforementioned presidential elections and how Facebook can shape societies. The second stems from the difficulty of disconnecting from the platform. To exemplify the latter theme, consider the words of Sean Parker, Facebook’s founding president, who describes the history of connectivity by saying: "When Facebook was getting going, I had these people who would come up to me and they would say, 'I'm not on social media.' And I would say, 'OK. You know, you will be.' And then they would say, 'No, no, no. I value my real-life interactions. I value the moment. I value presence. I value intimacy.' And I would say, ... 'We'll get you eventually.'" The issue of disconnectivity also surfaces in a 2017 public talk by Facebook’s former Vice-President for User Growth, Chamath Palihapitiya, who claims that “The short-term, dopamine-driven feedback loops that we [Facebook] have created are destroying how society works,” and who encourages everyone to disconnect: “I can control my decision, which is that I don’t use that shit.”

These examples and the discourses they represent signal that Facebook’s technologies may give people the “power to share,” but that this comes at the price of Facebook being able to “hack” the vulnerabilities in human psychology. These moments of absolution by ex-employees present an alternative history that challenges Zuckerberg’s. We argue that both histories are equally important, as they express the different power dynamics and operations of knowledge production intimately related to the idea of making the world more open and connected through platform technologies.

14:20
The Land Before Time: Yahoo’s Acquisition of GeoCities in 1999

ABSTRACT. GeoCities, often dubbed “the first virtual community,” plays an important role in web history. Once one of the most trafficked sites on the internet, it grew significantly in the mid-1990s, quickly gaining 2.1 million members. In 1999, Yahoo purchased GeoCities in a stock deal valued at $3.7 billion USD. Acquired at the peak of the dot-com bubble, the purchase reflected shifts in market thinking and tech investment that highlighted the importance of internet companies in a global market. By 2010, GeoCities was offline and accessible only through web archive collections. How did this once-thriving community of people, organized around shared interests and connected by web learning and identity creation, become a ghost town? Yahoo’s ownership of GeoCities precipitated a series of changes to the site that contributed significantly to its decline. This paper examines the cultural moment when GeoCities was purchased by Yahoo through a comparative analysis of popular news media reporting on the transaction and the intentions indicated in GeoCities’ IPO, read in conjunction with protestations from GeoCities members found in web archives.

To understand why GeoCities made changes to its design, user policy and community standards, I conduct a micro-history of financialization, examining GeoCities’ process of “going public” (Elmer, 2013). In the lead-up to the 1999 transaction, we can see the ways in which the future of GeoCities was envisioned as legal, political, and economic conventions for the site were established. The GeoCities IPO indicates a desire for increased advertising opportunities, the development of quicker navigation tools, and enabling marketers to target specific audiences. All of these strategies point toward massive infrastructural changes to the website, the commodification of the member, and a resultant shift in value from community building to connectivity (Van Dijck, 2013). After Yahoo purchased GeoCities, many homesteaders reacted to the changes by evacuating their homes and leaving signs on the doors. We can access many of these door hangings on the Internet Archive’s Wayback Machine by looking at popular pages and their changes over time, as sketched below. Many sites that were once extremely active are completely stripped of content and replaced with simple text on a white background stating the author’s reasons for leaving. Homesteaders expressed a range of reactions, often focusing on different aspects of GeoCities’ mistakes after 1999. At the core was members’ realization that the site was no longer something they wanted to support.
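As an illustrative sketch of how such page changes over time might be enumerated programmatically, the following queries the Wayback Machine’s public CDX API; the example GeoCities address is hypothetical, and this is one possible workflow, not the paper’s own method:

```python
# Hedged sketch: listing Wayback Machine captures of a (hypothetical)
# GeoCities page via the public CDX API, to track changes over time.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def list_snapshots(url, limit=50):
    query = urlencode({
        "url": url,
        "output": "json",
        "fl": "timestamp,digest,statuscode",
        "collapse": "digest",  # keep one capture per distinct content
        "limit": limit,
    })
    with urlopen(f"http://web.archive.org/cdx/search/cdx?{query}") as resp:
        rows = json.load(resp)
    if not rows:
        return []
    header, captures = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in captures]

# Successive digests changing, then the page shrinking to a short
# "why I left" notice, is the pattern described above.
for snap in list_snapshots("geocities.com/Heartland/Hills/1234/"):
    print(snap["timestamp"], snap["digest"], snap["statuscode"])
```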

In the news media, many were skeptical of Yahoo’s decision to invest heavily in a virtual community, struggling to see the billion-dollar value of pages dedicated to animals, cars, and poetry. Historically, however, we can read this moment as the beginning of the commodification of the web. What is striking is GeoCities members’ ability to destroy the very things that were being sold. This paper looks at this particular moment in time, on the eve of the millennium, when a series of shifts was made that would come to define the following decade of increased web commodification and commercialization.

15:00-16:30 Session 14: Keynote: Wendy Chun

Prof. Wendy Chun (Simon Fraser University)

Exit: The Web that Remains

This talk questions nostalgia regarding the web that was by tracing the links between 1990s visions of "cyberspace" and today's embrace of AI. Both seek to solve political problems technologically through promises of an impossibly autonomous sovereignty.

Respondent: Florian Cramer (Willem de Kooning Academy)

17:15-21:00 Conference Reception

Please arrive at Mediamatic on time for our special event from 17:30-18:45.

Geert Lovink (Institute of Network Cultures) and guests

A History of the media arts content provider Desk.nl

Geert Lovink and guests will, for the first time, dig into the history of desk.nl, an internet content provider for the arts that launched in late 1994 to provide a workspace and internet access for Amsterdam’s burgeoning net culture, and that hosted early projects such as Rhizome, nettime and net.artists including Jodi.

The special event will be followed by a vegan buffet dinner.