RESAW19: THE WEB THAT WAS: ARCHIVES, TRACES, REFLECTIONS
PROGRAM FOR WEDNESDAY, JUNE 19TH

09:30-10:00 Session 5: Welcome to The Web that Was

Welcome message by Prof. dr. Richard Rogers, chair of New Media & Digital Culture, Media Studies, University of Amsterdam.

10:00-11:00 Session 6: Keynote: Olia Lialina

Prof. Olia Lialina (Merz Akademie)

Collections

In the photo that accompanies this abstract, you see me posing with my collection of early web manuals. Each of the books in turn contains its author’s collection of links to good or useful websites, many of which were collections themselves: of graphics, sounds, code, scripts and further URLs, where somebody else had already found what you were looking for. Collections were a cornerstone of the vernacular web. This environment was more about spirit than about skills, and to accumulate and distribute was no less important than to create. Making your own site and building collections was often a parallel process. Because of the modular structure of web pages, even sites that never contained a collection section were, in themselves, collections, since each of their elements had its own URL and could easily be singled out and extracted. Still, some web masters made collecting the main purpose of their web site.

This talk will highlight the earliest known collections: Alan and Lucy Richmond Graphic Resources, Randy’s Icon Bazaar, the first Gallery of pureGIF animations started by Royal Frazier, Netscape’s animated GIFs, and BHI graphics. I address the role of these and other collections, and the subsequent transition from collections of items to sample pages, sets, and templates. Let’s wander through Sonya Marvel Creations, Mardi Wetmore’s Graphics on Budget, Moon and Back sets; look at the heritage left by GeoBuilder, Intel’s Web Page Wizard, Front Page, Yahoo Page Builder, and their templates. The GeoCities Research Institute archive and library provide considerable evidence for understanding the logic, structure, and evolution of materials and tools available to web amateurs of the medium’s first decade. However, most of the time we deal with fragments, splinters, and web masters’ voluntarism. Creating our own collections helps to complete the picture.

 

11:00-11:30 Coffee Break
11:30-12:45 Session 7A: Digital Art
Location: OMHP C0.23 (26)
11:30
Rogue sites: how Netart sneaked into the museum

ABSTRACT. With the launch of Tate’s first website in 1998 and the opening of Tate Modern in 2000, an initiative was started to commission works of Net Art for Tate’s new virtual visitors. Over the next ten years, Head of Digital Programmes Jemima Rellie (from 2001) and Curator Kelli Dipple (2003-2011) led on commissioning 14 artworks by artists including Graham Harwood, Heath Bunting, Marc Lafia and Fang-Yu Lin, and Young-Hae Chang Heavy Industries. These works were subsequently presented in the portal for the Intermedia Art programme. This curatorial programme was Tate’s first foray into engaging with art online. Early on, artists were invited to respond to: ‘the art historical context of network art, to respond to the context of Tate as a media and communication system and to address works that involve significant elements of participation and interaction’.

In 2016, Frances Morris, Director of Tate Modern, identified Digital Art as one of the key curatorial priorities for the Collection, renewing Tate’s interest in these works. This was further catalysed by the opportunity to include this group of works as a case study in the project Reshaping the Collectible: When Artworks Live in the Museum, funded by The Andrew W. Mellon Foundation. The project focuses on recent and contemporary artworks which challenge the structures of the museum. With its heady mix of Net Art, critical texts, artist interviews, podcasts, broadcasts and documentation of events, the website is both an art collection and an archive in itself, but one that does not fit neatly into institutional frameworks and processes.

Starting in November 2018, the team working on this case study will be looking at the following issues:

  • How do we maintain the integrity of this series of commissions?
  • How do we communicate its history and context in a coherent way online?
  • In what ways do we want to present a Net Art collection in the future?
  • What are the options for presenting these types of works in the Gallery?
  • What institutional infrastructure and expertise must be built in order to manage and maintain these works in the future?
  • What are the options for preserving early websites that use obsolete technologies?
  • Which networks of knowledge both inside and outside the institution should we strive to contribute to, and be part of?
11:50
Platform curating: A brief history of networked strategies and their entanglement with web technology

ABSTRACT. This paper will present an overview of the development of curatorial exhibitions online in conjunction with the transformations that have occurred in web technology. Its aim is to bring to light a history of networked curatorial strategies that—while using the online platform as a space of production, distribution and engagement with art—have defied the logic of traditional exhibition spaces like the gallery display, as well as that of the products of the digital industry.

In the past three decades curatorial modes of work online have rapidly evolved, but their history is fragmented and discontinuous. On the one side, preservation has predominantly focused on artistic production online—such as, recently, Rhizome ArtBase and net.artdatabase—rather than curatorial work; on the other side, the critical hiatus that took place between the end of the nineties and the second decade of the twenty-first century has led to discontinuous debates about online curating in the context of both curatorial and media studies (Ghidini, 2015). This paper will demonstrate how, in the manner of interventionist practices, curatorial endeavours online are interwoven with the developments in web technology, and how they have also given life to new, process-based ways of archiving web-based art and projects.

The recent appropriation of web technology by galleries and art fairs to sustain the traditional system of the institutionalised contemporary art world—such as s[edition] (2012) and Philips/Tumblr Paddle ON! auctions (2013 and 2014)—has focused on scarcity-creation and authentication of digital artworks, through IP tracking, for example, thereby reinstating an age-old system of gatekeeping. However, the curatorial endeavours of the nineties “counterculture” (Turner, 2008), when specialised skills were still required to produce and publish content, and those of Web 2.0, when everyone could turn producer and publisher with a click or a swipe in a mass adoption of the web—becoming “prosumers” (Cloninger, 2009)—have provided spaces for the creation of different frameworks for artistic production, new systems of collaboration where “curators are nodes networked with others” (Cook and Graham, 2010), alternative modes of historicization and new models of community engagement. This paper will trace a history of curatorial approaches that supersede the mode of production of culture as proposed by the museum and the gallery—a mode “determined by oligarchic hegemony issuing forth from centres of capitalist, academic, and political power” (Lichty, 2002)—to capture the transformations in the understanding of the web space as a site of artistic production and cultural exchange. These transformations highlight how, through the web, artistic and curatorial production has become more interwoven with everyday life, with the socio-cultural function of technology, and with our online behaviours and their connection to ‘life offline’.

In this paper, three projects will be identified as cornerstones of the history of platform curating, how it has evolved in connection to technological changes, and how it has incorporated offline spaces because of its distributed nature: the digital foundry äda’web (1994), founded by Benjamin Weil and John Borthwick, and specifically the project StirFry by Barbara London, which intervened in the network to create a new framework for distributing artistic research; the online repository Runme (2003), initiated by Amy Alexander, Olga Goriunova, Alex McLean and Alexei Shulgin, which offers an example of an alternative, collective mode of historicizing digital arts and its discourses; the gallery space Gallery Online (2012), founded by Ronen Shai and Thomas Cheneseau, which exploited the features of the social media platform Facebook to infiltrate daily communication routines. These three projects, along with other curatorial exhibitions online that have employed further networked strategies for artistic production and cultural exchanges—such as The IDEA (2000–2004) by Shankar Barua; CuratingYouTube (2007) by Robert Sakrowski; Beam me Up (2010) by Reinhard Storz; Get>Put (2012) by Kelani Nichole; cointemporary (2014) by Andy Boot; and #exstrange (2017) by Marialaura Ghidini and Rebekah Modrak—mark significant changes in artistic and cultural production that parallel and critique the technical and cultural status of web technology.

Through presenting a series of case studies of exhibition projects—their workings, modes of audience engagement and archival methods—and the way they respond to technological developments, this paper will shed light on the way in which the web has transformed from being a technical tool for the few, removed from day-to-day activities, to being a networked space adopted by the many, facilitating the integration of alternative modes and formats of production, display and distribution—even offline.

12:10
Preservation of Net Art through ‘Networks of Care’: A round-robin conversation about its challenges and benefits

ABSTRACT. This paper will present the outcomes of a round-robin conversation about the preservation of an online artwork through a ‘network of care’, addressing its benefits as well as its challenges. Although the premise of ‘networks’ has received considerable theoretical attention (Brown 2016; Lovink and Rossiter 2015/18; Munster 2013; Fitzpatrick 2011; Van Saaze 2013; Dekker 2018), little has been theorised about what it means to create and sustain a network of care, in particular in relation to digital art and preservation. There are still many questions about the construction and functioning of such ‘networks of care’. By bringing together individuals from diverse backgrounds, this paper will gain further insights into what is needed to develop preservation review processes and guidelines that detail specific actions for establishing and sustaining a ‘network of care’. It will address questions like: What are the different elements of a ‘network of care’? What and who would be involved when initiating such a network? What are the benefits (or challenges) for the study of these artworks when they are preserved through a ‘network of care’? And what could be the role of an established institution, or of preservation professionals, in sustaining and evolving such a network over time?

We will explore these questions through specific case studies, in particular the preservation of the online artwork Brandon (1998-present) by artist Shu Lea Cheang. On June 30, 1998, the Guggenheim Museum launched Brandon, its first artist project for the Web. The title refers to the life and death of Brandon Teena, a young transgender man who was sexually assaulted and murdered in rural Nebraska because of his gender identity. The artwork was released five years later as a collaborative platform, still undefined, inviting guest curators to illuminate Brandon’s story. The tragic story of Brandon Teena was kept alive with the intention that it could lead to a variety of social and political debates. Through the involvement of multiple authors from different parts of the world, the artwork started to grow and expand in unexpected directions. Brandon became a multi-author and even multi-institutional collaboration. As the artwork was distributed in a way that gave various parties control over it, what does this mean for how the artwork is preserved today? Moreover, as noted by, among others, Lovink and Munster (2013), most networks consist of online and offline components, agents and temporalities that cannot be studied and apprehended as merely (a set of) tools. These networks need to be analysed within the ecology in which they partake and which they are forming. This also reflects the key intention of Cheang, who states that the medium (the Web), and in particular its collaborative aspects, made the artwork possible. It follows that the process of studying, preserving and presenting Brandon must involve a network that includes the expertise of multiple people, as well as the involvement of non-human actors (i.e. the artwork and its contextual dependencies on the Web). In this research we aim to make a thorough investigation of the different roles of these stakeholders, or more precisely caretakers, which will provide insight into the political, technical, and social dimensions around the artwork. In other words, we will analyse the underlying structures of such a network of care, and how they are constructed, to show how sustainable a network of care can be over time.

11:30-12:45 Session 7B: Staying Power
11:30
A Place to Rest: BBS Communities and the Early Web

ABSTRACT. In the mid-1990s, the World Wide Web vaunted its novelty (Ankerson 2014). Early versions of Netscape Navigator shipped with a "What's New?" bookmark and Yahoo! featured a brightly colored "NEW" button at the top of its homepage. But not all users experienced the same sense of novelty when venturing onto the Web. Telematics enthusiasts in France were introduced to the Web as a “Super Minitel” and researchers at the University of Minnesota saw it as Gopher with pictures (Schafer and Thierry 2017; Frana 2004). The Web catalyzed a massive expansion in the visibility of the Internet but its culture bore traces of many earlier networks, from videotex to Gopher to bulletin board systems. As the users of those networks converged on the Web, they brought with them norms, values, and expectations about how social computing ought to be done.

This paper examines the responses of bulletin board system enthusiasts to the early Web through an analysis of trade publications, hobbyist magazines, and the archived pages of hybrid “Internet BBSs.” During the 1980s and 1990s, the bulletin board system (BBS) was a widespread form of PC networking in the United States (Driscoll 2014, 2016; Delwiche 2018). By 1993, tens of thousands of BBSs were accessible throughout North America (Scott 2001). At the end of 1995, however, the number of active BBSs plummeted. By 1998, dial-up BBSing had all but vanished from the continent. Former users describe this period as the “end” or “death” of BBSing (Scott 2005); their hometown haunts abandoned in favor of the World Wide Web.

In 1996, BBS developer Rob Swindell spoke for many when he joked, “The Internet Killed the BBS Star” (Swindell 1996). But the transition from BBSs to the Web was more complicated than this narrative of succession suggests. While the technical apparatus of the BBS was abandoned, the technical culture of BBS users persisted (Haring 2008). Whereas the Web was first conceived as a tool for organizing information, BBSs were designed for organizing communities. As one veteran BBS user described his experience of the early Web, “It felt like a highway full of billboards. Lots of pretty pictures to look at but no place to pull over” (Mark 1996).

Rather than a death, the dial-up BBS underwent a metamorphosis. Self-described “Internet BBSs” added TCP/IP connections in the hope of attracting users from across the world. Hybrid Web-BBS software packages such as Worldgroup, Wildcat!, and TBBS blurred the lines between BBSs and the Web, providing access to BBS discussion forums from a Web browser and the option to host HTML homepages for dial-up users. Yet, in spite of these innovations, the BBS began to seem vestigial. Those Internet BBSs that survived after 1998 were almost universally re-branded as “Internet service providers.”

A period of brief, intense creativity followed the initial contact between BBSs and the Web. The technical and cultural developments of this period had a lasting, if largely unrecognized, influence on the growing Web. This paper offers a view of the early Web through the creative practices and technical opinions of long-time BBS enthusiasts. To these veterans of the modem world, the early Web was a lonely place when compared with their beloved BBSs. As one former user mused, “There was no resting place on the Web. I was there but where was everyone else?” (Mark 1996, x).

11:50
Missing the Old YouTube: Collective Nostalgia for Platforms Past

ABSTRACT. Over the last thirteen years, YouTube has evolved from a digital video archive to an expansive, highly lucrative economy of content circulation and production. Since its acquisition by Google in 2006, in particular, the site has been continually redesigned, heralding the introduction of longer clips, improved video quality, new feedback features, monetization schemes, and the ever-elusive YouTube algorithm. Though these updates have encouraged advertisers and investors, whose ability to profit from the platform has increased exponentially, they have often had a destabilising effect on the site’s community of users. With each change, creators scramble to identify their implications and adapt the style of their content to sustain a presence on the platform, while audiences must learn to enjoy new styles of content, welcome new generations of producers, and adjust the ways in which they engage with videos and each other.

As YouTube’s culture has transformed to reflect the platform’s shifting affordances and demands, a growing resistance to change has transpired among its users. Unsurprisingly, their dissatisfaction is most commonly articulated through user-generated videos. Following an update to YouTube’s algorithms in 2017, for example, over 23,000 videos were uploaded to denounce the “Adpocalypse” that reduced the income of creators who earn ad revenue from the platform. In especially controversial cases, the community’s opposition extends across other social media platforms, sparking trending hashtags on Twitter, or inspiring collective action, such as the Change.org petition which received 240,000 signatures when YouTube introduced the mandatory use of Google+ accounts for all commenters in 2013. Online backlash to YouTube’s updates has found a familiar rhythm, typically peaking and dissipating within a matter of weeks. However, rage is not the sole means by which users’ frustrations at the platform’s precarity are expressed. As YouTube has continued to expand and evolve, it has become increasingly commonplace for users to disguise their anxieties around the platform’s present and future as a collective nostalgia for “the old YouTube.”

This paper will consider how this nostalgia for the (relatively recent) old YouTube draws on collective memories to anchor its community within a digital environment under constant threat of change. Given the breadth and diversity of the platform, this paper will take as its subject a specific community of creators and fans whose yearning for “the old YouTube” has become a near constant refrain. The golden years of the vloggers collectively referred to as the “British YouTubers” (Zoe Sugg, Tanya Burr, Alfie Deyes, Marcus Butler, Jim Chapman, Caspar Lee, Louise Pentland and Joe Sugg) can roughly be charted from 2012 to 2014, when their videos, friendships and celebrity skyrocketed in popularity. Together the British YouTubers offered audiences an array of content, with channels varied in theme (from beauty and lifestyle to comedy, life advice and parenting tips) that each attracted millions of video views and subscribers. However, the immense appeal of these vloggers owed less to the topics of their videos than to the inextricable threads of friendship, romance, and kinship that united the group: viewers were drawn to the intense, intimate relationships these vloggers shared, and to the affective experience of this intimacy across “collab”(-oration) videos, social media, and the daily vlogs in which their lives frequently intersected.

Over the last four years, as the group has drifted apart, appearing less frequently in each other’s videos, favouring new styles of content, and pursuing new friendships and careers, a strong collective nostalgia for the heyday of the British YouTubers has pervaded the YouTube community. This nostalgia is occasionally even indulged by the YouTubers themselves: in March 2017, for instance, Alfie Deyes posted a vlog entitled “I MISS THE OLD YOUTUBE,” featuring a thumbnail of himself reunited with fellow vloggers Caspar Lee and Louise Pentland. Of the 630 comments Alfie’s vlog received, many directly responded to the prompt in the video’s title, lamenting how much they, too, miss the platform as it used to be. Combining scholarship on digital communities, affect and nostalgia, this paper will offer a close textual analysis of the British YouTubers’ enduring impact to illustrate how collective nostalgia operates in digital communities, arguing that their audience’s longing for “the old YouTube” offers a way for these viewers to understand their own role in the platform’s past, and to displace their fears around YouTube’s uncertain future.

12:10
Craigslist and platform politics of the 1990s

ABSTRACT. Craigslist is an unusual platform. Because it’s privately held, Craigslist’s leadership has the freedom to determine platform size and monetization strategies. While some contemporary legal experts are advocating for the breakup of mega-companies like Amazon and Facebook (Ehrlich, 2017; Wu, 2018), Craigslist remains stubbornly small, with fewer than 50 employees. And in contrast to opaque mechanisms of selling user data to advertisers, Craigslist generates revenue solely by charging small fees for certain ads (such as job posts and real estate ads in certain cities). When Craigslist first launched as a website in the 1990s, it reflected commonly held ideas and beliefs about what the Internet is for and how it should be used. As the industry around it has grown and changed, the site helps us see the different inflection points and politics that have coalesced in the contemporary Web. Part of a larger project on the social history of Craigslist, this paper articulates key tenets of Craigslist’s platform politics, focusing on its aesthetic design, monetization strategy and financial history.

In order to develop a social history of Craigslist, I start with Newmark’s background prior to founding the site, and explore the role the San Francisco Bay Area’s 1990s ethos played in the development of Craigslist’s purpose and ideology. During this phase of the tech industry, democratic values of openness and access held sway, values that have shaped Craigslist’s look and feel ever since. Using interviews and textual analysis of Craigslist’s public-facing blog, I describe the site’s basic features and rules, as well as the company’s values and politics.

In my analysis of Craigslist’s platform politics, I focus on its aesthetic stability, its monetization strategy and its financial decisions. The decision not to update its appearance and the caution in rolling out new features contrast sharply with contemporary patterns of continual redesigns and upgrades (see Chun, 2017). While selling user data to third parties and paid advertising became the financial lifeline of web 2.0 platforms, Craigslist continues its straightforward approach of charging users small fees to post certain kinds of ads. And while most startups dreamed of taking their companies public to cash in on an IPO of shares, Craigslist has stubbornly resisted outside investment. In short, Craigslist’s critique of the mainstream internet isn’t in the form of a manifesto or a political campaign. It’s in its design values, monetization strategies and industry relationships.

How does Craigslist speak back to the mainstream web? As Wired observed in 2009, “Craigslist is one of the strangest monopolies in history, where customers are locked in by fees set at zero and where the ambiance of neglect is not a way to extract more profit but the expression of a worldview” (Wired, 2009). At once appreciative and skeptical, Wired magazine’s critique of Craigslist points to the ideology behind its old school appearance and hands-off approach to developing features. If a company can be profitable without selling user data, the fact that it exists at all, let alone remains popular, becomes its own form of critique. Craigslist’s platform politics can’t necessarily be copied successfully across the internet. But the fact that they can be deployed and be successful matters when views like “privacy is dead” and “move fast and break things” dominate an industry.

On its own, Craigslist makes for an interesting case study as the internet’s longest-running garage sale, but tracing the site’s history also presents a framework for considering how the wider internet has changed. It can help illuminate which practices have become accepted as the norm and the ways some forms of politics are accommodated over others in an online space. With its long history, stable business model, and almost unchanging aesthetic, Craigslist is like an island that has mostly stayed the same while the web around it has changed. By looking at the politics and promises of Craigslist, we can also reflect on how the web has evolved over the past 25 years, how it has stayed the same, what we might want to protect and what we should think about changing when it comes to everyday life online.

11:30-12:45 Session 7C: Archived (National) Webs
11:30
Unearthing the Belgian web of the 1990s: a digitised reconstruction

ABSTRACT. Over the last two decades the Web has become an integral part of European society, culture, business, and politics. However, the short lifespan of online data (with 40% of content being removed after 1 year) poses serious challenges for preserving and safeguarding digital heritage and information. Moreover, in contrast to most European countries, the Belgian web is currently not systematically archived and websites that are no longer online today have a high risk of being 'lost' for future generations of researchers.

In this paper we explore how ‘web archaeology’ could help us to unearth the Belgian web of the past. Using two digitised paper-based web directories (The Belgian web directory, published by PUBLICO bvba, and the Net directory, a continuation of The Belgian web directory published by Best of Publishing) as a starting point, we aim to answer questions such as (i) ‘What percentage of Belgian web history has been lost as a result of the lack of a Belgian web archive?’, (ii) ‘Which websites have resisted time and are still online?’ and (iii) ‘How much of the Belgian web of the past can be reconstructed through other web archives or other web archaeology techniques?’. It is anticipated that this research could provide a valuable evidence base to inform the development of a long-term web archiving strategy for Belgium.

The Belgian web directory and the Net directory are held as physical volumes in the collections of the Royal Library of Belgium (KBR). First published by PUBLICO bvba in summer 1997 (volume 1), the Belgian web directory ran until summer 1998 (volume 5), with individual volumes appearing every three months. From volume 6 (November 1998) onwards, it continued as ‘The net directory’, also published at three-month intervals, but now by ‘Best of Publishing’. KBR continued holding ‘The net directory’ until volume 14 (2000). However, it is unclear whether publication continued beyond 2000.

For the purposes of this research, all 5 volumes of the Belgian web directory (1997-1998) were digitised, plus the November 1998-January 1999 (vol. 6), April 1999 (vol. 7), September 1999 (vol. 8) and December 1999 (vol. 9) volumes of the Net directory. The digitisation process included applying Optical Character Recognition (OCR). The remaining 5 volumes of the Net directory (vols. 10-14) have not been digitised to date.

Using the text captured during the OCR process, a Perl script with regular expressions was used to extract a raw list of 89,084 URLs from the two paper-based directories, which list Belgian websites online between 1997 and 2000. After data cleaning and deduplication, 34,413 unique URLs were left. Of these, a random sample of 10% (n=3,441) was extracted. For each URL in this list, an automated process was set up that polled whether the URL was still ‘live’. Existing web archives (including Common Crawl, the Internet Archive, …) were also searched to discover archived instances of these URLs. If found, additional metadata such as the date and frequency of archiving was recorded for these URLs. A randomly selected subset of the sample (10%, n=344) was also analysed in depth. Using a qualitative approach, a group of ‘raters’ performed a quality assessment of each of the archived URLs. This included an assessment of the completeness of the archived material and of the ability to render the original form of the website. If no archived instance was found for certain URLs, these ‘raters’ took on the role of ‘web archaeologist’ and tried to find an archived version of the URL through other means (often this involved contacting the company/person and asking if they had a personal archived copy of the URL that they were willing to share).
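To make the pipeline concrete, here is a minimal sketch in Python of the three automated steps (URL extraction from the OCR text, liveness polling, and archive lookup). This is an illustrative reconstruction, not the project’s original Perl script: the regular expression is deliberately simplified, and the Internet Archive’s public CDX API stands in for the full set of archives that were searched.

```python
import re
import requests

# Simplified URL pattern; the project used a Perl script with
# comparable regular expressions over the OCR'd directory text.
URL_RE = re.compile(
    r"(?:https?://)?(?:www\.)?[a-z0-9][a-z0-9.-]*\.[a-z]{2,6}(?:/\S*)?", re.I)

def extract_urls(ocr_text):
    """Extract and deduplicate candidate URLs from OCR'd text."""
    return sorted({m.group(0).lower().rstrip(".,;")
                   for m in URL_RE.finditer(ocr_text)})

def is_live(url, timeout=10):
    """Poll whether a URL still resolves to a live page."""
    if not url.startswith("http"):
        url = "http://" + url
    try:
        return requests.head(url, timeout=timeout,
                             allow_redirects=True).status_code < 400
    except requests.RequestException:
        return False

def wayback_captures(url):
    """List capture timestamps for a URL via the Internet Archive's CDX API."""
    resp = requests.get("http://web.archive.org/cdx/search/cdx",
                        params={"url": url, "output": "json", "fl": "timestamp"})
    rows = resp.json()
    return [row[0] for row in rows[1:]]  # first row is the header
```

From the captures returned for each sampled URL, metadata such as the date and frequency of archiving can then be recorded, as described above.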

Our preliminary results show that, at least for Belgium, web archives are of the utmost importance as data sources for (future) scientific research, because other ‘web archaeology’ approaches yield few satisfactory results. Although web archives can, to a greater or lesser degree, only attempt comprehensiveness, and although the processes involved in harvesting and preserving content from the web involve biases resulting from technical, resource and curatorial constraints, they preserve and can help in uncovering the histories of the early Belgian web. Furthermore, it is anticipated that the results of this study could be valuable for future web archiving activities in Belgium.

11:50
Digging into Big Web Archive Data: The Development of the Danish Web 2006-2015

ABSTRACT. In this paper we will examine how an entire national web domain has developed, with the Danish web as a case. In brief, we want to investigate the research question: What has the Danish web domain looked like in the past, and how has it developed in the period 2006-2015?

Studying national web domains and their development at scale is a novel approach to web studies as well as to the writing of media and communication history (studies exist, e.g. Ben-David, 2016; Hale et al., 2014; Rogers et al., 2013). Therefore, it is necessary to introduce a number of methodological issues related to this new type of study, including reflections on: a) how a national web can be delimited, b) what characterizes the archived web as a historical source for academic studies, and c) the general characteristics of our data source, the archived web in the national Danish web archive Netarkivet. Once this is in place, the paper will introduce in more detail the data sources of the case study and how the data were processed to enable the study. Then a selection of the analytical results and insights is presented and discussed, and, finally, possible next steps are outlined.

A number of the general methodological themes related to this type of study have been discussed in the literature (Brügger, 2017; Brügger & Laursen, 2018, 2019), and therefore this paper will focus mainly on how these general themes were translated into an analytical design, and on the resulting historical analysis, the first of its kind at this scale. The paper is based on an ongoing research project whose first phases have been highly exploratory, one aim being to become familiar with the source material, including developing the methods necessary to unlock the material and to make the first digs into large amounts of this new type of digital cultural heritage, the archived web (for a brief research history of studies of national web domains, see Brügger & Laursen, 2018: 415-416).

This study of the historical development of the Danish web is based on the material in the Danish web archive Netarkivet, and we delimit ‘the Danish web’ to what was present on the country code Top-Level Domain (ccTLD) .dk, as well as the material on other TLDs that Netarkivet has identified and collected as relevant for the Danish web. The use of Netarkivet is also the main reason the investigated period starts in 2006: the first relevant crawl in Netarkivet dates from that year.

Working with data of this volume and complexity demands an analytical design that is rigorously and thoroughly thought out to make the analysis manageable. We distinguish between three main phases: 1) extracting, transforming and loading (ETL), 2) selecting the corpus, and 3) translating research questions to code. Each of these three phases will be presented briefly.

In the research project on which this paper is based, a large number of metrics were generated to get a better understanding of the historical development of the Danish web. To provide an overview of what this type of result looks like, we have selected five sub-research questions to be investigated in the paper: 1) The size of the Danish web, 2) Web Danica outside the Danish ccTLD, 3) Maintenance of the Danish web, 4) The degree of restricted access, and 5) Content types: written text and images.
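To give a concrete sense of phase 3, translating research questions to code, the sketch below shows how the first sub-question, the size of the Danish web, might be operationalised as a count of unique hostnames per crawl year. It is a hypothetical illustration that assumes simple CDX-style index lines of the form ‘timestamp url …’; Netarkivet’s actual data formats and processing pipeline are considerably more complex.

```python
from collections import defaultdict
from urllib.parse import urlparse

def dk_hosts_per_year(cdx_lines):
    """Count unique .dk hostnames per year from CDX-style records
    assumed to look like: '20061103123456 http://example.dk/page ...'."""
    hosts = defaultdict(set)
    for line in cdx_lines:
        timestamp, url = line.split()[:2]
        year = timestamp[:4]               # timestamps start YYYYMMDD...
        host = urlparse(url).hostname or ""
        if host.endswith(".dk"):
            hosts[year].add(host)
    return {year: len(h) for year, h in sorted(hosts.items())}
```

Analogous counts over MIME types or HTTP status codes would serve the sub-questions on content types and restricted access.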

12:10
Giving with one hand, taking with the other: e-legal deposit, web archives and researcher access

ABSTRACT. Outside the community of researchers and practitioners concerned with web archiving, if people are familiar with web archives at all, they will most often have heard of the Internet Archive (IA). This is perhaps only to be expected. The IA is ‘amongst the earliest systematic attempts at web archiving, operates at a global scale, and gives unrestricted access to its content via the Wayback Machine’ (Webster, 2017, p. 176). It has been archiving the web since late 1996, and at the time of writing makes available more than 325 billion historical web pages for browsing and limited searching. Much less well known is that archives and libraries around the world, from Iceland to Australia, have for many years been busy archiving the web. The nature, scale and scope of this archiving activity varies enormously, but unlike the IA these institutions are concerned either solely or primarily with national web domains (usually delimited by a ‘country code Top Level Domain’, or ccTLD, such as .fr or .uk) rather than with the web as a whole. This presentation will outline the different legal frameworks within which this national web archiving takes place, focusing on the impact of electronic legal deposit. It will discuss the vitally important enabling role of e-legal deposit, but also describe the challenges posed by the legislation – to access, use, re-use and publication. It will conclude by suggesting why researchers should concern themselves with sometimes complex legal issues, and how they might contribute their voices to ongoing discussions about access to our digital cultural heritage.

While the presentation will consider the international picture, it will discuss in particular the legislative frameworks that affect the shape of and access to web archives in the UK. There are two main organisations which have responsibility for archiving the web: The National Archives (TNA), which harvests and preserves the online presence of UK central government; and the British Library, which is charged with archiving the entire .uk country code Top Level Domain. The legislation which governs their respective web archiving activities is very different, and so too is the user experience. The National Archives began to archive government websites in 2003, under the terms of the Public Records Act. The definition of what constituted ‘public records’ was sufficiently broad to require no changes to the Act in order to accommodate born-digital data; and archived websites, as Crown copyright material, may be freely reused under the terms of the Open Government Licence. The result is the open UK Government Web Archive, which may be searched at http://www.nationalarchives.gov.uk/webarchive/ and through TNA’s main Discovery service (http://discovery.nationalarchives.gov.uk/).

The British Library has been archiving UK websites since 2004, but selectively and on a permissions basis. In order to archive the web at the domain level, it was necessary for legal deposit to be extended to include digital publications, broadly defined. The Library has now been undertaking an annual crawl of the .uk domain since April 2013 (this first crawl took 70 days to complete and resulted in the collection of 1.9 billion web pages and other assets, amounting to 31TB of data). The result is an extraordinarily rich and diverse primary source for historical research, combining many different types of media, from the records of government to personal blogs to online newspapers. All human life is present in web archives, and it is hard to imagine how you would write about life in the UK in the late 20th and early 21st centuries without access to the historical web. But the legal deposit legislation which critically allows for the collection of this data also places barriers in the way of its exploitation for research. The web is open and networked, but web archives are locked down and the unit of access is the single page. Digital data which could be made available to researchers in their offices or in the classroom is only accessible on-site in one of the UK’s six legal deposit libraries – three of which are in the south of England. In the British Library reading room, the legal requirement to prevent two or more users from viewing a page simultaneously closes down research conversations and makes it almost impossible to use the legal deposit collections imaginatively for teaching. Inequality of access is built into the system.

11:30-12:45 Session 7D: Platform and App Histories 1
11:30
Where did that #snapstory and #instastory go? The role of YouTube as a cultural archive of ephemeral social interactions

ABSTRACT. Sociologist Nathan Jurgenson argues that ephemeral media like Snapchat and Instagram stories – videos and images posted on these apps that vanish after 24 hours – are similar to other forms of temporary art, such as ice sculptures: art that “takes seriously the process of disappearance” (Jurgenson, 2013). These forms of ephemeral media are not meant to be classified and endure, continues Jurgenson, and hence they “feel more like life and less like its collection.” Indeed, users’ everyday life on social media is increasingly mediated by ephemeral content – Instagram Stories surpasses 400 million daily active users and Snapchat counts 191 million daily active users (Salinas, 2018) – which makes temporary media like #instastories and #snapstories interesting objects of study for understanding contemporary forms of sociability. #Instastories and #snapstories capture transformations of media practices, but these digital objects fade in a matter of hours.

As media scholars we are left with a fundamental epistemological question: How can we study ephemeral media in opaque and proprietary apps like Instagram and Snapchat? Where do #snapstories and #instastories go after they have been created? One answer to these questions is YouTube. Since Snapchat launched Stories in October 2013, and Instagram introduced Stories in August 2016, users have been reposting these ephemeral media to YouTube. Once #instastories and #snapstories are transformed into YouTube videos, they become decontextualized pieces of content that are circulated outside their original platforms. However, in their new form, they also become collectable and archivable digital objects through which historians of the web are able to reconstruct a particular moment in time that is shaping social media practices.

YouTube is a public space containing a heterogeneous collection of videos: from music clips, TV shows and films, to vernacular genres such as ‘how to’ tutorials, parodies, and compilations. YouTube as a cultural archive (Burgess & Green, 2009, 2018) opens up research avenues for media archaeology. This article focuses on early archival practices around Instagram and Snapchat ephemeral media on YouTube and traces their evolution over time.

In order to test the role of YouTube as a cultural archive through which we can reconstruct and re-evaluate ephemeral social interactions, this paper explores videos tagged with the terms “#instastories,” “Instagram stories,” “snapchat,” and “Snapchat stories” over time. For the data collection we used the YouTube Data Tools (Rieder, 2015), which utilise YouTube’s API to extract lists of videos matching specific queries. In this data gathering, we were primarily interested in finding user-generated content rather than broadcast media videos talking about Instagram or Snapchat Stories, or other YouTube content such as ‘how to’ videos offering tips on creating Stories.
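To indicate the kind of query involved, the sketch below calls the underlying YouTube Data API v3 search endpoint directly, rather than going through the YouTube Data Tools interface; the API key is a placeholder, and the exact parameters used in the study are assumptions.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; requires a real YouTube Data API v3 key

def search_videos(query, published_after, published_before):
    """Return (videoId, title) pairs for videos matching a query
    within a time window, via the YouTube Data API v3 search endpoint."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={
            "part": "snippet",
            "q": query,
            "type": "video",
            "order": "date",
            "maxResults": 50,
            "publishedAfter": published_after,   # RFC 3339, e.g. "2013-10-01T00:00:00Z"
            "publishedBefore": published_before,
            "key": API_KEY,
        })
    return [(item["id"]["videoId"], item["snippet"]["title"])
            for item in resp.json().get("items", [])]

# e.g. the first six months after Snapchat Stories launched:
# search_videos("snapchat stories", "2013-10-01T00:00:00Z", "2014-03-31T23:59:59Z")
```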

The aim of the paper is to build a taxonomy of the types of ephemeral media that are uploaded to YouTube (e.g. themes and aesthetics) and of the actors that transmediate these objects across platforms (e.g. the YouTube channels uploading the Stories), as well as to conceptualise what the study of these videos means from the standpoint of thinking back to the origins of these ephemeral media. For example, in a first exploration of the videos returned by YouTube matching our queries in the first six months after Snapchat and Instagram Stories launched (October 2013 to March 2014 for Snapchat, and August 2016 to January 2017 for Instagram), we found a wide variety of #instastories and #snapstories videos and channels: from compilations of celebrity stories to ordinary users’ stories, most of which were uploaded by amateur YouTube channels. Overall, the paper positions YouTube as an object of study for understanding ephemeral media practices, and examines its role in uncovering the histories of proprietary apps and their specific user practices.

11:50
On the preservation of apps and app data

ABSTRACT. Increasingly, smartphones, platforms, and mobile operating systems push users to create data and metadata that are born networked and stored in the cloud by way of internet-connected mobile applications. Most apps store and collect user data, also known as “app data”, in the cloud, then transfer the data to third parties, such as data brokers and technology companies. The increased use of apps on mobile phones—from social media, to mobile payments, to wayfinding—and the subsequent digital traces that are collected by developers and data brokers are influencing contemporary understandings of data, storage, networked infrastructures, and the platformization of the internet.

By leveraging the creation of app data over common digital formats, structured files, and internal storage, emerging (and competing) notions of ownership over data, long-term access, and digital preservation contexts can be seen amongst users, software developers, platform technologists, and law enforcement. Traditional software preservation models have focused on the development of software code, the standardization of formats, and the creation of unique works. But with mobile apps, few preservation models exist and instead, regulatory frameworks of data protection and personal information legislation guides the terms of service between developers and users. To use most apps and platform services, users typically agree to share their user data freely, without much claim about how it may be used once it is collected (Gehl, 2014). As a condition of use then, most apps require users to relinquish control over digital archives of personal and sometimes public data as it becomes aggregated and governed for access and future use. The rapid growth of the app ecosphere has led to increased—and intensified—collection of user data from apps and platforms, allowing companies to assemble profiles of users from app data gathered from across digital services (Binns et al., 2018).

What is the nature of data created by apps, stored locally or in the cloud, and then reassembled in databanks of consumer behavior? How, if at all, is it different from user-generated traces from other kinds of software that track users? Where does app data live and who owns it? The answers to these questions influence contemporary digital preservation approaches to apps as software and as artifacts of the internet, now and in the near future. In answering these questions, I will present findings from an ongoing investigation of the development and use of apps in mobile operating systems (Android and iOS), of the app data subsequently created and collected, and of how this current landscape of app data flows influences traditional understandings of creating formatted data, owning files, and managing file directories in personal information management. I will report on how practices of mobile data creation, transmission, and internal and cloud-based storage work together within situated courses of action at different scales and sequences of mobile computing infrastructure to produce new and competing understandings of data created and stored with apps, and as a result impact persistent storage and models of digital preservation.

The paper will begin by discussing how the storage architecture of mobile apps impacts traditional understandings of the data lifecycle through the creation of app data on mobile platforms. Then the paper will address the implications that the software development of apps and app data has for contemporary understandings of digital preservation, which have traditionally been driven by file format registries and local storage. A moral and political economy of how digital cultural memory operates within mobile platform software development is worth interrogating because the materiality of apps and app data is forward operating—operationalizing new archival visions of digital cultural memory governed by app stores, developer guidelines, user accounts, cloud architecture, data collection regimes, and social networking platforms that resell user data to third-party data brokers. More and more platforms and app stores throttle access, bringing new technical and legal challenges for researchers, web historians, digital stewards, and archivists seeking to preserve access to these data. If app data represent a new form of digital cultural memory, how should such an archive of digital traces be ‘owned’, shared, known, and studied? With this paper I bring the phenomenon of storage in mobile computing infrastructures to the attention of the fields of app studies and contemporary web archives. With this introduction to app data, I will argue that these trace data have the potential to be a rich space of study for scholars of web archives, data cultures, digital preservation and information infrastructures to attend to, theorize, and interpret.

12:10
Remember Vine? The Web Vernacular Between Remembrance and Preservation

ABSTRACT. Vine, the short-lived 6-second video format and platform, is a marvelous case study in amateur media cultures. Within just 4 years, this strange side-project of Twitter sprouted from an experiment into a mass phenomenon, prominently catapulting amateur video artists into mainstream meme fame and provoking new forms of narration, before precipitously falling into obsolescence and being discontinued suddenly in January 2017. Vine is like a compressed history of many other ephemeral services, defunct platforms, forgotten practices and long-gone places: it recalls mp3-rotations, Geocities, ICQ, AIM.

Vine was an epicenter of the strange. The platform ran on bizarre humor and high-octane, fast-mutating, iterative insider jokes. Porn and piracy were rampant, and genuine communities existed. Not unlike Tumblr, it occupied an ill-defined transitional space: not entirely on the fringes of the Web but decidedly off-center, irritating distinctions between amateur and professional, private and public, legal and illegal. In North America, the platform refreshingly highlighted the complicated racial politics of amateurism, becoming especially popular with black creators whose informal work often got co-opted by mainstream media culture. At the same time, Vine also played a crucial role as a tool for citizen journalism and for what Susan Murray called ‘accidental witnessing’(Murray 2012), for example during the Ebola crisis in Sierra Leone, the 2013 Boston bombings, and most prominently during the 2014 protests in Ferguson, where it enabled the extremely rapid proliferation of ‘shared political temporalities’ (Bonilla and Rosa 2015).

While academic attention was and remains scant, some empirical studies support the intuition that Vine operated on very different terms than other popular video sharing platforms. Unlike the aura of spontaneity and authenticity cultivated on social media like Instagram, nearly two thirds of videos on Vine were staged and scripted, with comedy accounting for 41% of all videos, nearly three times the share on YouTube (Yarosh et al. 2016). Even stardom followed a different logic on Vine, where the conventional web metric of follower count corresponded not with pre-existing offline celebrity status (as is the case with Vine’s former competitors Instagram, Snapchat or Twitter itself), but with creative output (Zhang, Wang, and Liu 2014). This somewhat peculiar cultural autonomy made possible the emergence of a unique, vernacular and non-commercial audiovisual culture, but also hindered many Vine creators from benefiting from their own creative labor.

Nearly two years after its closure, the Vine website and app now show only a broken static screen, despite previous assurances from Twitter that Vines would remain archived and accessible. Vines embedded elsewhere can still be played but are disappearing fast. Many of them survive in vestigial form in other online spaces that cannot fully recreate the circumstances in which the format was encountered. With their native platform gone, looking for Vines is difficult and frustrating. Perhaps fittingly for a platform which, according to criteria set out by Lobato and Thomas (2015), failed to formalize, amateur archiving practices have emerged. Reflecting on his own net.art practice in the 1990s, Vuk Ćosić noted that sometimes ‘digital piracy [is] the only possible gesture to preserve something in its natural habitat’ (Connor 2017, np.) and indeed, curated collections by ‘pirate-collectors’ (De Kosnik 2012) are now the primary way of accessing Vines. Even more interestingly, in some circles, a new custom of verbally describing remembered Vines has emerged – a form of oral history oscillating between remembrance and preservation.

How can we ensure that the unique dialect and political significance of Vine (and all its past and future siblings) don’t disappear? This contribution will retrace Vine’s history in order to think about the possibilities and limits of recognizing, studying and preserving fast-paced cultural phenomena. I will argue that when faced with short-lived ‘objects’ like Vine, commonplace methods of media research and preservation are inadequate. Instead, media theory and archiving need to be practiced like beta-testing.

12:45-14:00 Lunch break

Brown bag lunches will be available in the Allard Pierson Museum Café (Google Maps link). There is limited seating in the café itself, but plenty of nice outdoor spaces in the surrounding area.

14:00-15:15 Session 8A: Local Histories of the Unknown Russian Web
Location: OMHP C0.23 (26)
14:00
Unknown Histories of The Russian Local Web

ABSTRACT. There is «a standard folklore» of the internet [Streeter, 2011] in Russia, similar to the ARPANET history in the USA: the history of Runet. The word dates from the 1990s, when the internet had only just appeared in Russia. It is still used by both researchers [Ermoshina, Musiani, 2017; Nikkarila, Ristolainen, 2017] and the internet industry [RAEC, 2018], and it encompasses both internet infrastructure and the web. We argue that Runet and the internet in Russia are different things. Firstly, Runet has historically included advanced journalistic, literary and political projects from Moscow, Saint Petersburg and two or three other cities [Kuznetsov, 2004; Bowles, 2009]. Histories of the internet and the web in other cities have never been part of the «Runet» narrative. Secondly, the history of Runet itself is an exclusive narrative of a rather specific social group of early adopters: journalists, IT specialists, researchers, etc. In our research on the Russian regions we have discovered other «-nets» which have not been part of the standard historical narrative of the internet, e.g. Tonet (in Tomsk), Tatnet (in Kazan’, Tatarstan), Tyunet (in Tyumen) and so on. These turned out to be completely different objects: some have an infrastructural basis and are city-based (Tonet), while others are projects related to national and linguistic identity and are therefore not associated with a place (Tatnet). All of them are less known than Runet, and they have almost never become objects of research, except for a few texts written by linguists and activists [Orekhov, Galliamov, 2012; Orekhov, Reshetnikov, 2016; Sakharnykh, 2002; Sibgatullin, 2009].

In our panel we consider two cases and a theoretical issue: Tatnet (the Tatar internet), Tonet (the Tomsk internet), and a problematizing, reflexive paper about imaginaries of «-nets» and research strategies. We suggest that these Russian internet histories encompass both internet and web histories, and that they are important beyond the Russian context. A postcolonial understanding of web histories should also include local histories and narratives about the early years of the internet, and a discussion of the methodologies and connections we use to study them.

Tonet — web without «world wide»

From 1998 to 2006, Tomsk had Tonet, a city-based segment of the internet. Sites and services located in Tomsk were almost free to access for Tomsk residents, so they mostly used local sites. At the same time, Tonet was connected to the internet; it was not an intranet. This allows us to call Tonet «a Narrow Web», by analogy with the World Wide Web. In this report we offer a description of the history and specifics of the city-based Narrow Web of Tomsk.

The internet appeared in Tomsk in 1991 [Anniversary date. Tonet 20 years after] and in the first years it was distributed in much the same way as in other Russian cities [Perfilyev, 2003], by a couple of internet service providers (ISPs) and two internet centers in local universities. In 1998, the ISPs agreed with each other to exchange traffic and built an internet exchange point, TSK-IX. This decision was uncommon in other regions: in the 1990s, traffic exchange was the subject of disputes between ISPs, which were even engaged in peering wars [Trubetskaya, 2006]. Once the ISPs had connected their networks, a city-based segment of the internet (in the technical sense) was able to develop.

This infrastructural solution had several consequences, one of which seems important to us. In those years, users did not pay ISPs a flat monthly fee but paid for the amount of traffic they used. In the new situation, when a user from Tomsk visited a site hosted on a server inside a Tomsk ISP’s network, the traffic was free. Local network traffic cost the ISPs nothing, and they did not charge users for it. This was true not only for sites but also for file sharing, torrent trackers and other resources, so Tomsk residents used Tomsk sites and services. There was a strong connection between the Tomsk site and the Tomsk user. Pretty quickly Tomsk gained its own media, forums, chat rooms, sites for artists and writers, an online auction and even a small Wikipedia.

Tonet was connected with the «big internet» (the phrase was suggested by one of our informants; hereinafter, «big internet» means the internet in a technical sense, the interconnection of city, regional and countrywide computer networks). Tonet thus combined the features of a local network and the internet: on the one hand, free traffic inside the local network and high-speed file sharing between users; on the other hand, residents of Tomsk could visit any site on the internet. This lasted until the arrival of unlimited internet tariffs in 2007, which made Tonet unnecessary for most users.

Thus, based on the features of the infrastructure, content, and use of Tomsk websites, we can speak of Tonet as a separate segment of the internet. The internet history researchers Kevin Driscoll and Camille Paloque-Berges suggest calling this type of network, alongside FIDO and intranets, The Net [Driscoll, Paloque-Berges, 2017]. For this research, however, it seems more appropriate to call it The Narrow Web, by analogy with The World Wide Web, as this allows us to take into account both the isolation of Tonet and its connection to the «big internet».

This report contains a historical description of Tonet as a city-based Narrow Web. It is based on thirty in-depth, semi-structured interviews with users, providers, and other participants in the development of the internet, together with an analysis of archival copies of Tomsk sites using web history methods [Brügger, 2009] and of media publications. Tonet was a famous and significant case for ISPs in other Russian cities. In Tyumen, for example, there was an attempt to create a similar internet segment; it failed because the ISPs did not reach an agreement on traffic exchange. At the same time, according to our data, Narrow Webs existed in several other cities, such as Nizhny Novgorod and Pereslavl-Zalessky. Our research, and articles by colleagues from other countries [Nevejan, Badenoch, 2014], show that studying the histories of city-based webs and networks is an opportunity to build a more comprehensive picture of the history of the internet in a particular country.

Histories of the web that might have been: the Tatar internet project

The internet has long stirred up our utopian thinking. A nostalgic attitude to the early web now, in 2018, allows us to return to those old utopias that were never fully (or at all) realized. Nevertheless, it is all too easy to fall into the trap of emotion and fail to notice that your «web that was» and your idea of the «web that could have been» are not the same for everyone. I would like to consider one such non-mainstream utopia, which appeared in the 2000s around the idea of a Tatar web. During our research on the internet histories of the Russian regions in January and February 2018, we studied the Kazan internet. Kazan is the capital of the Republic of Tatarstan in the Russian Federation. During our archival work, we found a mention of «Tatnet», the «Tatar internet». We worked with the book «Tatarskii internet» («The Tatar Internet»), written in 2009 by the first and most famous Tatnet activist, Ainur Sigbattulin. We also conducted a series of in-depth interviews with internet pioneers of Kazan and with currently active online media producers, some of whom were national activists and some of whom were not. What is known about Tatnet? The word «Tatnet» is mostly used as the name of a sphere of sites, functioning in the 2000s, written in the Tatar language, about Tatars, or by Tatars. Most Tatnet activists lived and made their projects in Kazan. Tatnet was not a state project. Many people related to Tatnet in the past now say that the project failed, or never became strong and big enough. Nowadays there are various projects related to the idea of making a special part of the web for Tatars, but the authors of these projects do not know much about the Tatnet idea.

Tatnet as an imagined community

I suggest analysing the case of the Tatar internet through Benedict Anderson's concept of «imagined communities». At the end of the 1990s, English-speaking researchers rediscovered Anderson's theory as something that could be applied to internet projects and communities, not necessarily nationalist ones; there have been articles and projects examining Reddit, Twitter, forums, IRC, and more through this lens. However, there is no single way to apply the concept. I propose working with cases of «-nets» such as Tatnet through Anderson's theory by highlighting the main characteristics of an «imagined community» and then analysing, firstly, how they appear in a current imaginary and, secondly, how they confront the limitations and capabilities of the web.

Conclusion: When we say that the web has changed, or has become something other than what it was supposed to be, it is important to understand that the utopias that formed the context for the development of the internet differ across regions and cultures. We may distinguish a well-documented global utopia of the Internet (with a capital I) from a number of lesser-known «local utopias» based on a geographical, cultural, or ethnic substrate. Researching local utopias, as well as internet histories, can help us see what alternatives the web had. As we see in the case of Tatnet, Berners-Lee's and Wellman's utopias of globalization, inclusiveness, and the absence of borders are not so global: some people needed to build boundaries and create separate, independent communities and separate webs, in my case national ones. Nevertheless, the idea of globalization can figure in a web imaginary in another way: in the case of Tatnet, the globality of the internet was supposed to help form a new, previously impossible national community. In the case of Tatnet, the internet is perceived as a special space where it is possible to overcome state borders and power relations and to build a global national community that is impossible offline. The imaginary of a national community on the web faces a variety of limitations, and in this report I highlight the one I consider most important. As the Tatnet project came to life and the imagined community of a nation went online, the «imaginariness» itself became materialised: there were fewer spatial limitations dividing groups from one another, and different people with a Tatar identity connected via forums and chats. As our interlocutors described it, this situation led to conflicts and new borders inside this utopian global community.

Fragmentation revisited: imagining local webs and nets

The internet as an entity is connected with a number of imaginaries, among them connectivity. Connectivity is often understood as an instrument for bringing together people, things, and ideas that could not have been connected otherwise; this, for example, is a widespread justification for global education projects like Coursera or Wikipedia. Internet connectivity in this sense is not a phenomenon in itself: it requires a wide supporting network of infrastructure, governance, utopian ideas, and academic studies (at least at the very beginning [Wellman, 2004]). But should this connectivity be global and universal? Let us turn to the listed elements. The internet's infrastructure is largely global, as are its governance and governing institutions; they mostly share the same imaginaries or confront one another over them [Mansell, 2012]. These imaginaries are also legitimated by internet history, which traditionally starts in the mid-twentieth century in the US, from where the internet then spread [Abbate, 2000]. In recent years internet historians have begun exploring internet histories [Goggin, McLelland, 2017] that oppose this idea of a single, universal, global history [Haigh, Russell, Dutton, 2015]. There have also long been local narratives and imaginaries [Barker, 2015]; however, the scale of this locality traditionally coincides with country boundaries. In our case, Russia, this country-level internet is often called RuNet. Long before its association with Russian hackers and fake news, RuNet had a different background: it became a symbol of a separate and rather independent space with a changing structure but a rather constant essence [Asmolov, Kolozaridi, 2017]. Its history is sparsely documented and far from clear. Just one example: the Russian sector of LiveJournal was researched and visible, unlike blog platforms of comparable popularity, such as Diary.ru or LiveInternet, which were developed in Russia at the same time as LiveJournal. Why were some cases and actors chosen while others went missing [Driscoll, Paloque-Berges, 2017]? I suppose the issue was connectivity: some connections are more visible than others. Probably the reason lies in the same imaginary of connectivity, which is supposed to connect distant things rather than close ones. This hypothesis comes from our fieldwork studying «-nets» in the Russian regions. When we started the fieldwork, however, we met several challenges that I believe are important.

Firstly, the imaginaries we revealed corresponded not only to global utopian views but also to the Soviet background of technological development. There was a mixture of cybernetics, ideas of building a new world dating back to the 1920s, and Perestroika-era ideas. These coexisted with ideas of business development, globalization, and becoming part of the post-Cold War world order. We argue that these imaginaries are not the same as the global utopia in Western countries of the same period, as they appear to be rooted in a different historical background. For example, our research participants mostly speak about themselves in the plural and see themselves not as innovators but as those to whom technologies “had come” from abroad. But what is the scale of this locality? We started with city-based «-nets», but what we revealed was also a global idea about the post-Soviet context. At what scale should we state the difference and say that here is the «other» internet or web? The second challenge is whether we should distinguish the internet from the web, and what sense this distinction does (or does not) make. This question has methodological consequences, as it influences whom we interview and which documents we study. In our research we noted a significant difference between the narratives of the technical directors of internet service providers and those of their marketing directors. Finally, there is the challenge of our own position as researchers. Dealing with local «-nets», or exploring the internet in a particular city or within a particular social group, we need to take responsibility for our position. Do we need to preserve the web, acting as a kind of ecologist? Or are we critical of the World Wide Web? Should we exclude the global context, or compare the local with the global? The overall imaginaries of «local internets» have varied, and not all of them are positive: suspicions of «balkanization» and «sovereignization» are often connected with censorship and political regimes [Mueller, 2017]. I hope that questioning different fragments of the internet can help us understand it in a wider perspective. For this reason, the questions we raise matter not only for those dealing with RuNet but for any research on a fragmented internet and web that is not only World and Wide.

14:00-15:15 Session 8B: New Archives, New Histories?
14:00
Occupying the Archive: Implications of the digital revolution for historical research

ABSTRACT. This research starts with the question: What are the implications of not archiving social media for historical research? Digital data is binary, mathematized code, so it seems logical that the medium would be ideal for preservation (Webmoor, 2008). Instead, the opposite is true. Display, rendering, and storage are beholden to the capacities of servers controlled by private companies and distributed in undisclosed locations, with few protocols in place to protect the future of our online communication. There are contractual, privacy, and institutional challenges. Social media have become communication platforms for scholars, social movements, and political and business leaders whose actions have repercussions on public life that scholars may want to study now and in the future. The priority has shifted to wholesale archiving with little thought given to future uses. Currently the emphasis is on scooping up whatever digital material is available and digitizing any available resources, which gives a sense of completeness. This poses several obstacles to creating an archive that is well constructed and easily searchable.

As yet there is no coordinated, consistent archiving of social media in the United States. National heritage institutions are trying to preserve online information, but their efforts have focused on static websites. They have scarcely begun to capture YouTube, Facebook, Vine, Instagram, and Snapchat, and have no established standard procedures or ethics policies for the acquisition, appraisal, description, and management of social media records (Wettengel, 1997). Nor have private social media platforms developed consistent archiving policies, and precedent has yet to establish who holds the rights to the content of our communications if Facebook, Twitter, or other platforms shut down. To begin to understand the implications of social media archiving, I asked what we would know about the Occupy movement if The New York Times were our starting point for research on social movements of the early 21st century, and what social media would tell us about how social groups negotiate power, authority, and representation. These questions were narrowed to two research questions:

  1. Did The New York Times coverage show more focus on disorder than the Occupy tweets?
  2. What tools can best help to provide relevant answers?

Advantages and Disadvantages

This research is presented as a case study using topic models, a computational method that tries to identify groups of words used together in a text. To arrive at answers to these questions, I created a corpus of tweets covering five years of content, from Sept. 17, 2011 (the earliest days of the Occupy movement) to Sept. 17, 2016, focusing on a single Twitter account, @OccupyWallSt. The account had 215,000 followers and more than 37,000 tweets as of April 21, 2017. I chose to analyze the corpus using the topic modelling toolkit MALLET combined with the NLTK natural language processing library. Topic models use statistical techniques to categorize individual texts and discover categories, topics, and patterns that might not be visible to an individual reader (Nelson, 2011). MALLET creates clusters of words, or topics (Brett, 2013), which the human analyst interprets, judging what the models mean in context and validating the findings. Processing power is a key advantage. However, particular challenges exist in using tweets, including data-cleaning conventions and corpus size.
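
To make this workflow concrete, the following is a minimal sketch of the kind of tweet cleaning and topic modelling described above. It is illustrative only: the input file name and parameter values are hypothetical, and gensim's LdaModel stands in for MALLET (the toolkit actually used in this study), with NLTK handling tokenization and stopword removal.

    import re
    from nltk.corpus import stopwords
    from nltk.tokenize import word_tokenize
    from gensim import corpora, models

    # Hypothetical input: one tweet per line (the real corpus was built
    # from the @OccupyWallSt account).
    with open("occupywallst_tweets.txt", encoding="utf-8") as f:
        raw_tweets = [line.strip() for line in f if line.strip()]

    stop = set(stopwords.words("english"))
    stop.update({"rt", "amp", "http", "https"})  # Twitter-specific noise

    def clean(tweet):
        """Lowercase, strip URLs and @mentions, tokenize, drop stopwords."""
        tweet = re.sub(r"https?://\S+|@\w+", " ", tweet.lower())
        return [t for t in word_tokenize(tweet)
                if t.isalpha() and t not in stop and len(t) > 2]

    texts = [clean(t) for t in raw_tweets]

    # Build a bag-of-words corpus and fit a topic model. MALLET uses Gibbs
    # sampling; gensim's variational LDA is a rough stand-in for illustration.
    dictionary = corpora.Dictionary(texts)
    bow = [dictionary.doc2bow(text) for text in texts]
    lda = models.LdaModel(bow, num_topics=20, id2word=dictionary, passes=10)

    for topic_id, words in lda.print_topics(num_topics=5, num_words=8):
        print(topic_id, words)

As in the workflow described above, the resulting word clusters still require human interpretation and validation against the tweets themselves.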

I argue that, taken together, topic models and simple natural language processing tools can be used for discovery, as Yang, Torget and Mihalcea (2011) found in their examination of historic newspapers. The topic models used here signaled two patterns. One was a multifaceted discourse about domestic and international issues that extended beyond Occupy's original economic platform. The second was an indication that the movement remained engaged in these issues, and in new ones that arose, five years after journalists used media frames that made the movement appear to have ceased (see Sorkin, 2012). NLTK was not useful as a tool for validating the findings, but it did reveal other patterns that touched on the original research question. However, although some work is being done to normalize Twitter and other social media content, including stopwords (see Saif et al., 2014), any conclusions or comparisons remain problematic. Understanding social media's reflection in the public record may be better served by an approach like Documenting the Now than by text mining (see Jules, 2016). Techniques that incorporate ethical capture with large-scale analysis should continue nonetheless because, as Bijker and Law (1992) put it, "...technologies do not provide their own explanation."

14:20
Screens in struggles. Memories of social movements on French Web-TV since the late 1990s

ABSTRACT. This submission is based on a new research project launched in September 2018 in collaboration with the Web media legal deposit team at the French National Audiovisual Institute (INA). It covers many topics, aligned with the RESAW topics, including “Research methods for studying the archived web”, “Digital activism and web history”, “Web and internet histories”, and to a lesser extent “Identity, intersectionality and web history”.

In previous research on Maghrebi memories from 1999 to 2014, based on French web archives, the role of "Web-TV" was highlighted in a study of the ways the past was represented online. In this new project, the term is defined broadly, referring to websites and/or channels on audiovisual platforms (YouTube, Dailymotion, etc.) that broadcast information mainly in the form of videos made available free of charge on the Web, with or without a programme schedule.

Often built in opposition to pre-existing media, Web-TV takes part in or relays multiple civic, social, or political demands, as evidenced by the abundance of qualifiers: "minority" media (Rigoni, 2011), "suburban", "citizen", "activist", "participative", "confessional", "proximity", "mediactivist" (Granjon, 2014). Characterized by a certain scattering, these "alternative" broadcasters still share a political anchoring mainly "on the left" (Ferron, 2016), as well as the desire to give a voice to those perceived as "voiceless", the "forgotten" of traditional media and history.

This project therefore aims to study the ways in which the Web-TV archived by INA have presented the memories of social movements on the Internet since the late 1990s. What memory references are used, and for what purposes? How do social events and commemorative moments shape references to the past of struggles within the corpus? What are the scenographic choices? To answer these questions, the project relies on INA's Web Media archives, in which 6342 video accounts (YouTube, Vimeo, Dailymotion) and 610 "Web-TV" (as of September 2018) are archived, the earliest captures of which date back to 1997. Beyond content analysis, this exploration aims more broadly to pursue reflection on corpus building and data analysis methods in web archives for historians and researchers in the humanities and social sciences.

This research therefore has three objectives:

1. Historicize the Web-TV archived by INA as part of the Web legal deposit: the aim is to establish a typology of these media and to understand their evolution since the end of the 1990s. The initial hypothesis is that, over the period studied, online audiovisual practices changed as a result of the rise of video platforms, digital social networks, and the widespread use of smartphones. At the interface of television and the Web, "Web-TV" does not reject links to the sites of pre-existing media, particularly those of regional channels.

2. Experiment with data mining tools for a quantitative analysis of uses of the past in the corpus: the basic assumption is that the types of social movements most often recollected are linked to media commemorations (controversies over memory laws, colonization and migration in 2005, May-June 1968 in 2008 and 2018), but also to more recent events, as in the case of the 2005 riots, whose memory has since been perpetuated online.

3. Investigate the production conditions of the creators of memory content through a micro-historical approach based on an emblematic site: the hypothesis formulated here is twofold: on the one hand, sources of funding influence the choice of the mobilized past and its treatment; on the other, the motivations of content creators are in line with the "free media" movements (radio and TV) of the pre-Web period.

During this presentation, the focus will be on methodological aspects: the main research questions, the constitution of a corpus, the different access tools set up by INA, and the differences from more "traditional" methods of historical investigation. The problems of defining Web-TV over time, and the solutions adopted to address "corpus bias", will also be discussed. In addition, preliminary results obtained by applying different data mining approaches to the sub-corpus on the mobilization of May 1968 will be presented, showing how the event is interpreted across types of Web-TV and forms of staging at the two commemorative moments (2008 and 2018).

14:40
“‘Healthy people have bad days, too’: Narratives of AIDS and HIV on GeoCities”

ABSTRACT. In 1995, web hosting and development company GeoCities began offering free web hosting accounts to anyone who wanted to start their own web page. These new users, referred to as “Homesteaders,” could choose from one of six thematic “neighbourhoods” within which to build their website. Among these first six directories was WestHollywood, GeoCities’ gay, lesbian, bisexual, transgender, and queer (LGBTQ) neighbourhood. GeoCities achieved massive popularity and became the third most visited site on the web by 1998, before gradually declining in usage amid competition from other sites and platforms. Closed in 2009 by Yahoo!, GeoCities lives on today in the Internet Archive’s Wayback Machine. Recent scholarship on GeoCities has highlighted the web hosting service as not only a collection of individual personal websites, but a true community, based around shared interests and identities, and linked through webrings, awards, and user directories (Milligan, 2017).

It is significant that WestHollywood was one of these first six GeoCities directories. While staffing a gay crisis line as a graduate student at the University of Michigan, GeoCities’ creator David Bohnett recognized the isolation and mental health risks that often plagued queer youth and the importance of connection to finding a sense of belonging and self-confidence. Though Bohnett became an activist and leader in the LGBTQ community over the next two decades, it was the death of Bohnett’s partner in 1993 from AIDS that prompted him to once again revisit this idea of the importance of interconnection and community. Using his own savings, as well as the balance of his partner’s life insurance policy, Bohnett founded GeoCities with the goal of creating community on the World Wide Web. Given these origins, as well as the height of GeoCities’ popularity coinciding with the peak of the AIDS crisis, it is unsurprising that Homesteaders on WestHollywood used this new technology to document their own experiences with HIV and AIDS. These communities became framed around support, recovery, and how best to take care of themselves in a hostile and judgemental world.

This paper explores the ways that GeoCities became a platform for users living with AIDS and HIV (and their friends and family) to share resources, find support, and memorialize others who had been lost to the disease. GeoCities’ massive collection of self-narrative — Homesteaders posted autobiographies of their own experiences, as well as journals of their medical care and daily lives — serves as significant documentation of the effect of the AIDS crisis on the LGBTQ community. The importance of the internet and the web to the LGBTQ community has been well documented (Alexander, 2002; O’Riordan and Phillips, 2007; Siebler, 2016). Significant work has been done in the fields of sociology and literary theory examining the role of scientific and fictional literature in constructing and presenting AIDS as a “gay man’s disease” (Murphy and Poirier, 1993; Kruger, 2013). However, this GeoCities collection of narratives is a significant historical source which allows for the exploration of the experience of AIDS and HIV, in the individual’s own words, presented for an audience of other LGBTQ individuals in ways that were often very different from these other forms of narrative.

This paper builds on scholarship which emphasizes the necessity of viewing the web as a functional whole, made up of connections between individual websites, in order to understand how these communities and users functioned in relation to each other (Brügger, 2009, 2010, 2017; Rogers, 2013). Accordingly, I employ a methodology of network graphing with Gephi (Jockers, 2013) to visualize hyperlinking relationships between GeoCities users, along with content-level analysis of language and self-narrative based on topic modelling and keyword concordances. These macroanalytical methodologies allow for the identification of specific users, communities, and narratives that can be seen as influential within the WestHollywood neighbourhood as a whole, and help to target more traditional close reading. I argue that, through this combination of distant and close reading techniques, GeoCities WestHollywood emerges as a crucial site for the construction of the AIDS narrative as a story of survival and self-care. By framing these narratives around support instead of drama and tragedy, as the mainstream media did during the 1990s, these GeoCities websites provided examples of hope and recovery that were extraordinarily influential within the online community. This paper builds on the historiography of global sexuality and identity studies, and contributes to digital history methodology for performing research at multiple scales of analysis.
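
A minimal sketch may help illustrate the hyperlink-network step of this method. The sketch assumes a hypothetical local folder of archived GeoCities pages, one subfolder per Homesteader; networkx builds the directed graph and writes GEXF, a format Gephi can open for layout and visual analysis. The paths and the host-filtering rule are assumptions for illustration, not the paper's actual pipeline.

    import pathlib
    from urllib.parse import urljoin, urlparse
    import networkx as nx
    from bs4 import BeautifulSoup

    # Hypothetical layout: pages/<user>/... with one folder per Homesteader.
    ARCHIVE_DIR = pathlib.Path("pages")
    G = nx.DiGraph()

    for html_file in ARCHIVE_DIR.rglob("*.html"):
        source = html_file.parent.name  # folder name stands in for the user
        soup = BeautifulSoup(html_file.read_text(errors="ignore"),
                             "html.parser")
        for a in soup.find_all("a", href=True):
            # Resolve relative links; keep only links to other GeoCities pages.
            parsed = urlparse(urljoin("http://www.geocities.com/", a["href"]))
            if "geocities.com" in parsed.netloc:
                target = parsed.path.strip("/").split("/")[0]
                if target and target != source:
                    G.add_edge(source, target)

    nx.write_gexf(G, "westhollywood_links.gexf")  # open this file in Gephi
    print(G.number_of_nodes(), "nodes;", G.number_of_edges(), "edges")

Once the GEXF file is loaded in Gephi, standard centrality measures and community detection can point to the influential users and clusters that close reading then examines.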

14:00-15:15 Session 8C: The Commercial Internet
14:00
Infomaré and Superinfovia, the sea and the highway: Brazilian imagination and the early commercial internet

ABSTRACT. This presentation attempts to understand how Brazilian culture interpreted the internet in the nineties. To that end, we review the reactions of two distinct groups of cultural producers. On the one hand, we look at how journalists, via traditional newspapers, interpreted the then-new superinfovia, the information highway. On the other, we turn our attention to artists and critics who, with the increasing popularization of the internet and PCs, turned their gaze to the possibilities offered by this new medium and argued in favour of a web or internet art. The reactions of these two groups, we believe, characterize the opposing degrees of autonomy involved in communicating the early internet to Brazilian consumers. In other words, taking into consideration the restrictions, economic and symbolic, imposed on each group, we attempt to reconstruct the opinions produced and consumed by Brazilian society at the time. By analysing these different discourses, we attempt to picture the impact of the early internet on different classes of Brazilian society in that period, from the middle, managerial classes to the intellectual elites. It is this competition, rooted in the cultural distinction of both consumers and producers, that is the object of this presentation. With this object in mind, it is important to contextualize Brazilian society at the time. Still recovering from the traumas of a bloody and repressive dictatorial regime, which left an enduring mark on the country, Brazil in the mid-to-late nineties had something that was especially rare in the decades preceding the arrival of the internet: relative stability. We should not forget, for example, that by the end of the military junta that ruled Brazil from 1964 until 1985, optimism was rife in certain social segments. In that final year, the country launched its first satellite, a relatively modern videotext network was implemented, the new civilian government promised increases in the education and health budgets, the economy seemed, at first, stabilized, and, perhaps most importantly, the end of that bloody government signalled the normalization of Brazil's relations with the world. This optimism, however, was short-lived. By the early nineties, the country had marched towards hyperinflation, an unsuccessful protectionist plan to develop local computing industries, a series of disastrous economic plans, and, some years later, following its first free elections in more than 20 years, an impeachment to toast the first few years of the new republic. It was in the aftermath of these turbulent times that the internet arrived in Brazil. We note that it was only after the impeachment of President Collor in 1992, which itself followed the confiscation of most savings accounts in the country, that the internet became a commercial enterprise in 1994. Recovering from hyperinflation and still testing the limits of its young democracy, Brazil's early commercial internet had few users: in 1997, in a country with a population of roughly one hundred fifty million, only about one million individuals were connected. It was no coincidence, then, that the first registered “.com.br” address belonged to a company called “Canal VIP” (The VIP Channel). These cultural producers thus mirror the inherited inequality of Brazilian internet access.
In a country with an incipient editorial market and an even smaller artistic field, the individuals engaged with that early internet embodied Brazilian society's first contacts with the World Wide Web. The picture that emerges from this study is one of contrasts, mirroring the attitudes and temperament of an economic and cultural elite quite unsure about the benefits and dangers of this technological development. Fluctuating between utilitarianism and technological utopianism, between the mundane and the magical, between hidden economic threats and infinite aesthetic possibilities, these descriptions illuminate not only Brazilian attitudes towards the web but also attitudes towards the economy, politics, society, and the world.

14:20
‘This is not how we imagined it’: Technological Affordances, Economic Drivers and the Internet Architecture Imaginary

ABSTRACT. The Internet architecture is perceived as an engine for innovation and as an enabler of rights and freedoms. This perception rests on a prevailing sociotechnical imaginary, a set of values, institutions, metaphors, and visions that guide the co-production of policy and technology, dating back to the beginning of the 1990s. This sociotechnical imaginary is still prominent in one of the main Internet technical governance bodies, the Internet Engineering Task Force (IETF). In this paper, I ask how technological and economic developments have affected the Internet architecture imaginary, drawing on Science and Technology Studies and International Political Economy. To do this, I use a combination of qualitative and quantitative methods based on document analysis, interviews, and participant observation. First, I show how the Internet architecture is co-produced through a sociotechnical imaginary based on notions such as the end-to-end principle, permissionless innovation, and openness. Second, I show how the introduction of network management devices by network operators and network equipment vendors increased their control over data streams, violated architectural principles, altered the Internet architecture's affordances, and hampered innovation. Third, I explore the iterative attempts by content providers to innovate at the transport layer, reinstate the end-to-end principle, and regain control over data flows. From this analysis I draw the following conclusions: the sociotechnical Internet architecture imaginary and its self-regulatory model have not been able to safeguard the ability of researchers, small companies, or individuals to innovate at the Internet protocol level. Additionally, accommodating the earning models of transnational corporations has become a first-order consideration in the development of Internet protocols. This prioritization of corporate interests has altered the Internet architecture's sociotechnical imaginary and is producing an Internet that primarily serves the interests of transnational corporations.

14:40
Governing the Commercial Internet: Multistakeholder Influences on Clinton Era Governance of the Global Internet

ABSTRACT. In this paper, I examine the policies developed during the presidency of Bill Clinton regarding internet governance, starting in 1992 and ending in 2000. Efforts to commercialize the internet began prior to the presidency of Bill Clinton, but the decisions made by that administration accelerated that process. Today, the commercial nature of the web feels like a given, but in the mid-1990s, the structure and coordination of this international, extraterritorial system were a matter of great debate. Archival records at the Clinton Presidential Library indicate a strong U.S. preference for engagement with established capitalist states supporting a U.S.-centric, neoliberal approach to internet governance. The placement of internet governance issues under the purview of the Department of Commerce in the Executive Branch of the U.S. government focused attention on issues like trade, intellectual property, and taxation seemingly above democratic concerns like freedom of expression or the protection of individual privacy. Still, the U.S. had to manage these priorities within an international landscape that brought additional and sometimes conflicting perspectives to bear.

These sorts of decisions represent a continuity with previous eras of media regulation, and the decisions made about internet governance at that time have impacts to this day. In this paper, I examine the ways in which the Clinton Administration engaged with interest groups attempting to assert influence over internet governance. In particular, I focus on the private industry representatives engaging with the administration, the technological experts who were some of the earliest users of internet technology and who had a deep understanding of the technical infrastructure issues being debated, and finally on the ways in which traditional state governments were engaging with the U.S. Government.

The United States Government’s role in defining internet governance did not spring from a vacuum. The U.S. had long established itself in the history of computing, and specifically in the history of networked computing, and it is this history that positioned it to take leadership on the topic of internet governance. In the private sphere, academics and technical hobbyists in the U.S. were heavily involved in developing the standards that allowed networked computing to take place, and many of them saw the internet through to its development as a mass medium. Still, this was a global network, and the commercial users of the internet benefitted from having access to this global audience.

In 1998, the United States still dominated the internet globally, with 70 percent of all websites hosted there, but growth rates were highest outside the U.S. The Organization for Economic Cooperation and Development (OECD) cited North America, Europe, and the Asia-Pacific region as the places with the most electronic commerce activity, and therefore the focus for inclusion in discussions of internet governance. I argue that the ways in which these interests were presented in U.S. policy on internet governance follow a neoliberal logic, prioritizing market concerns and asserting that utopian visions of the internet were best achieved through a system that supported stable electronic commerce. In an effort to appear inclusive, we see an attention to multistakeholderism: an interest in including a variety of voices in the discussion, going beyond multilateral or state-to-state policy making to include private interests as well. Though this discourse frames attention to commercial interests as inclusive and even democratic, including commercial perspectives undercuts the ability of other governments and non-commercial public interest groups to set policy while advancing corporations' ability to do so.

Regardless of how democratic, or at least inclusive, the discussion over internet governance appeared at times, the goal here is to remain focused on identifying the various interests, and by extension, the different values that were carried into the process of defining internet governance at the time when the internet was firmly established as a commercial network. At its best, multistakeholderism ensured that people with greater technical expertise were able to contribute to the conversation and that the new stakeholders who were to build up electronic commerce became familiar with the intricacies of the governance process. At its worst, multistakeholderism brought so many voices to the table that it was easier to forget that having that many voices did not actually indicate that all potential internet users’ interests were being reflected or even considered. It is that history that I examine here as a way to think critically about the issues of internet governance that persist to this day.

14:00-15:15 Session 8D: Researcher Access and Support
14:00
Lowering the Barrier to Access: The Archives Unleashed Cloud Project

ABSTRACT. The Archives Unleashed Cloud project, funded by the Andrew W. Mellon Foundation, aims to make petabytes of historical internet content accessible to scholars and others interested in researching the recent past. We respond to one of the major issues facing web archiving research: while tools exist to work with WARC files and to enable computational analysis, they require a considerable level of technical knowledge to deploy, use, and maintain.

Our project uses the Archives Unleashed Toolkit, an open-source platform for analyzing web archives (https://github.com/archivesunleashed/aut). Due to space constraints we do not discuss the Toolkit at length in this abstract. While the Toolkit can analyze ARC and WARC files at scale, it requires knowledge of the command line and a developer environment. We recognize that this level of technical expertise is beyond the level of the average humanities or social sciences researcher, and our approaches discussed in this paper concern themselves with making these underlying technical infrastructures accessible.

This presentation introduces the Archives Unleashed Cloud to researchers, and also aims to stimulate a conversation about where the work of the researcher begins and the work of the research platform ends. It also discusses the problem of long-term project sustainability: researchers want services such as the Cloud, but how do we provide them in a cost-effective manner?

Stage One: Learning to WALK

In 2016, we launched the Web Archives for Longitudinal Knowledge (WALK) project, thanks to support from Compute Canada's Research Platforms and Portals competition. Our goal was to bring Canadian Archive-It partners together into one search portal with analytic tools, so that researchers could extract datasets of interest. By working with collections from six Canadian universities (Alberta, Dalhousie, Toronto, Victoria, Simon Fraser, and Winnipeg), we were able to develop our infrastructure, explore edge cases of strange data and errors within ARC and WARC files, and provide public search access to web archive collections using our “Warclight” portal (i.e. http://dalhousie.archivesunleashed.org).

Yet our goal of providing researcher access to under-utilized web collections was only partially successful. First, we did not interact directly with researchers or provide a modern interface for them to choose what they wanted to analyze. Second, everything relied on manual work by the project team. For the next stage of the project, we wanted to develop a self-service portal for users to interact with web archives.

Stage Two: Enter the Archives Unleashed Cloud

The Archives Unleashed Cloud thus aims to facilitate the analysis of web archives through a modern web-based UI, bridging the gap between easy-to-use curatorial tools and developer-focused analytics platforms. In short, it is now relatively easy to create a web archive, but still too difficult to analyze one! Growing out of the “Filter-Analyze-Aggregate-Visualize” cycle developed for the Archives Unleashed Toolkit (Lin et al, 2017), the Archives Unleashed Cloud is a web-based platform for analyzing web archives. It allows users to do the following:

  • Sync their web archive collections with the Cloud using the Web Archiving Systems API (WASAPI). Currently we support Archive-It collections but as other archival institutions adopt WASAPI our platform can speak to them;
  • Transfer ARC and WARC files into the Archives Unleashed Cloud;
  • Process ARC and WARC files and generate:
    • Full-text search for text mining;
    • Hyperlink networks for network analysis;
    • Other statistics on the shape of the collection (a minimal sketch of these derivations follows below).

Researchers and institutions can use the canonical Archives Unleashed Cloud at https://cloud.archivesunleashed.org or, as it is an open-source project, can run their own local versions.
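
To give a sense of what these derivatives involve, the sketch below extracts plain text and domain-to-domain link edges from a single WARC file. It is not the Archives Unleashed Toolkit or Cloud pipeline itself (the Toolkit runs on Apache Spark to handle collections at scale); it is a minimal, single-machine illustration using the warcio and BeautifulSoup libraries, with a hypothetical input file name.

    from urllib.parse import urlparse
    from bs4 import BeautifulSoup
    from warcio.archiveiterator import ArchiveIterator

    full_text = []  # (url, text) rows for full-text search / text mining
    links = []      # (source_domain, target_domain) edges for network analysis

    # Hypothetical input; a real collection spans many WARC files.
    with open("example.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response" or not record.http_headers:
                continue
            ctype = record.http_headers.get_header("Content-Type", "")
            if "html" not in ctype:
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            soup = BeautifulSoup(record.content_stream().read(), "html.parser")

            # Derivative 1: full text of the page.
            full_text.append((url, soup.get_text(" ", strip=True)))

            # Derivative 2: hyperlink edges between domains
            # (relative links resolve to an empty netloc and are skipped).
            src = urlparse(url).netloc
            for a in soup.find_all("a", href=True):
                dst = urlparse(a["href"]).netloc
                if dst and dst != src:
                    links.append((src, dst))

    print(len(full_text), "pages;", len(links), "link edges")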

Sustainability

As we develop the working version of the Archives Unleashed Cloud, one of the project team's main concerns is the future of the Cloud after funding ends in 2020. While we are exploring whether the Cloud makes sense as a stand-alone non-profit corporation, we remain unsure about its future direction. How do services like this, which meet demonstrated needs, survive in the long run? Our presentation discusses our current strategies, and aims to engage the audience on the state of the field and how best to reach web archiving practitioners.

Conclusions

Projects and services like WebRecorder.io and Archive-It have made amazing strides in the world of web archive crawling and capture. The Archives Unleashed Cloud seeks to make web archive analysis similarly easy and straightforward, even though the scale of web archival data makes this far from trivial.

14:20
Supporting Computational Research on Historical Web Collections

ABSTRACT. The Internet Archive (IA) has been archiving broad portions of the global web for over 20 years. This historical dataset, currently totaling over 20 petabytes of data, offers unparalleled insight into how the web has evolved over time. Part of this collecting effort has included supporting large-scale computational research on the collection. This presentation will give an update on efforts within IA to support computational use of its web archive, approaching the topic through a description of both program and technical development efforts. The talk will outline the different business models in use within the community for supporting computational research services on historical data, program evolutions, support structures, and engineering work, and will detail these areas through discussion of numerous specific projects carried out in collaboration with researchers. Points of discussion will include:

  • Researcher support scenarios
  • Program structures, funding and sustainability
  • Data limitations, affordances, and complexities
  • Extraction, derivation, and access methods
  • Infrastructure requirements
  • Relevant tools and technologies
  • Collection development and augmentation

In covering these topics through the lens of specific collaborations between IA and computational researchers performing large-scale analysis of web archives, this presentation will illuminate issues and approaches that can inform the implementation of similar programs at other web archiving institutions. It will also help researchers interested in data mining web collections better understand the possibilities of studying web archives and the types of services they can expect to encounter when pursuing this work.

15:30-16:45 Session 9A: Imaginaries
Location: OMHP C1.17 (48)
15:30
Digital Utopianism and Network Music: The Rise and Fall of the Res Rocket Surfer band

ABSTRACT. Computer network music is frequently championed on the grounds that it offers opportunities for experimental forms of social organisation rooted in ‘radical democratic’ principles (Knotts & Collins 2014). Some of the commentary on the genre mirrors the discourses surrounding digital utopianism (Turner 2006), where communitarian-like qualities of bottom-up self governance and anti-hierarchical organisation are taken to be a natural product of information technologies themselves, rather than something that takes effort and cooperation to bring about. Yet it is telling that the majority of this literature takes electronic art music as its focus, a field that inherits ideas about decentralised control and flattened hierarchy from 1960s experimentalism and free improvisation. What models of social organisation does network music augur when the style of music it supports bears no relation to these traditions?

This paper will focus on a group of computer musicians, studio engineers, software programmers, and dotcom entrepreneurs who, in 1995, created the first commercial system for geographically distributed music production on the public internet: the Res Rocket Surfer project. Although the system did not technically afford real-time performance, the near-instantaneous transmission of MIDI files enabled a novel form of loop-based studio performance in which internet-enabled studio musicians could collaborate on the same studio session from different locations simultaneously. But if the early web ideals of net-enabled democracy and virtual community were central to Res Rocket's technological and musical ‘imaginaries’ (Born and Hesmondhalgh 2000), this was at variance with both the musical ambitions of leading figures in the community and the changing model of governance regulating users' ownership of data and modes of interaction. This paper will draw out these competing layers of technological, social, and institutional agency as they played out in the Res Rocket Surfer community, paying particular attention to the oppositional and counter-hegemonic practices that developed among members as the company became increasingly integrated with major commercial audio software.

References

Born, Georgina, and David Hesmondhalgh, eds. Western Music and Its Others: Difference, Representation, and Appropriation in Music. University of California Press, 2000.

Knotts, Shelly, and Nick Collins. "The Politics of Laptop Ensembles: A Survey of 160 Laptop Ensembles and their Organisational Structures." Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2014.

Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press, 2006.

15:50
Bugs: viral thinking in the early internet

ABSTRACT. This paper argues that scholars of computing, networks, and infrastructures must reckon with the inseparability of the decade's two “viral” discourses: those of networked computing and of HIV/AIDS. We present a co-assembled archive documenting the reliance on viral metaphors honed in the HIV/AIDS crisis, with its massive loss of life, widespread institutional neglect, and comprehensive technological failures. As the 1990s mark a period of intense domestication of computing technologies in the global north, we document how public figures, computer experts, activists, academics, and artists used the intertwined discourses surrounding HIV and new computer technologies to explicate the risks of vulnerability in complex, networked systems.

In 1991 the National Research Council (NRC) published a report titled Computers at Risk: Safe Computing in the Information Age. The report detailed the myriad ways that American information infrastructures were in peril because of their increased dependence on computers. One source of potential risk was the way vulnerability multiplied in a network structure: “The most damaging change in the operating assumptions underlying the PC was the advent of network attachment.”

In 1999, the NRC published a second report, Trust in Cyberspace, which promoted institutional and personal responsibility for protecting networks. William Flanagan, a co-author of the report, likened the sharing of data through “open networking environments” to the “spread of AIDS, noting new concerns about the trustworthiness of the people who constitute one’s social network and the dire consequences that could result from the indiscriminate expansion of one’s contacts.”

These two passages book-end the 1990s and frame a story that was frequently told about computing in this period: using a computer is risky, being connected to other humans is risky, and the more we rely on computers the more we put ourselves at risk. Flanagan's concerns about the “indiscriminate expansion of one's contacts” reflect perfectly what Cindy Patton calls the “National Pedagogy” of safe sex education that developed in response to HIV/AIDS. The national pedagogy presented monogamy and abstinence as solutions to the epidemic, as in the statements by Reagan-era Surgeon General C. Everett Koop that served as the U.S. government's official standpoint on HIV. Koop employed a model of networked contagion in his oft-repeated dictum: “When you have sex with someone, you are having sex with everyone that he or she has had sex with in the past."

This paper is organized around three figures contemporary with the beginning, middle, and end of the 1990s: the virus, the network, and the infrastructure.

In part one, “the virus,” we examine the late 1980s, when the overlap of the HIV/AIDS epidemic and computing was largely confined to explicating risk: the portrayal of HIV as a high-tech alien invader, the mixing of biological and technical meanings of virus, and fears around sharing either digital information or bodily fluids with a potentially endless network of anonymous others. Here we begin with the first example of “ransomware,” a floppy disk that masked itself as a pedagogical tool for protecting oneself against HIV.

In part two, “the network,” we examine how HIV and computing were used to explicate each other during a period of rapid technological adoption in the mid-1990s. Many people in the global north were experiencing a commercialized internet for the first time and some people living with HIV/AIDS accessed new pharmaceuticals that allowed them to live with the disease as a chronic illness. Here, the homology serves to explain systemic interdependence within the emerging everyday-ness of networked computing and health management.

In part three, “the infrastructure,” we examine the end of the decade, when the “Y2K crisis” took on prominence as a subject of fear and outrage. In this period, we see AIDS activists and those living with the virus respond to the crisis as experts in surviving institutional failure; if the fear of contracting HIV compelled a crisis of trust in the 1980s, an almost identical set of concerns was articulated in the 1990s as the Y2K crisis tested a complex society built on computerized infrastructure.

In her landmark study of the early consumer internet, Wendy Chun argued that “sex and sexuality dominate descriptions and negotiations of the thrills and dangers of networked contact.” Building on this work, we argue that to understand the emergence of the so-called network society scholars must grapple with the fact that HIV/AIDS offered a convincing template for understanding and explaining the dangers of interdependence in networked relationships. Likewise, networked computing provided a powerful way to understand HIV/AIDS, especially the ways networks of human connection joined disparate populations while simultaneously providing a potential structure for survival.

16:10
The Evolution of Parafictions in Contemporary Media Art 1998-2018

ABSTRACT. Considering current technological infrastructure in relation to the web's past, this paper will trace the legacies of parafictions in net.art, demonstrating how these strategies have adapted and evolved in contemporary media art to align with our current experience of technology and anti-politics. The paper will examine and discuss why and how parafictions have become an important mode of practice in contemporary art, and how these practices have altered. For the purposes of this paper, Carrie Lambert-Beatty's definition will be used, which defines a parafiction as a fiction that is experienced as fact.

With net.art and early forms of digital art, the parafictions created were intended to be accepted as real and were often considered forms of activism, as is apparent in UBERMORGEN.COM's [V]ote-Auction (2000) and Eva and Franco Mattes' Vaticano.org (1998). However, after the 2008 financial crisis, which led to ten years of austerity, and with the use of social media becoming ubiquitous, opening up polarising chasms of opinion and separating us into self-reflective silos, artists have begun to break with the real. This paper will examine how parafictive practices have shifted from the plausible to the implausible. Parafictions are now less interested in the simulation and reproduction of truth and have begun to offer alternative truths; this includes works such as Ian Cheng's Emissaries (2015-2017) and Rachel Maclean's Spite Your Face (2017), and it reflects the socio-political shift in which people have turned against mainstream politics.

Evidently, our relationship to computation has altered significantly since the inception and widespread implementation of the internet and the World Wide Web (WWW) in the early 1990s. Tim Berners-Lee's WWW was built upon egalitarian principles and intended as a force for good, however naïve this may have been. On the 29th anniversary of the WWW's inception, Berners-Lee expressed fears about how the internet is being used. He is concerned that although half the world's population is now online, this is not enough, and that the inability to access the internet reinforces poverty (2018). Yet his main concerns lie with the internet's uses: it is controlled by dominant platforms, which cement existing boundaries, concentrate power among the few, and have made the internet more readily weaponised (2018).

Following the line of enquiry that sees the internet today as a space that has shrunk in size and scale through our usage, is affected by biases, is largely undemocratic, and is not functioning to its full potential, this research will consider and employ Benjamin Bratton's concept of The Stack (2015). Today software is integrated into our contemporary culture and society; this research investigates how software and hardware merge to affect our day-to-day lives, and uses Bratton's definition of The Stack as a way to rationalise our current situation and locate the space where parafictions exist. This research will refer to the internet and digital technology interchangeably under the overarching term technological infrastructure, and will argue that this infrastructure now operates at a planetary scale, meaning that it affects all aspects of our planet, both its physical geography and its human residents.

The evolution of parafictions in contemporary media art will demonstrate how the social and political situation between 1998 and 2018, alongside the development of computation to its current planetary scale, has affected our relationship with truth. This has subsequently led to a growth in artists engaging in parafictive acts that exploit and reflect our era of so-called post-truth and fake news. The paper will examine why we often choose to accept a narrative over a truth, and discuss how artists have created forgotten pasts, potential futures, and alternate realities through the use of digital media and the structure of the web, both the web that was and the web that is to be.

15:30-16:45 Session 9B: Digging/Archaeology 1
Location: OMHP C0.17 (84)
15:30
Public personal documents: tracing trajectories of the first Polish bloggers’ digital identities

ABSTRACT. The paper draws on a multiple-case (8) study of some of the first Polish bloggers, who experimented with the genre of the online personal diary and with weblog technology in the late 1990s and early 2000s. Using the biographical method (an experimental mixture of biographical interviews and blogs-as-personal-documents), it attempts to capture a unique and decisive moment in the relatively short history of the internet, defined on the one hand by the rapid adoption of ICT by users, and on the other by considerable anonymity and freedom to define oneself online, owing to the relatively few people then engaged in the first forms of online sociability. Users engaged enthusiastically in blogging and in the possibility of writing themselves into being in an unprecedented social configuration: public and anonymous at once. Blogs were public then, owing to unlimited access and asymmetric relations with blog audiences; at the same time, they were anonymous, because of the pseudonymous genre and the absence of offline social ties in these online spaces. It was therefore both a creative challenge to produce a meaningful online presence via weblog and a threat to act in front of unknown publics in a way that had no social scripts to follow.

The study reconnects to Erving Goffman's interactionist sociology of the self/identity, which is at once produced and confined, and to the philosophy of the late Michel Foucault, who wrote about technologies of the self as the social enacted by the individual in the making. This moment in internet history presents a unique opportunity to study the human adoption of the technology of “social media” before they existed as platforms governed by corporate business, and hence to develop a critique of Facebook and the like. The social media platforms quickly found themselves in a position to capitalize on internet sociability: the ideologies of connectivity, platformization, and one identity erased the freedom and ease of blogging sociability and led to the standardization and quantification of social presence online. Public spaces (and documents) online ceased to be private spaces, or became privately owned commons still defined as personal. This contradiction makes it critical to study the histories of personal digital identities online and to examine the intersection of individual users' creativity and their relations with online audiences. The study tracks blogging trajectories, relations with (imagined) audiences, and the creative process behind pseudonyms and blogging itself, the producing and revealing of oneself, and asks whether this idiographic study of a specific intersection of social and technological circumstances can facilitate a reimagining of online sociability and identity in post-platformization times of hyper-accessibility, oversharing, surveillance, and exploitation.

15:50
The Misconstrual of Block Quotation: A Web Stratigraphy

ABSTRACT. Block quotation, traditionally known as excerpt, extract, or long quotation, is a feature of academic writing in the humanities and social sciences that is largely unknown to writing in science and engineering (Hyland, 2004, p. 26). When, in the early nineteen-nineties, the blockquote element was being formalised as part of the earliest HTML specifications, block quotation had been supported and publicly documented for more than a decade in virtually every application of SGML, the markup language that HTML is modelled on. By the time HTML was being created, academic writing – including norms of quotation and citation – had also been subject to intense standardisation for roughly a century, through the efforts of university presses, scientific and scholarly associations, as well as tuition in academic writing.

Today, HTML’s blockquote element is hardly ever seen in its standard academic form, except in scholarly publications. In general usage, the blockquote element now bears only a distant resemblance to the academic norm from which it descended.

This paper, working from archival materials, peels back and documents two distinct layers of the process by which the blockquote element came to diverge from the academic norm of quotation over the quarter-century of the web’s existence. The paper asserts that in two separate acts of collective misconstrual, features extraneous to the academic norm of quotation were conflated with the blockquote element: the style conventions of the pullquote, a conventional design element in magazine production; and the epigraph, a form of quotation located outside the running copy, associated with loose attribution that eschews the rigours of academic citation.
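
To make the contrast concrete, the following is a minimal, hypothetical sketch (invented here, not drawn from the paper’s archival materials) of the blockquote element in its standard academic form, set against the pullquote-style usage that now dominates general practice:

  <!-- Academic form: a long quotation set off from the running copy,
       reproduced verbatim and tied to a full citation -->
  <blockquote cite="https://example.org/source">
    <p>The quoted passage, exact to its source and discussed in the
    surrounding text.</p>
  </blockquote>
  <p>(Author, Year, p. 1)</p>

  <!-- Pullquote-style usage: a fragment of the page’s own copy,
       repeated for visual emphasis, with loose or no attribution -->
  <blockquote class="pullquote">
    A striking phrase lifted from the article itself.
  </blockquote>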

16:10
‘The web is not print’: tracing historical influences on changing web coding practices

ABSTRACT. While web developers have been the subject of studies related to their working conditions and worldviews (Kennedy 2011), and the websites they have created have been discussed and situated in a cultural and economic context (Ankerson 2018), little attention has been paid to the historical development and diffusion of the coding languages that web developers use day to day to create websites. Web developers and designers are constrained by code (Engholm 2002, Brügger 2013) and by the limitations of the web browsers that display it, so investigating the history of coding practices and their browser implementations has an important role to play in the literature on histories of the web.

Each of the three main languages used to create websites – HTML, CSS and JavaScript – has its own origin story and phases of development, inflected by historical changes, economic patterns, and varying levels of browser support. These have been described and debated in the industry literature but have not yet been the subject of academic analysis. These languages can encode graphic design ideas that existed pre-web: CSS is a web-specific way of specifying and controlling typography and layout, a process integral to the discipline of graphic design. Finding the commonalities and differences in approach between disciplines, mediated by the code web developers must employ to create effects and layouts, will open up further avenues for analysing the online outputs of web designers and developers.

Even if web design is informed and influenced by other design disciplines, it is still located in a nexus of code, browser support, varying levels of education, and the diverse backgrounds of web developers and designers. By observing the dialogue between different design disciplines, the discourse within the web design ‘community’ and the process of specifying new coding practices, this research aims to detect patterns and rhythms in the historical development of web development, quantified by studying code stored in web archives alongside analysis of blogs and printed publications aimed at web developers and designers. For example, in the late 2010s web browsers began to support a new form of page layout, referred to as ‘CSS Grid’ (Andrew 2017). This was a re-formulation of the process of creating layouts controlled by an implicit grid, pioneered in the graphic design context by Josef Müller-Brockmann (2011) in the 1960s and extremely influential in graphic design practice ever since.
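
As a minimal sketch of the kind of encoding at stake (the class names and measurements below are invented for illustration, not taken from the paper), CSS Grid lets a stylesheet state an explicit column grid much as a graphic designer would draw one:

  <style>
    /* A hypothetical four-column grid in the Müller-Brockmann manner */
    .page {
      display: grid;
      grid-template-columns: repeat(4, 1fr); /* four equal columns */
      grid-gap: 20px;  /* the gutter between columns, named ‘grid-gap’ */
    }
    .article { grid-column: 1 / 4; } /* spans the first three columns */
    .sidebar { grid-column: 4; }     /* sits in the fourth column */
  </style>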

By analysing the names chosen for the different aspects of creating and controlling the grid in CSS Grid layout, and comparing these to the terms commonly used by graphic designers, it should be possible to observe what was considered relevant and understandable when transporting the process from one medium to another. What is missing? What has been copied? Does the implementation of these ideas in code control the same kinds of variables and visual properties available to offline designers? Some years before CSS Grid was formulated, Müller-Brockmann’s grid layout processes had been brought to the web by designers like Mark Boulton and Khoi Vinh (2011); however, these early efforts relied on prototyping ideas outside of the browser and then (mis-)using CSS properties like float, display and position to realise the layout in the browser. The continuing wider adoption of grid principles in these early years also seems to have solidified what had been a highly individual practice for graphic designers into a more rigid process online: grid ‘frameworks’ like 960.gs (a forerunner of tools like Bootstrap) reduced grid layout to easy-to-implement, off-the-shelf options. Does this process point to a difference in the backgrounds of designers in these two disciplines, or to a medium-specific requirement?
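
By way of contrast, a sketch of the earlier float-based idiom, simplified from 960.gs conventions (the class names and pixel widths follow its 12-column scheme, though this is an illustration rather than a quotation of the framework), shows the ‘(mis-)use’ of float described above:

  <style>
    /* Pre-grid era: columns faked with floated, fixed-width boxes */
    .container_12 { width: 960px; margin: 0 auto; }
    .grid_9 { float: left; width: 700px; margin: 0 10px; } /* main content */
    .grid_3 { float: left; width: 220px; margin: 0 10px; } /* sidebar */
    .clear  { clear: both; } /* needed to end each ‘row’ of floats */
  </style>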

Twitter also provides useful historical data on the formation of the grid specification: exchanges between designers and the codifiers of CSS Grid show that the process of deciding on the naming of particular CSS properties (e.g. ‘grid-gap’ rather than ‘grid-gutter’) caused tensions. Collecting and analysing these exchanges sheds valuable light on the different conceptions of grid layout across disciplines. The web industry often protests that ‘the web is not print’, but it seems worthwhile to attempt to explain exactly why and how this axiom has gained popularity: does it expose the tensions between ‘creative’, individualistic graphic design practice and a more ‘mechanical’, collective web codification of the same ideas in CSS? Could it also point to the role of graphic designers ‘taking their talent to the web’ (Zeldman 2001) in the early days of web design, and their part in the history of the web, helping to ‘prototype the future’?

15:30-16:45 Session 9C: 90s Web Roundtable
Location: OMHP C2.17 (48)
15:30
Analysing and reconstructing the Internet and Web of the 1990s: A round table

ABSTRACT. The round table will be organised with the authors and editors of a special issue of an academic journal that will be published at the end of 2018, which incorporates several themes identified in the call for papers: Web and Internet histories, archives and access, digital activism and Web history, historicising the Web and digital culture, and social imaginaries of the early Web. Several of the authors and editors of this issue dedicated to the 90s have agreed to exchange and compare their views, methods, problems and expertise, not by presenting the papers they wrote for the issue but rather by engaging in a dialogue based on four central questions that will shed light on the history of the “digital turn”:

  • How and why were the 1990s a pivotal decade for the Internet and the Web?
  • Why do we link/intertwine the Internet and the Web: is this relevant? What are the limits and advantages of this approach in analytical terms?
  • What methods, sources, issues and limits come into play when we attempt to reconstruct the history of the 1990s?
  • What type of approach is the most relevant and effective: a bottom-up or top-down approach? A study that explores the fringes or one that remains rooted in the mainstream? A US-centric or a more global, or local, or decentralized approach? A discipline-based approach or an interdisciplinary one?

Themes explored by the contributors include, in particular: Perl and the technology and culture of the early Web; the development of the digital rights movement in France; the emergence of a cyberculture in Amsterdam in the 1990s; early fandom communities; the integration of myths into the popular histories of the Internet and the Web; writings and debates on the spiritual and ethical implications of “cyberspace” in the 90s; and the development of an analytical infrastructure for rebuilding the history of the 90s. This round table will provide a rich seam of historiographical and methodological perspectives, through the interaction of authors with highly diverse approaches, whether in terms of sources (grey literature, technical guides and handbooks for the general public, legal or state reports, press and audio-visual archives, oral histories, web archives), methods (examining the portrayal of the Web and the Internet in speeches or contemporary representations, analysing controversial issues of the time, incorporating STS notions, etc.) or perspectives (European or North American approaches, the study of software, infrastructures, online content, etc.). This will in turn foster a dialogue with the audience on the writing and shaping of histories of the Internet, the Web and digital cultures more broadly.

15:30-16:45 Session 9D: Archives and Migrant Communities
Location: OMHP C0.23 (26)
15:30
Analysing and archiving the web presence of migrant communities in the web archives of the Netherlands and the UK

ABSTRACT. For recent migrant communities, the web has played a vital role both in sustaining and creating relations in the new country of residence and in maintaining diasporic networks elsewhere in the world. While the role of digital technologies in relation to migrant communities has become a growing area of research interest in recent years, that research is largely focused on newer platforms and apps, while the open web, and in particular archived web materials, are often overlooked.

At the same time, patterns of wider social and political marginalisation facing many migrant communities potentially make their web presence more vulnerable to erasure or disappearance. For less permanent or established communities, access to the infrastructures for preserving their own web presence may be limited, or at least not considered a priority in light of the other challenges they face. Moreover, the use of languages other than the official language of their country of residence may mean that these communities’ web presence is overlooked in more generic web archiving and searching practices.

In light of both the valuable histories of migrant communities recorded in the open web and the potentially vulnerable nature of these histories, this panel focuses on the creation of Special Collections on specific communities within the national web archives of the Netherlands and the UK. The speakers will draw on examples of archived and contemporary web materials to illustrate what they reveal both about the communities who created them, as well as the wider social, cultural and political histories of the countries in which these communities are located. With archiving understood as a critical and reflexive practice, speakers will discuss their own processes of selection and curation, and how these implicate the researchers in the representation and (re)creation of the community in the archive.

17:00-18:15 Session 10: Keynote: Fred Turner

Prof. Fred Turner (Stanford University)

Machine Politics: The Rise of the Internet and a New Age of Authoritarianism

In 1989, as Tim Berners-Lee dreamed up the World Wide Web, a deep faith in the democratizing power of decentralized communication ruled American life. Even Ronald Reagan, the Great Communicator of the Hollywood era, could be heard to proclaim that “The Goliath of totalitarianism will be brought down by the micro-chip.” Today, of course, we know better. The question is: how did we go so far wrong? To try to answer that question, this talk returns to the 1940s and shows how our trust in decentralized communication was born in the fight against fascism during World War II. It then tracks that trust through the counterculture of the 1960s to the Silicon Valley of today. Along the way, it shows, step by step, how the twentieth-century American dream of a society of technology-equipped, expressive individuals became the foundation of today’s newly emboldened and highly individualized form of authoritarianism.