CEPE/ETHICOMP 2017
PROGRAM FOR WEDNESDAY, JUNE 7TH

09:00-10:30 Session 12A: Privacy
Location: Room H3: building D4, 2nd floor
09:00
Digital Privacy: Leibniz 2.0
SPEAKER: Wade Robison

ABSTRACT. In 1963, Chief Justice Earl Warren called the ‘fantastic advances in the field of electronic communication’ a danger to the privacy of the individual. If we use the privacy torts as developed in American law — intrusion, disclosure, false light, appropriation — we can see how dangerous those advances have been regarding our privacy. We will see how readily so many can do so much more to invade the privacy of so many more. We will also see a thread running through the privacy torts that was not readily visible before: invasions of privacy treat us as objects to be observed, revealed, manipulated, and used.

09:30
Security, Privacy'); DROP TABLE users; -- and Forced Trust in the Information Age?
SPEAKER: unknown

ABSTRACT. A famous xkcd comic strip portrays the dialogue between the mother of a school-aged child named Robert’); DROP TABLE Students;-- and a school official. This rather unusual name – which ends in an SQL-injection attack (cf. Halfond et al. 2006) – is actually a successful security attack against the school’s information system. In the strip, registration of the name caused the loss of all student records. As the official expressed his disapproval, the mother in turn hoped she had succeeded in teaching the school to sanitize its database inputs (a minimal code sketch of the attack and its remedy appears after this abstract). While Randall Munroe’s xkcd comic strip is an amusing small story told in four panels, it is also an illustrative conversation piece about trust in the modern information age.

Information and software systems have become cornerstones of our society. Each system has its producers, operators and users, and these parties are forced to trust each other to some extent. As a concrete example, the producer of the comic strip’s information system trusted that its operators would not try to hack the system. Similarly, the unlucky operator trusted that users would not try to hack the system. As the infamous mother showed, even one weak link can cause remarkable losses to trust relationships, stored data, and information systems.

Trust and its absence are widely discussed topics in several disciplines; economics, sociology, computing and philosophy all have their own take on the concept of trust. For instance, computer and information system scientists as well as software engineers have discussed trusted platforms (Spinellis, 2003), users’ trust in the preservation of digital information (Hart and Liu, 2003), and the influence of customers’ trust on web pages and services (Gefen et al., 2003). Economists have examined both interpersonal and institutional trust. Examples include trust in electronic commerce transactions (Manchala, 1998) and organizational trust (Mayer et al., 1995; Schoorman et al., 2007). Schneier (2012) examines trust as an essential part of a functional society, and provides a framework for modelling trust in society. Observing the sheer number of definitions and viewpoints on trust, it becomes evident that a single definition of trust is hard to come by. Marsh and Dibben (2003, p. 468) note that as each discipline focuses on its own relevant aspects of trust, trust as a subject is inherently obscured.

Trust in the discourse of IT and ethics is a widely studied field. A quick search in two main journals reveals this quite handily, as dozens of relevant articles can be found amongst the hundreds on offer. However, even if we accept the conclusion drawn by Giustiniano & Bolici (2012) that there is no real consensus on the definition or characteristics of trust, we can still work out what ICT-enabled trust requires in specific situations. This paper looks at one such specific situation. Examples of building ICT-enabled trust include (but are hardly limited to) trust through the use of a trusted third party (reference withheld), reputation building and tracking (De Laat, 2010, 2014), and being reliable and consistent in one’s behaviour, whether the behaving party is a person or a corporation (reference withheld; Rendtorff & Mattson, 2012).

When we consider potential adversaries attacking information systems, we can group them into different categories based on their motivation (Tanenbaum, 2001). Curious non-technical users are exactly what the definition says.
Motivated generally by curiosity rather than malice, they explore resources to which they have been given access. Insiders are technically capable users who take it as a challenge to break the security of a system. Financially motivated attackers range from individuals to organized criminal groups, but all have the same goal of making a financial profit. Espionage, whether private or state-funded, refers to attackers with a clear motive, serious funding and the capability to compromise systems.

There is no such thing as perfect security (Schneier, 2003). The only cryptosystem that offers provably perfect security – the one-time pad – is notoriously difficult to implement (Shannon, 1949). We must accept some insecurity in information systems for them to be usable, and if something is usable it can be misused. This leads to organizations having to accept a certain amount of risk with information systems and data. For example, banks accept a certain loss margin with credit card systems. While fraudulent users and technical errors cause a nontrivial financial loss, in the end the benefits (and profits) of using the system far outweigh the potential losses due to malicious activity.

Information systems’ users are often guardians of information, for they have access to information that should be guarded. Thus the old question posed by Socrates is ever more relevant: who guards the guardians? We must place trust in these guardians, at least in matters that are not of the utmost importance – such as (electronic) voting, national security, medical records, etc. If the systems are built to be as secure from the inside as from the outside and the users are either heavily restricted or intensively monitored, a lot of resources are lost (Vuorinen, 2014); resources that could be used either to improve the system or the service, or to improve the overall security of the system against outside attacks. It must be noted that there must be some security in the system itself – at least against the most basic attacks and malware – but in some areas it is more important to direct scarce resources to the fields that require them the most. Input sanitization increases the workload, provides an extra layer of complexity in system design, and slows things down – and this holds for practically all security measures (Vuorinen, 2014). If trust can be assumed, there is less need for added security: we would not necessarily need the amount of security currently put in place, irrespective of whether users could be caught should they choose to perform restricted actions in the system. We must remember, however, that while not the most common, often the most serious security risk for an organization is an unhappy former employee who misuses this trust (Schultz, 2002; Warkentin & Willison 2009), and thus keep in mind that even if we do trust our workers, fired workers are a different matter altogether.

We depart from the extant literature, which frequently focuses on voluntary trust relationships built around or enabled by ICT services and products, by focusing on trust relationships that are forced by, e.g., official, superior or governmental stakeholders. This is clearly a case of trust as despair (Mayer et al., 1995), a type of situational trust where the trusting agent has no other choice but to trust the solutions given to them by official parties. Here, on the one hand, ‘voluntary’ refers to a situation in which the user of an information system or an ICT product is able to decide whether or not to trust another party.
Forced trust, on the other hand, illustrates a situation where a user is forced to use, and to trust, an information system or an ICT product. We will model these relationships using the concepts of negative trust: mistrust, distrust and untrust (Marsh & Dibben, 2005). Our goal is to illustrate the trust landscape of both public and private information systems. Finally, recent developments in global network surveillance lead us to expand our perspective. The methods used for surveillance by actors such as the United States National Security Agency and other equivalent agencies extend to compromising and exploiting the basic infrastructure of the Internet, in turn making the Internet less secure for everyone. This is achieved by controversial methods such as leveraging the physical layout of the backbone network (Clement, 2014) and compromising the operating systems of backbone network routers (Gallagher, 2014). The dilemma of trust in information systems can thus be rephrased in the context of global communication infrastructure; instead of asking whether we can trust an individual information system, we must now question whether we can trust the underlying infrastructure of the Internet itself, and to what extent Internet users are in the desperate position of forced trust. In the full paper we examine, through various examples and case studies, where and when trust is forced, and in which cases the users simply cannot be trusted.
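[Editor's note] The SQL-injection example referenced in the abstract above can be made concrete with a short sketch. The sketch is not taken from the paper: the table name, column name and the use of Python's sqlite3 module are assumptions chosen purely for exposition. It contrasts string splicing (the vulnerable pattern the xkcd strip parodies) with a parameterized query, one standard way of ensuring that user input is treated as data rather than executable SQL.

```python
import sqlite3

# Hypothetical school database with a single Students table (names are
# invented for this illustration; they do not come from the paper).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (name TEXT)")

student_name = "Robert'); DROP TABLE Students;--"

# Vulnerable pattern: splicing user input straight into the SQL string.
# On a driver that executes multiple statements, the quote and semicolon
# inside the name would end the INSERT early and run the DROP TABLE.
unsafe_sql = "INSERT INTO Students (name) VALUES ('%s')" % student_name

# Safer pattern: a parameterized query passes the name as data, so the
# SQL fragment inside it is stored literally instead of being executed.
conn.execute("INSERT INTO Students (name) VALUES (?)", (student_name,))
conn.commit()

print(conn.execute("SELECT name FROM Students").fetchall())
# -> [("Robert'); DROP TABLE Students;--",)]
```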

10:00
Whose Data is it Anyway? What Should be Done with “Personal Data”
SPEAKER: Kevin Macnish

ABSTRACT. When it comes to dealing with data relating to individuals, many commentators have assumed a libertarian stance (Kish and Topol, 2015; Newman, 2016; Sax, 2016). Their position is that data belong to (are owned by) the person or people to whom the data relate. This has largely gone undefended and unchallenged, even by those who think that the data could be of benefit to society (e.g. in terms of health care analytics) but who phrase the debate in terms of giving to society, rather than a more Marxist alternative which might assume that the data belong to society or the state (Ganesh, 2014; Rumbold and Pierscionek, 2016). I believe that this position reflects a recognition that harms can arise to individuals and communities from the use of their data by third parties, and a sense that the model of ownership is the best way to protect against this.

In this paper I challenge this view that data belong to the person or people to whom the data relate. I go further than this, though, and argue that the concept of ownership is, at least in the first instance, unhelpful and incoherent when it comes to describing the relationship between data and the people to whom the data relate. Instead I hold that the best approach to considering this relationship is to adopt a Rawlsian process of reflective equilibrium. This will, I argue, lead to a position in which data are not owned, at least in the first instance, but the use of data is subject to checks and balances.

My argument starts by considering data as private property. This leads to an examination of the difference between normative and descriptive approaches to the concept of ownership. However, I argue that even relatively easy cases are beset with problems, owing to the variety of examples that can be described as “personal” data and the uniqueness of data that relate to the person, which sets such personal data apart from standard examples of private property. For instance, is the fact that I need oxygen to breathe (a datum which relates to me) an example of personal data that I own? Is my approximate height, weight or hair colour an example of data that I own? Such freely observable, or downright obvious, facts concern data which relate to me, but it seems nonsensical for me to “own” them in any meaningful way. Alternatively, is it only data that I am able to restrict that I own? In that case, as soon as I am no longer able to restrict those data I no longer own them. It would follow that once someone else acquires those data they are no longer owned by me (although given that there are now two parties that could restrict the data, does it follow that the data are now jointly owned?). As such, the analogy with more commonly accepted examples of private property is unconvincing.

The second part of the paper considers whether, these problems aside, people might own data in similar ways to those in which I might be said to own my eye, or my right hand, as suggested by Cohen (2010). However, I argue that there are sufficient distinctions between that which people intuitively possess by natural right (such as body parts) and data about them to undermine this comparison. For example, if I lose my hand or my eye then I am deprived of a part of what it hitherto meant to be “me”. If, though, you gain access to my medical record I am not so deprived. While someone might think about and describe me as having two hands and two eyes (or one hand and one eye), they are far less likely to think about or describe me as having had my medical records accessed by another person (although they might were those records to then be published in the media).

In the third part of the paper I examine classic justifications of ownership (private or common) such as those provided by Locke, Hume, Smith, Marx and Rawls. Once more, though, I demonstrate that data relating to people do not, in the first instance, fit with any of these models. Locke’s theory of acquisition through mixed labour, for instance, suggests that I would not own my genetic code, but that by processing my saliva sample 23andMe (or a similar genetic processing service) would own those data. Likewise, I would not own my medical record, but through recording and compiling it my physician would. That is, data about people can come to be owned according to Locke, but are not initially owned by anyone. An alternative position could be a more Marxist angle which would see the data as being the default property of the state. This could lead to interesting conclusions, but none which would be readily accepted in a liberal democratic environment. Here I also draw on Hettinger’s critique of intellectual property to demonstrate that common approaches to property and property acquisition are at least as problematic for data relating to people as they are for intellectual property (Hettinger, 1989; see also Fisher, 2001; Martin, 1995; Ostergard, 1999; Paine, 1991).

Drawing on parts two and three, I ask in part four whether data could somehow sit between Cohen’s position of self-ownership and any of the positions regarding the acquisition of ownership described in part three. Are personal data therefore more like apples (acquired private property) or eyes (self-ownership)? Again, though, problems arise here from the unique nature of data relating to people and how we intuitively react to the use of that data by others. As such, I conclude that the ownership model cannot coherently or usefully be applied to data relating to people. It neither works as an analogy, nor do people possess their data by natural right, nor do they acquire their data by standard theories, nor is there a compromise position which can be adopted which somehow sits between the alternatives. In short, the ownership model simply does not work once one scratches the surface.

Finally, I argue in part five that, absent the flawed model of ownership, data relating to people can and should nonetheless be subject to intuitively convincing safeguards imposed by society. The means to arrive at these can be reached through a Rawlsian process of reflective equilibrium. These will draw on intuitions regarding the nature of different types of data relating to the person coupled with the value that these data could have for society and the risks that sharing these data pose to the individual or groups to whom the data relate. These safeguards will ensure the protection of the individual or group threatened by third party use of data which relate to them without relying on conceptually incoherent and ultimately unhelpful models of ownership.

References

Cohen, G.A., 2010. Self-Ownership, Freedom, and Equality. Cambridge University Press, Cambridge.
Fisher, W., 2001. Theories of intellectual property. New Essays Leg. Polit. Theory Prop. 168.
Ganesh, J., 2014. Big Data may be invasive but it will keep us in rude health. Financ. Times.
Hettinger, E.C., 1989. Justifying Intellectual Property. Philos. Public Aff. 18, 31–52.
Kish, L.J., Topol, E.J., 2015. Unpatients - why patients should own their medical data. Nat. Biotechnol. 33, 921–924.
Martin, B., 1995. Against intellectual property. Philos. Soc. Action 21, 7–22.
Newman, L.H., 2016. Obama Says People Who Give Genetic Samples for Research Should Own the Data. Slate.
Ostergard, R.L., 1999. Intellectual property: a universal human right? Hum. Rights Q. 21, 156–178.
Paine, L.S., 1991. Trade secrets and the justification of intellectual property: A comment on Hettinger. Philos. Public Aff. 247–263.
Rumbold, J., Pierscionek, B., 2016. Why patients shouldn’t “own” their medical records. Nat. Biotechnol. 34, 586. doi:10.1038/nbt.3552
Sax, M., 2016. Big data: Finders keepers, losers weepers? Ethics Inf. Technol. 18, 25–31. doi:10.1007/s10676-016-9394-0

09:00-10:30 Session 12B: ICT and the City
Location: Room H4: building D4, 2nd floor
09:00
Introduction to the ICT and the City track

ABSTRACT. This is a placeholder.

I would like to give an overview of the current discussion on the topic of this track. The introduction is likely to be co-authored with my co-chairs. After giving this overview, we will situate the individual presentations in that broader discussion.

09:30
Technological Environmentality: bringing back the ‘world’ into the human-technology relations

ABSTRACT. Introduction

I aim to discuss what can be called the ‘environmentality’ character of technology. The concept of environmentality is rapidly gaining influence in technology studies as a way to conceptualize the increasingly ‘environmental’ role of technology, as it is present in developments like Ambient Intelligence, Smart Cities, and the Internet of Things. I argue that a broader perspective on technological agency is necessary to understand what is at stake in these kinds of innovations. To make this perspective available I will expand postphenomenology, a theory within philosophy of technology that understands human-technology relations not in terms of mere causality, but rather of co-constitution. Postphenomenology describes in detail the existential implications that technology has for us humans, but its analyses are based on first-person methods and thus present difficulties when trying to be coherent with naturalized perspectives. After discussing the above, I will move on to considering what changes will occur if we keep moving towards technological environmentality. How will these transformations of the environment ultimately change us human beings? How could the ‘engineering’ of human-environmental agency contribute to making our homes, public spaces, and cities better places to live in?

Overall Structure

What does it mean that technology has become environmental? Environmental technologies seem to constitute a specific kind of environment that needs to be distinguished from the environmental role any technology can play, and it requires us to consider the difference between an analysis focused on object/artifact agency and one focused on ‘technological environmentality’ agency. Following postphenomenology’s intentionality structure, I-technology-world, it is possible to understand the mediation processes that occur from both sides, but some adjustments and specifications are required.

In the first part of this paper I discuss two important shortcomings of postphenomenology that explain why it cannot fully grasp technological environmentality (TE). One is that postphenomenological analyses tend to focus on the “user” experience of individual technologies. Given that TE does not consist of individual artifacts in use, the postphenomenological I-technology-world structure fails to address how technology is mediating our experience of the “world”, since in a relevant sense we could say that TE is becoming the “world”. The second reason is that TE is hidden from everyday phenomenal experience. Therefore, addressing the human-technology-world structure with the “world” as a starting point does not make sense from a postphenomenological perspective.

Considering these shortcomings, in the second part of this paper I aim to discuss possible ways to understand how the “world” mediates the “I” by means of technology. Postphenomenological research has clearly given attention to the “human-technology” side of this equation, but not a lot has been said about the “world” in it. Would asking this question amount to asking how technologies disclose the “world as a whole”? Is “world” equivalent to environment? What is the role of “technological presence” in the constitution of technological agency? Does such presence refer to the built environment in general, or could it concern the emerging technologies mentioned above?

Philosophical Discussion

Within postphenomenology, the process of co-constitution refers to the blurring of the boundaries between people and things. When Actor Network Theory and social constructivism started decentralizing agency two decades ago, they argued for a symmetrical analytical treatment of entities: no primacy of human agents over non-human agents should be accepted on a priori grounds. But their approaches were mainly about causality, basically agreeing that things have an influence on us humans. In contrast, conceiving technologies in terms of their constitutive role was quite a radical statement that postphenomenology has put forward. Within postphenomenology, Verbeek (2005, 2011) has argued that technological mediation does not occur as a single, one-way track. Technological mediation does not presuppose an asymmetry between humans and non-humans, but is rather interpreted as a two-way track on which human beings are also mediated and transformed by technologies: humans are not just extending their bodies and minds through the technologies they use, but are also being shaped by them. This means that technologies are not just exerting an influence on humans, but rather are part of the processes that give rise to our agency: they make us forget and remember, guide our everyday actions, channel and signify social experience, and allow our embodied routines (Malafouris 2013).

The degree of complexity in which different theories (i.e. actor-network theory, postphenomenology, and material engagement theory) have discussed the idea of co-constitution varies in both subtle and radical ways. Despite their differences, several of these theories have been accused of putting forward a “bland interactionism” that attempts to deal with the idea of co-constitution in a rather superficial way (Godfrey-Smith 2014). To not fall prey to such accusations, it is useful to draw from the notion of “mutuality relations”—in order to explore how we are shaping our environments, but mainly to understand in which sense and degree they are shaping us back. Therefore, I aim to explore whether postphenomenology can be enriched by a naturalized perspective that frames technological mediation as a process of “bio-social becoming” (Varela 1991, Ingold 2010).

The point of bringing in the notion of “mutuality relations” is to emphasize how any dualism between an agent and an environment should be understood as an analytical distinction, for what an organism does in and knows about the world should not be reduced to mere “information”, but rather be understood as a process of sense-making that implies a strong notion of “situatedness” and embodiment (Di Paolo 2015). Sense-making occurs through the activities we actively engage in, which nowadays are almost always technologically mediated, as postphenomenology has emphasized (Ihde 1990). It is important to note that technologies are actively shaping our actions and experiences of the world not solely through the ways in which we “use” them: sense-making is not only enacted in terms of actual or “effective” engagement with the affordances technologies provide, but also in terms of virtual or “potential” engagement that involves risks as well as opportunities (Ingold 2010).

Empirical Importance and Final Considerations

When we consider that human agency and experience are intimately intertwined with an environment, it becomes clear that its biological and socio-cultural dimensions have become indistinguishable. As agency emerges as a dynamic process in interaction with the (im)materiality of our environments, what appears as achievable or affordable needs to be treated as a social matter: designing a smart city, a hybrid public space, or an open living lab requires us to evaluate what these places are affording to people. Technological environmentality will reshape the niches we inhabit by mediating our possibilities for action as well as for sense-making. How should we analyze the “world-making” capacity of these technologies? What role do efficiency and persuasion play, and what role should they play, in these kinds of environments? What kind of affective and emotional behaviors should they foster? What is the importance of distinguishing between (a) reaction and adaptation and (b) engagement and interaction to/in technological environmentality?

References

⁃ Barandiaran, X., Di Paolo, E. & Rohde, M. (2009), Defining Agency: Individuality, Normativity, Asymmetry and Spatio-temporality in Action (v. 1.0), Journal of Adaptive Behavior.
⁃ Godfrey-Smith, P. (2014), Philosophy of Biology. Princeton: Princeton University Press.
⁃ Ihde, D. (1990), Technology and the Lifeworld. Bloomington: Indiana University Press.
⁃ Ihde, D. (2012), Experimental Phenomenology: Multistabilities. Albany: State University of New York Press.
⁃ Ingold, T. (2000), The Perception of the Environment: Essays on Livelihood, Dwelling and Skill. London: Routledge.
⁃ Malafouris, L. (2013), How Things Shape the Mind: A Theory of Material Engagement. Cambridge: MIT Press.
⁃ Michelfelder, D. (2015), Postphenomenology with an Eye on the Future, in Postphenomenological Investigations: Essays on Human-Technology Relations (Eds. Rosenberger, R. and Verbeek, P-P.). Lexington Books.
⁃ Van Est, R. et al. (2016), Rules for the Digital Human Park. Two Paradigmatic Cases of Breeding and Taming Human Beings: Human Germline Editing and Persuasive Technology. Background Paper for the 11th Global Summit of National Ethics/Bioethics Committees. Berlin.
⁃ Varela, F. J., Thompson, E. and Rosch, E. (1991), The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.
⁃ Verbeek, P-P. (2005), What Things Do: Philosophical Reflections on Technology, Agency and Design. University Park, PA: Pennsylvania State University Press.
⁃ Verbeek, P-P. (2011), Moralizing Technology. Chicago: University of Chicago Press.

10:00
Ethical Dimensions of User Centric Regulation

ABSTRACT. In this paper, we consider the role of technology designers in regulation, questioning both their ethical and legal responsibilities to end users. We assert that human-computer interaction (HCI) designers are now regulators, but as they are not traditionally involved in regulation, the nature of their role is not well defined, and state regulators may not fully appreciate their practice(s). They need support in understanding the nature of their new role, particularly its ethical dimensions, and we use our concept of user centric regulation (UCR) to help situate and unpack a closer alignment of technology design and regulation. We focus in particular on the Information Technology (IT) law and HCI communities, where a deeper understanding of their respective roles is necessary if the two are to move forward together, supporting and complementing each other. The ethical dimensions of HCI designers as regulators need to be unpacked, considering how a technology mediates an end user’s legal rights in context and how designers respond to their responsibilities for legal compliance and ethical practice.

We focus on the application domain of the internet of things to situate our discussions, considering the regulatory challenges designers need to respond to both in the context of the home, and when extended to smart city applications, for example around energy or mobility. From a user perspective, ambient, interactive domestic computing increasingly automates and anticipates key elements of the end users’ daily lives. Such technologies are normally designed to be mundane and unremarkable, embedded into the routines and everyday practices of users. Many domestic IoT systems rely on ambient human data collection, processed to draw inferences and understand user behaviour through patterns. These can be scaled up and contexts linked together in smart city initiatives, using aggregated patterns of daily life, especially around user movement and energy consumption. Accordingly, IoT technologies, and smart city applications, pose a range of regulatory compliance challenges. Controlling access to flows of information, as they move inside and outside the home, is a particular challenge.

To work together in practice around these regulatory challenges, HCI designers and IT regulators firstly need to develop a shared epistemic foundation. In this paper, we use our concept of ‘user centric regulation’ (UCR) to help understand the ethical dimensions of designers as regulators. UCR conceptually situates overarching shifts in human-computer interaction and information technology law, explicitly linking these two fields to start the process of building a solid, shared conceptual basis. We briefly introduce these shifts here.

Within HCI, there is an increasing turn to reflecting on human values and societal implications of computing in design. Designers are not currently sensitised to regulatory concerns, yet they have the practical tools and approaches that can be re-purposed to understand how regulatory interventions impact user rights. Designers are already incorporating human values in design, using methods and theories like value sensitive design, responsible innovation and participatory design. We argue this turn in HCI to human values could be extended to engaging with information technology law and regulation, and by extension, ethical dimensions of their practice. HCI practitioners, as those most proximate to the interests of end users, are best placed to understand and respond to their concerns. Furthermore, any regulatory interventions they embed into technologies can be built from a situated understanding of the user, their practices and environment, managing how their legal rights are impacted in context.

In IT law, there has long been an awareness of the importance of system design for regulation and legal compliance. Similarly, there has been a broadening of the concept of regulation and an explicit turn to technology designers and system design as regulatory actors. This is seen most clearly in the concept of privacy by design, as instantiated in Article 25 of the new EU General Data Protection Regulation 2016. However, there is a significant gap between the legal understanding of design and the practices and epistemic commitments of HCI. Theoretically, this is problematic as IT law still sits very much within abstracted, top-down, social-systems-led models of how technology users (as regulated actors) are understood. In contrast, HCI is very focused on building a situated perspective of users, their interests, their practices and how the technology mediates their experience. To align these two disciplines under the concept of UCR, a shift away from systems theory is necessary. As mentioned above, whilst designers have a role to play in engaging with legal concepts and compliance, for example with privacy by design, the broader ethical dimensions of designers’ responsibilities to users need to be considered too. In this paper, we examine how UCR, as a concept, accommodates and interacts with the broader ethical responsibilities of designers as regulators.

09:00-10:30 Session 12C: Professional Ethics
Location: Room H5: building D4, 2nd floor
09:00
Is professional practice at risk following the Volkswagen and Tesla Motors revelations?

ABSTRACT. Each day society becomes more and more technologically dependent. Some argue that as a consequence society becomes more and more vulnerable to catastrophe. With the world in economic crisis, the headlong drive for efficiency and effectiveness (and the resulting profit) is the watchword. Such pressure might have resulted in real gains but has also led to unscrupulous or reckless actions. The tempering of such drive with ethical consideration is often neglected until there is a detrimental event causing public outcry. Such an event will usually attract media interest, which in turn places more and more pressure on the actors to account for why the event occurred. This cause-and-effect map is commonplace. Consider, for example, transport, which is a fundamental element of the fabric of society. In this area there have been two recent events which illustrate the drive for efficiency and effectiveness without proper ethical consideration.

The first example is the Volkswagen emissions scandal which came to light in September 2015. The company installed software into millions of vehicles with diesel engines so that impressive emission readings would be recorded in laboratory conditions even though the reality is that the diesel engines do not comply with current emission regulations. The repercussions of this unethical action continue to reverberate across the world. The Financial Times (5 September 2016) reported that the EU justice commissioner believed VW had broken the consumer laws of 20 EU countries through marketing the diesel powered cars as environmentally friendly. She is recommending that all countries pursue VW in the courts.

The second example concerns Tesla Motors and the beta testing of the autopilot software in their cars. In May 2016 there was a fatal accident when a Model S Tesla under the control of the Tesla Autopilot software drove at full speed under a trailer. The driver of the Tesla was killed. The Guardian (1 July 2016) reported that Tesla had confirmed that “against a bright spring sky, the car’s sensors system failed to distinguish a large white 18-wheel truck and trailer crossing the highway … The car attempted to drive full speed under the trailer”. Tesla further stated “As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.”

Both examples centre on the use of software, and it is that which is the focus of this paper. The development of application software does not occur in a vacuum. Within VW and Tesla there will have been a complex network of individuals involved in decision making at different levels, resulting in the production of application software which achieved a particular goal. The software engineers who wrote the software may or may not have been privy to higher-level decisions and the associated reasons why such decisions were taken. But it is the software engineer who can ultimately be identified as the creator of the software and so, rightly or wrongly, can be held responsible for any unfavourable outcomes. Such was the case in the early days of the VW scandal, when senior executives in VW claimed it was a small group of rogue engineers who had written and released the software. Such claims were subsequently withdrawn as it emerged that the decisions behind the action were taken at much more senior levels. In the Tesla case one of the interesting issues is the testing regime. The autopilot software is safety-critical software, and yet beta testing was, and continues to be, undertaken in the operational environment, i.e. the public highway. This is akin to beta testing the control software of a nuclear energy plant in an operational plant. The latter would not be condoned, and yet the former was accepted as reasonable, leading to a fatality which is unacceptable collateral damage. The two cases raise many questions relating to professional ethics. Here are just three. How much sway did the software engineers have in both cases? Indeed, did the software engineers realise the risks attached to the software applications? If the software engineers realised there were problems, did they articulate their concerns?

The aim of the paper is to undertake an ethical analysis of each case study using existing published accounts (see, for example: Wikipedia, 2016; and Lambert, 2016). This is done from a software engineering perspective through performing a professional standards analysis within the case analysis method as defined by Bynum (2004). The Software Engineering Code of Ethics and Professional Practice of the ACM (see http://www.acm.org/about/se-code) is used in this analysis as it is regarded as the most applicable set of principles for these cases. These analyses highlight a set of key issues which need to be addressed if professional integrity within software engineering is to be protected and promoted. The findings are compared with previously published analyses (see, for example: McBride, 2016; Musso & Cusano, 2015; Mansouri, 2016; and Ragatz, 2015) to ascertain common and conflicting outcomes.

A grounded position is achieved on which to develop some recommendations as to how professional practice might be strengthened to reduce the risk of events such as VW and Tesla being repeated. These concluding recommendations are addressed from two sides of application software development. One side focuses on resisting the temptation to engage in unethical practice, whilst the other focuses on reducing the opportunity to engage in unethical practice. Recommendations are either reactive, where action is in response to a particular event or circumstance, or proactive, where action is an attempt to promote desired future behaviour.

Initial references

Brunsden, J. & Campbell, P. (2016) Volkswagen faces fresh EU claims over emissions scandal, Financial Times, 5 September. Available at http://www.ft.com/cms/s/0/59d57584-737b-11e6-b60a-de4532d5ea35.html

Bynum, T.W. (2004) Ethical Decision-Making and Case Analysis in Computer Ethics. In Bynum, T.W. & Rogerson, S. (editors) Computer Ethics and Professional Responsibility, Chapter 3, pp. 60-86.

Lambert, F. (2016) A fatal Tesla Autopilot accident prompts an evaluation by NHTSA. Electrek, 30 June. Available at https://electrek.co/2016/06/30/tesla-autopilot-fata-crash-nhtsa-investigation/

Mansouri, N. (2016) A Case Study of Volkswagen Unethical Practice in Diesel Emission Test. International Journal of Science and Engineering Applications, Vol. 5, Iss. 4, pp. 211-216.

McBride, N.K. (2016) The ethics of driverless cars. ACM SIGCAS Computers and Society, Vol. 45, Issue 3, January 2016, pp. 179-184.

Musso, E. & Cusano, M.I. (2015) The Volkswagen case: What shall we learn? International Journal of Transport Economics, Vol. XLII, No. 3, pp. 285-289.

Ragatz, J.A. (2015) What Can We Learn from the Volkswagen Scandal? Faculty Publications, Paper 297. Available at http://digitalcommons.theamericancollege.edu/faculty/297

Wikipedia (2016) Volkswagen emissions scandal. Available at https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal

Yadron, D. & Tynan, D. (2016) Tesla driver dies in first fatal crash while using autopilot mode. The Guardian, 1 July. Available at https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk

09:30
The Agency of Software: The Volkswagen Emission Fraud Case
SPEAKER: unknown

ABSTRACT. Introduction: Introducing the Volkswagen case and our interest in agency

In this paper, we analyze the Volkswagen emission fraud case with an eye to understanding the agency of computational artifacts. Agency is traditionally defined as the capability of an entity to act. In the most common accounts of agency, a distinction is made between behavior produced by mental states (intentional action) and behavior that can be explained in terms of material causal relations. Traditionally, the former kind of agency is seen as exclusively human, and the latter is attributed to artifacts as well as humans. This distinction is fundamental when it comes to responsibility because human agents can be considered responsible (accountable, duty-bound, praiseworthy/blameworthy) for their actions, while artifacts cannot. The concept of agency as applied to technological artifacts has become an object of heated debate in the context of Artificial Intelligence (AI) research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Ascription of responsibility is particularly important when one needs to understand what went wrong in a technological mishap. We want to use the Volkswagen emission fraud case as a real-life example of a case in which software is part of the story.

The Volkswagen Case: An account of the case and the actors involved

In 2015, the U.S. Environmental Protection Agency (EPA) discovered that diesel engines sold by Volkswagen in the U.S. contained a defeat device, that is, a device “that bypasses, defeats, or renders inoperative a required element of the vehicle’s emissions control system”, as defined by the Clean Air Act.[1] According to US officials, the defeat device had sensors allowing the software to detect when the car was being tested. The software included instructions that activated equipment that reduced emissions, by adjusting catalytic converters and valves, when the car was being tested. The same software turned such equipment down during regular driving, possibly to save on fuel or to improve the car’s performance [2], thereby increasing emissions above legal limits, in some cases to 40 times the threshold values. On September 18, 2015, the EPA issued a notice of violation of the Clean Air Act to Volkswagen. The cars in violation included Volkswagen, Audi, and Porsche models, all manufactured by Volkswagen AG, a multinational company headquartered in Germany (VW from now on).
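[Editor's note] To make the mechanism described above more concrete, the following sketch shows the general shape of a defeat-device check as a simple conditional. It is purely illustrative: VW's actual control code has not been published, and the sensor signals, thresholds and function names below are invented assumptions, not details from the case.

```python
# Illustrative sketch only: the signals, thresholds and mode names are
# invented for exposition and are not taken from the Volkswagen case.
def select_emissions_mode(steering_angle_deg: float,
                          driven_wheel_speed_kmh: float,
                          non_driven_wheel_speed_kmh: float) -> str:
    """Guess whether the car is on a test bench and choose an emissions mode.

    On a dynamometer the driven wheels turn while the steering wheel stays
    centred and the non-driven wheels are stationary; a defeat device uses
    cues of this kind to detect laboratory testing.
    """
    looks_like_test_bench = (
        abs(steering_angle_deg) < 1.0          # steering untouched
        and driven_wheel_speed_kmh > 20.0      # driven wheels spinning
        and non_driven_wheel_speed_kmh < 0.5   # other wheels stationary
    )
    if looks_like_test_bench:
        return "full exhaust treatment"    # emissions kept within the limits
    return "reduced exhaust treatment"     # better fuel use/performance, higher emissions


# Test-bench-like readings versus ordinary road driving.
print(select_emissions_mode(0.2, 50.0, 0.0))    # -> full exhaust treatment
print(select_emissions_mode(12.0, 50.0, 49.5))  # -> reduced exhaust treatment
```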

Because a defeat device was used to fool the EPA, there was no question as to VW’s intent to violate EPA standards. The device stood as evidence of the wrongdoing; the software had to have been intentionally designed and installed by people working for Volkswagen. In terms of accountability, there was no question that VW was the responsible actor, that is, the entity responsible for explaining what happened and why. VW is also the liable actor; it is liable to pay fines and compensation. Of course, attributing accountability, liability and even blame to VW does not settle the question of the agency of the wrongdoing.

In the blame game that ensued after the fraud, the following actors were identified as possibly at fault for what happened: the CEO and Board of Directors, top management, and the engineers. Interestingly, the software was not identified as one of the actors even though according to some accounts of agency, it might have been.

Agency: Two concepts of agency and their role in the VW case

As already indicated, we acknowledge the traditional distinction between acts by human agents, who are considered to have intentions, and those by artifacts or machines, which are not considered to have mental states or intentions. Nevertheless, in recent years the discourse on agency has evolved so that these two categories are seen as parts of a general notion of agency. In the Stanford Encyclopedia of Philosophy [3], Schlosser distinguishes a broad and a narrow meaning of agency:

“In a very broad sense, agency is virtually everywhere. Whenever entities enter into causal relationships, they can be said to act on each other and interact with each other, bringing about changes in each other. In this very broad sense, it is possible to identify agents and agency, and patients and patiency, virtually everywhere. Usually, though, the term ‘agency’ is used in a much narrower sense to denote the performance of intentional actions.”

The broad notion of agency has been taken up both by Science, Technology and Society (STS) and by AI scholars. STS researchers focus on the efficacy of artifacts, considering them as actors (sometimes referred to as actants) that play an important role in the causal chain that produces states of affairs. AI researchers also treat computational artifacts as causally efficacious agents, but they speculate about a future time in which some computational artifacts will become agents in the narrow sense of agency, that is, they will have the same kind of intentions as those attributed to humans.

Sorting out agency in the VW case: From the agency of the defeat device to the responsibility of human actors

In this paper, we will illustrate the role of the defeat device in constituting the VW fraud. The device had causal efficacy but not intentions, though the fraud could not be explained without referencing human intentions. Human intentions came into play when VW people set the goal of meeting the EPA standards within technical constraints that made that achievement highly unlikely. Human intentions also played a role in the design and implementation of the defeat device. In the broadest sense of agency, the fraud was committed by the combination of the agencies of the decision makers, implementers, and the device. Nevertheless, when it comes to blame, decision makers and implementers are the focus of attention, that is, the narrow sense of agency is used in ascribing responsibility.

Still, this way of thinking is being challenged by the futuristic scenarios of AI researchers who would have us believe that computational artifacts will some day evolve in ways that will make their acts fit the narrow sense of agency. Imagine a future ‘intelligent’ car that is given two goals – meet the EPA standards and maintain competitive performance levels – and is left to figure out how to achieve these goals. Who would be blameworthy if the car created something comparable to the defeat device? Would the car be blameworthy? Answering these questions requires a comparison between the VW defeat device and the futuristic car.

In the VW case, the company’s top management claims that the decision to use the defeat device was taken by the engineers once they realized that the engines on which they were working would never meet the EPA standards without significant improvement (i.e. investments by the company). Not wanting to be bearers of bad news to their higher-ups, the engineers allegedly handled the problem on their own, keeping the engines as they were, but adding the defeat device to reach the goal the management gave them, although unlawfully. VW management accepts accountability but insists that it knew nothing about the defeat device, though acknowledging that both goals (performance and EPA-compliance) were given to the engineers. This parallels the futuristic car scenario in which an AI artifact is given just high-level goals by the users and left alone to figure out how to achieve them.

How does the increased ‘autonomy’ of the artifact affect the blameworthiness of the designers and the users of the artifact? The futuristic car would still have a user (the company manufacturing the car) and designers (the engineers who designed the car) who would be blameworthy but their causal efficacy would admittedly be different than that in the VW case.

References

[1] “Laws and Regulations related to Volkswagen Violations”. US Environmental Protection Agency, https://www.epa.gov/vw/laws-and-regulations-related-volkswagen-violations (accessed August 2016)

[2] Rachel Muncrief, John German, and Joe Schultz. “Defeat devices under the U.S. and EU passenger vehicle emissions testing regulations”. The International Council on Clean Transportation, Technical Report, March 2016, http://www.theicct.org/sites/default/files/publications/ICCT_defeat-devices-reg-briefing_20160322.pdf (accessed August 2016)

[3] Markus Schlosser. “Agency”. The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/agency/ (accessed August 2016)

10:00
Understanding software engineers’ attitude towards information privacy
SPEAKER: unknown

ABSTRACT. With the increasingly important role that the Internet and new information technologies (IT) play in our everyday lives, concerns about information privacy grow. These concerns come from users and consumers as much as from legal representatives, researchers and activists. Software engineers – as the creators of new technology – are beginning to acknowledge the increasing importance of information privacy. They carry an especially high responsibility as they design, develop, and maintain the technological products later in use. Understanding the motivations and impediments shaping software engineers’ ethical decision-making is therefore crucial.

As data breach reports from this year (e.g. Identity Theft Resource Center, 2016) show, companies and organizations have not yet sufficiently incorporated privacy and security mechanisms into their IT systems. Regulators are starting to react to this shortcoming. In the US, the White House drafted a framework for protecting privacy (White House, 2012) in addition to several sectoral privacy regulations (for a good overview see “Privacy bridges”, 2015). In Europe, the new EU General Data Protection Regulation (GDPR; European Commission, 2016) adopts the principle of Privacy by Design (PbD; Cavoukian, 2010) as a legal requirement. It will take effect in 2018. Until then, many companies will have to adapt their privacy policies and practices. If companies do not comply, they face sanctions of up to 4% of their worldwide annual turnover.

Against the background of these developments, software engineers have to start taking information privacy considerations very seriously. But are they ready for this challenge?

Lahlou, Langheinrich, and Röcker (2005) summarize the findings of a 2002 survey by Langheinrich & Lahlou (as cited in Lahlou et al., 2005): privacy “was either an abstract problem [for software engineers]; not a problem yet (they are ‘only prototypes’); not a problem at all (firewalls and cryptography would take care of it); not their problem (but one for politicians, lawmakers, or, more vaguely, society); or simply not part of the project deliverables” (p. 60). Have software engineers changed over the past 14 years? And can we explain their thinking about privacy (and ethical computing in general) more systematically?

We conducted an extensive literature review to identify possible explanatory models of software engineers’ ethical behavior. Previously, a holistic model of software engineers’ general job motivation has been proposed (Sharp, Baddoo, Beecham, Hall, & Robinson, 2009), personality types of software engineers have been examined (e.g. Cruz, da Silva, & Capretz, 2015), and ethical considerations in software engineering as a profession have been discussed (e.g. Brien, 1998). But attempts to establish a deeper understanding of software engineers’ way of thinking, as well as of their (emotional) attitudes and beliefs towards ethical computing, cannot be found in the literature.

To fill this gap in the research, we conducted semi-structured interviews with six senior engineers at leading companies and renowned research labs, such as Google, IBM, Alcatel-Lucent and the German Research Centre for Artificial Intelligence (DFKI). Our aim was to identify motivations for considering ethical values in IT system design as well as impediments hindering such an endeavor. We especially focused on the incorporation of information privacy mechanisms.

The Theory of Planned Behavior by Icek Ajzen (1991) was taken as the theoretical framework for the development of the interview structure. The theory proposes three factors that influence the intention to act and thereby the expression of a behavior (Ajzen, 1991, p. 188): (1) the “favorable or unfavorable” attitude toward the behavior, (2) the subjective norm, or “perceived social pressure to perform or not to perform the behavior”, and (3) the “ease or difficulty of performing the behavior”, which reflects “past experience as well as anticipated impediments and obstacles”, subsumed under perceived behavioral control.

The software engineers’ attitudes, subjective norms and their perceived control with regard to the incorporation of information privacy mechanisms were discussed in detail in the interviews. We expected that factors preventing software engineers from considering ethical values would be found especially in the subjective norm and the attitude toward the behavior while we expected our interview partners to report full behavioral control.

We conducted extensive interviews, spending roughly 7 ½ hours with senior software engineers who bring many years of system design experience. 63 pages of recordings from six interviews were transcribed, comprising 34,290 words. Passages in the transcribed interviews considered relevant for the analysis were coded using NVivo software (version 11). In total, 534 text passages (containing single words, phrases or sentences) were marked as relevant and included in the subsequent content analysis. Two researchers agreed on the final selection of text segments and their assignment to the associated categories and themes. The major themes identified in the analysis were information privacy, ethics/ethical computing, values, security and user (characteristics). In the following, we focus on the results regarding information privacy. The number of statements is indicated in square brackets.

ATTITUDE TOWARDS BEHAVIOR [43 comments]. Four of the six interviewed software engineers considered the incorporation of information privacy mechanisms “inconvenient” or otherwise negative [9]. Furthermore, they found it demanding or (intellectually) challenging [8]. Only two interviewees said that implementing privacy mechanisms makes them happy and that it is interesting [5]. The interviewed software engineers rarely pointed to the importance of information privacy [8], and there were almost as many comments that referred to information privacy as “secondary” [6], e.g. to Internet connection or functionality. Other comments hinted that information privacy is not fully legitimate and is losing traction [7].

SUBJECTIVE NORM [39]. When considering their environment, software engineers mostly recognize information privacy as something that must be dealt with or is somehow required [23], e.g. to avoid criticism and a negative image. It is considered a commodity rather than an absolute value [7], and three software engineers believe that people really do not care about privacy [9].

PERCEIVED BEHAVIORAL CONTROL [45]. Software engineers believe that their control is limited. Information privacy is perceived as strongly dependent on a legal basis [15] and on people’s proper behavior [4]. They see information privacy as something that can always be broken or overruled [16]. Only a few positive comments were made, hinting e.g. that information privacy problems can be solved technically [6] or associating information privacy with PbD [4]. Software engineers are aware of many difficulties, impediments and obstacles. Incorporating privacy mechanisms is technically difficult, makes systems heavy, and requires tiresome co-operation with lawyers [14]. In addition, the concept of privacy is unclear or ambivalent – as is its legal basis [24]. Information privacy is seen as an impediment, making things more difficult and often standing in the way [13]. Finally, the (lack of) user awareness, knowledge asymmetries, consent and the potential manipulation of customers are seen as problems [14].

Beyond the Theory of Planned Behavior: PERCEIVED RESPONSIBILITY? [45]. In addition to the influencing factors proposed by Ajzen, we found perceived responsibility to be an important theme in the interviews. Only two interview partners clearly stated that they felt responsible for incorporating information privacy in their systems. Some even made self-contradictory remarks, feeling fully or partly responsible [20] but at the same time stating that it is not up to them [13] or mentioning that someone else is responsible [12], e.g. colleagues, people who make money with data, or customers. This is very much in line with findings from the Langheinrich and Lahlou survey mentioned above.

Taken together, our results in light of the Theory of Planned Behavior suggest that software engineers’ attitude toward the incorporation of privacy mechanisms is rather unfavorable. Regarding the subjective norm, software engineers acknowledge that information privacy needs to be considered but at the same time they question its importance. Surprisingly, software engineers reported low perceived behavioral control – the reflection of past experience and anticipated obstacles resulted in a substantial list of challenges associated with privacy.

References
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
Brien, A. (1998). Professional ethics and the culture of trust. Journal of Business Ethics, 17, 391–409.
Cavoukian, A. (2010). Privacy by Design: The 7 foundational principles. Toronto, Canada.
Cruz, S., da Silva, F. Q. B., & Capretz, L. F. (2015). Forty years of research on personality in software engineering: A mapping study. Computers in Human Behavior, 46, 94–113.
European Commission. (2016). Regulation (EU) 2016/679. Official Journal of the European Union, 59, 1–88.
Identity Theft Resource Center. (2016). Data breach reports.
Lahlou, S., Langheinrich, M., & Röcker, C. (2005). Privacy and trust issues with invisible computers. Communications of the ACM, 48(3), 59–60.
Privacy bridges: EU and US privacy experts in search of transatlantic privacy solutions. (2015). 37th International Privacy Conference Amsterdam 2015. Amsterdam, Cambridge.
Sharp, H., Baddoo, N., Beecham, S., Hall, T., & Robinson, H. (2009). Models of motivation in software engineering. Information and Software Technology, 51, 219–233.
White House. (2012). Consumer data privacy in a networked world: A framework for protecting privacy and promoting innovation in the global digital economy. Washington, DC.

10:30-11:00 Break
11:00-12:30 Session 13A: Digital Health + Cybercrime
Location: Room H3: building D4, 2nd floor
11:00
I am a Person
SPEAKER: unknown

ABSTRACT. A review of value sensitive design for cognitive declines of ageing, interpreted through the lens of personhood.

This paper presents a conception of personhood as both physical and social, and both as radically contingent upon their respective physical and social environments. In the context of age-related cognitive decline it supports literature suggesting social personhood is occluded rather than deteriorating with brain function. Reviewing the literature on value sensitive design (VSD) as applied to assistive technologies for people with age-related cognitive decline, it finds an exclusive focus upon physical support. The paper concludes that issues of power must be grasped by those in VSD practice in order to reorient VSD in assistive technologies to support social personhood in those with age-related cognitive decline.

11:30
A Review of Value-Conflicts in Cybersecurity
SPEAKER: unknown

ABSTRACT. Cybersecurity is of central importance in a world where economic and social processes increasingly rely on digital technology. Although the primary ethical motivation of cybersecurity is the prevention of informational or physical harm, its enforcement can also entail conflicts with other moral values. This contribution provides an outline of value conflicts in cybersecurity based on a quantitative literature analysis and qualitative case studies. The aim is to demonstrate that the security–privacy dichotomy, which still seems to dominate the ethics discourse according to our bibliometric analysis, is insufficient when discussing the ethical challenges of cybersecurity. Furthermore, we want to sketch how the notion of contextual integrity could help to better understand and mitigate such value conflicts.

12:00
Designing Privacy Affordances for Searching
SPEAKER: unknown

ABSTRACT. Current information-seeking activities of users are driven by complex data processing, ease of access and ubiquitous services. When interacting with data-driven services online such as search, a significant issue affecting users is privacy. The impacts of privacy are often instantaneous, reputational and may concern data about interconnected users (Tene 2008; Zimmer 2008). To address user privacy concerns, research has developed privacy-enhancing technologies, privacy-by-design concepts and policy mechanisms (Cavoukian 2009; Gurses 2010; Solove 2009). Less adequate, however, is an understanding of online societal norms in relation to user privacy expectations around personal data flows. Most privacy-enhancing technologies serve a functional purpose, requiring user effort and technical understanding in making complex privacy decisions. Privacy-by-design concepts articulate generic best practices that may be inflexible in adapting to diverse privacy expectations. Policy procedural principles lack mechanisms to govern data protection within specific contexts of use. The nature of the web adds to these complex interactions, making actors and purposes largely invisible (Naughton 2016).

This research aimed to explore how, and to what extent, technological affordances around managing personal data flows are effective in addressing people’s privacy expectations. Information privacy is defined as appropriate flows of information, using Nissenbaum’s contextual integrity theory as a benchmark. Based on this framework, a value sensitive design study was undertaken to investigate affordances from technical, empirical and conceptual dimensions, situated in the application domain of personalised search. “Value sensitive design is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process” (Friedman, Kahn, Borning 2009, p. 69). Value sensitive design is a relevant methodology to scaffold the exploration of phenomena (i.e. societal privacy expectations) in socio-technical systems (i.e. data flows in Web search).

In the technical investigation phase, a controlled experiment was conducted into data release in search across 7 search engines (both commercial and privacy-enhanced), 4 user scenarios, 29 query terms (informational, transactional, navigational and popular) and 3 distinct timeframes from 2014 to 2016. The analysis of personalisation variance using the Jaccard Index and Edit Distance showed that personalised search results are consistently and significantly affected by location, the user-agent parameter, query term popularity and intent. Data impact personalised search results both actively (measurable variance) and passively (negligible variance), with some of these data being personally identifiable. Minimalism in query requests could serve the purposes of maintaining pseudonymity or reducing query distinguishability through obfuscation. If minimalism cannot be achieved or is impractical in certain scenarios, then increased visibility of such data flows should be made possible for users.
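To make the two variance measures named above concrete: the Jaccard Index captures how much two result sets overlap irrespective of rank, while the Edit Distance captures how much reordering and substitution separates two ranked lists. The following minimal Python sketch is illustrative only; the function names and example URLs are invented and are not taken from the study:

    def jaccard_index(results_a, results_b):
        """Set overlap of two result lists, ignoring rank: |A & B| / |A | B|."""
        a, b = set(results_a), set(results_b)
        return len(a & b) / len(a | b) if (a or b) else 1.0

    def edit_distance(results_a, results_b):
        """Levenshtein distance between two ranked result lists."""
        m, n = len(results_a), len(results_b)
        dist = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dist[i][0] = i
        for j in range(n + 1):
            dist[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if results_a[i - 1] == results_b[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                 dist[i][j - 1] + 1,        # insertion
                                 dist[i - 1][j - 1] + cost) # substitution
        return dist[m][n]

    # Hypothetical top results for the same query under two user scenarios.
    baseline = ["example.org/a", "example.org/b", "example.org/c"]
    personalised = ["example.org/b", "example.org/a", "example.org/d"]
    print(jaccard_index(baseline, personalised))  # 0.5 -> half the results overlap
    print(edit_distance(baseline, personalised))  # 3   -> rank-level changes

A low Jaccard Index combined with a high Edit Distance would indicate strong personalisation of both content and ordering; values of this kind, computed pairwise across scenarios, engines and timeframes, are one plausible way to quantify the variance the study reports.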

An empirical survey study was conducted among a sample of 441 users aged between 18 and 70 years, drawn from a population of university students and staff. The primary aim was to gain an understanding of online societal norms around privacy-sensitive data flows through a series of quantitative and qualitative questions. The qualitative findings revealed disconnected tensions between the forces shaping societal norms and the nature of expression of privacy expectations afforded. Users expressed ambivalent views regarding privacy when a clear benefit is apparent or their privacy-sensitivity is not affected. Most users articulated the need for effective affordances that constrain the appropriateness of data flows to the context of use and the actors involved, underpinned by a clear transmission principle. Users seek privacy values (such as autonomy, choice, control and trust) in data flows (namely creation, collection, processing, dissemination and deletion). The online social norm of services being free on the Internet means that users may be unwilling to commodify privacy. A second norm is that when users interact with a service, most think that they are only interacting with a primary provider consistent with their purpose of use. They are unaware of omnibus information providers tracking data for behavioural advertising.

In the third phase, we analysed findings from the prior phases and articulated a set of conceptual design recommendations to afford users the ability to exercise their privacy expectations. The key contributions include explicating privacy-sensitivity, evidence of an emerging contextually public sphere and articulating why addressing privacy expectations is a design challenge. We found that privacy-sensitivity around data flows is influenced by the nature of the information being collected, shared and stored, with strong indications that its use should be constrained to the primary actors and purposes users socially contract with. Further contexts are constrained temporally (short-term interests) or locally (obtaining locally relevant information for directions on a map). The appropriateness of data flows is constrained by the privacy-sensitive norms governing these contexts. These observations give rise to a contextually public sphere, where users actively choose to share information for a perceived benefit within the contextual constraints mentioned, contributing to the existing notion of private and public spheres for personal data.

Privacy values and design principles are enabling factors that guide the expression of online societal norms. Mediating factors need to be taken into consideration when designing technology that proactively empowers users to effectively manage personal data flows in search. Given the complexity of the medium, the basis for interaction must be visible and constrained to its contextual purpose. The potential for misuse of information raises significant privacy risks. These risks could range from banal spam emails to more intrusive ransomware scams, where the degree of insecurity increases with the computational sophistication of the attacks. Often privacy is a secondary, intangible goal of our interactions. Consider a smartphone as an example: an iPhone running iOS 7 or higher keeps, by default, an itemised list of frequent locations a person has been in, based on their Wi-Fi activity (Apple Inc 2016). The capability to harness this information exists beyond its intended purpose of delivering locally relevant information. Although this information was previously transmitted, it is only now that it is visible to iPhone users, giving them some control in influencing how their personal location data may be used. The ability to commercialise personal data such as this drives the development of technology that mediates our interactions. Hence, our expectations are framed and influenced by the affordances offered to users, who then use those affordances to exercise their privacy expectations.

When designing technologies, in addition to key functional criteria such as requirements, defaults and usability (for user convenience), designers should adopt appropriate, contextually relevant strategies that address the privacy-sensitivities of users and are aligned with online societal norms around privacy risks. For example, designers of recommender systems need to refresh profiles to reflect the temporal interests of users. Personalised search or ad recommendations fall short of meeting user expectations when user profiles are not intuitive enough to reflect a user’s changing interests. As humans we grow and our worldview is changed by our life experiences; we discover interests in new things and lose interest in others. Excessive profiling inhibits the ability to reflect this societal characteristic, as the primary aim of profiling is to capture and collect as much information as possible to increase profiling accuracy. When this occurs, privacy expectations may not be met, as users may have long moved on from the interests captured in the user profile. Because our expectations are conditioned by the affordances offered to us, this disconnect builds a case for designing affordances that address the privacy-sensitivities of users, as it is in upholding these societal norms that users find value in their interactions. Design should allow for negotiating the spectrum of privacy expectations that are deeply integrated into social life and not solely be driven by commercial motives.
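The recommendation to refresh profiles can be read as a call for temporal decay: older signals of interest should count for less as time passes, so that interests a user has moved on from fade from the profile. The sketch below is purely hypothetical (the half-life, topics and pruning threshold are invented and describe no actual recommender):

    import time

    HALF_LIFE_DAYS = 30.0  # illustrative: an interest loses half its weight every 30 days

    def decayed_weight(weight, observed_at, now):
        """Exponentially decay a weight according to its age in days."""
        age_days = (now - observed_at) / 86400.0
        return weight * 0.5 ** (age_days / HALF_LIFE_DAYS)

    def current_profile(observations, now=None):
        """Aggregate (topic, weight, timestamp) observations into a decayed profile."""
        now = now or time.time()
        profile = {}
        for topic, weight, observed_at in observations:
            profile[topic] = profile.get(topic, 0.0) + decayed_weight(weight, observed_at, now)
        # Drop interests that have effectively faded away.
        return {topic: w for topic, w in profile.items() if w > 0.05}

    now = time.time()
    observations = [("hiking", 1.0, now - 90 * 86400),   # recorded 90 days ago
                    ("privacy tools", 1.0, now)]         # recorded today
    print(current_profile(observations, now))            # the older interest has decayed to 0.125

Whether decay of this kind actually meets users’ expectations is an empirical question of the sort the survey above addresses; the point of the sketch is only that letting interests expire is technically straightforward.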

“It is important to keep in mind that privacy norms do not merely protect individuals; they play a crucial role in sustaining social institutions.” (Nissenbaum 2011, p.44). Although the capabilities of technology are powerful and its impact and risks are far-reaching, the use of these capabilities should be aligned with moral values and not necessarily be exploited just because they exist. Hence, being able to balance the integrity of social life with the immense benefits technology brings is important for technology-mediated interactions on the web to be sustainable in the long-term.

References Apple Inc 2016, About privacy and location services, viewed 1 September 2016, .

Cavoukian, A 2009, 'Privacy by design: the 7 foundational principles',

Gurses, S 2010, 'PETs and their users: a critical review of the potentials and limitations of the privacy as confidentiality paradigm', Identity in the Information Society, vol. 3, no. 3, pp. 539-563.

Naughton, J 2016, 'The evolution of the Internet: from military experiment to General Purpose Technology', Journal of Cyber Policy, vol. 1, no. 1, pp. 5-28.

Nissenbaum, H 2011, 'A Contextual Approach to Privacy Online', Daedalus, vol. 140, no. 4, pp. 32-48.

Solove, DJ 2009, Understanding privacy, Harvard University Press, Cambridge, Mass.

Tene, O 2008, 'What Google Knows: Privacy and Internet Search Engines', Utah Law Review, vol. 2008, no. 4, pp. 1433-1492

Zimmer, M 2008, 'The externalities of search 2.0: the emerging privacy threats when the drive for the perfect search engine meets web 2.0', First Monday, vol. 13, no. 3.

11:00-12:30 Session 13B: ICT and the City
Location: Room H4: building D4, 2nd floor
11:00
Where does the city really end? Redefining Smart Cities and their ethical dangers.
SPEAKER: Brandt Dainow

ABSTRACT. This presentation will show that current definitions of smart cities are inadequate, that this leads to insufficient recognition of their ethical dangers, and will propose alternative models.

Smart cities represent a significant increase in power over previous ICTs. Current conceptions of smart cities typically underestimate their potential, seeing only how smart city technology can be used to solve the problems of today, while ignoring the innovative capabilities they offer and the resultant new problems these will create. This presentation will first review what constitutes a smart city and then use the resulting conception to develop a framework by which to better understand the profound threat they represent to individual autonomy and the long-term viability of society.

Our primary methodology will be to re-evaluate the definition of the smart city by taking a futures approach, in which smart cities are contextualised within the broader range of emerging ICTs. We shall first review the current definitions of the smart city to discern underlying patterns upon which the varying conceptions can be organised. Here we shall examine conceptions of the smart city found in research literature (Albino, Berardi, and Dangelico 2015), government policy documents (Kitchin 2016), and industry publications (Cobham Tactical Communications and Surveillance Ltd 2015). We shall show how most existing definitions can be divided between those which focus on ends (what you can do with a smart city) and those which focus on means (the technology involved). However, in both cases smart city technology is delimited by the geographical location of physical infrastructure, without much consideration of what is required for anticipated services to function. Using simple scenarios, we will demonstrate that such definitions of smart cities lack the capability to account for anticipated smart city services.

Drawing on work in algorithmic governance of distributed energy systems (Pitt et al. 2014), the architecture of super organisms (Bicocchi et al. 2015), and the range of emerging ICTs (Ikonen et al. 2010), we will introduce a new definition of smart cities which views them more broadly and is better able to account for the smart city services described in traditional accounts. We shall demonstrate that the distinction between “city technology”, the “digital home” and personal ICTs, including wearable and internal (bioelectronic) devices, is arbitrary and misleading. We shall further show how the logic of smart cities inevitably requires permeation of the surrounding rural hinterland with smart city systems, such that even a distinction between urban and rural is misleading. We shall show how smart cities are, instead, best understood as a built environment combining ambient intelligence, AI and robotics. In developing this conception, we will also re-assess the concept of a built versus a natural landscape, and the advantages of replacing a mechanical paradigm of smart cities with a biological one.

This new definition will then be used to re-evaluate the ethical threats smart cities generate. Just as much current thinking about the capabilities of smart cities is focused on how to solve today’s problems with tomorrow’s technology, so concerns for ethical issues in smart cities focus on avoiding the problems we encounter with today’s technology, such as privacy and security. What is missing is consideration of what new problems the smart city may give rise to. We will show that the basis for new ethical concerns made possible by smart cities is the ubiquity of the technology and its potential to reach into every aspect of human existence. In doing so, smart city technology gives outside agencies the ability to manipulate previously inaccessible spaces (our homes and bodies) and thus influence aspects of our lives which have previously been beyond direct external control. By reviewing a few central facts regarding the development of ICTs over the last 50 years, we will demonstrate that this potential disparity of power raises significant concerns and prompts a need to review modes of governance before smart city systems become prevalent.

In order to better understand the implications of smart cities, we shall then apply Luhmann’s concept of social autopoiesis (Luhmann 1986) to our refined definition of smart cities to show that it is productive to apply the concept of autonomy to society as well as to the individual. It is indubitable that unexpected circumstances will arise and that human society will respond in ways we cannot anticipate and which are inherently unanticipatable. It is therefore important that our technological development does not preclude advantageous opportunities in the future which we cannot anticipate now. It is also important, for long-term social viability, that as technology comes to mediate our interaction with our environment and each other, it does so in a manner which promotes the widest range of creative innovation on the part of users. If new technology does not do this, if it instead imposes on us limited models, lacking variability or imposing particular viewpoints upon all in a manner which destroys diversity and reduces the ability to innovate, then it is not an enhancement or an improvement, but is, in fact, a retrograde step. Technologies do not constitute a step forward for society as a whole merely on the basis that they give us a capability we did not have before. For any given capability that is created there are multiple ways of designing the functionality and multiple ways of operating it. Whether a new capability is a step forward, an improvement and a good in society is dependent on the way in which it is done, not merely that it is being done. So, for example, while social networking has been largely developed by Facebook, whether Facebook’s social networking represents a step forward for society is not a question of whether it provides a new service, but whether it does so in a way that enhances the long-term viability of society, or if it has restricted the potential for society through not offering equally viable alternative models.

We shall show how the concept of autopoiesis better positions us to understand the dangers intrinsic to the welding of physical construction to digital services that is inherent within smart cities. We shall show how inappropriate strategies in governance, innovation and implementation can reduce both society’s long-term adaptive capabilities and individual autonomy. By reviewing the history of technology we shall show how such dangers are more likely than not. We shall therefore demonstrate the need for, and technical viability of, alternative strategies by considering some candidates which have already been deployed in subsets of smart city technology, such as energy distribution (Pitt et al. 2014).

BIBLIOGRAPHY
Albino, Vito, Umberto Berardi, and Rosa Maria Dangelico. 2015. “Smart Cities: Definitions, Dimensions, Performance, and Initiatives.” Journal of Urban Technology 22 (1): 3–21. doi:10.1080/10630732.2014.942092.
Bicocchi, Nicola, Alket Cecaj, Damiano Fontana, Marco Mamei, Andrea Sassi, and Franco Zambonelli. 2015. “Social Collective Awareness in Socio-Technical Urban Superorganisms.” In E-Governance for Smart Cities, edited by T. M. Vinod Kumar. Advances in 21st Century Human Settlements. Singapore: Springer Singapore. http://link.springer.com/10.1007/978-981-287-287-6.
Cobham Tactical Communications and Surveillance Ltd. 2015. “Safe Cities.” Whiteley, Hampshire: Cobham Tactical Communications and Surveillance Ltd.
Ikonen, Veikko, Minni Kanerva, Panu Kouri, Bernd Stahl, and Kutoma Wakunuma. 2010. “D.1.2. Emerging Technologies Report.” D.1.2. ETICA Project.
Kitchin, R. 2016. “Getting Smarter about Smart Cities: Improving Data Privacy and Data Security.” Dublin, Ireland: Data Protection Unit, Department of the Taoiseach.
Luhmann, Niklas. 1986. “The Autopoiesis of Social Systems.” Sociocybernetic Paradoxes, 172–192.
Pitt, Jeremy, Dídac Busquets, Aikaterini Bourazeri, and Patricio Petruzzi. 2014. “Collective Intelligence and Algorithmic Governance of Socio-Technical Systems.” In Social Collective Intelligence, edited by Daniele Miorandi, Vincenzo Maltese, Michael Rovatsos, Anton Nijholt, and James Stewart, 39–50. London; New York: Springer.

11:30
Ethical Problems in Creating Historically Representative Mixed Reality Make-belief
SPEAKER: unknown

ABSTRACT. Mixed Reality (MR) is a technology that is used to mix real-world elements with virtual reality. MR can address any or all of the senses we have, from viewing and hearing (which are the more common methods) to tasting, feeling and smelling – even sensing balance (see e.g. Ranasinghe, Karunanayaka, Cheok, Fernando, Nii and Gopalakrishnakone 2011, Colwell, Petrie, Kornbrot, Hardwick and Furner 1998, Ischer, Baron, Mermoud, Cayeux, Porcherot, Sander & Delplanque 2014, reference withheld). To engage our visual appreciation of MR, we can use different kinds of cameras, from 2D to 3D, on computers, phones or other more specific devices (Bujak, Radu, Catrambone, MacIntyre, Zheng and Golubskic 2013). The aim is to present the MR objects to the viewer in the same space as the ‘real’ objects are (Di Serio, Ibáñez, and Carlos 2013).

12:00
A Smart City of Flows: How Smart Cities Can Shape Urban Experience and Creativity

ABSTRACT. In this essay, the smart city is envisioned as an urban “space of flows” whose spatial logic follows from the attempt to optimize the city’s flows. Its algorithms take decisions based on what is deemed efficient, sustainable and stable. While this smart city logic can help to optimize processes, it can also lead to social compartmentalization, which is seen as a means to avoid conflict and confrontation that could cost energy and cause instability. Such compartmentalization works through social sorting, where people are abstracted as data sets and ascribed to pre-established categories. In the smart city, social sorting is also applied when providing people with informational feedback. Via the feedback they provide to us, smart city technologies define the space we experience while the logic behind their algorithms stays opaque. Smart city technologies enable access rather than denying it – they allow us to perceive what we can do, but not what we cannot do. This can lead to the emergence of a spatial filter bubble, where spaces which could cause friction or dissonance are omitted from personalized urban experiences. Such development comes at the expense of creative potential, because potential surfaces of friction and spaces of dissonance can give rise to creative spaces where encounters with novel problems require creative solutions. A challenge for the smart city is then to provide creative spaces while offering safe spaces and avoiding reproducing or creating social injustices.

11:00-12:30 Session 13C: Fiction + Open
Location: Room H5: building D4, 2nd floor
11:00
by design
SPEAKER: Thessa Jensen

ABSTRACT. In lieu of an abstract a few words of warning.

This story was originally written as a fanfiction. It has been cleared of explicit sex scenes, but might still challenge both the reader’s expectations for a fictional story and the very idea of a story exploring ethical design issues.

The following tags and trigger warnings would be applied for this story on a fanfiction site:

Suicidal Thoughts; Apparent Suicide; No Character Death; Artificial Intelligence; Angst; Hurt/Comfort.

PROLOGUE

He had given up. Tommy had tried to turn back to the life he once knew so well. Drifting through bars and clubs, searching for something, someone, anything, anyone. But whenever another man tried to contact him, turn towards him, he shied away, excused himself and all but fled.

Now he was sitting in the darkest corner of the bar, watching, observing loving couples kissing and chatting the night away; a stranger looking at him with piercing blue eyes. Tommy emptied his drink and didn’t lift his eyes from the floor when he walked outside. Once more, one last time, he went to the bridge where it all had started. He looked out at the river, then turned in imitation of his moves from two years ago. But no man came running to his rescue, no man was kneeling in front of him, asking him if he was okay. Instead, the dawning light only revealed empty streets and places. Tommy shivered. He took a deep breath and closed his eyes.

Today Thomas Foster would end his life.

An almost-smile played on his lips. He stood, tall and straight. Steady, determined steps took him away from the bridge. He found his car where he had parked it the evening before and made his way back to his house. When he stopped in front of the garage, he turned the key immediately. He did not leave the car. Like he had done every morning for the past two weeks, Tommy contemplated the garage in front of the car. How easy it would be. Open the garage, park the car inside, close the garage. Sit back into the car and turn it on. How long would it take before he would be unconscious? Before he was dead? He shook his head. He left the car and opened the door into the house without heeding the dark shadow that seemed to linger behind one of the trees.

A few minutes later, Tommy came out of the house, carrying a small envelope and a bag. He went back into the car and drove away. One last time he turned the car onto the road towards the beach, their beach.

Finding the unpaved road and climbing the fence which surrounded the bleak, green area, were as easy as ever. Tommy tried and failed to imagine walking side by side with Edward. The memories were as vivid and clear as if it had happened yesterday. But the emptiness inside him couldn’t be filled with images from a long gone past. Not even his anger, his frustration, nor his grief could fill the void any longer.

He had walked along the beach to the very spot where they had been drinking a cup of tea from a thermos. So domestic and mature. So very much missed by now. Tommy took off his shoes and socks. Put everything neatly together, including the bag and letter. One last look along the beach on both sides. No other living being was in sight. Once more Tommy straightened his back and turned towards the water. It was cold, but he didn’t mind. Step by step, one foot in front of the other, he went in. His clothes soaked up the water, became heavy, clinging to his body. Randomly he thought about his last meal. Two days ago. Just a bit of toast and beans. Since then nothing but tea, water and the one drink at the bar. The water reached up to his neck and he started to swim to avoid the small waves hitting him in the face. And then he was dragged down by the weight of his clothes and the emptiness in his heart. He wondered if everything was going according to plan or if he rather hoped for some kind of cock up.

He wouldn’t have to wonder for long.

11:30
Narrative technologies meets virtue ethics: investigating the possibility of a narrative ethics of technology
SPEAKER: unknown

ABSTRACT. Modern technologies, and most notably novel ICTs such as virtual reality technologies, Internet banking systems and interconnected mobile devices, have an increasing impact on our daily lives. Moreover, they blend together material, linguistic and social aspects of our lives: for example by combining a navigational tool that offers an augmented reality perspective with a social network in which you engage with others (with the game “Pokemon Go” as a good example to illustrate this tendency). Because of the increasing pervasiveness of these technologies, it has become clear that there is a need for an ethics of technology to reflect on them. Virtue ethics provides a promising starting point for thinking about ethics of technology, as some authors have forcefully shown in recent years (Anonymous, 2012; Vallor, 2010, 2016). In this paper, we aim to utilise some central insights in virtue ethics of technology to build upon the philosophical framework of narrative technologies that offers a distinct understanding of technological mediation. By doing so, we offer the preliminary outlines of a “narrative ethics of technology”.

The framework of narrative technologies has been constructed in response to a number of criticisms of established theories of technological mediation that were developed after the “empirical turn” in philosophy of technology (Brey, 2010). Most notably, these criticisms focused on a lack of attention to the role of language (Anonymous, 2015) and the social (Van Den Eede, 2010) in the theorisation of technological mediation. The central thesis of the framework of narrative technologies is that we can understand the mediation of our social reality by technologies along similar lines as we understand the mediating roles of narratives in written texts. We utilise one of Paul Ricoeur’s central works, Time and Narrative (1983), to build upon this thesis (Anonymous 2016). Even though Ricoeur does not himself engage with technology in his writings, Kaplan has already pointed to the merits of his work for philosophy of technology (Kaplan, 2006).

Ricoeur’s narrative theory revolves around the model of narrative time, which aims to explicate how our understanding of social reality is mediated through interaction with a text. This model stipulates three moments: prefigured time, before interacting with a text that is already embedded in a cultural repertoire; configured time, the moment of interacting with a text, during which our understanding of social reality is changed; and refigured time, during which the world of the text and the world of the reader become synthesised. We utilise this model to understand technological mediation, focusing on the moment of configuration. During the moment of configuration, characters and events are organised into a meaningful whole that we designate as a plot. This process has some distinct dimensions that can inform our idea of technological mediation.

First, we look at the extent to which technologies approximate the paradigm of the text, by investigating their capacity to actively organise characters and events into a meaningful plot. While some technologies, such as a bridge, largely remain passive elements of our social reality, other technologies – notably ICTs – actively configure our understanding of social reality through a constant process of interaction. Following Ricoeur, we explicitly look at the way in which technologies shape our understanding of temporality: whether they configure a chronological, rigid sense of time or an a-chronological, dynamic one. Second, we look at the extent to which technologies configure narratives that abstract from the world of action. Ricoeur explains that abstraction in narratives happens through the construction of “quasi-entities” (Ricoeur, 1983: 181); for example by constructing entities such as “Germany” and “the Industrial Revolution” in historical narratives that abstract from the actual characters and events to which these entities refer. We argue that technologies can similarly configure narrative structures that abstract from the world of action, for instance by constructing a digital “profile” that represents actual characters engaging in actual events (de Vries, 2010).

Even though the framework of narrative technologies allows us to better understand the way in which technologies mediate our social reality, it does not yet offer an ethical framework to evaluate the technological mediations that it helps us to describe. Revisiting the framework in order to enable such an ethical approach, we engage with the tradition of virtue ethics. We focus on virtue ethics for two distinct reasons: (1) because, as MacIntyre explained, it takes as a basic premise the “narrative character of human life” (MacIntyre, 2007: 144), and (2) because it focuses on the different “concrete contexts for human action” (Vallor, 2010: 162), which is in line with Ricoeur’s emphasis on understanding the particular through narrative rather than the universal. What the framework of narrative technologies adds to the approaches in virtue ethics is that it offers a model of narrative configuration that helps us to understand the way in which the world of action – which is central to virtue ethics – is mediated by technologies.

In virtue ethics of technology, it is argued that “virtue grows as a specific way of doing” (Anonymous, 2012: 227) and that we should pay attention to the role of virtues in directing and motivating moral actions (Vallor, 2010: 160). Thus, virtues are strongly related to the way in which our world of action is organised, which in our contemporary world happens to a great extent through technologies. In line with our framework of narrative technologies, we can say that the organisation of characters and events into a meaningful whole is what defines this mediated world of action. The claim can therefore be made that the virtuous character of a human interacting with a technology can be affected according to the way in which the technology configures her world of action. For instance, we can argue that credit cards have configured the world of action by changing our sense of temporality (the decreasing distance between payment and ownership) and by configuring an abstraction from the world of action (an increasing distance between the act of paying through the technology and the effects of the payment on our actual lives – for instance the effects of being in debt). This configuration of the world of action by the technology in turn leads to what virtue ethicists call a change in “character” (Vallor, 2010: 163).

A person’s character can be understood as “a more-or-less consistent, more-or-less integrated, set of motivations, including the person’s desires, beliefs about the world, and ultimate goals and values” (Kamtekar, 2004: 460). More specifically, a person’s character depends on the consistency of her motivations and their non-conflictual nature. For instance, in the case of the earlier example of credit card technology, we could argue that the configuration of the world of action by the technology can lead to inconsistencies in motivations: being motivated, on the one hand, to lead a secure and moderate life and, on the other hand, to purchase ever more goods, which leads to greater insecurity in the future.

We have arrived therefore at a point at which the framework of narrative technologies allows us to explicate how technologies configure the world of action, while virtue ethics shows us how we can evaluate these configurations. Combining these approaches might therefore pave the way toward a narrative ethics of technology that allows us to better engage in ethical reflection on the ways in which contemporary technologies mediate our social world.

References:
Brey, P. (2010). Philosophy of Technology after the Empirical Turn. Techné: Research in Philosophy and Technology, 14(1).
Anonymous (2012).
Anonymous (2015).
Anonymous (2016).
de Vries, K. (2010). Identity, profiling algorithms and a world of ambient intelligence. Ethics and Information Technology, 12(1), 71–85. http://doi.org/10.1007/s10676-009-9215-9
Kamtekar, R. (2004). Situationism and Virtue Ethics on the Content of Our Character. Ethics, 114(3), 458–491. http://doi.org/10.1086/381696
Kaplan, D. M. (2006). Paul Ricoeur and the Philosophy of Technology. Journal of French and Francophone Philosophy, 16(1/2).
MacIntyre, A. (2007). After Virtue: A study in moral theory (Third Edit). Notre Dame, Indiana: University of Notre Dame Press. http://doi.org/10.1017/CBO9781107415324.004
Ricoeur, P. (1983). Time and Narrative - volume 1. (K. McLaughlin & D. Pellauer, Eds.) (Vol. 91). Chicago: The University of Chicago. http://doi.org/10.2307/1864383
Vallor, S. (2010). Social networking technology and the virtues. Ethics and Information Technology, 12(2), 157–170. http://doi.org/10.1007/s10676-009-9202-1
Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press.
Van Den Eede, Y. (2010). In Between Us: On the Transparency and Opacity of Technological Mediation. Foundations of Science, 16(2–3), 139–159. http://doi.org/10.1007/s10699-010-9190-y

12:00
What Words Can’t Say: The ethical impact of emojis and other nonverbal elements of computer-mediated communication
SPEAKER: Alexis Elder

ABSTRACT. In this project, I explore the ethical impact of emojis and other nonverbal elements of computer-mediated communication. I take a cue from Vallor (2012), who points out that even small-scale and seemingly trivial technological innovations can have pervasive impacts on our characters, relationships and lives, especially when ubiquitous to our daily activities. I argue for two major theses: first, that emojis and other non-verbal elements of computer-mediated communication, which are rapidly becoming major components of daily interactions, shape our communications with, and thereby the nature of our relationships to, our intimate social connections - particularly by facilitating the expression of socially-challenging emotions. Second, I argue that the prevalence of these new forms of communication tends to make us reflect on features of ourselves, and especially our emotional states, in new ways, changing our self-perceptions in ethically significant ways.

Early discussions of technologically-mediated communication, especially its role in close personal relationships such as friendships, focused on textual communication versus face-to-face, these being, in those early days, the only major options. Bandwidth was expensive and so were cameras and videocameras. (See, for example, Cocking and Matthews 2000, and Briggle 2008.) But in today’s world, this is no longer the case. Emojis, stickers, and incorporation of photographs and short video clips into computer-mediated communication has become ubiquitous.

Early criticisms of CMC, therefore, focused on the relative impoverishment of textual communication, devoid of nonverbal communication channels such as body posture, facial expression, and tone of voice (Cocking and Matthews 2000, McFall 2012, Walther 2007; Hancock 2007, Vallor 2012). Defenses of CMC (Briggle 2008, Elder 2014) focused on the expressive capacity of written language: for example, literature and poetry’s ability to express complex and subtle emotions and circumstances.

But the early critics seem to have been onto something, too – tone of voice, facial expression, and bodily posture seem to efficiently convey rich layers of meaning that are difficult, even if not impossible in principle, for most of us to convey via text, at least given the time and space constraints of day-to-day communication via texting, instant messaging, and an ever-expanding range of social media platforms.

Early workarounds included emoticons, simple icons constructed out of standard keyboard symbols such as colons, semi-colons, hyphens, and parentheses. Individual and cultural variations abounded, but many may be familiar with the smiling face represented by a colon, hyphen, and closing parenthesis, like so: :-), as well as textual descriptions marked out with various punctuation marks, such as ::hugs::.

But today’s texters enjoy a rapidly expanding range of communicative options. This range includes a variety of graphics, from emojis, which are standardized across platforms via the Unicode Standard, to more complicated but platform-specific “stickers” illustrating everything from two sheep cuddling (in Snapchat) to a turtle hiding in its shell while ants crawl over it (in Google Hangouts). Increasingly, communication platforms also support individually-produced doodles and drawings; individualized icons, such as the graphics produced on platforms such as Bitmoji, which allow the user to utilize dozens or hundreds of graphics featuring an avatar customized to resemble themselves in hairstyle, facial features, body type and clothing; and photographs and videos, which may (in addition to introducing photographed content) be further customized via cropping, captions, filters, or line drawings (Duggan 2013).
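A brief technical note on the standardization mentioned above: an emoji is an ordinary Unicode character, i.e. a code point (or a short sequence of code points) defined by the Unicode Standard, which is why the same symbol survives transmission across platforms even though each platform draws its own artwork. The small Python sketch below is purely illustrative; the emoticon-to-emoji mapping is invented for this example and is not any platform’s actual behaviour:

    # Emoji are ordinary Unicode characters; platforms only differ in the artwork
    # they render for each code point.
    GRINNING_FACE = "\U0001F600"   # U+1F600 GRINNING FACE
    CRYING_FACE = "\U0001F622"     # U+1F622 CRYING FACE

    # Hypothetical lookup from classic ASCII emoticons to emoji.
    EMOTICON_TO_EMOJI = {
        ":-)": "\U0001F642",       # U+1F642 SLIGHTLY SMILING FACE
        ":-(": "\u2639",           # U+2639 WHITE FROWNING FACE
        ";-)": "\U0001F609",       # U+1F609 WINKING FACE
    }

    def emojify(text):
        """Replace the emoticons above with their emoji counterparts."""
        for emoticon, emoji in EMOTICON_TO_EMOJI.items():
            text = text.replace(emoticon, emoji)
        return text

    print(emojify("Thanks for yesterday :-)"))
    print("U+%X" % ord(GRINNING_FACE))   # prints U+1F600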

The popularity of such supplements to textual communication is skyrocketing. In 2014, emojis collectively were used on Twitter more frequently than either hyphens or the numeral 5 (Sternbergh 2014). It seems uncontroversial that these graphic options are allowing friends to regain some of the expressive capacities that were once limited by a “pure” text medium; capacities which people historically tried to recover using emoticons and other DIY ASCII art (Rezabek and Cochenour 1998) and which modern communication platforms now provide directly.

On its own, this would be significant in addressing some of the concerns of earlier critics of computer-mediated communication and relationship. But in addition to providing a way to work historically recognized non-verbal communications such as facial expressions and gestures into computer-mediated communication, they may introduce something new to communication and the relationships connected via CMC. Specifically, it is my contention that they introduce a new expressive range that allows for novel kinds of communication, or, at minimum, in virtue of the ease of use afforded by modern communication platforms, selectively shape our interactions by making it easier to express some things more quickly, easily, and comfortably than in the past.

Without going quite so far as to insist upon Marshall McLuhan’s adage that “the medium is the message” (McLuhan 1964), it is plausible to suppose that the availability of emojis and stickers expressing socially-challenging emotions, presented on equal footing with more widely-sanctioned ones, may make it easier for people to express these emotions, especially because they do not have to face the recipient directly. For example, Sternbergh (2014) describes a woman who said that “her mom had recently sent a text relaying regret, followed by a crying-face emoji—and that this was possibly the most straightforwardly emotional sentiment her mother had ever expressed to her”. Especially given data indicating that the majority of social-media interactions occur amongst relatively small groups of intimates (Broadbent 2012), the increased capacity to express difficult emotions is, I will argue, a significant impact on our close personal relationships.

At the same time, because emojis, stickers, photographs, video clips and doodles are all consciously selected, they lack some of the immediacy and revealing nature of unconscious body language (Scheflen 1972), and are interpreted by recipients consciously aware of the signs, if not the significance, of this new form of nonverbal communication - unlike much of the unconscious processing that seems to go on when it comes to gestures, posture, expression and tone (de Gelder and Hadjikhani 2006). Although the phenomenon is still evolving, this heightened consciousness of how we and others represent emotional states can impact both how we see others and how we see ourselves. Body posture has been shown to subconsciously affect how we perceive ourselves (Carney et al 2010) - I argue that something similar may be true for repeated use of graphics that function communicatively in similar ways. This would lend new significance to issues such as diverse racial and gender representation in emojis, as well as change people’s conceptions of themselves by facilitating conscious recognition and identification of emotional states.

Changes to self-image can also occur via conscious selection and propagation of photographs and video clips. The ease with which users can take, send, and respond to photographs and videos via platforms such as Instagram and Snapchat, two of the most popular social media apps amongst young adults and teens, reduces the effort and cost of sharing photographs, including (thanks to the ubiquity of front-facing cell phone cameras) “selfies”, which have become a major element of computer-mediated communication. This can also encourage users to take both their own points of view and appearance, and those of their friends and intimates, into consideration in new ways, or at least with greater frequency.

Emojis, photographs, and other graphic non-verbal elements of computer-mediated communication are especially significant in virtue of their role in the frequent, quick, low-grade but rewarding “check-ins” via SMS texting and social media that allow friends with differing schedules, activities, and interests to share small moments of their daily lives (Vaterlaus et al 2016, Kelly et al 2015). Their cumulative impact on our relationships to ourselves and others should be neither overlooked nor denigrated, despite their inconsequential and frivolous appearance.

References
Briggle, A. (2008). Real friends: how the Internet can foster friendship. Ethics and Information Technology, 10(1), 71-79.
Broadbent, S. (2012). Approaches to personal communication. Digital anthropology, 127-145.
Carney, D. R., Cuddy, A. J., & Yap, A. J. (2010). Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance. Psychological Science, 21(10), 1363-1368.
Cocking, D., & Matthews, S. (2000). Unreal friends. Ethics and Information Technology, 2(4), 223-231.
de Gelder, B., & Hadjikhani, N. (2006). Non-conscious recognition of emotional body language. Neuroreport, 17(6), 583-586.
Duggan, M. (2013). Photo and video sharing grow online. Pew Research Internet Project.
Elder, A. (2014). Excellent online friendships: An Aristotelian defense of social media. Ethics and Information Technology, 16(4), 287-297.
Hancock, J. T. (2007). Digital deception. Oxford handbook of internet psychology, 289-301.
Kelly, R., & Watts, L. (2015). Characterising the inventive appropriation of emoji as relationally meaningful in mediated close personal relationships. Experiences of Technology Appropriation: Unanticipated Users, Usage, Circumstances, and Design.
McFall, M. T. (2012). Real character-friends: Aristotelian friendship, living together, and technology. Ethics and Information Technology, 14(3), 221-230.
McLuhan, M. (1964). Understanding media: the extensions of man. Cambridge, Mass./London: The MIT Press.
Rezabek, L., & Cochenour, J. (1998). Visual cues in computer-mediated communication: Supplementing text with emoticons. Journal of Visual Literacy, 18(2), 201-215.
Scheflen, A. E. (1972). Body Language and the Social Order: Communication as Behavioral Control.
Sternbergh, A. (2014). Smile, you’re speaking emoji: The rapid evolution of a wordless tongue. New York Magazine.
Vallor, S. (2012). Flourishing on facebook: virtue friendship & new social media. Ethics and Information Technology, 14(3), 185-199.
Vaterlaus, J. M., Barnett, K., Roche, C., & Young, J. A. (2016). “Snapchat is more personal”: An exploratory study on Snapchat behaviors and young adult interpersonal relationships. Computers in Human Behavior, 62, 594-601.
Walther, J. B. (2007). Selective self-presentation in computer-mediated communication: Hyperpersonal dimensions of technology, language, and cognition. Computers in Human Behavior, 23(5), 2538-2557.

12:30-14:00 Lunch
14:00-15:00 Session 14: Plenary CEPE Keynote: Herman Tavani
Location: Aura Magna: building D2, base floor
15:00-15:30 Break
15:30-17:00 Session 15A: e-SIDES Workshop

Societal and Ethical Challenges in the Era of Big Data:  Exploring the emerging issues and opportunities of big data management and analytics

Ethical and Legal issues overview (M. Jozwiak): presentation and discussion

Societal and economic issues overview (M. Friedewald): presentation and discussion

Location: Room H3: building D4, 2nd floor
15:30-17:00 Session 15B: Student Track
Location: Room H2: building D4, 2nd floor
15:30
Privacy and Brain-Computer Interfaces: Preliminary findings
SPEAKER: unknown

ABSTRACT. Brain-Computer Interfaces (BCIs) are emerging technologies that acquire and translate neural data, applying it to the control of other systems (Allison et al., 2007). BCIs have been categorised according to user engagement modes (Wahlstrom et al., 2016, Zander and Kothe, 2011). When using active BCIs, the user deliberately intends an outcome; when using reactive BCIs, the user reacts to stimuli; when using passive BCIs, the user is not focussed on the BCI at all, but on some other cognitively demanding activity; finally, hybrid BCIs combine two of these preceding types. Privacy has been identified as an ethical issue possibly arising from the use of BCIs (Bonaci et al., 2014, Denning et al., 2009, Stahl, 2011). The project reported in this paper seeks to identify whether BCIs give rise to changes in privacy and, if so, how and why. Once changes to privacy are identified and understood, it may be possible to suggest privacy-sensitive design guidelines for future BCIs and to develop a proof-of-concept prototype.

In this project, privacy is understood as a social norm grounded in social interaction: privacy is distinct from seclusion and exists only with respect to third parties. Also, as this project positions privacy as socially grounded, it is by definition distinguished from data security and data confidentiality, which are systemic in motivation. Furthermore, understanding privacy as a social norm enables this project to be aligned with Nissenbaum’s contextual framework for information privacy: privacy and social context are inter-related, and changes in a social context may give rise to changes in privacy (Nissenbaum, 2009). Emerging technologies may cause change in social contexts and consequently in privacy (Nissenbaum, 2009).

Viewing privacy as a social norm also enables access to Habermas’s Theory of Communicative Action (TCA). Finlayson (2005), Horster (1992), Klein and Huynh (2004) and Thomassen (2010) briefly describe the TCA as constructed upon Habermas’s descriptions of the lifeworld and the system. The lifeworld encompasses the informal, unregulated aspects of everyday life. It is shaped by, and it shapes, the attitudes and practices of people interacting in the lifeworld’s native institutions (e.g. family and culture). The system encompasses the formal, regulated features of social life (e.g. policing, economics and politics) and it depends upon the lifeworld for its authenticity. Communicative actions occur in the lifeworld and instrumental actions occur in the system; communicative actions perpetuate the lifeworld, whereas instrumental actions support systemic goals (e.g. profitability, election). Under the coercive influences of the system and vested interests, participants in instrumental actions engage in competition and conflict, distorting lifeworld norms in the pursuit of systemic goals. Norms distorted by instrumental actions carry the legitimacy established in the lifeworld and may be interpreted as lifeworld-authentic social norms. Thus the lifeworld is susceptible to amendment by the imperialistic tendencies of systemic practices, for which Habermas adopts the term ‘colonisation of the lifeworld’. However, communicative actions replenish and perpetuate the lifeworld, protecting it from the colonising tendencies of the system.
Habermas’s conception of the lifeworld articulates the significance of communicative action in the mutual shaping of social norms, and Nissenbaum’s contextual account of information privacy suggests that changes to social context may cause changes to privacy. This project positions privacy as a social norm and, consequently, the TCA and Nissenbaum’s contextual account of privacy can be accessed as theoretical scaffolds. Therefore, the project may consider whether BCI technologies cause change in privacy norms by studying the effects of communicative actions.

The project’s method consists of two research phases. The first phase is a pre-selection survey; the second consists of two communicative actions establishing privacy norms, separated by the use of a BCI. The pre-selection survey was adapted, with permission, from the Office of the Australian Information Commissioner’s privacy attitudes survey. 31 respondents were recruited from research students and have been sorted into small groups so that privacy attitudes are mismatched (one possible way of forming such groups is sketched after Table 1 below). The second phase consists of four research tasks. First, small groups will engage in scripted communicative actions in order to establish the group’s privacy norm. Second, individual participants will use an active BCI. Third, individual participants will complete a de-briefing interview in which they consider different BCI technologies through the lenses of six theoretical understandings of privacy. Finally, the small groups will re-convene to repeat the communicative actions and re-establish the group’s privacy norm. At the time of writing this abstract, the first phase is complete and the second phase is ongoing. As the full paper is being drafted, the second phase will have been completed and preliminary findings will be reported in the full paper.

Participants in the pre-selection survey are described in Table 1. Twenty-three different cultural groups working in twelve different scholarly disciplines are represented. None of the respondents have used a BCI in the past.

Table 1: Respondent characteristics

    Trait            % of respondents
    Female           32.5
    Male             67.5
    Age: 18-24       20.0
    Age: 25-34       52.5
    Age: 35-54       27.5
    Dependents: 0    75.0
    Dependents: 1    12.5
    Dependents: 2    10.0
    Dependents: 3     0.0
    Dependents: 4     2.5
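The abstract does not say how the mismatched groups were formed. Purely as an illustration of one simple possibility (the respondent identifiers, attitude scores and pairing rule below are invented), small groups could be assembled by repeatedly pairing the least and the most privacy-concerned remaining respondents from the pre-selection survey:

    def mismatched_pairs(scores):
        """Pair the least and the most privacy-concerned remaining respondents."""
        ranked = sorted(scores, key=scores.get)   # least to most concerned
        groups = []
        while ranked:
            group = [ranked.pop(0)]               # lowest remaining score
            if ranked:
                group.append(ranked.pop())        # highest remaining score
            groups.append(group)
        return groups

    # Hypothetical privacy-attitude scores on a 1-5 scale.
    scores = {"R01": 1.2, "R02": 4.8, "R03": 2.5, "R04": 4.1, "R05": 1.9, "R06": 3.3}
    print(mismatched_pairs(scores))   # [['R01', 'R02'], ['R05', 'R04'], ['R03', 'R06']]

Any grouping that maximises the within-group spread of attitude scores would serve the same purpose; the sketch only shows that the mismatching step is easy to make explicit and reproducible.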

Detailed findings from the pre-selection survey will be presented in the full paper along with preliminary findings from the second phase of the research project.

References

ALLISON, B., WOLPAW, E. & WOLPAW, J. 2007. Brain-Computer Interface Systems: progress and prospects. Expert Review of Medical Devices, 4, 463-474.

BONACI, T., CALO, R. & CHIZECK, H. 2014. App stores for the brain: Privacy & security in Brain-Computer Interfaces. IEEE International Symposium on Ethics in Science, Technology and Engineering.

DENNING, T., MATSUOKA, Y. & KOHNO, T. 2009. Neurosecurity: security and privacy for neural devices. Neurosurgical FOCUS, 27, E7.

FINLAYSON, J. 2005. Habermas: A Very Short Introduction, Oxford University Press.

HORSTER, D. 1992. Habermas: an introduction, Pennbridge.

KLEIN, H. K. & HUYNH, M. Q. 2004. The critical social theory of Jürgen Habermas and its implications for IS research. In: MINGERS, J. & WILLCOCKS, L. (eds.) Social theory and philosophy for information systems. Chichester: Wiley.

NISSENBAUM, H. 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford Law Books.

STAHL, B. 2011. What Does the Future Hold? A Critical View of Emerging Information and Communication Technologies and Their Social Consequences. In: CHIASSON, M., HENFRIDSSON, O., KARSTEN, H. & DEGROSS, J. (eds.) Researching the Future in Information Systems. Springer Berlin Heidelberg.

THOMASSEN, L. 2010. Habermas: a guide for the perplexed, Continuum.

WAHLSTROM, K., FAIRWEATHER, N. B. & ASHMAN, H. 2016. Privacy and brain-computer interfaces: identifying potential privacy disruptions. SIGCAS Computers and Society, 46, 41-53.

ZANDER, T. & KOTHE, C. 2011. Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general. Journal of Neural Engineering, 8, 025005.

16:00
How to make decisions with algorithms: Ethical decision-making using predictive analytics
SPEAKER: unknown

ABSTRACT. The use of automated decision-making support, such as algorithms within predictive analytics, will inevitably become more and more relevant and will increasingly affect society. Sometimes the effects are good, and sometimes they appear to be negative, as with discrimination. The approach focused on in this paper is how humans and algorithms, or ICT, could interact within ethical decision-making. What predictive analytics can produce is, arguably, mostly implicit knowledge, so what a human decision-maker could contribute is the explicit thought process. This could be one fruitful way to conceptualise the interaction between humans and algorithms. At present there seems to be little research on predictive analytics and ethical decisions that addresses this human-algorithm interaction; the focus is more often on purely technological solutions, or on laws and regulation.
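The abstract stays at a conceptual level, but the division of labour it proposes, implicit knowledge from the model and explicit reasoning from a person, can be illustrated with a minimal human-in-the-loop sketch. The review band, the protected-attribute check, and all names below are assumptions made for the example, not the authors' design.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    approved: bool
    justification: str  # the explicit, human-readable reasoning

def decide(case: dict,
           model_score: Callable[[dict], float],
           human_review: Callable[[dict, float], Decision]) -> Decision:
    score = model_score(case)          # implicit knowledge from the predictive model
    borderline = 0.4 <= score <= 0.6   # assumed review band
    sensitive = bool(case.get("protected_attributes"))
    if borderline or sensitive:
        # Route to a person, who records explicit reasons for the outcome.
        return human_review(case, score)
    return Decision(approved=score < 0.4,
                    justification=f"automated: risk score {score:.2f} outside review band")

# Usage with stand-in functions; a real model and reviewer would replace the lambdas.
print(decide({"protected_attributes": []},
             model_score=lambda c: 0.2,
             human_review=lambda c, s: Decision(False, "declined after analyst review")))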

16:30
Interdependent Privacy
SPEAKER: unknown

ABSTRACT. With the emergence and exponential growth of online social networks (OSNs), sharing has rapidly become a global phenomenon and a new social norm. Information that users share about one another has a significant impact on impression formation and carries risks to individual privacy. While online privacy is becoming increasingly difficult to ignore, far too little attention has so far been paid to interdependent privacy. To address this gap, this paper reports on a phenomenological study of interdependent privacy on Facebook in relation to content disclosed by others (other-generated disclosure). The study explores other-generated disclosures, based upon the lived experiences of adult Facebook users, along multiple dimensions such as motivations, perceptions, types of content, actions, and impacts on users’ offline as well as online relationships. Using both an online survey and in-depth interviews, the findings report the influence of peer disclosure as well as users’ strategies for mitigating privacy issues, since other-generated disclosure currently lacks proper privacy management and lies beyond individual control.

Online social networks (OSNs), such as Facebook, Google+, and Twitter, have brought about a revolution in communication and play important roles in our daily lives as indispensable communication channels, tools for self-presentation, online repositories of information, and webs of relationships. In addition, the ubiquitous Internet, together with the emergence of smart devices like smartphones, offers people greater convenience through ‘always-connected’ and ‘share-as-you-go’ access to OSNs. OSNs have become increasingly embedded in daily life activities. Not only do users instantaneously interact with others, but they also freely generate, publish, share, or distribute their thoughts, ideas, feelings, expressions, or even personal life matters. Information disclosure is no longer dependent on the individual user, but interdependent upon the activities of others. Users may disclose information either on their own OSN virtual spaces (“profiles”) or on other users’ profiles during their online interactions and activities. Consequently, OSNs have become vast resources of user-contributed information ranging from general, private, and personal to sensitive data. This abundance of valuable information is susceptible to misuse, inference, threats, or attacks at any time. In particular, there is a threat to privacy.

Online privacy is one of the most significant debates in law and moral philosophy and has become difficult to ignore. Privacy on OSNs has been extensively studied in the literature, ranging from privacy policies, privacy settings, and access control to privacy practices [1]–[3]. However, privacy conflicts and privacy-related issues still arise, particularly on dynamic, ever-changing, and privacy-interdependent platforms like OSNs. Although perpetual modification offers users more benefits, ongoing changes create additional privacy challenges. In particular, new features as well as add-on applications entice users to interact continuously and to share more information, not only about themselves but also about others.

Sharing content on OSNs has become a global phenomenon and a new “social norm” (Zuckerberg, Facebook CEO). People feel more comfortable sharing, and are more open, online than offline. Some people share their personal lives and problems with strangers rather than with those close to them. While users are deeply immersed in their roles as information producers and distributors, they are often unaware that they pose threats to others, including their social circles (family members, friends, peers, mutual friends, and so on).

Nonetheless, information disclosure on OSNs is associated with higher risks to both information privacy and personal privacy. Even selectively shared information can propagate beyond its intended audience. Collaborative activities, such as co-owned or multi-owned content, also raise a new set of privacy challenges. At present, a decision to share or distribute multi-owned content can be made by just one person, with no requirement for consensus among the co-owners. This lack of collaborative privacy management leads to distribution of content beyond intended audiences or to information leakage to the public, regardless of individual awareness.
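No OSN currently offers such a mechanism, and the paper does not propose one; the following is only a hypothetical sketch of what a collaborative consent check for co-owned content might look like, with all names and the unanimity policy assumed for illustration.

from typing import Dict, Set

def may_share(content_id: str,
              requester: str,
              co_owners: Set[str],
              consents: Dict[str, bool]) -> bool:
    # Allow sharing only if every co-owner other than the requester has
    # explicitly consented (a unanimity policy, assumed for illustration).
    others = co_owners - {requester}
    return all(consents.get(owner, False) for owner in others)

# Example: a tagged photo co-owned by three users; one co-owner has not consented.
co_owners = {"alice", "bob", "carol"}
consents = {"bob": True, "carol": False}
print(may_share("photo_42", "alice", co_owners, consents))  # prints False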

Users are free to share what they want and to decide independently what they share. Unfortunately, they often cannot control the content others reveal about them (other-generated disclosure). Moreover, self-disclosure can be inhibited, whereas other-generated disclosure is beyond individual control. While considerable research on OSN privacy has focused on self-disclosure [4]–[7], other-generated disclosure is under-researched. In particular, other-generated disclosure within the context of the present research has not yet been explored.

Other-generated disclosure has a significant impact on impression formation, self-presentation, identity, and personal privacy. Earlier research, as cited above, shows that content generated by others can, in some cases, carry more weight than self-disclosure. Regardless of the discloser’s intention, unfavourable or unpleasant disclosure by others has jeopardised personal lives, posed face threats, resulted in harassment, cost opportunities (e.g. jobs), cast people in a negative public light, ruined reputations, and so on. The phenomenon of revealing information about others without consent is not uncommon in OSN wall posts, comments, videos, links, photos, and tags. Of particular concern is the lack of control over other-generated disclosure, particularly outside users’ profiles [2], [8].

In some cases of other-generated disclosure, legal protection or compensation can be claimed under privacy laws, depending on the country. For instance, Mr. Madill downloaded 83 pictures of a nine-year-old girl from his friend’s Facebook profile and then re-posted those pictures to a Russian child pornography website [9].

Unfortunately, not all privacy-related issues involving other-generated disclosure can be legally addressed. Some cases, such as “digital kidnapping”, still fall beyond the current scope of legislation or in the shadow of existing regulations. Digital kidnapping is a pervasive privacy issue and a trending phenomenon in recent years. As of 4 August 2015, news reports indicated that hashtags involving digital kidnapping (#babyrp, #babyrpl, #adoptionrp, or #orphanrp) had yielded 57,000 results on Instagram [10]. This new phenomenon of “baby role play” [11] occurs when someone steals a child’s photo available on OSNs and then posts that stolen photo on other websites for role-playing. In general, digital kidnappers, typically female, use the stolen photo to present the child to others as if it were their own. Some cases of digital kidnapping are far more disturbing, however, when digital kidnappers use the stolen photos in sexual and abusive role-playing. Digital kidnapping is not itself a crime, although it can lead to kidnapping in the real world, where worst-case scenarios may endanger a child’s life.

This paper reports on a phenomenological study conducted in two phases. First, over 400 professionals were surveyed about their experiences of disclosure, both disclosing information about others and having their own information disclosed by others. Second, approximately 10% of the survey respondents were interviewed to explore in depth their experiences of such interdependent privacy. The study explored other-generated disclosures along multiple dimensions such as motivations, perceptions, types of content, actions, and impacts on users’ offline as well as online relationships. In particular, the study examined adult Facebook users (25-65 years old) who have engaged in other-generated disclosures. To gain a deeper understanding of the phenomena, the research investigated both engaging parties: the “discloser” (people who disclose about others) and the “disclosed” (people whose information was disclosed by others). Facebook was chosen as the platform of interest because of its popularity, features, and fine-grained privacy settings, as well as its restricted privacy controls.

This study contributes to the scant literature on OSN interdependent privacy and draws attention to new privacy challenges on OSNs for which no controls exist, or for which the available controls lie beyond individuals’ capabilities. It encourages further research to pay close attention to these interdependent privacy challenges in order to develop effective detection mechanisms that can contribute to practical solutions in the future.

REFERENCES

[1] S. Labitzke, F. Werling, J. Mittag, and H. Hartenstein, “Do online social network friends still threaten my privacy?,” in Proceedings of the Third ACM Conference on Data and Application Security and Privacy, 2013, pp. 13–24.

[2] C. Patsakis, A. Zigomitros, A. Papageorgiou, and E. Galván-López, “Distributing privacy policies over multimedia content across multiple online social networks,” Comput. Networks, vol. 75, part B, pp. 531–543, 2014.

[3] J. Watson, H. R. Lipford, and A. Besmer, “Mapping user preference to privacy default settings,” ACM Trans. Comput.-Hum. Interact., vol. 22, no. 6, p. 32, 2015.

[4] S. Utz, “The function of self-disclosure on social network sites: Not only intimate, but also positive and entertaining self-disclosures increase the feeling of connection,” Comput. Human Behav., vol. 45, pp. 1–10, 2015.

[5] Y. Al-Saggaf and S. Nielsen, “Self-disclosure on Facebook among female users and its relationship to feelings of loneliness,” Comput. Human Behav., vol. 36, pp. 460–468, 2014.

[6] S. Taddei and B. Contena, “Privacy, trust and control: Which relationships with online self-disclosure?,” Comput. Human Behav., vol. 29, no. 3, pp. 821–826, 2013.

[7] E. Van Gool, J. Van Ouytsel, K. Ponnet, and M. Walrave, “To share or not to share? Adolescents’ self-disclosure about peer relationships on Facebook: An application of the prototype willingness model,” Comput. Human Behav., vol. 44, pp. 230–239, 2015.

[8] A. Besmer and H. R. Lipford, “Moving beyond untagging: photo privacy in a tagged world,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, 2010, pp. 1563–1572.

[9] “He took the innocence and made it disgusting,” February 3, 2015. http://www.news.com.au/technology/online/man-steals-pictures-from-facebook-posts-them-to-child-porn-website/story-fnjwnhzf-1227208112703 [accessed October 29, 2015]

[10] L. Chang, “Baby role-play, or virtual kidnapping, is the most disturbing Instagram hashtag ever,” http://www.digitaltrends.com/mobile/baby-role-play-virtual-kidnapping/ [accessed August 4, 2015]

[11] “Beware: Digital kidnappers of children are lurking,” March 4, 2015. http://www.kidspot.com.au/could-digital-kidnappers-steal-images-of-your-child/ [accessed January 29, 2016]

17:00-18:00 Session 16: Q/A with the Ethicomp Steering Committee
Location: Aula Magna: building D2, ground floor