ETHICOMP 2018
PROGRAM FOR TUESDAY, SEPTEMBER 25TH


09:30-11:30 Session 6A: Technology Metaethics
Location: Room 0.8
09:30
From Value Sensitive Design to Virtuous Practice Design

ABSTRACT. In recent decades, value sensitive design (VSD) has become a dominant approach in the field of ethics of technology. Different types of VSD have been proposed, which share three characteristics: (1) proposing a tripartite method, consisting of conceptual investigations, empirical investigations and technical investigations, (2) promoting a heuristic or list of human values, and (3) implicitly or explicitly acknowledging that values can somehow be embedded in technology design. In this paper, we criticise the second characteristic of VSD, arguing that the notion of value is unhelpful, that it bars the necessary reflective component in applied ethics, and that, instead of trying to improve its conceptual grounding, an alternative linked to normative ethics should be sought. As an alternative to VSD, we propose a framework that builds on virtue ethics of technology, drawing from the work of Vallor and MacIntyre. The resulting tripartite method consists of (1) investigating narratives, (2) reflecting on the practices captured by these narratives using a heuristic of virtues, and (3) prescribing aspects of relevant practices to enhance the extent to which they cultivate the virtues.

10:00
Kekistanis are not illegal aliens

ABSTRACT. Kekistan is the fictional homeland of digital natives frequenting 4chan and other online spaces where transgressive humor and irreverent skepticism prevail. Betraying the tongue-in-cheek nature of the whole scene, the name takes its root from “Kek,” which is (roughly) an Orcish translation of “LOL.” The story of ethnic Kekistanis harnessing the “memes of production” in their ongoing struggle for a Free Kekistan against the oppressive Normies, Cucks, and Social Justice Warriors is an elaborate satire of ethnic and gender identity politics, popularized by YouTube personality Sargon of Akkad when he set out to have Kekistanis officially recognized as an ethnicity in the British census. Through an elaborate working out of inside jokes and ironic moral and political posturing, Kekistanis have established for themselves all the trappings of a nation. They have devised a flag, an anthem, a church, and an origin story. From their shibboleth salutation and valediction, “shadilay,” to their ubiquitous mascot and prophet, Pepe the Frog, the Kekistanis are as much an online nation or tribe as any.

It is crucial to distinguish the white nationalist “Alt-Right” from Kekistanis. Although both are loosely associated with the political right, they are fierce adversaries and not allies, with the Alt-Right heartily endorsing the racial identity politics of the left in an inverted form, while Kekistan’s very existence is a merciless satire of that whole way of conceiving politics, challenging the notion that ethnicity is anything to take so seriously. If it makes sense to call them “conservative” at all, the Kekistanis are “South Park Conservatives” (Anderson, 2005) whose irreverent, transgressive humor is provoked as much by the “blood and soil” right as by the “Social Justice Warrior” (SJW) left. In the words of Kekistanis themselves, “The Alt-Right, aka Right-Wing SJWs, is a white nationalist movement that stands opposed to the Free Kekistan movement, and actively tries to steal Pepe as one of their own symbols. Despite this, many Normies often mistakenly conflate the movement with Kekistan, likely due to the Clinton Campaign slandering our prophet Pepe as a white supremacist symbol” (http://kekistan.wikia.com/wiki/Alt-Right).

I will argue that, although Kekistanis brandish an online culture that is self-consciously and outlandishly alien to the other tribes of modern, pluralistic society, they are not illegal aliens. We have more reason than ever to stop moralizing and learn to get along with aliens like the Kekistanis, since evolved humans in an age of global social media cannot reasonably expect to discover, let alone impose, any universal morality. However, getting along with aliens is not easy.

What on earth are we talking about? In a society of permanent reasonable pluralism, there are at least three levels of normative discourse that are routinely conflated: ethics (norms grounded in the values of a particular ethical community), politics (norms enforced whether or not they are consonant with the values of this or that ethical community), and shared social norms that bridge these (cross-subcultural norms that legitimize enforceable rules by articulating the “spirit” of the rules). Online culture wars typically emerge when one subculture evangelizes for its sacred values without regard to political or shared norms governing such interaction. Given our diverse natures as moral animals, these clashes are as predictable as they are fruitless.

Any sound normative philosophy will have to be consonant with what we know about the nature and origins of moral sentiments and beliefs, so the analysis begins with a summary of that knowledge and its implications. Of special interest are the intertwining accounts of ethics given by evolutionary biology (Joyce, 2007), game theory (Ridley, 1996), neuroscience (Churchland, 2012) and moral psychology (Haidt, 2012). I will argue that these accounts undermine moral philosophy’s traditional program of finding universal moral imperatives, confronting us with a stark choice between some version of error theory (Mackie, 1977), fictionalism (Joyce, 2007), or desirism (Marks, 2014). In the end, we shall have to take seriously Emerson’s (1841) maxim that “No law can be sacred to me but that of my nature,” as well as the Nietzschean (1887) declaration, “You have your way. I have my way. As for the right way, the correct way, and the only way, it does not exist.”

Although this controversial insight threatens to produce a sad nihilism, a naturalistic understanding of what morality is does not make our moral sentiments and beliefs disappear. Most of us will still have our sacred values. It does, however, transform what we take ourselves to be doing as appeals to transcendent values give way to working out various ways of life suited to our basic human diversity. At the same time, most everyone has a clear interest in enforcing rules that foster profitable intercourse and cultural institutions articulating the spirit of those rules.

Ethical Norms

Perhaps the most salient feature of moral psychology is its deeply tribal nature. As Haidt (2012) emphasizes, “Morality binds and blinds.” Championing this or that sacred value at once declares one’s loyalty to an in-group while rendering out-groups as alien at best and more typically as blasphemous evildoers. Along with the usual foibles of human cognition, including the fundamental attribution error, stereotyping, and semantic chaining, this tribalism produces extreme polarization grounded in confusion as, for example, all feminists are conflated with their most radical fringe, and all opposition to that radical fringe is conflated with Nazis. The inevitable exaggeration of small differences results in our living in “different worlds” (Alexander, 2017a).

Nagle (2017) documents how this diversity manifests online across the political spectrum in our ongoing culture wars, with the Kekistani right and the anonymous 4chan culture from which it flows, valorizing outrageous transgression and anti-moralizing, clashing with the virtue signaling and witch-hunting of the Social Justice Warrior left on Tumblr. This online clash has real-world consequences, as culture warriors set out to destroy the lives and livelihoods of opposing tribes (Ronson, 2015). While it is widely acknowledged that morally loaded tribalism manifests in measurable discriminatory behavior along ethnic lines, recent empirical research shows such behavior is even more strongly motivated with respect to political differences, especially in the internet age (Iyengar and Westwood, 2015; Lelkes et al., 2017).

Political Norms

Liberal political theory on the left (Rawls, 1993) and right (Hayek, 1988) has long acknowledged the tension between thick ethical norms and the thin political norms of liberalism. Nonetheless, liberalism is prepared to navigate these tensions. Indeed, Hayek explicitly works from the same evolutionary and game-theoretical approach described earlier, and Rawls specifically cites the religious wars of early modern Europe as a principal impetus for political liberalism.

The relevant political argument is familiar enough: Online interaction inevitably produces encounters between communities with significantly different values. When values and practices are strange enough to undermine mutual understanding and respect, these are encounters with aliens. Encounters with aliens cannot function in accordance with the thick norms of one or another ethical community unless that community enjoys sufficient coercive power to enforce these norms. The use of coercive power to enforce norms is politics. Insofar as being an alien is sufficient to incur a coercive response, politics treats aliens as illegal. If a person is declared illegal, that person has no stake in maintaining the political order. Inevitably, a politics of enforcing thick norms results in a negative sum game in which each community has every incentive to control the levers of coercive power by any means necessary, resulting in unproductive conflict, violence, and even all-out war. To avoid this outcome and encourage the positive sum game of social cooperation, liberal politics advises against declaring any persons illegal.

If no person is an illegal alien, then Kekistanis are not illegal aliens.

Social Norms

But constraining moral impulses by politics is not enough. So long as we are still moral animals, politics can only work smoothly insofar as suitable social norms legitimate our political arrangements to our various ethical perspectives as living up to some spirit of the rules worthy of our varied allegiances. By definition, these norms are neither grounded in thick moral language nor are they enforceable principles formally adequate to the rule of law, and this makes them especially hard to articulate precisely.

The paper concludes with proposals to make sense of these social norms, including Alexander’s (2016, 2017b) pragmatic “Be Nice, at least until you can coordinate meanness” and an articulation of the “spirit of free speech” in terms of discerning how well a particular speech act contributes to the discovery of truth by sustaining a well-functioning “marketplace of ideas.”

Above all, I will argue the requisite social norms ask us to recommit ourselves to truth, each as we honestly and sincerely see it, and to speak the truth at once boldly and in a manner consistent with the positive sum game of discovering truth in the marketplace of ideas. The alien may never fully assimilate—perhaps should not assimilate—but we stand to learn much more from one another if we agree that no person is illegal and live up to the full spirit of that insight.

References

Alexander, Scott. “Be Nice, at least until you can coordinate meanness.” Slate Star Codex (blog), 05.02.2016. http://slatestarcodex.com/2016/05/02/be-nice-at-least-until-you-can-coordinate-meanness/ accessed 01.09.2018.

Alexander, Scott. “Different Worlds.” Slate Star Codex (blog), 10.02.2017. http://slatestarcodex.com/2017/10/02/different-worlds/ accessed 01.09.2018.

Alexander, Scott. “Is it possible to have coherent principles around free speech norms?” Slate Star Codex (blog), 08.01.2017. http://slatestarcodex.com/2017/08/01/is-it-possible-to-have-coherent-principles-around-free-speech-norms/ accessed 01.09.2018.

Anderson, Brian. South Park Conservatives. (Washington, DC: Regnery Publishing, 2005)

Churchland, Patricia. Braintrust. (Princeton, NJ: Princeton University Press, 2012)

Emerson, Ralph. “Self-Reliance.” Volume II – Essays, First Series (1841) Online at http://www.rwe.org/ii-self-reliance/ accessed 01.09.2018.

Haidt, Jonathan. The Righteous Mind. (New York: Vintage Books, 2012)

Hayek, Friedrich. The Fatal Conceit. (Chicago: University of Chicago Press, 1988)

Iyengar, Shanto and Westwood, Sean. “Fear and Loathing across Party Lines: New Evidence on Group Polarization.” American Journal of Political Science (59:3), July, 2015, pp 690-707.

Joyce, Richard. The Evolution of Morality. (Cambridge, MA: MIT Press, 2007)

Lelkes, Yphtach and Sood, Gaurav and Iyengar, Shanto. “The Hostile Audience: The Effect of Access to Broadband Internet on Partisan Affect.” American Journal of Political Science (61:1), January, 2017, pp 5-20.

Mackie, J.L. Ethics: Inventing Right and Wrong. (New York: Penguin, 1977)

Marks, Joel. Ethics without Morals: In Defence of Amorality. (Abingdon, UK: Routledge, 2014)

Nagle, Angela. Kill All Normies. (Alresford, UK: Zero Books, 2017)

Nietzsche, Friedrich. Thus Spake Zarathustra (1887). Online at http://nietzsche.holtof.com/Nietzsche_thus_spake_zarathustra/III_55.html accessed 01.09.2018. [Note: The reference here is to the penultimate line of III, 55 from the Thomas Common translation Zarathustra, which seems to have been the original inspiration of the Nietzschean sentiment quoted. However, the exact quote in the text above is not a faithful translation but is the wording found all over online, and it renders the spirit of Zarathustra’s words as a quip more succinctly than a faithful translation. One could argue it is a better translation for this reason. It is at least more suited to printing on a t-shirt, coffee mug, or bumper sticker.]

Rawls, John. Political Liberalism. (New York: Columbia University Press, 1993)

Ridley, Matt. The Origins of Virtue. (New York: Penguin Group, 1996)

Ronson, Jon. So You’ve Been Publicly Shamed. (New York: Riverhead Books, 2015)

Wiki of Kekistan. Alt-Right. http://kekistan.wikia.com/wiki/Alt-Right accessed 01.09.2018.

10:30
Exceptionalism in the Ethics of Humans, Animals and Technology

ABSTRACT. See extended abstract in PDF.

11:00
THE MISGUIDED CONFLATION OF EPISTEMIC ONTOLOGY AND EPISTEMIC ONTICISM IN AGI RESEARCH

ABSTRACT. Field for consideration: Technology Metaethics

Artificial general intelligence (AGI), also referred to as “strong AI,” is understood as machine intelligence that is capable of having consciousness. The nature and dynamics of machine consciousness are defined in a certain manner in current AGI research, and in order to address this metaphysical domain, Martin Heidegger’s investigation of Dasein or “Being” in Being and Time (1927) seems most appropriate. Heidegger makes a point of delineating between ontology and onticism in his work, establishing the difference between “essence” and “what is factually there.” Drawing upon this distinction, I will argue that a fundamental assumption underlying AGI research is a misguided conflation of epistemic ontology and epistemic onticism: that knowing the nature of consciousness results from knowing how consciousness operates and functions. Functionality would be the tangible, conspicuous property that is “factually there.” An analysis of Daniel Dennett’s “Imagining Consciousness” in Consciousness Explained (1992) will serve as my launching point for this argument.

Epistemic onticism is effective since it eschews murky discussions of Cartesian dualism, opting for a functionalist view of human consciousness. Yet this view is rather reductive and exclusive. In other words, an epistemically ontic approach to human consciousness in relation to AGI reduces the human mind merely to the algorithmic tasks it can perform. Moreover, the approach is exclusive in that the primary task attributed to the human mind is “data processing” based on an input-process-output (IPO) model. This model places the human mind and computer software on a level playing field. However, there is more to the human mind; the faculty cannot simply be narrowed down to the IPO model. The human mind therefore cannot be viewed as analogous to computer software, a presumption that frequently emerges in philosophy of mind as well as in the development of AGI.

Moreover, the conflation of epistemic ontology and epistemic onticism is rather concerning. Heidegger drew the ontic-ontological distinction for a particular reason: onticism and ontology constitute existence together, and they do so in the form of a duality, not a unity. When the two are taken as a unity, the ontic inevitably supplants the ontological, so that “function” becomes “nature,” especially in the case of AGI. The conflation’s net result is an epistemically ontic view of consciousness. How the human mind operates and performs can be mimicked, but to think that this mimicry can give birth to an entity akin to the actual human mind is a shaky misconception.

So, I propose that AGI research return to the debacle that Descartes fathered. While the mind-body problem has certainly perplexed many philosophers over the centuries, its analysis is not for naught. Although specific details of responses to the mind-body problem are quite elusive, a dualist perspective provokes a variety of responses whereas an epistemically ontic account is more exclusive to the mind. The mind-body problem can be a virtue in philosophy of mind and computer science when one steps away from the particulars and, instead, regards “the big picture.” What does the gestalt of this problem suggest? There is some entity beyond the mind, such as the body. David Chalmers’ “naturalistic dualism,” for instance, proposes that mental states are caused by physical systems. Or, perhaps this feeling of some entity existing beyond the mind suggests the vast breadth of the mind and its connection to what humans perceive as the body or the environment. The body or the environment could simply be an extension of the mind, as Gottfried Wilhelm Leibniz, a monist who understood the mind and the body as non-causally linked, parallel entities in his theory of pre-established harmony, believed. Ultimately, discussions in response to the mind-body problem see the abilities of the human mind and human consciousness in a broader context than does the epistemically ontic assumption in the AGI status quo.

What does this mean for AGI research going forward? Equating AGI’s potential to the human mind produces a phenomenon in practical ethics known as “Playing God.” There is an invigorating glee that comes with the prospect of creating an entity that could purportedly be conscious as humans are. Although such glee and excitement are understandable, prolonged feelings of triumph can become risky and possibly lead to self-apotheosis. Thus, I argue here for a precautionary principle drawn from one of the most famous works in Western literature. While literature may be dismissed as “fiction,” it is nonetheless reflective of humans’ most innate desires. Mary Shelley’s Frankenstein; or, the Modern Prometheus (1818) tells of a man who failed to anticipate the consequences of his creation, drunk on the power that “Playing God” gave him. The same applies to AGI research if current assumptions continue unmodified.

Once one recognizes that epistemic ontology and epistemic onticism are not the same and should not be conflated, such hubris can be abated. AGI is not human consciousness, nor a mimicry of human consciousness. AGI is a drastically exaggerated manifestation of one facet of human existence, built to fulfill tasks related to information processing and thereby increase productivity in numerous areas of life. AGI is therefore an intelligence independent of human existence; it will be its own entity. That being said, researchers cannot predict how AGI may evolve and thus cannot succumb to the delusion that they are creating consciousness “in their own image.” As Heidegger puts it in “The Question Concerning Technology” (1977), “[t]he will to mastery becomes all the more urgent the more technology threatens to slip from human control” (2). Human control may have its limitations. Initiatives such as the beneficial AI movement endeavor to ensure that AI development, including AGI, aligns with human goals and interests. Yet it is my hope that they also consider the possibility that, in the future, AGI may no longer be “technology” or a tool that humans employ, but an entity that humans cannot keep under their jurisdiction. If that day comes, humans will face the irrevocable horror that Dr. Frankenstein suffered.
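As a toy illustration, not taken from the paper, the input-process-output (IPO) model the abstract criticises can be sketched in a few lines of Python; every name and rule below is hypothetical and invented only for illustration. The sketch shows the reduction at issue: a “mind” collapsed into a pure pipeline from stimulus to response.

# A deliberately crude sketch of the input-process-output (IPO) model.
# All names and rules are hypothetical, invented for illustration only.

def perceive(stimulus: str) -> list[str]:
    """Input: reduce the incoming stimulus to a list of features."""
    return stimulus.lower().split()

def process(features: list[str]) -> str:
    """Process: apply a fixed algorithmic rule to the features."""
    return "greeting" if "hello" in features else "unknown"

def act(category: str) -> str:
    """Output: emit a response fully determined by the category."""
    return {"greeting": "Hello!", "unknown": "?"}[category]

# On the epistemically ontic view criticised above, such a pipeline would
# count as a mind; the abstract's point is that it captures function while
# leaving the ontological question of "essence" untouched.
print(act(process(perceive("Hello there"))))  # -> Hello!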

09:30-11:30 Session 6B: Teaching/Professional Ethics
Location: Room 0.9
09:30
Ethical issues of crowdsourcing in education

ABSTRACT. Crowdsourcing has become a fruitful solution for many activities, promoting the joined power of the masses. Although not formally recognised as an education model, the first steps towards embracing crowdsourcing for formal learning and teaching have recently emerged. This paper intends to warn current and prospective creators of crowd-oriented educational systems about the challenges this approach can trigger. It presents the ethical issues affecting the collaborative content creators, the prospective users, as well as the institutions intending to implement the approach for educational purposes. Even though the revealed barriers might increase the risk of compromising the main educational goals, if carefully designed and implemented, crowdsourcing might become a very helpful and, at the same time, very reliable educational model.

10:00
Experiential Learning Pedagogy: Computing Capstone Class

ABSTRACT. Experiential learning pedagogy can be described as a style where students are “learning by doing” (Dewey, 1915). Hence, experiential learning is participative, interactive, and applied. It also allows, as stated by Mollaei and Rahnama (2012), “contact with the environment, and exposure to processes that are highly variable and uncertain. It involves the whole-person; learning takes place on the affective and behavioral dimensions as well as on the cognitive dimension. The experience needs to be structured to some degree; relevant learning objectives need to be specified and the conduct of the experience needs to be monitored. Students need to evaluate the experience in light of theory and in light of their own feelings”. In 1975, Wolfe and Byrne outlined four steps in an experiential learning framework: Design, Conduct, Evaluation, and Feedback. This paper uses Wolfe and Byrne’s four steps to reflect on computing capstone classes where experiential learning pedagogy was implemented. The paper discusses how learning took place in a classroom environment over three years in which students worked on a real business project. The project discussed in this paper is the Design and Analysis Toolkit for Inventory and Monitoring (DATIM) application, which is being developed for the U.S. Forest Service’s Forest Inventory and Analysis (FIA) program. The main modules of the DATIM project include: 1) the Design Tool for Inventory and Monitoring (DTIM), which is designed to assist users in determining objectives, questions, and metrics for monitoring plans; and 2) the Analysis Tool for Inventory and Monitoring (ATIM), which enables users to analyze vegetation data to derive estimates of current conditions and trends in the forest and surrounding landscapes.

In this on-going research, the authors believe that core skills such as teamwork, communication, professionalism and ethics are part of an experiential learning pedagogy that prepares students to deal with challenges faced in today’s technology-related businesses. Phase one of the on-going research involved a discussion of the importance of including ethics in experiential learning computing capstone pedagogy. During this phase, information was presented on how working with real-world projects can help prepare computing students to meet global challenges that they may face after graduation. This was followed by phase two, which involved a capstone project where computing students worked with a Forest Service application in its development phase. The results and reflections of phase two were published using the National Society for Experiential Education’s (NSEE) theoretical framework to describe the pedagogy of the capstone class along with the development, proposed implementation, feedback from the students, and lessons learned and challenges faced during the project. This paper uses the four steps of Wolfe and Byrne’s framework to understand the students’ perspective on their learning experience. The first step is Design. This upfront effort by the instructor sets the stage for the experience. This is critical because it lays the foundation so that students can view the experience in the desired context. The context in this case was DATIM. The design may include the creation of a timetable for the experience; the important implication is that the experience is structured and closely monitored. The three years of the DATIM project were structured, yet the project provided flexibility for students to identify their milestones. Discussion on how this project was structured and closely monitored will be presented. The second step, Conduct, involves maintaining and controlling the design. Here the authors discuss how the timetable for the classroom pedagogy was created and altered according to student needs. The changes incorporated into the process were made to create a favorable learning environment for the students. The third step, Evaluation, focuses on how and what methods were used by the instructor to evaluate the experience. In this step, the various methods used throughout the three years of capstone classes will be described. Finally, the Feedback step, an almost continuous process from the pre-experience introduction through the final debriefing, will be described along with the various methods and tactics used by the instructor. The contribution of this paper is to provide rich insight and to add to the existing body of studies in which researchers and practitioners believe experiential learning can accelerate learning, increase engagement levels, and enable personalized learning.

10:30
Practical computer ethics - An unsolved puzzle!

ABSTRACT. “The solution often turns out more beautiful than the puzzle” (Richard Dawkins)

This submission debates not how important computer ethics is in education (the puzzle), but the contemporary global solution (not beautiful), through a dialectical process between a practitioner and a theorist. The underlying reasons are: (i) the computer ethics status quo; (ii) ICT development; (iii) the authors’ professional and academic backgrounds; (iv) daily challenges to computer ethics within organisational contexts.

Remembering Professor Simon Rogerson’s keynote speech during ETHICOMP 2014, the future challenges for computer ethics (as a research field) were: (i) ICT pervasiveness as well as exponential growth (appliances); (ii) the retirement of people like himself, Don Gotterbarn or Deborah Johnson, whose contributions to the research field were practical (the practitioner perspective); (iii) allowing young practical academics (“hybrids”) to flourish, i.e., promoting a behavioural change within the computer ethics community (a counterculture to the philosophical dominance in discourse and action); and (iv) action instead of reaction, in order to demonstrate our importance to policy makers, industry and other stakeholders.

ICT pervasiveness, exponential growth and global impact are well documented (see MacArthur & Waughray, 2016). These authors acknowledge the principle of the circular economy as the “prevalence of connectivity, through the Internet of Things, (…) and enable a less resource-dependent economy (…) through a compelling vision on “intelligent assets” creation intersect economy, technology and people within businesses, cities or regions.” (p. 3). Note that ethical and social dilemmas are neglected, which can be an indicator of non-counterculture and inaction (the practitioner argument). However, computer ethics theorists may argue that the Ellen MacArthur Foundation is only one of various initiatives from businesses (a “mathematical impossibility” to influence everyone) (Arrow, 1950); or point to the influential contribution of Luciano Floridi within Google (as a member of Google’s Advisory Council on the Right to be Forgotten) (Oxford Internet Institute, 2014).

Another example in favour of the theorists is Stahl et al.’s contributions (2013, 2014) on responsible research and innovation (RRI). In spite of computer ethics being a reference discourse for ethics concerning innovation, these authors recommend a broader approach through a historical overview of computer ethics underpinnings. More recently (2017), they moved towards a practical process: exploring the multilevel positive impacts on companies’ performance, as well as reflecting upon the potential alignment of internal practices with RRI. This achievement is a positive retort to Professor Simon Rogerson’s challenge: action instead of reaction regarding policy makers and industry.

However, from a practitioner perspective, the computer ethics community continues to show a poor response time and little proactive behaviour, while SMEs (small and medium-sized enterprises) represent 89% of European companies, of which 93% are micro-enterprises employing fewer than 10 people. Technological services (development: traditional, online or mobile; digital marketing; telecommunications, etc.) account for over 60% of non-financial companies. Also, “start-ups and scale-ups are important drivers of economic growth (…) average 9.2 % of firms with at least 10 employees in the EU-28 business economy were high-growth firms in 2014” (p. 7), as well as,

newly founded firms, created by self-employed, have survival rates typically between 30-60 % into the first five years. While data for the surviving firms show that a vast majority of firms that do not substantially increase employment (…), there is a sub-set of up to 20 % of firms that manages to increase employment by more than 5 employees (p. 10)

Moreover, Stahl et al.’s (2017) study acknowledges: (i) three case studies in major ICT companies, which presuppose well-established procedures and other organisational practices, and therefore an entirely different reality when compared to SMEs in terms of organisational strategies and procedures (for instance, development vs. social media services), human resources and stakeholders’ interests; (ii) interviews only with technical staff (development or support), despite different organisational hierarchies. It is important not to disregard the variety of individuals within organisations who interact with information or systems on a daily basis and who do not necessarily develop but nonetheless influence the outcome or users’/consumers’ behaviours (e.g., beta testers, social marketers).

Why are ICT professionals different? Because exponential ICT development imposes: (i) lifelong learning for technical (specific) and soft skills; (ii) constant pressure on professional responsibility (potential and unintended consequences); and (iii) the ethical maturation of computing quality standards or codes of ethics (Anderson et al., 1993; Gotterbarn, 1991). Despite the above statement, teaching computer ethics continues to be a herculean mission! An example can be found in Johnny Soraker’s words:

I thought this was going to be a difficult chore, but didn’t quite anticipate how difficult it was going to be. The problem is that the majority of literature in computer ethics is directed towards other computer ethicists are incredibly boring, labour over points that are self-evident or unimportant, are irrelevant to the actual practice of software engineering (Soraker, 2010).

Johnny’s own words recognise philosophy’s dominance over practice, which Rand Connolly & Alan Fedoruk (2014) demonstrate: (i) ICT practitioners need education in applied ethics; (ii) computer ethics education is theoretically unsound and empirically under-supported; (iii) a focus on explicitly understanding the social contexts of computing, and significantly less on its ethical evaluation, is required. Nevertheless, recent publications continue such a strategy (lacking social context or people’s behavioural/physiological responses) despite the case studies (e.g., Burton et al., 2017).

This hardcore philosophical discourse mixed with purely technical case studies, contrary to experience in socio-professional contexts, raises a serious problem: computer ethics as a research field is not attractive enough for “hybrids” (young researchers with practical purposes or from different backgrounds). Besides, a narrow understanding or lack of practical sense, due to the non-existent business experience of a large majority of the community, acts as a barrier to practitioners (and to the opportunity to influence SMEs and people through proactive behaviour). This is noticeable in several community indicators: (i) the number of researchers under 25 or above 50 years old; (ii) the community’s average age; (iii) the number of practitioners; (iv) the number of publications with practical, industry-application cases (particularly since 2014, the joint venture with CEPE).

Gathering the circular economy, academic discourse and community behaviour, the conclusion is obvious: the challenges addressed by Professor Simon Rogerson are more prominent and powerful than ever! To ensure a prompt answer, a practitioner and a theorist will debate daily organisational issues, problems and dilemmas throughout a dialectical process.

Table 1. Authors’ academic and professional background

Co-Author | Academic Background | Professional Background
1st | BEc; MSc E-Business/IT; MBA; PhD Computer Science (1) | Lecturer (2); Key Account Manager, Finance and ICT (3); Entrepreneur (4)
2nd | BSc; PhD Cultural Sciences | Lecturer/Researcher (5)

Legend: 1. Not concluded | 2. 10 years’ experience (5 on computer ethics) | 3. 5 years’ experience | 4. 20 years’ experience (7 on ICT development and digital marketing) | 5. 10 years’ experience (4 on computer ethics)

Note: additional information about the authors will be in the paper. The aim is to ensure anonymity during the review process

References

Anderson, R. et al. (1993). Using the new ACM code of ethics in decision making. Communications of the ACM, 36(2), 98-107.

Arrow, K. J. (1950). A difficulty in the concept of social welfare. The Journal of Political Economy, 58(4), 328-346.

Association for Computing Machinery. (2018). ACM code of ethics and professional conduct. ACM. Available at http://www.acm.org/about/code-of-ethics (accessed 8 January 2018).

Burton, E. (2017). Ethical considerations in artificial intelligence courses. Cornell University Library. Available at https://arxiv.org/abs/1701.07769 (accessed 12 January 2018).

Connolly, R. & Fedoruk, A. (2014). Why computing needs to go beyond good and evil impacts. In E. Buchanan et al. (Eds.), ETHICOMP 2014 (Paper: pen). Paris: University of Pierre and Marie Curie.

Gotterbarn, D. (1991). Computer ethics: Responsibility regained. The Phi Beta Kappa Journal, 71, 26-31.

Institute of Electrical and Electronics Engineers (2018). IEEE code of ethics. IEEE. Available at http://www.ieee.org/about/corporate/governance/p7-8.html (accessed 8 January 2018).

MacArthur, D. E. & Waughray, D. (2016). Intelligent assets: Unlocking the circular economy potential.

Oxford Internet Institute. (2014). Luciano Floridi appointed to Google’s Advisory Council on the Right to be Forgotten. Oxford Internet Institute. Available at https://www.oii.ox.ac.uk/news/releases/luciano-floridi-appointed-to-googles-advisory-council-on-the-right-to-be-forgotten/ (accessed 15 January 2018).

Rogerson, S. (2014, June 25). The future of computer ethics. Keynote presented at ETHICOMP 2014, University Pierre and Marie Curie, Paris.

Soraker, J. (2010). Designing a computer ethics course from scratch. Available at http://www.soraker.com/designing-a-computer-ethics-course-from-scratch/ (accessed 4 January 2018).

Stahl, B. C. (2013). Responsible research and innovation: The role of privacy in an emerging framework. Science and Public Policy, 40(6), 708-716.

Stahl, B. C. et al. (2014). From computer ethics to responsible research and innovation in ICT: The transition of reference discourses informing ethics-related research in information systems. Information & Management, 51(6), 810-818.

Stahl, B. C. et al. (2017). The Responsible Research and Innovation (RRI) maturity model: Linking theory and practice. Sustainability, 9(1036), 1-19.

09:30-11:30 Session 6C: Learning from Narratives: Historical and Fictional
Location: Room 1.1
09:30
Early history of computers as a tale of machines as omnipotent instruments of power. Hopes, fears and actual change in administration, politics and society from the 1960s to 1980s.

ABSTRACT. 1. Introduction

The proposed paper is very much influenced by Joachim Radkau’s book Die Geschichte der Zukunft (2017), in which he shows in a factual and detailed manner that the future is always different from what one hopes for or fears. If at all, the transformations of the past into the present and the future only appear conclusive, explainable and even somehow meaningful in retrospect. This is presumably due to the fact that we reconstruct such lines of development in the light of our present knowledge. However, from the point of view of those who acted in the past with a view to the future, it is usually true that things do not turn out as expected.

The thematic background will be a tension, which can be illustrated quite well by comparing pop-cultural content with scientific debates. For example, Dutton and Danzinger (1982:1, italics in the original) write about the situation in the 1960s to 1980s:

"Computers and electronic data-processing systems are major tools of modern organizations and components of many other technologies. Occa-sionally a dramatic image of the computer has captured the public’s imagi-nation, as did the uncontrolled and threating computers in the films 2001: A Space Odyssey, The Demon Seed, and Colossus: The FORBIN Project."

Since Dutton and Danzinger’s text was published in 1982 and many films of the remaining 1980s could not be considered by the authors, one could add Tron (1982), War Games (1983) or Terminator (1984); if one were to add Welt am Draht (1973) and Logan’s Run (1976) from the 1970s, a considerable number of films would have been identified in which wild and/or power-hungry computers play a decisive role. If such an enumeration were supplemented by those computers from science fiction literature that play an at least ambivalent role in the respective stories, this list would become quite long.

In the popular media, computers have certainly had an ambivalent image since their invention, because they are often portrayed in such a way that they free themselves from people's power and finally control or even subjugate their former masters (e.g. Ower 1974). Even if such extremes were not claimed in factual debates beyond those in mass media, computer technology is often viewed critically (e.g. Genrich 1975). This is probably most evident in Joseph Weizenbaum’s vehement criticism in Computer Power and Human Reason. From Judgement to Calculation (1976). Such almost apocalyptic views, in turn, are counterbalanced by the hope that computers could be a tool for making administrative or production processes more efficient, cost-effective and humane. As Radkau points out, however, much more far-reaching hopes have been placed on the use of computers, e.g. the revitalization of democracy through computer-aided participation (e.g. Krauch 1972).

What may now sound like determinism with regard to the effects of the use of technology in general and computers in particular will, however, be described in what follows in a much more complex way, since it will become clear that computer development and social change are interwoven with each other without a clear direction of action being discernible. To put it another way: the respective social conditions enable, perhaps even promote, the development of technology in general and computer technology in particular, but vice versa it is also true that the available (computer) technology promotes social change - although this does not necessarily have to be positive in retrospect. It will also be shown that the possibilities of computer technology can help to stop or at least slow down social change. What will also become apparent, however, is that even in the early days of computer technology, the hoped-for and/or feared opportunities and repercussions of new technologies like computers led to social debates that could then influence the trajectories of technological development.

2. Cybernetics, computers and computer networks in the Soviet Union: Between ideology and necessity

In what follows, one example shall outline what will be described in much more detail in the full paper, with regard to the situation in both the West and the East from the 1960s to the 1980s.

Using the example of networked computers, it is easy to see how technology could contribute to social change, but also how political forces tried to preserve the status quo by preventing technical developments. Science, technology, society and politics do not progress independently of each other but are interwoven – in which way and in which direction effects are exerted on each other can probably only be decided by arbitrary demarcations in relation to the period of time that one wants to consider.

For the Soviet Union, the use of networked computer systems was nothing fundamentally new. Gerovitch (2008:338) names a number of networked computer systems that served various military purposes: an airspace surveillance and air defence system, a missile defence system and another for space surveillance. Moreover, intensive efforts were made at a very early stage to equip scientific institutions with networked computers so that the corresponding research could be supported efficiently (Shirikov 2011). The networking idea soon met with massive resistance from very different directions, the origin of which can be summed up in three words: loss of control – at all levels of the political and economic hierarchy. First, there were economists who welcomed or deliberately sought to bring about this loss of control by seeking to integrate market-economy elements into the Soviet command economy in order to increase efficiency, but this would have contributed to the loss of political control. Second, the managers of state-owned enterprises resisted such a system because they were concerned that the inefficiency, waste of resources and presumably corruption in their businesses would become visible. Third, the bureaucratic apparatus feared not only a loss of control, but also the (partial) loss of its raison d'être, since many of the tasks of this apparatus with its privileged jobs would then have been taken over by the computer system.

3. Conclusion

Such a reform, made possible or even driven forward by technology, would not have been in the interest of large parts of the Soviet political, bureaucratic and economic elites and was therefore prevented – and thus ultimately also the associated technical development itself. Whether one wants to evaluate this positively or negatively, the decisive factor is that there was no automatism. Technology was not realized simply because it bore the possibilities of social change. Therefore, the idea that technology is a deterministic driver of social change is plainly wrong; similarities to the supercomputers in the films and books of science fiction mentioned above do not even begin to exist – the imagination of filmmakers and authors does not find any correlate in the reality of the early days of computer technology. For today, the lesson that might be learned, not only from the above-mentioned example, is that if there are determined people who want to prevent or promote a certain social development by promoting or slowing down the use of technology, then this undertaking can actually succeed – an apocalyptic view of technology can be proven wrong. But at the same time there is no reason to believe that it would be enough to have networked computers and the like and democracy will flourish – the euphoric views, then and now, with regard to the democratizing effects of ICT are also plainly wrong. There is no such thing as a deterministic development of technology and society.

4. References

Dutton, W.H., Danzinger, J.N., 1982. Computers and politics, in: Danzinger, J.N., Dutton, W.H., Kling, R., Kraemer, K.L. (eds.), Computers and politics: High technology in American local governments. Columbia University Press, New York, p. 1-21.

Genrich, H.J., 1975. Belaestigung der Menschen durch Computer, in: Mülbacher, J. (Hrsg.), GI — 5. Jahrestagung: Dortmund, 8.–10. Oktober 1975. Springer, Berlin, Heidelberg, p. 94–106.

Gerovitch, S., 2008. InterNyet: Why the Soviet Union did not build a nationwide computer network. History and Technology 24, p. 335–350.

Krauch, H., 1972. Computer-Demokratie. VDI, Düsseldorf.

Ower, J.B., 1974. Manacle-forged minds: Two images of the computer in Science-Fiction. Diogenes 22, p. 47–61.

Radkau, J., 2017. Geschichte der Zukunft: Prognosen, Visionen, Irrungen in Deutschland von 1945 bis heute. Hanser, München.

Shirikov, V.P., 2011. Distributed systems for data handling, in: Impagliazzo, J., Proydakov, E. (eds.), Perspectives on Soviet and Russian Computing: First IFIP WG 9.7 Conference, SoRuCom 2006, Petrozavodsk, Russia, July 3-7, 2006, Revised Selected Papers. Springer, Berlin, Heidelberg, p. 36–45.

Weizenbaum, J., 1976. Computer Power and Human Reason. From Judgement to Calculation. Freeman, San Francisco.

10:00
Exploring the Ethics of Diversity Initiatives based upon EPSRC’S Diversity days

ABSTRACT. As part of a recognition that Equality and Diversity need to be improved within STEM, there has been a recent impetus to conduct a range of Diversity Initiatives. One example of a government-driven initiative is the Engineering and Physical Sciences Research Council’s (“the EPSRC”) Inclusion Matters call, in which £5 million was earmarked for initiatives; the EPSRC said they were “looking to show leadership in changing the culture, practices and makeup of the research community to improve equality, diversity and inclusion”. As part of this process the EPSRC held two (invitation-only) sessions which were audio recorded and transcribed: these involved speeches by their CEO (Philip Nelson OBE), questions from the audience (responded to by EPSRC staff and the CEO), as well as presentations from learned societies. The author has obtained a copy of the transcripts of these sessions under the Freedom of Information Act (2000): in effect, this makes them a public document. The transcripts happened to provide a particularly detailed set of examples of harmful myths driving equality strategies which were, in effect, promoted to universities around the United Kingdom. This paper discusses some of these myths and illustrates why ‘diversity schemes’ might not always advance equality.

10:30
Towards a Chronological Taxonomy of Tourism Technology: an Ethical Perspective

ABSTRACT. As tourism has evolved, it has continued to utilise technological advances, and many ethical issues appear to be associated with technology usage in tourism’s evolution. This paper undertakes a preliminary analysis of this evolution, linking aspects of tourism with technological advances and highlighting potential ethical issues related to such links. Thomas Cook is used as an illustrative case study. The paper redefines the term tourism technology so that it fits more appropriately over time, and develops a new structure for the collected data which offers considerable potential for developing greater understanding of this hitherto poorly investigated area.

11:00
First snow of summer

ABSTRACT. It is the end of summer. Snow is falling. Both Lumi and Pyry can see it even if they live in parallel universes.

11:30-12:00 Break and Refreshments
12:00-13:15 Session 7: Agnieszka Landowska Keynote: Uncertainty in emotion recognition

Uncertainty in emotion recognition

The research field of affective computing is gaining attention and applications. Decisions are already made on the basis of affective computing solutions, and decisions on sales or marketing cost money. Perhaps more decisions influencing people’s lives are to come in the next decades. Can people trust affective computing? Are people aware of the uncertainty related to automatic emotion recognition?

Affective computing researchers are well familiar with the limitations of the domain. The methods provided so far are, like all artificial intelligence algorithms, susceptible to noise, mislabeled data and changing contextual circumstances. Emotional expressions, on which the algorithms are based, are highly individual and may even change depending on mood. In-the-wild conditions make the results even less reliable. All the mentioned factors lead to uncertainty in the analysis of human affect. Uncertainty can be described as a state of the analyst who cannot foresee a phenomenon, due to intrinsic variability of the phenomenon itself or to a lack of knowledge and information. Still, most research studies in affective computing concentrate on improving accuracy rather than the reliability of the results. Commercially available emotion recognition solutions so far provide no information on the confidence of the result. The quantification and characterization of the resulting output uncertainty is an important matter when results are used to guide decision making.
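To make the point concrete, here is a minimal sketch in Python, not any vendor’s API or the keynote’s own method: it assumes a hypothetical classifier that emits raw per-emotion scores, converts them into probabilities, and reports predictive entropy as an explicit confidence measure, abstaining above an arbitrary, assumed threshold rather than silently returning a label.

# A minimal sketch of reporting uncertainty alongside an emotion label.
# The label set, the score vectors and the 0.5-bit abstention threshold
# are illustrative assumptions, not part of any existing system.
import math

EMOTIONS = ["anger", "joy", "sadness", "surprise"]  # hypothetical label set

def softmax(scores):
    """Map raw classifier scores to a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(probs):
    """Shannon entropy in bits: 0 = fully certain, log2(n) = maximal doubt."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def recognise(scores, max_entropy_bits=0.5):
    """Return (label, probability, entropy); abstain when too uncertain."""
    probs = softmax(scores)
    entropy = predictive_entropy(probs)
    best = max(range(len(probs)), key=probs.__getitem__)
    if entropy > max_entropy_bits:
        return ("uncertain", probs[best], entropy)  # defer the decision
    return (EMOTIONS[best], probs[best], entropy)

print(recognise([6.0, 0.2, 0.1, 0.4]))  # sharp scores: confident 'anger'
print(recognise([1.1, 1.0, 0.9, 1.0]))  # near-uniform scores: abstains

Entropy is only one possible choice; calibrated probabilities or disagreement across an ensemble would fill the same role of exposing, rather than hiding, the uncertainty the keynote describes.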

Location: Theatre 0.4
13:15-14:30 Lunch
14:30-16:00 Session 8A: Cyborgs
Location: Room 0.8
14:30
Cyborg Ethics in Spain: A quantitative and qualitative Study

ABSTRACT. Information and Communication Technologies have been transforming our society over recent decades. The way we work, the way we communicate and the way we live are different. ICT is involved in a “never ending” innovation process in which cost and capacity are increasing exponentially. But now we need to cope not only with the permanent transformation of ICT, but with the emergent integration of ICT into our bodies. Technological implants are a major challenge for society and even for the natural concept of what a human being is. The merging of technologies and persons raises many questions that should be asked, and we consider that ethics should be the framework within which all development of cyborg technologies is done. For instance, if my brain functions integrate intelligent technologies, both external and implantable, how should I be treated: like a machine or like a person? (Gillett, 2006).

A cyborg can be defined as a human being with an electronic device implanted in or permanently attached to their body for the purpose of enhancing their individual senses or abilities beyond the occasional use of tools (Park, 2014). Cyborgs are a realistically possible technology nowadays (Warwick, 2014).

In order to explore the situation, a team was created in 2016 as a “Cross Culture Cyborg Observatory”, named C3O. We are defining a qualitative study to analyze ethical perspectives. We will analyze the Spanish situation taking into consideration ethical factors of implanted technologies such as integrity, control, security or social inclusion (Park, 2014). We will try to analyze perceptions of the opinion that it is perfectly acceptable to upgrade humans with all the enhanced capabilities, creating a cyborg (Warwick, 2003). This research will be coordinated with an international team in the Cross Cultural Cyborg Observatory.

15:00
Cyborg athletes or technodoping: How far can people become cyborgs to play sports?

ABSTRACT. This study deals with the social acceptance of the usage of wearable and/or implantable electronic devices for enjoying sports in Japan, as part of a cross-cultural study on cyborg ethics. Smart watches or wrist bands have been used by many professional athletes to make their training more effective. Wearable GPS tracking devices have widely been used in rugby football matches to measure each player's work rate, which apprises coaching staff of opportune moments for substitution. Other wearable/implantable electronic devices, such as smart contact lenses with built-in cameras (Elgan, 2016; Wong, 2016) and brain chips which enhance, for example, mental agility and memory, may soon be utilised in sports scenes. Will devices which enhance athletes' innate human exercise capacities or give them new sporting abilities be socially accepted as an important technology to evolve sports, or be rejected as an unwelcome one that spoils the spirit of sports? To what extent can athletes use such devices, or become cyborgs, to enjoy sports? Would the use of those devices to create a new breed of sport, which everyone can enjoy regardless of whether they are able-bodied or disabled, be socially desirable? To investigate these questions, the authors will conduct questionnaire surveys of athletes and interviews with experts in various research areas, including sports science and healthcare, in addition to literature and case studies. According to the outcome of a questionnaire survey of university students the authors conducted in November and December 2016 in Japan, whereas the usage of implantable medical devices for their intended purpose was approved by respondents, male respondents displayed a neutral attitude towards their use for enhancing athletic performance and female respondents tended to slightly disagree with it (Murata et al., 2017). This may show that the use of implantable electronic devices, or insideables, by athletes would be questioned in Japan.

In any sport, ensuring fairness in competition is the top priority. However, it is difficult to draw a clear line between fairness and unfairness in sports, due partly to the development and widespread usage of various kinds of sports gear, training devices and regimens, nutritional supplements, medical implants and prosthetics for athletes. Access to these often becomes a decisive factor in determining who is the better athlete in a particular sport.

In his seminal book published in 1958, Caillois (2001: pp. 9-10) defined play, including sports (agôn), as an activity which is essentially:

1. Free: play is not obligatory; if it were, it would at once lose its attractive and joyous quality as diversion;
2. Separate: circumscribed within limits of space and time, defined and fixed in advance;
3. Uncertain: the course of which cannot be determined, nor the results attained beforehand, some latitude for innovations being left to the player's initiative;
4. Unproductive: creating neither goods, nor wealth, nor new elements of any kind; and, except for the exchange of property among the players, ending in a situation identical to that prevailing at the beginning of the game;
5. Governed by rules: under conventions that suspend ordinary laws, and for the moment establish new legislation, which alone counts;
6. Make-believe: accompanied by a special awareness of a second reality or of a free unreality, as against real life.

However, the progress of professionalisation in many sports and the great commercial success of sports events and business seem to have changed the nature of sports to a greater or lesser degree, especially as to "unproductiveness". Sports can now provide players and relevant parties, including coaches and sports-related companies, with opportunities to enjoy financial success and social prestige. This situation can abet athletes, and their supporters, in wrongdoing such as drug doping. When sports are exploited to boost the prestige of a country, top athletes in that country may be strongly encouraged to take part in a state-sponsored doping programme and turn to unfair acts to strengthen their physical and/or psychological competitive ability, while making sure nobody finds out about those acts.

On the other hand, the development of sports technologies has contributed to better athlete performance, higher competition scores and broken records, while sometimes provoking controversies over the use of those technologies. For example, the LZR Racer, a line of competition full-body swimsuits launched in February 2008 by the UK company Speedo, used a new material fabric and a unique design which significantly reduced the drag of the swimmers and improved their buoyancy; the suits were worn by almost all top-class swimmers in the world at the Beijing Olympics held in August 2008 and the 2009 World Swimming Championships. At the Olympics in Beijing, 94% of the gold medals in swimming races were won, and 23 out of the 25 world records broken were achieved, by LZR swimmers. However, opponents criticised the usage of the swimsuits, calling them a technical aid (Baker, 2015) or technological doping (BBC Sport, 2009), because they considered that the suits provided the swimmers with an unfair advantage.

The use of assistive technology by disabled athletes in able-bodied sports has been a subject of intense debate. One of the most famous cases is that of Oscar Pistorius, the South African "Blade Runner" who ran in the 400 metres and the 4x400m relay at the 2012 London Olympics using two energy-storage-and-return (ESR) leg prostheses. The point at issue in the Pistorius case was whether ESR technology provided the sprinter with an unfair advantage. Another prominent case is that of the German "Blade Jumper" Markus Rehm. Although he wished to compete in the able-bodied long jump event, he was not allowed to do so because the German Athletics Association considered that his prosthesis provided him with an unfair advantage (Dyer, 2015; Davies, 2016).

In an interview conducted by one of the authors (Fukuta) in December 2017, the Japanese sports scientist Kazuhiko Sawai pointed out that the notion of fairness in sports is determined not by an absolute standard but by a consensus among concerned parties, which inevitably reflects the conditions of the times and the available technologies. If a large majority of concerned parties consider the use of a technology in sports competitions to be within the range of acceptance, he said, that use is fair. He also mentioned that Pistorius was allowed to participate in the Olympic Games because his personal best time was not good enough to contend for a championship (thus, his use of prostheses seemed fair), whereas Rehm was not, owing to his outstanding long-jump record (thus, his prosthesis seemed to provide him with an unfair advantage).

In Japan, the Superhuman Sports Society was established in June 2015. It declares that the aim of the society is to redesign sports with modern technologies based on three superhuman sports principles: 1. All participants can enjoy sports; 2. Sports continue to evolve with technology; 3. All spectators can enjoy sports. Based on human augmentation engineering, which complements and augments human physical abilities, and on human-machine integration technology, a new field of sports is to be created in which "superhumans" with augmented abilities "overcome personal barriers" of physical differences, age, or disabilities, and freely compete with each other using these technologies (http://superhuman-sports.org/; accessed on 29 December 2017). Following these ideas, the use of wearables/insideables in sports to provide people with an equal opportunity to enjoy sports, and to realise parity in sports play regardless of whether people are able-bodied or not, would be socially accepted as an attempt to protect people's right to sports.

References

Baker, D. A. (2015). The "second place" problem: assistive technology in sports and (re)constructing normal. Science and Engineering Ethics, 22(1), pp. 93-110.

BBC Sport (2009). Fina extends swimsuit regulations. BBC Sport, 19 March. Available at http://news.bbc.co.uk/sport2/hi/olympic_games/7944084.stm (accessed on 27 December 2017).

Caillois, R. (2001). Man, Play and Games. University of Illinois Press, Urbana and Chicago, IL (translated by Barash, M.).

Davies, G. A. (2016). Markus Rehm puts Olympic disappointment to one side and hopes retaining title will boost London bid. The Telegraph, 16 September. Available at http://www.telegraph.co.uk/paralympic-sport/2016/09/16/markus-rehm-puts-olympic-disappointment-to-one-side-and-hopes-re/ (accessed on 27 December 2017).

Dyer, B. (2015). The controversy of sports technology: a systematic review. SpringerPlus, 4:524. Available at https://doi.org/10.1186/s40064-015-1331-x (accessed on 29 December 2017).

Elgan, M. (2016). Why a smart contact lens is the ultimate wearable. Computerworld, 9 May. Available at https://www.computerworld.com/article/3066870/wearables/why-a-smart-contact-lens-is-the-ultimate-wearable.html (accessed on 29 December 2017).

Murata, K., Adams, A. A., Fukuta, Y., Orito, Y., Arias-Oliva, M. and Pelegrin-Borondo, J. (2017). From a science fiction to reality: cyborg ethics in Japan. Computers and Society, 47(3), pp. 72-85.

Wong, R. (2016). Samsung patents smart contact lenses with a built-in camera. Mashable Asia, 6 April. Available at http://mashable.com/2016/04/05/samsung-smart-contact-lenses-patent/?utm_cid=mash-com-fb-pete-link#JDBdfZVbcaqB (accessed on 29 December 2017).

15:30
Cross Cultural Cyborg: An International Analysis

ABSTRACT. The cyborg phenomenon has been seen as something belonging to science fiction, watched in films such as The Terminator, Blade Runner or Minority Report, rather than as a real technology that will very soon be a fact (Warwick, 2014). Technology has always been part of human development. From the stone tool to the complicated smartphone, technology is a fundamental part of society and human evolution. But nowadays cyborg technology, understood as any information technology implanted in a human that lets them increase their innate capabilities, is among us. For instance, in 2011 more than 300,000 people had a cochlear implant, which lets deaf people hear satisfactorily in most cases (Park, 2014). Cyborgs are here, and they have come to stay among us.

But the challenges that technological implants pose are huge. Today, we can modify and create our own body according to our personal desires, but what is the limit of this achievement? (Palese, 2012). Implants for medical and aesthetic purposes are widely accepted, but hacker groups have started to explore new uses of implants, such as magnets to pick up electromagnetic fields, and RFID tags and NFC chips to connect implants to mobile devices (Park, 2014). And the creation of superhumans who increase their innate capabilities with implanted technologies is almost a fact. This has enormous implications for the ethical values of both humans and cyborgs (Warwick, 2003).

Within this framework, our proposal is to conduct an exploratory study of the main challenges that should be considered from an ethical point of view. Qualitative research is seen as appropriate when the research objective is exploratory. The number and influence of qualitative research articles has been growing across top-tier management journals (Reinecke, Arnold, & Palazzo, 2016). Interviews have been a key primary data source for research (Arsel, 2017). Interviewing is a growing method because it gives voice to people's lives and their perceptions of experiences important to them (Belk, Fischer, and Kozinets 2013).

We will analyze the situation from a cross-cultural cyborg perspective, taking into consideration data collected with a similar methodology and similar interviewee profiles in Japan, China, India, Spain, Germany, the USA and Mexico.

14:30-16:00 Session 8B: Ethical Issues with Algorithms
Location: Room 0.9
14:30
Ethics of Algorithms, Formal Methods, and Abstract Model Theory

ABSTRACT. Big Data and their processing is one of the biggest concerns of our age, affecting a wide range of activities from science to business, and from governance to social policies [1]. As a consequence, the study of the ethical dimension of the various algorithms doing the processing is imperative, if we do not want to be caught by surprise by their ethical implications. In addition, algorithms gradually permeate every aspect of our life, mediating social processes, business transactions, governmental decisions, as well as how we perceive, understand, and interact among ourselves and with the environment [2]. Thus, the study of the ethics of algorithms is of crucial importance, both for our current well-being and as insurance against negative future developments. We suggest a promising approach to the study of algorithm ethics, combining formal methods with abstract model theory.

Computer ethics is the recently emergent research field responsible for the study of the ethics of algorithms. As Norbert Wiener, who is considered the father of the field, predicted as early as 1945, the invention of computing machines at the beginning of the twentieth century gave rise to a whole new reality, the so-called Information Age [3]. The gradual domination by computers, and computer algorithms, of most fields of human activity is a direct result of their unique nature; as Moor has pointed out, computers are logically malleable and informationally enriching [4]. They are logically malleable because they can be manipulated to perform any task that accepts an input and returns an output through a process involving logical operations. And they are informationally enriching because when an activity becomes computerized, it often also becomes informationalized: "the processing of information becomes a crucial ingredient in performing and understanding the activities themselves. When this happens both the activities and the conceptions of the activities become informationally enriched." [4] Thus, not only do computer algorithms mediate a constantly increasing number of human activities, but by doing so they transform these activities in an unprecedented way.

Algorithms not only decide how data will be processed, but may advise, if not decide, how they should be interpreted and what actions should be taken as a result. Profiling and classification algorithms determine how individuals and groups are shaped and managed [1]. Recommendation systems give users directions about when and how to exercise, what to buy, which route to take, and whom to contact [5]. Data mining algorithms are said to show promise in helping make sense of emerging streams of behavioural data generated by the 'Internet of Things' [6]. Online service providers continue to mediate how information is accessed with personalization and filtering algorithms [7]. Machine learning algorithms automatically identify misleading, biased, or inaccurate knowledge at the point of creation (e.g. Wikipedia's Objective Revision Evaluation Service). As these examples suggest, algorithms increasingly mediate how we perceive and understand our environments and interact with them and each other.

The use of formal methods in the study of properties of algorithms used in Big Data analysis is not at all new. Formal specification and verification of computer software is a growing area with many potential applications to data science. Techniques from Formal Methods (FM), such as model checking and theorem proving, can be used to verify methodologies for Big Data analysis and decision-making. Nevertheless, not much work has been done on the ethical implications of algorithms implementing such methodologies and decision-making, on the potential applications of formal methods to data handling, and on how these relate to ongoing debates such as the open/free software debate. We believe that the formal study of the ethical issues related to the design and deployment of such complex algorithms can shed new light on data analytics.

The main advantage of using formal methods (FMs) for studying the ethical aspects of algorithms is the guaranteed higher level of correctness of information, and the "localization" of inconsistencies. Each FM has its own semantics and proof system, depending on the logical system(s) underlying it. An example is deontic-based FMs providing a description and informal analysis of ethical discourse. Ideally, FMs for ethics should use deontic, epistemic and action logic as their underlying logical systems (examples of corresponding logical statements: A lets B know that p, A shares his knowledge that p with B, and A informs B about p). We have developed a unified logical framework combining these logics using the theory of institutions [8]. The latter is a form of abstract model theory developed by J. Goguen and R. Burstall in the 1980s to tackle the population explosion of different logical systems used at the time in computer science [9], [10]. The concept of institution formalizes the informal notion of a logical system using category theory and an extended form of Tarski's semantic definition of truth to model the satisfaction relation between models and sentences. Thus, formal methods for computer ethics can use as their underlying logic the framework we have constructed with the use of abstract algebraic techniques initially applied to specification and programming, since it combines all the necessary logics. We explain how exactly such formal methods can be used, why they are a promising approach to the study of algorithm ethics, and we indicate some first applications for Big Data in medicine.
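For the reader's convenience, the core definition can be stated compactly (following Goguen and Burstall [10]; the rendering below is a standard presentation, not the authors' own notation):

```latex
% An institution consists of a category of signatures, sentence and
% model functors, and a family of satisfaction relations:
\[
  \mathcal{I} \;=\; (\mathbf{Sign},\ \mathrm{Sen},\ \mathrm{Mod},\ \models)
\]
% Satisfaction condition ("truth is invariant under change of notation"):
% for every signature morphism \sigma : \Sigma \to \Sigma', every
% \Sigma-sentence \varphi, and every \Sigma'-model M',
\[
  \mathrm{Mod}(\sigma)(M') \models_{\Sigma} \varphi
  \quad\Longleftrightarrow\quad
  M' \models_{\Sigma'} \mathrm{Sen}(\sigma)(\varphi)
\]
```

A formal method whose underlying logic is organized as an institution can thus move between deontic, epistemic and action signatures while the satisfaction relation, and hence verification results, remain stable.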

Bibliography

1. L. Floridi (2012), Big Data and Their Epistemological Challenge, Philos. Technol. 25: pp. 435-437.

2. B. D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter and L. Floridi (2016), The Ethics of Algorithms: Mapping the Debate, Big Data & Society: pp. 1-21.

3. Wiener N. (1948), Cybernetics: Or Control and Communication in the Animal and the Machine, MIT Press.

4. Moor J. H. (1985), What is Computer Ethics?, Metaphilosophy 16 (4), pp. 266-275.

5. de Vries K. (2010), Identity, profiling algorithms and a world of ambient intelligence, Ethics and Information Technology 12(1): pp. 71-85.

6. Portmess L. and Tower S. (2014), Data barns, ambient intelligence and cloud computing: The tacit epistemology and linguistic representation of Big Data, Ethics and Information Technology 17(1): pp. 1-9.

7. Newell S. and Marabelli M. (2015), Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of 'datification', The Journal of Strategic Information Systems 24(1): pp. 3-14.

8. Dimarogkona M. and Stefaneas P., Institutions and Computer Ethics, Cryptography, Cyber-Security and Information Warfare (Ed. Nicholas J. Darras), Nova, forthcoming.

9. Burstall R. and Goguen J., The Semantics of CLEAR, A Specification Language, Proceedings of the Abstract Software Specifications, pp. 292-332, Springer, 1979.

10. Goguen J. and Burstall R., Institutions: Abstract Model Theory for Specification and Programming, Journal of the Association for Computing Machinery 39 (1), pp. 95-146, 1992.

15:00
The disciplinary power of algorithms: Foucauldian panopticism and normalization revisited

ABSTRACT. Algorithms pervade our lives. However, they do not just objectively calculate particular states of affairs; they also have a performative function in the practices in which they are embedded (cf. Introna 2016). Meanwhile, the algorithms in force are various: they play chess or go, guide self-steering cars, steer our search for information or a hotel, and determine our request for a mortgage. This article explores the performativity of two particular types of algorithms. On the one hand, those that rate the performances of hotels, restaurants, airlines and the like; they assist us in choosing the best good or service that we are looking for. On the other hand, the algorithms that assist banks in the decision to grant a loan or a particular insurance, and assist the authorities in detecting tax fraud, money laundering and/or terrorist activities, and the like. The latter algorithms aim at predicting the present or future behaviour of the subjects under consideration.

This article explores their performativity and shows that they are quite distinct in their performances. This thesis is developed by taking recourse to the works of Michel Foucault. In particular I rely on his Discipline and Punish (1975/1977), an analysis of the development of the prison regime from the 17th century onwards. He analyses how discipline entered the prisons, and subsequently the army, schools, hospitals, and factories. Their subjects are disciplined by means of a division of tasks, regulations, timetables, exercises, inspections, examinations, and the like. Furthermore, sanctions and rewards become part and parcel of the disciplinary regime. Foucault stresses that these modalities of power did not overturn the institutions, but just crept into them and strengthened them (Foucault 1977: 215-216).

The disciplinary gaze is exercised along the panoptic principle (Foucault 1977: 195-230): one is always in full light and visible, and must assume to be watched all the time. The archetypical design for prisons has been sketched by Bentham: the Panopticon. A central guard can observe all inmates all of the time; these can neither see each other nor whether the guard is actually observing them. But one must be careful not to equate the panoptic design with Bentham’s proposals: it can have many other applications, in other guises, in hospitals, schools, and factories. Panopticism is a utopian idea; ‘in fact a figure of political technology that may and must be detached from any specific use’ (Foucault 1977: 205).

All gathered observations serve a purpose: classifying the subjects on one homogeneous scale (or several). Examinations, exercises, the observance of rules and regulations, and the morality of behaviour: all can be taken into account. The scale describes and normalizes at the same time. The subjects involved are drawn into a comparative field which, by its very existence, exerts a pressure to conform. The scale is constructed 'to function as a minimal threshold, as an average to be respected or as an optimum towards which one must move' (Foucault 1977: 183). As Foucault summarizes the process: the scale 'compares, differentiates, hierarchizes, homogenizes, excludes. In short, it normalizes' (Foucault 1977: 183).

After this excursion to Foucault we return to the algorithms under consideration. What about their performances? First, the rating algorithms. The institutions concerned mobilize a community of users who are invited to share their evaluations of the products/services involved; ratings of the performances of hotels, restaurants, and airlines are solicited. What happens here is that, in the first place, a Panopticon is created, with users as principals and service providers as agents: a kind of inverted pyramid with users on top. The scores obtained are usually incorporated in one continuous scale of merit. This scale obviously creates a normative pressure; normalization in the sense of Foucault is taking place. This uproots the dynamics in the field. Providers of a similar service are drawn into a massive comparative field from which there is no escape (unless they prefer to marginalize their services). Those on top prosper; those at the bottom of the scale suffer. As a result, agents may try to manipulate the reviewing process.
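A minimal sketch of such a scale of merit (names and scores are invented; real platforms fold further factors such as recency and review volume into proprietary formulas):

```python
# Sketch: folding user ratings into one continuous "scale of merit".
# Hotel names and scores are hypothetical, for illustration only.
from statistics import mean

reviews = {
    "Hotel A": [5, 4, 5, 3],
    "Hotel B": [2, 3, 2],
    "Hotel C": [4, 4, 5],
}

# One homogeneous scale: every provider reduced to a single score...
merit = {name: mean(scores) for name, scores in reviews.items()}

# ...and a ranking, the comparative field from which there is no escape.
for rank, (name, score) in enumerate(sorted(merit.items(), key=lambda kv: -kv[1]), 1):
    print(rank, name, round(score, 2))
```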

The article treats two typical examples. First, the dynamics of TripAdvisor ratings of hotels ('TripAdvisor Popularity Ranking'). On dedicated websites hotel owners discuss the particulars intensely, complain about their scores, note that sometimes the system seems to change overnight, etc. Moreover, manipulation of reviews is attempted; in response, TripAdvisor is actively engaged in filtering such reviews out. Secondly, the system of Universal Tennis Rating (UTR) is considered. It is based on as many data as possible about the performances of tennis players in tournaments. Although the formula is kept a secret, it seems to incite less controversy than the hotel ratings discussed above. The UTR now 'performs' actively in influencing the planning of tournaments, and opportunities for scholarship and education.

Taken together, these rating algorithms have installed themselves in the relevant institutions along a pattern that resembles the classic disciplinary mode as described by Foucault. We now turn to predictive algorithms. They can also be observed to enter into many an institution: banks, insurance companies, tax departments, security services, and the like. Their disciplinary power, though, deviates considerably from the Foucauldian pattern.

An institution involved normally collects relevant data on its own. From the data, by means of machine learning, an algorithm is developed that separates the deviants from the conformers. The algorithm is an optimal fit to the data used ('training data'). Subsequently this model is used to make predictions about new cases: whether they are likely to deviate or conform in the (near) future. Until recently such modelling was hardly possible due to the paucity of data. Nowadays, though, we live in the age of big data: institutions may gather other datasets that have been generated by other organizations for their own purposes. Datasets are routinely exchanged, or bought from external parties. These acquired datasets are pooled together and become a more solid base for proper machine learning.
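As a minimal sketch of this workflow, assuming a hypothetical pooled dataset and using scikit-learn purely for illustration (this is not the procedure of any particular institution):

```python
# Sketch: fit a model to pooled training data, then score new cases.
# All data below is synthetic; features and threshold are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))  # subjects x pooled features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)  # 1 = known deviant

X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# "An optimal fit to the data used ('training data')"
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Predictions about new cases: likelihood of deviating in the (near) future
suspicion = model.predict_proba(X_new)[:, 1]
flagged = suspicion > 0.8  # cut-off chosen by the institution
```

Note that the resulting scores are estimates of future deviance, not records of actual behaviour; this is precisely the point taken up below.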

What happens is that the panoptic gazes of many institutions are coupled together: a ‘Polypanopticon’. Not only must members of a Panopticon realize that they are visible and presume to be watched; they must also realize that the results of that gaze (their data) are presumably relayed to the central guards of other Panoptica in which they are entangled. The data from any Panopticon may surface in any other Panopticon and be brought to bear in surprising ways.

The other deviation from the classic Foucauldian pattern concerns normalization. What kind of scale is involved? Take the case of tax fraud, where possible suspects are identified, say, by means of a profile. The scores obtained do not reflect actual behaviour (filing a fraudulent tax form), but the degree of suspicion that, upon being selected for an audit, the tax form will turn out to be fraudulent. A time delay may also be involved: upon asking for a loan, one's creditworthiness is estimated; it reflects the chances that one will repay regularly in the near future. So what is involved is prediction. Note that any prediction builds on a prior description of the phenomena under study, whether tax fraud or creditworthiness. Inevitably, the normalization of prediction builds on the normalization of description.

Prediction is a curious kind of disciplining. It normalizes estimates of behaviour. Thereby a pressure is created, not to conform in terms of the targeted behaviour, but to ‘manage’ the predictors of one’s behaviour. Is such ‘management’, with which one hopes to achieve satisfactory prediction values, possible at all?

To get this clear, it has to be borne in mind that the machine learning models involved are often opaque, in two senses of the term (cf. de Laat 2017). For one thing, algorithms developed by companies are usually kept secret, since companies consider them their intellectual property. For another, even if the models are transparent and ready for inspection, it can be quite difficult to produce an explanation for specific recommendations. This is because classification or decision trees are usually employed within ensemble methods, so the final algorithm is the summation of hundreds of trees; or neural networks are used, which are uninterpretable by their very construction. Only in rare cases can an algorithm be interpreted straight away (such as simple trees or Bayesian rule lists).
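The contrast can be made concrete: a single shallow decision tree can be printed as human-readable rules, whereas a forest of hundreds of trees yields no such summary. A sketch, again with synthetic data and scikit-learn assumed purely for illustration:

```python
# Sketch: an interpretable simple tree vs. an opaque ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A simple tree can be "interpreted straight away": its rules print directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))

# The ensemble is the summation of hundreds of such trees; there is no
# comparably short rule listing to tell a subject how to score better.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(len(forest.estimators_))  # 300 trees, each only a fragment of the decision
```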

So in such rare cases the normalizing pressure is exerted by a visible indicator (or several of them) on which one scores below the mark, just as with rating algorithms. In response, people subjected to the algorithm may try to manipulate the indicators, which is only possible with variables under one's control (behavioural characteristics); personal characteristics can hardly be changed. Normally, though, the pressure emanates from an unknown or at least incomprehensible algorithm that classifies its subjects, effectively a black box. No clue can be derived as to how the algorithm can be 'gamed' to produce a better rating. So as a rule, those subjected to the power of prediction are powerless against being normalized.

REFERENCES:

Paul B. de Laat (2017). Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Philosophy & Technology, 17 pp.

Michel Foucault (1977). Discipline and Punish: Birth of the Prison (Alan Sheridan, translator). Vintage Books. Translation of Surveiller et punir: Naissance de la prison, Gallimard, 1975.

Lucas Introna (2016). Algorithms, Governance, and Governmentality: On Governing Academic Writing. Science, Technology, & Human Values, 41(1), 17-49.

15:30
“It would be pretty immoral to choose a random algorithm”: Opening up algorithmic interpretability and transparency

ABSTRACT. Background: There is growing debate in public and policy discourses over the prevalence of algorithms in daily life. As described in publications by, amongst others, the World Economic Forum [1], the European Parliament [3] and the European Commission [2], these debates crystallise around the values of fairness, personalisation, transparency, responsibility, and explainability. These values are reflected in the ACM Principles for Algorithmic Transparency and Accountability [4]. Recent controversies over algorithmic decision-making show that concerns are raised when these values appear to be undermined. Key examples include concerns that algorithms that determine job listings, credit offers, and parole and sentencing decisions can all be (unintentionally) biased and lead to outcomes that systematically disadvantage certain demographics such as black and minority ethnic populations. These contemporary debates challenge us to consider whether an algorithm can be 'fair', whether transparency is necessary in order to assess the fairness and suitability of algorithms, and whether it is possible to objectively select the specific algorithm most suitable in a given situation.

Rationale for study: Against this background, we are interested in the question: "If given the opportunity, could a population of users agree on a single preferred algorithm, and what characteristics might this algorithm have?" To answer this, we are undertaking work to better understand how people reason about algorithm preference. In this paper we report on the outcomes of initial research to explore this issue, in which we conducted an experiment requiring groups of participants to select their most and least preferred algorithms from a predefined selection. The experiment produced qualitative and quantitative findings, which were analysed to investigate the following questions: How likely is it that different participants will each prefer the same algorithm within the context of a specific scenario, and what characteristics might this algorithm have? Is a community-level algorithm preference achievable? How might preference be affected by individual background and information transparency? How do participants rationalise their preferences and what reasonings emerge when they discuss their selections with others? Is it possible to draw out generalised understandings of preferred algorithms from these selections and discussions?

Approach: Our experiment was based on a limited-resources allocation case study. The case study drew on a genuine scenario: a class of 34 undergraduate students are each to be assigned a coursework topic, and each topic can only be allocated once. For each topic, students were given the opportunity to indicate their personal level of satisfaction if it were to be assigned to them, by scoring the topics from 1 to 7, with 1 = very unhappy and 7 = very happy. As each topic can only be allocated once, two (or more) students with the same preferences may not be equally satisfied with their ultimate allocations.

We designed five predefined algorithms to allocate the coursework topics to the 34 students and asked our study participants to select which of these algorithms they considered the most and least preferred options for the above scenario. The algorithms differed in the values they produced in terms of the sum of students' satisfaction scores (as indicated for the topics the algorithm assigns to them) and the sum of students' distances (where a student's distance is the total difference between that student's satisfaction score and those of all other students).
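As a sketch of these two metrics, under the assumption that "distance" amounts to the sum of absolute pairwise differences in satisfaction (one plausible reading of the description above):

```python
# Hypothetical helper: score a candidate allocation on the two values
# in which the five algorithms differed. Input data is illustrative only.
from itertools import combinations

def allocation_metrics(satisfaction):
    # satisfaction[i] is the score (1-7) student i gave to the topic
    # the algorithm assigned to them.
    total_satisfaction = sum(satisfaction)
    # Total difference between each student's satisfaction score and
    # those of all other students (sum of absolute pairwise differences).
    total_distance = sum(abs(a - b) for a, b in combinations(satisfaction, 2))
    return total_satisfaction, total_distance

# Six of the 34 students, for illustration:
print(allocation_metrics([7, 6, 6, 3, 5, 2]))  # -> (29, 35)
```

An allocation algorithm might, for instance, maximise the first value, minimise the second, or trade the two off, which is where participants' notions of fairness come into play.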

We ran the experiment on 4 occasions with 4 separate groups of participants. Group 1 consisted of 9 computer science undergraduates; Group 2 of 7 postgraduate or postdoctoral computer scientists; Group 3 of 10 postgraduate social science or law students; and Group 4 consisted of 13 working professionals with a preexisting interest in algorithms and algorithmic fairness. Each experiment was conducted in an identical two-part format. In Part 1, participants were presented with a questionnaire that set out the case study scenario and the outcomes of the 5 algorithms. Participants were asked to individually select their preferred and least preferred algorithms for the allocation of coursework topics to the students. They were then asked to discuss their selections as a group. In Part 2, participants were given the same questionnaire, which this time included a brief description of each algorithm. They once again selected their preferred and least preferred algorithms individually and then discussed them as a group. We conducted quantitative analysis of the questionnaire responses and qualitative, thematic analysis of transcripts of the discussion sessions. Analysis explored participants’ preferences and the ways in which they rationalised their selections.

Summary of findings: The quantitative analysis of the questionnaires did not yield statistically significant results, but nevertheless we can make the following observations:

- Participants presented with the same case study scenario consistently expressed a diverse range of preferences over their least and most preferred algorithms.

- The questionnaire was split into two parts to assess whether the provision of extra information had an impact on preference selection. Change of preference across the two parts did occur: around 30% of the algorithms selected as the most and least preferred in the first part of the questionnaire were not marked as such in the second part. However, it is unclear whether changes in preference selection were prompted by the further information provided in Part 2 of the questionnaire or by other factors such as the group discussion that took place at the end of Part 1.

- There were some relationships between participants' responses and their (professional or educational) background. The most homogeneous preference selections came from the postgraduate and postdoctoral computer scientist group, whereas the stakeholder group was more diverse in its selections.

Our qualitative analysis revealed that:

- Although the questionnaire asked participants to select algorithms in terms of preference, they consistently used the language of fairness when asked to explain their selections in group discussion. They invoked normative understandings of right and wrong to argue that the preferred algorithm should be the fairest one. Opinions about which algorithm was fairest differed, as did ideas about what actually constituted fairness. Participants frequently attended to the difficulty or even impossibility of a single algorithm producing a fair result in all cases.

- Participants consistently related their preferences to the (real or imagined) context in which the algorithms were to be applied. They drew on the given context to support their preferences and also expressed the need for further information (about the demographics of the students, the nature of the course, etc.) in order to aid their decision making.

- Across the different groups participants displayed varying levels of familiarity with technical features of algorithms.

Implications of study: These experiments form a valuable starting point for our investigation of algorithm selection by highlighting: 1) the difficulty of identifying a single, objectively preferred algorithm; 2) the importance of trade-offs in algorithm criteria and concepts of fairness in the selection of preferred algorithms; and 3) the relevance of information provision, context, and personal background to participants' understanding and selection of preferred algorithms. Our results indicate that it is unlikely that users will agree on a single preferred algorithm in a given context, since participants make different preference selections and rationalise them differently. It may be possible, however, via a larger study, to cluster together participants with particular characteristics and reach agreed preferences at a sub-community level. Meanwhile, the importance placed by participants on context suggests that it would not be possible to determine a single preferred algorithm that could be applied across different scenarios. It does, however, indicate the perceived relevance of context to algorithm design.

Our findings resonate with the values that have come to the fore in current debates over algorithm prevalence. We identify a community-level association between the preferred algorithm and the fair algorithm. Competing models of fairness are drawn on in expressions of preference; however, there may be some general favouring of models that balance out or trade off different relevant criteria, such as maximising the satisfaction score and minimising the distance. In addition, although it is not possible to reach global agreement on fairness, it does appear possible that some groups sharing certain characteristics might be able to reach consensus, and that agreement can be reached over which algorithms are definitely not fair. Furthermore, given the overall priority given by users to fairness, if a particular fairness model could be identified as applicable in a given scenario, then it might be possible for consensus to be reached around which algorithm is preferred. These issues will be explored in our further work.

References

[1] World Economic Forum. 2016. The State of Artificial Intelligence. https://www.weforum.org/events/world-economic-forum-annual-meeting-2016/sessions/the-state-of-artificial-intelligence. (2016).

[2] European Commission. 2016. Algorithms and Collusion - Note from the European Union. https://one.oecd.org/document/DAF/COMP/WD(2017)12/en/pdf. (2016).

[3] European Parliament. 2016. Algorithmic accountability and transparency. https://www.marietjeschaake.eu/en/event-07-11-algorithmic-accountability-and-transparency. (2016).

[4] ACM. 2017. Statement on Algorithmic Transparency and Accountability. https://www.acm.org/binaries/content/assets/public-policy/2017_joint_statement_algorithms.pdf. (2017).

14:30-16:00 Session 8C: Cybersecurity
Location: Room 1.1
14:30
Legal and Ethical Issues in Regulating Observational Studies: the impact of the new EU Data Protection Regulation on Italian Biomedical Research

ABSTRACT. This paper discusses the emerging issues of data governance, data ethics, and data protection in the context of biomedical observational studies in Italy.

Over the past twenty-five years, the multifaceted relationship between law, ethics, biomedical research, and technology has grown increasingly complex (Mittelstadt and Floridi, 2016). The digital revolution has made it possible for healthcare facilities and health agencies to use massive digital databases for the storage of administrative and health service data gathered from routine clinical practice (Stendardo et al., 2013). The availability of such a large group of heterogeneous datasets, coupled with advances in computing power, is driving researchers to create sophisticated algorithms for the analysis of pre-existing data valuable to medical research (previously not combinable through matching techniques), looking for patterns, correlations, and links of potential significance (Mostert et al., 2016). Extracting meaningful information from this flood of data is a challenge, but holds unparalleled potential for observational or epidemiological studies (Thiese, 2014). These studies are often retrospective, meaning that they are based on the reuse of previously collected sensitive data (Thiese, 2014); they seek to identify the distribution, incidence, and etiology of human diseases to shed light on their causes and prevent their spread (Green, Freedman and Gordis, 2000). Analysing data in real time and on a much wider scale through web-based data mining tools is having a revolutionary impact on epidemiological research (Salathé et al., 2012) and facilitates certain studies, such as those on rare diseases (Woodward, 2013).

There are many initiatives, promoted by groups like BBMRI-ERIC (Van Ommen, Törnwall, Bréchot, et al, 2015) and Global Alliance for Genomics and Health (Knoppers, 2014), that aim to facilitate the large-scale reuse and linkage of health and genomic data. Yet whilst the possibilities in terms of innovative research springing from “general analysis” continue to expand, developments in IT and the reutilization of sensitive data have led to increasing concern about the effectiveness of data protection regulations based on the “informed consent or anonymization paradigm”, which is challenged in the context of data intensive medical research (Mostert et al., 2016).

This paradigm may well turn out to be a chimera in retrospective observational studies, not only because of their basis in data collected for previous research, but because of the difficulties in “creating a truly anonymous dataset whilst retaining as much of the underlying information as required for the task” (as stressed by the Article 29 Working Party in the Opinion 05/2014 on “Anonymisation Techniques”) as well as the elevated cost of reobtaining consent from the subjects.

In the wake of a growing need for consistent protection of personal data that does not hinder responsible technological research on Big Data through manifold techniques such as machine learning (Pagallo, 2017), the European Parliament and the Council last year adopted the much-awaited "Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data", also known as the "GDPR". This act repeals Directive 95/46/EC and will apply from May 2018. This two-year grace period allows Member States to revise or adapt their legislation in order to come into compliance with the GDPR.

The new regulation recognizes the benefits of research carried out using information contained in large databases (Recital 29) and acknowledges that the "informed consent or anonymization paradigm" may hamper data intensive medical research. In order to reconcile the often-competing values of data protection and innovation, the GDPR carves out numerous derogations for "historical or scientific purposes", allowing researchers, for example, to obtain something similar to "broad consent" for future research projects (Recital 33), or to avoid restrictions on secondary uses of sensitive data (Article 5(1)(b), Recital 50). However, these derogations depend on the appropriate safeguards set up by the data controller. Article 89(1) specifies that one way for the controller to comply with the new legal framework concerns the use of "pseudonymization" techniques combined with further "technical and organizational measures", such as privacy-by-design, privacy-by-default and the presence of a data protection officer (Bolognini and Bistolfi, 2017). Still, the possible impact of the GDPR on current observational studies in medical research is not clear. It hinges on how Member States will use the extensive discretionary powers afforded to them in this field (Pagallo, 2017): for instance, Article 9(4) of the GDPR allows Member States to introduce further safeguards with regard to the processing of genetic data, biometric data or data concerning health.
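As a toy illustration of pseudonymization, one of the techniques Article 89(1) points to, a direct identifier can be replaced by a keyed pseudonym while the key is held separately by the controller. This is a sketch under stated assumptions (the record fields and key handling below are hypothetical), not a statement of what the GDPR requires:

```python
# Sketch: keyed pseudonymization of a direct identifier.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-controller"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    # A keyed hash (unlike a plain hash) cannot be reversed by simply
    # enumerating likely identifiers without access to the key; keeping
    # the key apart is one of the "technical and organizational measures".
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"fiscal_code": "RSSMRA80A01H501U", "diagnosis": "E11"}  # hypothetical record
record["fiscal_code"] = pseudonymize(record["fiscal_code"])
print(record)
```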

With this in mind, the paper restricts the focus of analysis to the Italian legal framework. Here, the processing of health data for research purposes has so far been governed by a regulation that is, in many respects, stricter than those existing in other EU countries (Piciocchi et al., 2017). In November 2017, the Italian Parliament approved statute no. 167, which adds a new Article 110-bis to the Italian Data Protection Code (Legislative Decree no. 196 of 30 June 2003). This article allows the reuse of personal and sensitive data for research purposes, with the exception of genetic data, in cases authorized by the Italian Data Protection Authority, so long as effective forms of minimization and anonymization take place. The exclusion of the reuse of genetic data, and the requirement of anonymization rather than pseudonymization techniques, seem to be at odds with the EU legislator's attempt to strike a fair balance between the protection of personal data and the strengthening of technological research.

The novelties introduced by the GDPR on the processing of sensitive data for research purposes, and their impact on the Italian legal regulation of health data processing, can be deemed an example of the stratification of different inputs and rules: EU law, national law, orders issued by authorities, and soft law, which need to be integrated with ethical principles, political strategies and practical solutions. Nonetheless, the paper aims to show how the GDPR's delegation of powers back to the national legal systems of the Member States entails a number of critical drawbacks. As shown by Article 110-bis of the Italian Data Protection Code, such discretionary powers may hamper the progress of medical research, especially when compared to other EU countries.

References:

Bolognini L., Bistolfi C. (2017) "Pseudonymization and impacts of Big (personal/anonymous) Data processing in the transition from the Directive 95/46/EC to the new EU General Data Protection Regulation". Comput Law Secur Rev, 33(2), pp. 171-181, doi: https://doi.org/10.1016/j.clsr.2016.11.002.

Green M.D., Freedman D.M., Gordis L. (2000) “Reference guide on epidemiology. Reference manual on scientific evidence” (Federal Judicial Center). Available: http://www.fjc.gov/public/pdf.nsf/lookup/sciman06.pdf/file/sciman06.pdf.

Knoppers B.M (2014), “International ethics harmonization and the global alliance for genomics and health”. Genome Med, 6(2), p. 13, doi: https://doi.org/10.1186/gm530.

Mittelstadt, B.D., Floridi, L. (2016) "The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts". Sci Eng Ethics, 22(2), pp. 303-341, doi: http://dx.doi.org/10.1007/s11948-015-9652-2.

Mostert, M., Bredenoord, A.L., Biesaart, M.C. and van Delden, J.J. (2016) "Big Data in medical research and EU data protection law: challenges to the consent or anonymise approach". Eur J Hum Genet, 24(7), pp. 956-960, doi: http://dx.doi.org/10.1038/ejhg.2015.239.

Pagallo U., (2017) “The Legal Challenges of Big Data: Putting Secondary Rules First in the Field of EU Data Protection”. European Data Protection Law Review, 3(1), pp. 36-46. Available: http://hdl.handle.net/2318/1640445.

Piciocchi, C., Ducato, R., Martinelli, L. et al., (2017), Legal issues in governing genetic biobanks: the Italian framework as a case study for the implications for citizen’s health through public-private initiatives. J Community Genet, pp 1–14, doi: https://doi.org/10.1007/s12687-017-0328-2.

Salathé M., Bengtsson L., Bodnar T.J., Brewer D.D., Brownstein J.S., Buckee C., et al. (2012) “Digital Epidemiology”. PLoS Comput Biol, 8(7): e1002616. doi: http://dx.doi.org/10.1371/journal.pcbi.1002616.

Stendardo A., Preite F., Gesuita R., Villani S., Zambon A., SISMEC "Observational Studies" working group, (2013), “Legal aspects regarding the use and integration of electronic medical records for epidemiological purposes with focus on the Italian situation”. Epidemiology Biostatistics and Public Health, 10(3): e8971, doi: http://dx.doi.org/10.2427/8971.

Thiese M. S. (2014), "Observational and interventional study design types; an overview". Biochem Med (Zagreb), 24(2), pp. 199-210, doi: http://dx.doi.org/10.11613/BM.2014.022.

van Ommen G.J.., Törnwall O., Bréchot C., Dagher G., Galli J., Hveem K.., Landegren U., Luchinat C., Metspalu A., Nilsson C., Solesvik O.V., Perola M., Litton J.E., Zatloukal K., (2015) “BBMRI-ERIC as a resource for pharmaceutical and life science industries: the development of biobank-based Expert Centres". Eur J Hum Genet, 23(7), pp: 893–900, doi: http://dx.doi.org/10.1038/ejhg.2014.235.

Legal References:

General Data Protection Regulation (EU) 2016/679. Available in English: http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf

WP29, Opinion 05/2014 on anonymisation techniques, WP 216, 10 April 2014. Available in English: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf

Italian Personal Data Protection Code. Legislat. Decree no. 196 of 30 June 2003. Available in English: http://www.garanteprivacy.it/home_en/italian-legislatio.

15:00
Automated automobiles and ethics: what should we focus on?

ABSTRACT. Automated vehicles are here. Computers are replacing humans as truck drivers and Google's AI is driving down the streets of California [1-3], militaries plan to use automated vehicles to support troops in dangerous areas [4], drones fly our deliveries through our skies [5], and Elon Musk predicts that no one will be allowed to drive a car in the near future [6], because automation makes fewer mistakes. Automated vehicles have the potential to solve a lot of problems, e.g. people driving while tired, distracted, intoxicated, or with limited skills. Human mistakes can be eliminated by taking the human out of the loop. But what are the downsides of this technology, and have ethics been taken into account while developing it? In this paper automated vehicles and their socio-ethical dilemmas are discussed. The Society of Automotive Engineers (SAE) has created a taxonomy and definitions for "Terms Related to On-Road Motor Vehicle Automated Driving Systems". It defines six levels of driving automation for on-road vehicles, going from 0 (no automation) to 5 (full automation). At levels 0-2 the human driver is in charge, while at levels 3-5 the computer has the (main) responsibility for driving the vehicle. As the level of automation increases, the computer must make more decisions (ethical and otherwise) as the human driver increasingly becomes a passenger in the vehicle, due to a decreasing ability to control it. [7]
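Since the original figure is not reproduced here, a compact summary of the taxonomy may help (level names per SAE J3016; the dictionary rendering below is ours, not part of the standard):

```python
# SAE J3016 levels of driving automation, summarising the split described
# above: human driver in charge at 0-2, computer responsible at 3-5.
SAE_LEVELS = {
    0: ("No Automation", "human driver in charge"),
    1: ("Driver Assistance", "human driver in charge"),
    2: ("Partial Automation", "human driver in charge"),
    3: ("Conditional Automation", "computer has (main) responsibility"),
    4: ("High Automation", "computer has (main) responsibility"),
    5: ("Full Automation", "computer has (main) responsibility"),
}

def describe(level: int) -> str:
    name, who = SAE_LEVELS[level]
    return f"Level {level} ({name}): {who}"

print(describe(3))  # Level 3 (Conditional Automation): computer has (main) responsibility
```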

Picture 1: SAE levels of automation. [7]

One of the problems with ethics and autonomous vehicles is the questions they raise. Current ethical research around autonomous cars is mostly focused on how the vehicle should behave in traffic (i.e. who or what the vehicle should collide with when a collision is otherwise unavoidable). There have been numerous theoretical and some practical (sic!) research settings in which these questions have been pondered (see e.g. [8-10]). While these (and other trolley-problem) questions are indeed important, they are just one part of the whole problem. The underlying question about autonomous vehicles and ethics lies in the need, utility and risks involved with the emerging technology. As shown before, the need and utility are unarguable, but how do they weigh against the risks? It is important to observe that practically everything controlled by a computer can be hacked, and modern cars are no exception. This in turn puts all those people on the streets, roads, and alleys at risk. The questions we must therefore pose are: can we afford the risks involved with autonomous cars? Are they a better option than cars driven by humans?

As cars have become increasingly automated and thus more dependent on computerised systems, these systems must also be protected against unauthorised and unwanted access and tampering. The car therefore is not only a data store or a system that produces data; it is also a tool that makes day-to-day life easier and safer. No one would buy a car that does not inherently promote the values of safety and security. Besides the driver, the security aspects must also be considered from the point of view of passengers and other road users. Whereas a modern car is controlled by the teamwork of computer and driver, both controllers have their own separate fields of responsibility, as well as some overlapping ones (e.g. collision avoidance systems). Whereas the driver is relatively hard to hack, computer systems are more vulnerable. To protect all interest groups, hacking such critical computer systems should be made as hard as possible, as completely unhackable computer systems do not exist. Or as Eiza and Ni put it about cars and computers: "if you see word 'software', replace it with 'hackable'; and if you see word 'connected', replace it with 'exposed'" [11]. Thus cybersecurity solutions must be implemented and maintained with meticulous care for the whole life-cycle of the car.

Vulnerabilities and built-in features in different cars that can be used against the driver have been around for a while. One of the clearest examples was the vulnerability found in the Jeep Cherokee, resulting in the recall of 1.4 million Cherokees after researchers demonstrated they could remotely hijack the car's system over the internet. The attack was described as one "..that lets hackers send commands through the Jeep's entertainment system to its dashboard functions, steering, brakes, and transmission, all from a laptop that may be across the country." [13] Numerous other examples also exist [11]. Another clear threat is tampering with the sensors of the car from the outside [11]. As Internet of Things solutions became more common, hacking them became more commonplace too. Nowadays a news report of a refrigerator sending junk mail [14] would not even get published, while similar news went viral just four years ago! Presently, the news portrays hacking incidents on a massive scale.
According to Kaspersky Lab, millions of computers are mining cryptocurrencies without their owners knowing [15]. Major ransomware attacks have also been made in recent years, in which computers were taken over and the information inside them encrypted (see e.g. [16-18]). It is important to notice that almost everything a hacker can make a computer or a phone do, they can do to a car, and much more. The car is a computer with wheels and an engine. Therefore it should not be hard to hack a car to mine cryptocurrencies, with the energy cost of gasoline no less. A car could also be the target of a ransomware attack, whether simply to get the car to work again or with the health and well-being of the people inside it held to ransom. Thus we need cybersecurity in which not only information and its integrity, and not only communication, are protected, but in which the physical world and the assets that security must protect are also taken into account. For example, Simson Garfinkel has stated that hackers are the real obstacle for self-driving cars [19]. While the car has essentially become a computer (a set of networked computers, actually) with wheels and an engine, it is also approximately 2 tons of metal moving at over 30 metres per second, with humans inside and people in the vicinity. Therefore, the possibilities for a hacker to cause harm are greater when hacking a car than when hacking a normal computer. One of the worst scenarios is using automated cars as a terror or military first-strike weapon [20]. In this scenario a nation or an organisation hacks a massive number of vehicles in another area and triggers them simultaneously to hit pedestrians, other cars, trains, bridges, or other targets, causing a massive number of deaths and injuries while crippling the infrastructure, similar to heavy bombardment. A nation targeted by such an attack would most likely be in chaos for a long time. There have also been claims that US intelligence services (mainly the CIA) use or have used the vulnerabilities in modern cars for assassination purposes, but these are as yet just claims and should be treated as such [21]. The methods shown above could, however, be used for such assassinations. Therefore we should look at the values we have embedded in our automobiles. As we yearn for efficiency and ease of use, we should also remember safety and security for driver, passengers and other people alike. Whereas automated cars can react faster and do not suffer from distractions, tiredness, disease, or intoxication, they can be hacked. To promote safe roads in the future, cars (not only automated ones) should be made (more) "hacker-proof". One of the solutions is to "drive manually". In the full paper we will discuss in more depth automated vehicles, hacking techniques, society and ethics around automobiles, and what we should focus on when developing the next stage of private and public transportation.

15:30
Cybersecurity in health – disentangling value tensions

ABSTRACT. SEE UPLOADED FILE: PAPER CONTAINS FIGURES.

16:00-16:30Break and Refreshments
16:30-18:00 Session 9A: Cyborgs
Location: Room 0.8
16:30
Free will or Freiwild

ABSTRACT. The concept of cybernetic organisms is going to become even more important in the coming years, as more and more users want to use technology, devices and apps to improve and enhance themselves. Further advances in robotics, automation and artificial intelligence generate a growing economic influence that changes how we work and live. The research group behind the Cross Cultural Cyborg Observatory uses a qualitative approach to gain an international perspective and a deeper understanding of cyborg perception and related ethical concerns. As the next step in the cooperation between humans and machines, the technological focus will shift from wearables to insideables. The user can forget a wearable, or the device can run out of energy; an insideable, by contrast, works in and with the human body and introduces a wide variety of new possibilities. Medicine is a field where insideables are already established and show their potential: for example, a pacemaker that collects data about the patient and provides the doctor with useful information that would otherwise require more costly and time-intensive procedures to obtain, thus not only yielding better results but also improving the patient's quality of life. The legislation and development of those devices can be a forecast for other fields. To foresee the acceptance of insideables it is also worthwhile to look at robotics and wearables in the different countries. Their development and legislation are as important as public perception and usage; metrics include sales as well as market penetration. The forecasts for worldwide sales of smartwatches and fitness trackers promise continuous growth over the coming years, according to Tractica (2017) and the Consumer Technology Association (2017). The German market research institute "Gesellschaft für Konsumforschung" conducted a survey in 2016 with 1502 participants on the topic of apps and devices to monitor health and fitness. Of those, 28% were currently tracking their activities. Another 13% did not currently monitor their health statistics via an app or device but had tracked them in the past and were generally interested in the topic. Participants between the ages of 15 and 29 were more than 50% likely to fall into one of those two categories. But collecting all this additional information does not only have benefits. In 2016 CNN conducted a survey of female users of Fitbit fitness trackers. The respondents reported that they really like the positive feedback, but there were also more concerning sentiments: users voiced that they feel naked without their fitness devices and grow dependent on the feedback loop; they feel that untracked activity is wasted, and feel guilty if they don't exercise when their tracker advises it. So where do we draw the line? Is it necessary to replace parts of ourselves with technology to be considered a cyborg, or is it enough to grow dependent on an external device and alter our behavior to please it? In the meantime, German health insurers have begun to subsidize tracking devices for their customers and to reward users who transmit their data. The German officials Heiko Maas and Andrea Voßhoff have voiced their concern about users giving away their data too easily. Heiko Maas (2016), federal minister of justice and consumer protection, urges that patient data are part of patients' privacy and should not be used to make additional profits or to favour some patients over others. But he also notes that new laws, or changes to existing laws, may be necessary to restrict the usage of patient data.
Andrea Voßhoff (2015), commissioner for data privacy and freedom of information, warns the users of fitness applications and tracking devices that they should be cautious and weigh the short-term financial gains against long-term dangers. She reminds them that prognoses about the future medical development of a patient can be used to determine not only his treatments but also his health insurance premiums, regardless of whether the prediction turns out to be true or false. While customers of public health insurance plans should be protected from predatory behavior, the same is not guaranteed for customers of private health insurers. Those could be required by contract to give up all their data, or be stuck in "classic plans" that could become more and more expensive each year. As the first government openly starts planning to track and rate all its residents, their behavior and whom they interact with, we have to ask ourselves: are we going to protect our free will, or are we becoming Freiwild (fair game)? Do we keep the freedom to decide to whom we reveal our data? Or do we become obligated to use ever more intrusive methods to feed our data to algorithms that decide about our lives? Will the artificial intelligence understand if you had a second piece of cake on your birthday?

The goal of the survey is to gain insight into the minds of German citizens, their opinions on emerging technologies and their awareness of the possible benefits and dangers of technological and social developments. It will be interesting to see whether the public perception of data privacy aligns with the opinions of the German politicians. To create an international context and a bigger picture, the results will be compared with the findings of the other participating teams. The following countries take part in the qualitative survey: Spain, Chile, Mexico, USA, Japan, India, China and Germany. The research teams adapt the questionnaire to local specifics, such as a country's technological situation and culture, while preserving a similar profile to keep the results comparable. The local research teams conduct and analyze the interviews with professors and students, who come from different knowledge areas such as Economics, Law, Engineering, Computer Science, Health and Philosophy. All data collected in the study are accessible to the coordinators for further international analysis, and local teams are free to use all the local data for any other research project.

17:00
Exploring Security and Ethical Implications of New Emerging Technologies: Case Study in USA and India

ABSTRACT. Gartner estimates that worldwide sales of wearable electronic devices will total 274.6 million units in 2016, 18.4 percent higher than in 2015. Of the $28.7 billion in revenue that sales of these devices will generate this year, $11.5 billion will come from smartwatches, 50.4 million of which will be sold in 2016, according to Gartner. With any new emerging technology, ethical and security concerns arise. Wearable and insideable technologies are no exceptions. Insideable technologies, implants placed in the body, raise concerns even though they are used for medical purposes. A conference, "Superintelligence: Science or Fiction?", included such luminaries as Elon Musk of Tesla Motors, with the discussion led by MIT cosmologist Max Tegmark. Musk commented: "I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that's more fully symbiotic with the rest of us. We've got the cortex and the limbic system, which seem to work together pretty well - they've got good bandwidth, whereas the bandwidth to additional tertiary layer is weak." A recent article in Nature discusses the various challenges, including ethical and security challenges, generated by such new emerging technologies. Indeed, these concerns cannot be ignored. Countries like the USA and India are seeing drastic changes in the purchase of wearable technologies.

This paper, the second phase of an ongoing research project, aims to understand students' perceptions of new emerging technologies, wearables and insideables in particular. It focuses on students in higher education in the USA and India and reflects on the ethical and information security implications. Qualitative techniques, including semi-structured and unstructured interviews and group observations, are used to collect data. The interviews allowed the students to disclose thoughts and feelings that have both security and ethical implications, provided useful information for the current research, and generated rich data. Further, the author felt that data regarding wearable and insideable technologies are essential to gaining insight into students' perceptions and values. This paper examines the qualitative data collected from the interviews. The data will contribute to the data collected through international collaboration; the other countries involved include Japan, Spain, Chile, Denmark, and Mexico. The objective of the collaborative research is to analyze perceptions of new emerging technologies, wearables and insideables in particular. It also explores the benefits and problems of such emerging technologies and examines how they can increase people's capabilities. The findings will be compared and contrasted, and the ethical challenges perceived in insideables will be explored from personal and social perspectives. The contribution of this paper is to explore wearable technologies in the education context. Consequently, the findings of this research will provide rich insight into the area of wearable and insideable technologies.

17:30
Wearables and Insideables: Is Mexican society and economy prepared for them?

ABSTRACT. Today's market can be defined as the place in which consumers are able to acquire goods and services at any moment and from any place. The Internet is nowadays the platform that makes both emerging and consolidated businesses competitive. Every year, technology companies develop a range of mobile devices: gadgets, wearables, and insideables or implants. Most of them require the Internet or a smartphone to work adequately, but recent technologies work with systems that interpret data from the environment and take appropriate decisions, better known as Artificial Intelligence (AI). We can imagine a near future in which AI-equipped devices compensate for physical or mental disabilities through sets of implanted microcircuits, managed (or not) by external devices such as wearables. The adoption of technology to enhance human abilities or compensate for disabilities is a complicated topic in the business and social arenas. If, on the one hand, many individuals believe in using technology to transform their lives and increase their welfare, on the other, many people live their lives within a strict order of culture, religion or both. In this regard, marketers, economists and decision-makers at the business and government levels should study this topic further in order to take steps that affect their economies. Most technology consumption focuses on wearables for health (52% of worldwide wearable sales), and 39% will correspond to health gadgets such as fitness bracelets or smartwatches. According to Garibay (2018), 51.9% of individuals in Mexico have adopted and use at least three gadgets; in the global context, smartwatch consumption is expected to grow by 13.36 billion dollars in 2018, and wearable cameras will represent approximately 2,278 million dollars. In Latin America, technology companies expect to sell 26.06 million gadgets in 2018. The implant business in Mexico focuses on the cosmetic and health contexts. According to El Universal (2016), Mexico ranks fourth worldwide in breast implant surgeries, and according to El Debate (2017), some 900,000 cosmetic implants were placed in the country in 2015. Technological implants in Mexico are not as common as other kinds of surgical intervention; this may be due to factors such as expensive technology, expensive surgeries, lack of knowledge about the topic, or cultural and religious impediments. The consumption of products is fundamental to jump-starting markets and achieving sustainable economic development; however, most Mexicans do not have the financial solvency to acquire cutting-edge technology (gadgets, wearables, implants or mobiles), due principally to additional import duties and the added costs of foreign R&D. Most of the technology acquired in Mexico is imported from countries with which it has trade agreements. At their last meeting (in Mexico), the OECD countries established objectives to increase the digital transformation of services in each country (OECD, 2017). In the case of Mexico, investment in technology and innovation amounts to less than 1% of Gross Domestic Product (GDP), compared with other OECD countries that invest more than 20% (Camhaji, 2017). This situation leads to economic stagnation and underdevelopment.
Consequently, the country's growth will diminish for lack of knowledge development; as Cabrero Mendoza (2017) argues: "The knowledge-based economy refers to the ability to generate scientific and technological knowledge, which allows to be more competitive, grow more, and transform the economy to achieve higher levels of social welfare". In the last year, the Mexican government approved fiscal incentives to facilitate technology consumption in all economic sectors. Those incentives could allow companies to deduct almost 94% of their investment in technology (Neuman, 2017). But acquiring cutting-edge technology is still expensive for individuals, so Mexicans (specifically middle-class youngsters) opt for cheaper technology such as low-range wearables. In spite of budget cuts, universities and research institutions in Mexico have been working on R&D. A principal aim of this research is the generation of biomaterials that could address individuals' needs (Manjarrez Nevárez et al., 2017). This research proposes to analyse perceptions of the acceptance of wearables and technological implants among Mexican citizens (particularly youngsters in San Luis Potosí). Furthermore, we will analyse how social and ethical perspectives on the acceptance of these technologies affect the Mexican economy and business development.

References
Cabrero Mendoza, E. (2017). ¿Dónde está México en ciencia y tecnología? Retrieved from http://www.jornada.unam.mx/2017/10/02/opinion/030a1pol
Camhaji, E. (2017). La ciencia, la oportunidad que México ha dejado pasar. Retrieved January 6, 2018, from https://elpais.com/elpais/2017/12/01/ciencia/1512157927_534452.html
El Debate. (2017). México, paraíso de cirugías estéticas. Retrieved January 8, 2018, from https://www.debate.com.mx/mexico/Mexico-paraiso-de-cirugias-esteticas-20170503-0366.html
El Universal. (2016). México, cuarto lugar mundial en cirugías de aumento de busto. Retrieved January 8, 2018, from http://www.eluniversal.com.mx/articulo/nacion/sociedad/2016/03/30/mexico-cuarto-lugar-mundial-en-cirugias-de-aumento-de-busto
Garibay, J. (2018). ¿Será 2018 un buen año para los Wearables? Retrieved January 5, 2018, from https://www.merca20.com/sera-el-2018-un-buen-ano-para-los-wearables/
Manjarrez Nevárez, L. A., Terrazas Bandala, L. P., Zermeño Ortega, M. R., De la Vega Cobos, C., Zapata Chávez, E., Torres Rojo, F. I., … Lerma Gutiérrez, R. (2017). Biomateriales como Implantes en el Cuerpo Humano. Retrieved January 8, 2018, from http://beta.uach.mx/articulo/2017/10/20/biomateriales-como-implantes-en-el-cuerpo-humano/
Neuman, G. (2017). Invierte en tecnología y deduce hasta un 94%. ¡Deducir o no deducir, esa es la...! Retrieved January 6, 2018, from https://www.pulsopyme.com/inviertir-tecnologia-deduce/
OECD. (2017). OECD Digital Economy Outlook 2017. Paris: OECD Publishing. https://doi.org/10.1787/9789264276284-en

16:30-18:00 Session 9B: Responsible Research & Innovation
Location: Room 0.9
16:30
Monitoring the value of RRI in industrial nanotechnology innovation projects

ABSTRACT. This paper addresses the issue of understanding whether and how responsible research & innovation (RRI) can lead to measurable scientific and economic benefits in commercial industrial organizations in the field of nanotechnology. We describe and discuss a possible method to quantitatively assess the value of RRI strategies in industrial research and development practice.

17:00
The benefits of RRI - the researchers' view

ABSTRACT. This paper is based on two large-scale surveys of European researchers, run in the course of the EU-funded project MoRRI (Monitoring the evolution and benefits of RRI), about their views on the relevance of, benefits of, and barriers and hindrances to RRI within their daily research activities. In particular, the investigation of different types of benefits (economic, scientific, democratic, social) and of the framework conditions which promote or hinder their occurrence brought relevant new insights. The surveys show that respondents who received EU funding report scientific benefits most frequently, followed by economic benefits. Social and democratic benefits are mentioned less frequently. Even though the benefits already observed are less widespread within these two categories, respondents still frequently expect benefits. This holds particularly true for an increasing interest in science, improved curricula and enlarged competencies among students, and the inclusion of citizens' knowledge.

17:30
Responsible Dual Use Research and Technology: Towards a Novel Framework

ABSTRACT. INTRODUCTION The aim of this paper is to develop a novel approach to 'responsible dual use research and technology' by applying the Responsible Research and Innovation (RRI) concept to dual use issues. The term 'dual use' refers to research and technology that can have civilian as well as military applications. This responsible dual use approach is developed in the context of one of the biggest European Union (EU) collaborative civil ICT research projects, the Human Brain Project (HBP). The need for a novel approach to dual use issues in the EU has become urgent due to an increasing EU defence and dual use research agenda, with particular interest in cybersecurity, big data, artificial intelligence, robotics, and super-computing.

Applying the RRI concept to dual use could help to move beyond the important discussion of the multiple ways of defining dual use towards advancing understanding of its governance arrangements, which include anticipation, reflection, engagement and action. Additionally, RRI is potentially a valuable way to address the politically contested nature of dual use issues, which involve diverse perspectives and considerations from a broad range of fields including research, technology, security, politics, trade, intelligence, industry, and civil society. Moreover, RRI allows us to distinguish 'responsible dual use' from 'irresponsible dual use'.

EMPIRICAL CONTEXT The suggested approach of 'responsible dual use research and technology' is developed within the context of the HBP, a 10-year EU project that started in 2013. The project aims to build research infrastructure for neuroscience, medicine and computing. It is one of two current Future and Emerging Technologies (FET) Flagship initiatives (the other being Graphene) funded by the EU research programme Horizon 2020. The HBP is expected to turn scientific advances into concrete innovation opportunities, growth and jobs, and to contribute to addressing some of the major societal challenges Europe is facing. With a total budget of around one billion Euros and some 500 scientists in more than 100 institutions working on it, it is one of the largest international scientific collaborations ever. Social and ethical concerns related to HBP research are addressed within the Ethics and Society subproject, which helps the HBP to pursue the policy of RRI (Aicardi, Reinsborough and Rose 2017; Rainey, Stahl, Shaw and Reinsborough 2017). In particular, the HBP applies the AREA (anticipate, reflect, engage and act) framework for RRI (HBP 2015) established by the UK Engineering and Physical Sciences Research Council (EPSRC).

One of the social and ethical concerns that might be relevant for the HBP is dual use research and technology. While the HBP (as all projects funded by Horizon 2020 should) focuses exclusively on civil applications, some of its research might have dual use relevance. Dual use issues have been the focus of internal discussions within the HBP Ethics and Society subproject for several years.

RELEVANCE OF RRI FOR EVOLVING UNDERSTANDING OF DUAL USE Against this background, this paper explores opportunities to apply RRI to the evolving concept of dual use internationally and within the EU. First, internationally, due to changes in security threats and technologies, the understanding of dual use has developed from its original focus on the civil-military distinction, weapons and national security to also include terrorist and criminal activities, non-state actors and human-centred security (Rath, Ischi and Perkins 2014). In addition to national authorities and international organizations, civil society increasingly contributes to the governance of dual use issues. RRI, with its focus on anticipation, reflection and engagement, could be a productive way to govern this broader understanding of dual use involving a wider range of diverse stakeholders.

Second, dual use research and technology issues are becoming increasingly relevant in the EU. Traditionally, the EU has been known as a 'soft power' based on persuasion, norms and values, as opposed to the 'hard power' of the US based on the military (Nye 2017). More recently, in light of new threats (e.g. terror, cyber) and a changing transatlantic relationship, the EU is developing its collective defence capabilities and emerging as a so-called 'smart power' combining elements of soft and hard power. These new efforts include, for the first time, support for collaborative defence research, which so far has been strictly national. The EU supports collaborative defence research through its European Defence Agency and the recently launched European Defence Fund, which after 2020 is expected to invest 500 million Euros in collaborative defence research, thus becoming one of the biggest defence research and technology investors in Europe. Additionally, the European Structural and Investment Funds have recently started to support dual use research projects. All of these developments indicate an increasing need to define the relationship between the EU's civil research (the Framework Programmes) and its emerging defence and dual use research. RRI, as an approach that is familiar to EU policy-makers and researchers, could be a useful tool for governing the relationship between civil and defence research at the macro level of EU policies as well as the micro level of research projects.

APPLYING RRI TO GOVERNING DUAL USE RESEARCH AND TECHNOLOGY To apply RRI to dual use research and technology, this paper will apply the four dimensions of the AREA framework - anticipate, reflect, engage and act - to dual use, identifying relevant techniques, actors and indicators. First, the anticipation dimension will deal with forecasting potential applications of dual use research and technology. Second, the reflection component will focus on contemplating possible responsible and irresponsible applications of dual use research. Third, engagement will involve a broad range of stakeholders from research, the military, policy, industry and civil society in deliberation and dialogue to negotiate common understandings of responsible and irresponsible dual use. Finally, the action part will develop and implement mechanisms for supporting responsible dual use and prohibiting irresponsible dual use. The particular contribution of the paper will be on this final aspect, the action arising from responsible dual use. The HBP is currently developing an Opinion on dual use which will highlight the key concerns and issues. In our paper we will introduce these and then discuss in more detail how they can be put into practice. This discussion will highlight the conceptual issues of dual use and, perhaps more importantly, the difficulties of realising practical action with regard to a contested concept such as dual use.
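Purely to illustrate the shape of such a mapping (the entries below are invented placeholders, not the techniques, actors and indicators the paper itself identifies), a minimal Python sketch:

```python
from dataclasses import dataclass, field

@dataclass
class AreaDimension:
    """One dimension of the AREA (anticipate, reflect, engage, act) framework."""
    techniques: list[str] = field(default_factory=list)
    actors: list[str] = field(default_factory=list)
    indicators: list[str] = field(default_factory=list)

# Placeholder content for illustration only.
dual_use_area = {
    "anticipate": AreaDimension(
        techniques=["technology foresight"],
        actors=["researchers"],
        indicators=["documented application scenarios"]),
    "reflect": AreaDimension(
        techniques=["ethics review"],
        actors=["ethics boards"],
        indicators=["recorded analyses of responsible vs. irresponsible uses"]),
    "engage": AreaDimension(
        techniques=["stakeholder dialogue"],
        actors=["research", "military", "policy", "industry", "civil society"],
        indicators=["breadth of stakeholder groups consulted"]),
    "act": AreaDimension(
        techniques=["governance mechanisms"],
        actors=["project governance bodies"],
        indicators=["mechanisms in place to support responsible dual use"]),
}

for name, dim in dual_use_area.items():
    print(f"{name}: {len(dim.actors)} actor group(s), {len(dim.indicators)} indicator(s)")
```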

Thus, this paper aims to make a conceptual and practical contribution to RRI and dual use issues by extending the RRI framework to a new, rapidly developing and contested area of dual use research and technology in the EU, in which there is particular interest in issues of big data, artificial intelligence and cyber security.

REFERENCES:
Aicardi, C., M. Reinsborough and N. Rose (2017) The integrated ethics and society programme of the Human Brain Project: reflecting on an ongoing experience, Journal of Responsible Innovation. Advance online access.
HBP (2015) Human Brain Project. Framework Partnership Agreement.
Nye, J. (2017) Soft power: the origins and political progress of a concept, Palgrave Communications. DOI: 10.1057/palcomms2017.8
Rainey, S., B. Stahl, M. Shaw and M. Reinsborough (2017) Ethics Management and Responsible Research and Innovation in the Human Brain Project. In: R. von Schomberg (ed.), Handbook - Responsible Innovation: A Global Resource. Edward Elgar Publishing Ltd.
Rath, J., A. Ischi and D. Perkins (2014) Evolution of Different Dual-Use Concepts in International and National Law and Its Implications on Research Ethics and Governance, Science and Engineering Ethics 20: 769-790.

16:30-18:00 Session 9C: IT, Civic Life & Political Culture
Location: Room 1.1
16:30
Discourse about Governmental eHealth Information Systems - Jargon, Non-sense and Quasi-Rationality

ABSTRACT. 1 Rational Discourse for the Development of Governmental Information Systems

There is a trend of developing citizen-centered governmental information systems (ISs). Since many of these ISs affect citizens directly, it has been suggested that citizens should be able to participate in their development. This means that citizens should be acknowledged and preferably also be part of development together with professional developers and researchers [see e.g. 1, 2, 3]. Finland, together with the other Nordic countries, is committed to supporting transparency of governmental actions, and more widely, transparency is now internationally regarded as essential to democracy [4, 5]. To highlight the importance of transparency, many democracies have joined the Open Government Partnership, which requires participants to commit to making their governments more open, accountable, and responsive to citizens [6]. For instance, Finland's current strategy aims at understandable communication that supports the possibility of citizen participation [7]. However, it seems that public communication about big changes in governmental processes and public information system development projects is at best confusing - hard to understand even for professionals - and at worst actually non-existent [8]. Thus, it seems that there is no real transparency and no possibility of rational discourse. Rational discourse in society should be the starting point and the target in such situations. Even though rational discourse may never be reached fully, we should aim at rational and open discourse. Instead, we are now in a situation where we have quasi-rational discourse, which we claim is more harmful than fairly admitting that our current political communication lacks rationality.

By rational discourse we refer to the public communication and structure of participation that Habermas [9] describes. Habermas was describing legislative discourse (big changes in governmental processes are usually legislative ones), and we claim that the demands set for legislative discourse should also be followed in big governmental IS projects, as these can have a huge impact on society [see also 10]. Habermas [9] sets several preconditions for rational discourse. First, all members and parties in society should have the possibility to participate in the discourse. This means that there must be a way to access the discussion, and thus transparency is mandatory. Secondly, all subjects of the legislation - here also the stakeholders of the processes and governmental ISs - must have the possibility to see the implications of the legislation and changes before they can agree to them. Thirdly, consensus is needed before the legislation or changes can be made. Fourthly, strategic games are not allowed in rational discourse, as they turn the discourse into a trade of choices in which actors accept goals they do not agree with if they gain some other benefit. This shifts the discourse from being based on the "best and most justified arguments" towards a struggle of negotiating power, which is not acceptable in Habermasian rational discourse. In reality, these preconditions are impossible to meet fully. Nevertheless, they can be used as indicators of how rational our discourse in society is, as sketched below. By quasi-rational discourse we mean a discourse that is disguised as rational discourse but does not actually meet the preconditions of rational discourse presented by Habermas [9]. We claim that this kind of discourse is harmful, since it does not allow participation, prevents the achievement of governmental ISs that are accepted by citizens, and thus also prevents citizens from gaining a justified position in society.
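As a rough sketch of how the four preconditions could be operationalized as indicators (the yes/no encoding and the scoring are our illustrative simplification, not Habermas's method or the full paper's):

```python
from dataclasses import dataclass

@dataclass
class DiscoursePreconditions:
    """Habermas's four preconditions, recorded as yes/no indicators."""
    open_participation: bool    # all parties can access and join the discussion
    implications_visible: bool  # stakeholders can see consequences before agreeing
    consensus_reached: bool     # changes rest on consensus
    no_strategic_games: bool    # no trading of goals for unrelated benefits

    def rationality_score(self) -> float:
        """Fraction of preconditions met; 1.0 approximates rational discourse."""
        flags = (self.open_participation, self.implications_visible,
                 self.consensus_reached, self.no_strategic_games)
        return sum(flags) / len(flags)

# Example: a discourse dressed up as rational but failing most preconditions.
quasi = DiscoursePreconditions(open_participation=True,
                               implications_visible=False,
                               consensus_reached=False,
                               no_strategic_games=False)
print(f"rationality indicator: {quasi.rationality_score():.2f}")  # 0.25
```

A quasi-rational discourse would present itself as scoring 1.0 while, on inspection, most of the indicators are false.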

2 Analysis of discourse from the perspective of the four principles of healthcare

In the full paper we present an example of quasi-rational discourse. For this purpose we discuss the public discussion about governmental information system development in Finnish healthcare. In Finland, the whole sector of social services and healthcare is currently undergoing a major reform [11, 12]. This reform also includes the renewal of governmental information systems, which in the healthcare sector will affect citizens directly by making healthcare services more digital [12]. Citizens have been encouraged by ministers to participate in discourse about this reform [13], but despite the possibility of using democratic services such as the otakantaa.fi discussion board, there is no discourse in the official forums [see 14, 15].

However, the government has faced criticism of the reform in the media. Despite the criticism presented by various professionals, the renewal is pushed forward by politicians without attempts to engage in rational discourse. Politicians have been giving quasi-rational arguments, usually based on the following idea: because of future economic pressure, we should reform our healthcare. The problem is that there is no consensus about what is going to be done, and no evidence that the reform will bring the desired changes, since governmental officials mainly share jargon and nonsense with citizens [see e.g. 16, 17, 18, 19, 20].

Shifting this kind of quasi-rational discourse towards rational discourse requires unambiguous and informative argumentation, which is still missing. Besides the requirements described earlier, the shift also requires a shared view of the values at the core of healthcare. Currently the discourse is focused on assumed savings, and actual values are rarely discussed. Four principles can be seen as the commonly accepted (though not the only) basis of medical ethics: respect for autonomy, beneficence, non-maleficence, and justice [21, 22, 23]. These principles could be considered a simplification of the codes of ethics for healthcare, but also for eHealth developers - as a necessary but not sufficient condition [24]. The analysis of how the discourse addresses the core values that should be at the center of healthcare is conducted in the full paper.

3 Preliminary conclusions Nowadays, society lacks rational discourse at the public level. Even professionals are bypassed because of the strategic games of political systems, in which good arguments are not truly respected. We need new indicators for our public communication about governmental IS development, and the preconditions of rational discourse should be used for that. Likewise, we should use indicators for the systems being developed, and for eHealth we could use the four principles of medical ethics. This will be presented in more detail in the full paper, as space is lacking here. Although rational discourse is an ideal that may never be met, that does not mean it should not be aimed for. Habermas's preconditions and the four principles serve as a good template for moving from quasi-rational discourse towards more rational discourse. In the full paper we will present not just why this should be done, but also how it could be done in our example case of Finland.

References

[1] Olli I. Heimo, Jani S. Koskinen, and Kai K. Kimppa. Responsibility in acquiring critical governmental information systems: whose fault is failure? In The possibilities of ethical ICT, Ethicomp 2013, conference proceedings, University of Southern Denmark, Kolding Campus, 12 to 14 June 2013, pages 213-217, 2013.
[2] Minna M. Rantanen and Olli I. Heimo. Problem in Patient Information System Acquirement in Finland: Translation and Terminology, pages 362-375. Springer Berlin Heidelberg, Berlin, Heidelberg, 2014.
[3] Richard L. Baskerville and Michael D. Myers. Design ethnography in information systems. Information Systems Journal, 25(1):23-46, 2015.
[4] Turo Virtanen. Finland: Active Reformer Looking for More Centralisation and Horizontal Coordination, pages 129-139. Edward Elgar Publishing, 2016.
[5] John C. Bertot, Paul T. Jaeger, and Justin M. Grimes. Using ICTs to create a culture of transparency: E-government and social media as openness and anti-corruption tools for societies. Government Information Quarterly, 27(3):264-271, 2010.
[6] opengovpartnership.org. Open Government Partnership, 2018. https://www.opengovpartnership.org/ Accessed: 2018-01-08.
[7] VM. Avoin hallinto. Avoimen hallinnon III toimintaohjelma 2017-2019. Report, Valtiovarainministeriö, Ministry of Finance of Finland, 2017.
[8] VTV. Tietojärjestelmähankintojen kustannusten ja hyötyjen suunnittelu ja seuranta - valtiontalouden tarkastusviraston tarkastuskertomukset 19/2017. Technical report, National Audit Office of Finland, 2017.
[9] Jürgen Habermas. Between Facts and Norms (W. Rehg, trans.). Cambridge: Polity Press, 1996.
[10] Kalle Lyytinen and Rudy Hirschheim. Information systems as rational discourse: an application of Habermas's theory of communicative action. Scandinavian Journal of Management, 4(1):19-30, 1988.
[11] Richard B. Saltman and Juha Teperi. Health reform in Finland: current proposals and unresolved challenges. Health Economics, Policy and Law, 11(3):303-319, 2016.
[12] STM and VM. Digitalisaatio, 2017. http://alueuudistus.fi/soteuudistus/digitalisaatio Accessed: 2017-07-12.
[13] Sipilä and Vehviläinen. Digitalisaatiolla tuottavuusloikka. Avoin kirje pääministeriltä sekä kunta- ja uudistusministeriltä. Open letter, Valtiovarainministeriö, Ministry of Finance of Finland, 2014.
[14] OM. SADe-ohjelman kansalaisen osallistumisympäristö -hankkeen loppuraportti. Versio 1.4. Report, Oikeusministeriö [Ministry of Justice], 2016.

17:00
Mobile Health Information Systems in Less Developed Countries (LDC): New insights, new hope?

ABSTRACT. Mobile phones have received a lot of attention from the academic and practitioner community as a suitable form of ICT to support health information systems (HIS). This optimism has increased with the advent of smartphones, whose features have made the gap between the capabilities of mobiles and computers very narrow. However, evidence from the literature shows that very little success has been achieved in this area. Further, a review of the literature shows that extensive theory-based knowledge that seeks to understand the process of deploying and using this technology as part of a new way of working is lacking. This paper therefore seeks to provide new insights that can bring new hope for implementers and sponsors of such efforts, leading to more sustainable mobile phone information system efforts. To this end, the paper uses a theoretical approach known as Actor-Network Theory (ANT) to examine a case in point: a nationwide effort to deploy a mobile phone-supported HIS in Nigeria. This system, known as MADEX (mobile application data exchange system), is studied from its inception, through development, to its eventual end. Interviews were carried out with a range of stakeholders who interacted with the system. The findings reveal that the exclusion of key actors from the network, the inability of the technology to develop strong relations with key actors, processes of betrayal, a lack of "network effect", evolving interests among actors, and the existence of competing actor-networks can restrict the benefits of such efforts.

17:30
Does the discourse over CCTV mist the concept of privacy?

ABSTRACT. 1) Introduction. The purpose of this paper is to examine the transformation of the "discourse" and gaze over CCTV in the case of two areas of Kansai, Japan. In particular, by paying attention to the discourse over CCTV installed on the street, we consider the conflict between privacy and safety and/or security.

This paper has the following structure. First, we outline the discussion about surveillance and CCTV. Next, we introduce two pioneering cases of full-scale CCTV introduction in Japan. The methodology adopted in this paper is a case study based on published data. One case is Nishi'nari-ku, Osaka City (commonly known as "Kamagasaki"). This zone is famous as a pioneering case of large-scale street CCTV, installed to maintain public peace in the surrounding area after riots by residents. The other case is Itami City. There, CCTVs have been installed throughout the area as a watching system for children and elderly people. With this kind of system at its core, the city uses the slogan "a safe and secure city"; in fact, 850 CCTVs were introduced, mainly across 17 elementary school districts. Finally, by comparing these cases, we show that the discourse on CCTV mists the concept of privacy.

2) Surveillance and CCTV In 2015, a case in which CCTV set up on the street and inside stores became the deciding factor in arresting a suspect was widely reported in Japan (the murder of two first-year junior high school students in Takatsuki). In this case, footage from multiple surveillance and security cameras became the deciding factor for the arrest (cf. Japan Today, 25/Aug/2015; 31/Aug/2015). In other words, security cameras had caught the suspect visiting several places.

This case suggests two features of CCTV surveillance: (1) expansion of the surveillance target (diversification of the monitoring composition) and (2) multilayered surveillance. First, the object of surveillance has become "a series of personal data" rather than "a living human being" (Lyon, 2001). In addition, the composition of the surveillance society in the digital age has diversified with the progress of technology. According to Murakami-Wood (2017), for example, the composition of monitoring can be categorized as (a) panoptic, (b) oligoptic, (c) synoptic, (d) perioptic, and (e) adiloptic. As we will see below, in the two cases the discourse over CCTV is clearly transitioning from panoptic to oligoptic.

Second, the progress of information and communication technology has given the public tools that can easily realize surveillance. Specifically, citizens can use surveillance technology as a form of self-defense rather than relying on the state (Eshita 2005). As we will see below, in the two cases the discourse over CCTV clearly shifts its emphasis from administrative to self-defense nuances.

3) Case study As mentioned earlier, two cases are introduced in this section.

3-1) Nishi'nari-ku, Osaka The zone of Nishi'nari-ku, Osaka City, is known as "Kamagasaki." It was born as a result of the slum clearance that took place in 1891. Due to the accumulation of cheap accommodations, it became famous as a town of day laborers. In this zone, riots occurred repeatedly from 1961 onward due to distrust of the police (21 times from 1961 to 1973). To maintain security, Osaka Prefecture installed 16 CCTVs on the streets of this zone of only 0.62 square kilometers. Residents complained about the invasion of privacy. In its ruling, the Osaka District Court questioned the grounds for installing individual cameras, and only the camera installed in front of a leftist activist base was declared illegal.

In 2014, Osaka Prefecture announced that it would install another 32 units after upgrading the existing 13 units to high-performance cameras. The aim is surveillance of the drug trade and of illegal dumping of garbage. The controversy over street cameras therefore continues.

3-2) Itami City Itami City contains the majority of the runway of Osaka International Airport (also known as Itami Airport). It is a bedroom community for the big cities of Osaka and Kobe, with a population of 190,000.

Itami City promotes the slogan "Japan's most secure and safe city." CCTV has been set up in all elementary school districts, and a "watching service" for elderly people and children is provided using it.

In the background is an incident that occurred in Kobe City in 2014, in which a first-grade primary school girl was kidnapped and murdered. Mass media reported that CCTV contributed to the arrest of the suspect. For that reason, Itami City proposed the installation of 850 CCTVs in the city's 17 elementary school districts.

From December 2014 to January of the following year, regional roundtables were held in all elementary school districts, with a total of 520 participants; most of the 510 citizens among them approved the installation. After that, in June and August 2015, local briefing sessions were again held in all the elementary school districts (total attendance was 473 people) in order to explain the outline of the system. In addition, the city accepted public comments from July 2015 to August 2015. There were three submissions, and all were in agreement. As a result, the CCTV system was named not a "surveillance camera" but "a safety and security watching camera."

In addition, Itami City began a "location information notification service" using not only the full-scale operation of CCTV but also beacon technology. In this service, beacon receivers are installed in the already-deployed CCTV units, and position information is acquired for children (and elderly people with dementia) who carry small beacon transmitters. It was the first time in Japan that such a service was offered by a government body, so it attracted much attention nationwide.
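The abstract gives no technical details of the Itami service, so the following Python sketch is only a guess at its general shape: a receiver mounted in a CCTV unit hears a pre-registered transmitter, and a notification is generated for a guardian. All identifiers, fields and the registry are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class BeaconSighting:
    beacon_id: str    # ID of the small transmitter carried by a child or elderly person
    receiver_id: str  # ID of the CCTV-mounted beacon receiver
    rssi_dbm: int     # received signal strength, a rough proximity proxy
    seen_at: datetime

# Hypothetical registry of pre-registered beacons and guardian contacts;
# only registered subjects are tracked, which is what makes the design oligoptic.
GUARDIANS = {"beacon-0042": "guardian-0042@example.org"}

def notify_guardian(s: BeaconSighting) -> Optional[str]:
    """Build a notification if the beacon is registered; ignore it otherwise."""
    contact = GUARDIANS.get(s.beacon_id)
    if contact is None:
        return None
    return (f"to {contact}: {s.beacon_id} passed {s.receiver_id} "
            f"at {s.seen_at.isoformat()} (RSSI {s.rssi_dbm} dBm)")

print(notify_guardian(BeaconSighting("beacon-0042", "cctv-school-07", -68,
                                     datetime.now(timezone.utc))))
```

Note that the same sighting log, queried differently, would support exactly the kind of "secondary use" discussed below; nothing in this architecture by itself prevents it.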

4) Discussion In the case of Osaka, the composition was one of police power monitoring residents. At that time, many residents thought "it does not matter to me" or took a "nothing-to-hide" stance (cf. Solove, 2011). Of course, however, it produced a structure of conflict between the monitoring side and the monitored side.

In the case of Itami, no such confrontational composition is found. The watching over appears to have been delegated from the residents to CCTV; as a result, CCTV was accepted as an extension of the residents' gaze.

From the above discussion, I would like to draw the following conclusion. The driving force of CCTV demand in Japan in recent years is not progress in the physical characteristics of the systems but a change in the discourse over CCTV. CCTV is widely accepted by appealing to the sense of security provided by watching over people, rather than to security maintenance and crime prevention (multiple cities are considering CCTV introduction with reference to the Itami case). However, such a trend has the potential to mist our gaze at the monitoring system itself.

In the case of Itami, the watching over of children and elderly people registered in advance is emphasized. In other words, the surveillance in this case can be understood as the monitoring of specific subjects (oligoptic). Therefore, in the case of Itami, the image of panoptic surveillance is weak. However, its architecture is nothing but a panopticon. It is thus difficult to judge the properties of a monitoring system by its physical characteristics alone. At the same time, it is impossible to deny the possibility that the watching system will turn into Big Brother through "secondary use" of the monitoring system. In order to establish peace of mind and safety, constant monitoring of the operation of the monitoring system itself is indispensable.

References
[1] Murakami Wood, D. (2017) "The Global Turn to Authoritarianism and After," Surveillance & Society, 15(3/4), pp. 357-370.
[2] Lyon, D. (2001) Surveillance Society: Monitoring Everyday Life, McGraw-Hill.
[3] Eshita, M. (2005) New dimensions on surveillance society, Journal of Policy Studies, Vol. 20, pp. 206-207. (in Japanese)
[4] Solove, D. J. (2011) Nothing to Hide: The False Tradeoff Between Privacy and Security, Yale University Press.

Websites
[5] The Japan Times, Osaka murder suspect has history of similar crimes, 31/Aug/2015, https://www.japantimes.co.jp/news/2015/08/31/national/crime-legal/osaka-murder-suspect-history-similar-crimes/
[6] Japan Today, Traces of blood found in minivan of suspect in child killings, 25/Aug/2015, https://japantoday.com/category/crime/traces-of-blood-found-in-minivan-of-suspect-in-child-killings
[7] http://www.hanshin-anshin.jp/machinaka/
[8] http://news.mynavi.jp/articles/2017/02/16/mimamorume/
[9] https://www.kobe-np.co.jp/news/hanshin/201705/0010190503.shtml
[10] http://icais.or.jp/activity/doc/h28/170330_kenkyukai_Part1.pdf
[11] http://www.sankei.com/west/news/160119/wst1601190087-n1.html

19:30-22:00 Gala Dinner

To be held at Restauracja Cała Naprzód, Tokarska 21/25, 80-888 Gdańsk, Poland.

There will be a bus taking people to the restaurant.

**IF YOU ARE STAYING AT THE SHERATON, PLEASE LET MARTY WOLF KNOW**

The bus leaves Focus Sopot at 6:45

The bus leaves SWPS at 6:55 (Villa 33 guests, please walk here.)

The bus leaves Novotel Gdansk Marina at 7:00

The bus leaves Mercure Gdansk Poseidon at 7:05

The bus leaves Focus Gdansk at 7:15

Everyone will receive a train ticket for returning to their hotels.