ETHICOMP 2018
PROGRAM FOR MONDAY, SEPTEMBER 24TH

09:30-10:45 Session 2: Jarosław Greser Keynote: Law or Ethic? Personal Data Regulation and Internet of Things

Law or Ethic? Personal Data Regulation and Internet of Things

There is no doubt that the Internet of Things is the next ‘big thing’ in the computer industry. It is estimated that 20 billion IoT devices will be connected by 2020. This will transform markets from retail to services, help to develop concepts such as smart cities, and introduce new products such as autonomous cars.

The concept of the IoT is based on creating and processing data. Much of this data constitutes information relating to an identified or identifiable natural person. For instance, IoT devices can collect an identification number, location data or an online identifier. Such information should therefore be treated as personal data and, as such, is protected by national regulation, international agreements and self-regulation.

Due to the regulatory challenges posed by the internet environment, national legal frameworks are not sufficient to set an adequate standard of protection for natural persons. The cross-border nature of IoT activities should lead us to international law, all the more so because the protection of personal data is part of the right to privacy. Nevertheless, this protection cannot be considered sufficient because of its limited application to companies.

The weakness of self-regulatory mechanisms is their lack of sanctions. Because of their voluntary character, they cannot be applied to everyone on the market. In effect, only sufficiently motivated or principled companies take part in them, as market pressure is not strong enough to oblige everyone to adopt the respective rules.

The issue I would like to discuss in my presentation is how to regulate personal data protection in a way that is adequate for the protection of human rights while simultaneously enabling technological and economic development. This is especially important in the light of the EU’s General Data Protection Regulation, which comes into force on 25 May 2018. I will draw on these rules to demonstrate the strengths and weaknesses of legal regulation compared with solutions based on ethics.

Location: Theatre 0.4
10:45-11:15 Break and Refreshments
11:15-12:45 Session 3A: Privacy
Location: Room 0.8
11:15
The privacy and the publicness in Japan, East Asian countries and Southeast Asian countries in the information era

ABSTRACT. In this paper, I will focus on the following IIE (intercultural information ethics) topics in Japan, East Asia and Southeast Asia: 1) what do people in Japan, East Asia and Southeast Asia think about privacy and related matters in the information era?; 2) what kinds of values or broader perspectives lie behind these views on privacy and related topics in Japan and East (Southeast) Asia?; 3) how might we rethink the definitions of ‘privacy’ and ‘the publicness’ once we see Eastern people’s ideas about these matters, which might differ from those in the ‘West’? I will examine these points by analysing the research data I have collected and the related literature on this subject.

Outline of this paper. 1. Poverty of ethical discussions on privacy in the ‘Far East’? In my view, ethical discussions on privacy and related topics in the information era are not as ‘hot’ in Japan and other East Asian countries as they are in ‘Western’ countries. This might be explained in various ways: (1) people in the ‘Far East’ have a strong orientation towards collectivism rather than individualism, and this causes a lack of interest in privacy; (2) the contradictory meanings of ‘private’ or Watakushi (a synonym of ‘private’ with various nuances in Japanese: Watakushi is important for inner dependency but is under pressure from orientation towards family, state or the publicness) might make this matter more difficult for Japanese and other Asian people to evaluate, as I have explained elsewhere (Author et al., 2005).

2. Discussions on privacy in the East and in the West: what are privacy, the private and the publicness? Although ethical discussions on privacy are not so ‘hot’ in the East, this does not necessarily mean that there are no discussions on this matter among scholars in the East and in the West. On the contrary, as we will see later, some scholars show interest in this topic. The discussions on this topic can be summarized as follows. 1) Discussions on the cultural differences and on the unquestioned cultural presuppositions behind the Western concept of privacy. Rafael Capurro says that comparison of the concepts of privacy in the ‘Far East’ and the ‘Far West’ is important, but that it becomes meaningless if we are unaware of the unquestioned cultural presuppositions behind the Western concept of privacy. Capurro goes on to say that we forget the importance of the whole communication structures from which different views on privacy arise, i.e. structures including human relations, political structures, people’s ways of life and the sense of what a good life is. Capurro says that even in the West people forget the tension between the public and the private, which is deeply rooted in the Greek distinction between oikos and polis (Capurro, 2005; Capurro and Author, 2009). Furthermore, Capurro insists that today we forget the distinction between ‘who am I?’ and ‘what am I?’ as forms of self-identification. This is important when we discuss the whole scope of privacy matters. He says that ‘what am I?’ relates to data protection and to definitions of public and private data, while ‘who am I?’ relates to the interplay of a human being’s life as shared with others. The possibility of hiding, displaying or showing oneself off to others is a way of self-identification (Capurro, 2013; Capurro, Eldred and Nagel, 2013). 2) Comparison of the East and the West.
What Charles Ess tries to do is to examine the differences (and similarities) between the meanings of privacy in the East and the West, paying attention to the distinction between individualism and collectivism (Ess, 2005; Ess, 2006). 3) Unawareness of the characteristics of one’s own cultural traditions in the East. In my view, the kind of ‘cultural unawareness’ described by Capurro happens in the East too. As I mentioned before, the term Watakushi has complex and contradictory meanings. Watakushi is often associated with egoism, selfishness and unfairness. At the same time, Watakushi is associated with sources of inner values, emotional meanings, Mononoaware (a subtle sensitivity to nature and life) or the schema of self-reflection in Japanese minds (Capurro and Author, 2009). The discussion on Watakushi goes back to the Kokugaku tradition in the feudal Tokugawa era (Morse, 1974). But people often forget this tradition (Hideo Kobayashi, 1979; Kojin Karatani, 2001). 4) Unawareness of the distinction between ‘particularism and universalism’ or ‘the public and the private.’ According to Sakuta (1972), using T. Parsons’ terms, differences in terms of ‘universalism vs. particularism’ and ‘achievement (what did he/she do?) vs. ascription (who is he/she?)’ are important for understanding human relations and people’s ways of life in the ‘Far East.’ In this sense, the distinction between ‘who’ and ‘what’ might be more complicated in the East. It seems that ‘who’ is interrelated with ‘what’ in the East, at least in China.

3. Finding potential ethical discussions on privacy and related topics in the East. The following table (Table 1) shows the findings gained from my surveys conducted in Japan and other Asian countries (‘2010CG’ means that the survey was conducted in 2010 in China). In a way, the findings shown in this table confirm the appropriateness of the discussions mentioned above, but in my view this confirmation is the first achievement in the field of IIE gained through empirical research. The questionnaires were prepared through examination of the discussions mentioned above and my interviews with Japanese and other Asian people. We found: 1) Eastern people think that ‘privacy’ is an important human right and value. 2) On the other hand, Eastern people do not think that privacy is an absolute value outweighing all others. We can find some sort of tension between these values. 3) Disclosing part of one’s inner world to others is highly valued by the majority of Eastern people as a means for better mutual communication (see Capurro, 2013).

Table 1. Views on privacy in Japan and East (Southeast) Asia (What are your thoughts about the various views on privacy shown in the following list?) (table omitted due to space constraints)

4. Findings on the inner structure of privacy in Japan and East (Southeast) Asia. In order to grasp the broader scope of the meanings of privacy (as Capurro suggests), I carried out additional analyses on the research data. Tables 2 and 3 show part of the findings gained from these analyses. These are: a) Views on privacy are interrelated with other cultural-ethical-existential views on life and this world. In other words, views on privacy are located within broader cultural, existential and communication structure(s), as Capurro said. b) We found statistically significant correlations between ‘privacy’ factors (two factors gained through factor analysis on the privacy-related items listed in Table 1) and people’s tendency to accept, reject or show sympathy with various types of values in life (denial of selfishness, contempt for material wealth, sympathy with human values such as honesty, and so on) (Table 2). c) We also found statistically significant correlations between views on privacy and views on the meaning of ‘social sacrifice or victims in the case of disasters and accidents’ (Table 3). d) This finding, that privacy sits within broader cultural, existential and communication structure(s), holds in Japan and in other East (Southeast) Asian countries too.

Table 2. Correlations between ‘privacy’, ‘criticism of modern civilization’ and ‘orientation to human relations based on mutual respect’ (data: 2015CG China) (table omitted due to space constraints)

Table 3. Correlations between ‘privacy’ and ‘sympathy for social sacrifice’ (data: 2015CG China) (table omitted due to space constraints)

11:45
Current situations in Japan under privacy concerns on household robots

ABSTRACT. Ethics of AI: At Home and in the Workplace

12:15
Homo-Cyber-Connecticus: Atapuerca and the moral dilemmas beyond fitness wearables

ABSTRACT. New Year. 2018 has arrived full of healthy intentions. Among others: enrol in a gym and start practising sports to shed the extra kilograms gained over Christmas meals. Feeling healthier is a strong motivation, but not the only one. Being able to show our goals to the world makes the tough effort of practising sports such as jogging, hiking, skiing, aerobics, acting or spinning more attractive.

Behind each journey to get rid of our excess calories, there is a wearable/insideable acting as a witness to our incredible achievements; moreover, our electronic acquaintance will record, calculate and chart all our progress, making all the data ready for publication on social media.

Halfway between the pleasure of showing we are able to do sports and being a “geek” lies the moral dilemma of whether a person is more cyborg than human being, or more human being than cyborg. Tavani (2012) tried to find a definition of cyborg by considering such questions as well: “are humans becoming more computer-like or are computers becoming more human-like?” And halfway between the human being who thinks as a human being and the human being who thinks as a cyborg, industries are taking advantage of the consumption fever, whose wrapping paper is pure Artificial Intelligence in itself.

Since the times of the palaeontological site of Atapuerca, cradle of Homo antecessor, humans have evolved not only in appearance but also in behaviour. Nowadays we can be described as Homo-Cyberneticus (Duus, 2017) or Homo-Connectus (Case). The former had privacy and security problems; the latter has them as well, and even more so.

Undoubtedly, we are progressing technologically very fast; there are several concerns about whether we are doing so cleverly. Social media has made room for itself in our lives, draining a huge portion of our time, bringing benefits in the short term but certainly at a high emotional cost in the medium and long term. Recent studies are trying to find out whether this trunk full of techno-gadgets is somehow altering brain activity and disturbing its evolution. It may be causing a kind of regression that some authors think can cause damage leading to a lack of communication (York), emotional disorders (stress, anxiety, phobia, etc.) or, more specifically, technostress (Craig).

Each year we have an increasing number of electronic devices ready to provide us with figures and exact values about our day-to-day activities and routines. They give us the feeling of being under the control of a higher intelligence that dictates what and when to eat and at what time we should go to bed. It records how much time we sleep, what the quality of our night’s rest is and even whether it is time to go for a walk, to meditate or to rest. We fear that human beings are leaving too much decision-making to an unknown entity. It seems alarming and scary. Is it feasible that human beings could live under technological control in the future? Will wearables or insideables make decisions on behalf of humankind? Will individuals lose their freedom and privacy? Am I, are you, a cyborg already? If we are not yet, is that moment far away?

All these futuristic questions begin to make sense when checking quantitative studies. Recent research from Ericsson ConsumerLab (2016) points out that “two out of five users feel naked when they don’t have their wearables on, whilst around a quarter even sleep with them”. Deloitte (2015) reveals the top 10 European fitness markets by penetration: “based on membership, Germany is the largest market in Europe with a total of 9.5 million. The UK is second with 8.8 million members followed by France (5.2 million), Italy (5.1 million) and Spain (4.9 million)” (p. 7). It also points out that there are 8,332 clubs, 9.46 million members, a €45.3 average monthly membership fee, 11.6% penetration and €4,830 million in total revenues. Regarding Spain, in 2017 the fitness market increased its revenues by 1.92%, with 5.06 million members (+2.4%) and 10.9% penetration (Hollasch, 2017). McFit, Basic-Fit and Pure Gym are the three top operators by volume in Europe (EuropeActive, 2017). On the other side, figures from NPD Connected Intelligence/WEAR indicate that 45% of smartwatch owners use the activity-tracker function daily (U.S. consumers aged 18+) (Scott, 2017), and 24% of Spanish citizens monitored their health with a fitness band, clip or smartwatch in 2016 (Statista, 2018).

Hence, bearing in mind the quick acceptance of fitness devices in Europe, and specifically in Spain (the sixth country in which citizens most monitor their health) (Redacción Médica), the emerging dependence on them, the loss of self-control in favour of technology, the loss of privacy and security and the negative consequences of these factors, it is worth devoting further study to tackling the mystery surrounding the future of humanity, whether in real or fictitious scenarios, or, maybe, as Botsman chronicles in her book, Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart (2017).

And, through all this wave of statistics and data, emerging products, a visible and ostensible market, etc., “the real questions about the future of trust are not technological or economic; they are ethical. If we are not vigilant, distributed trust could become networked shame. Life will become an endless popularity contest, with us all vying for the highest rating that only a few can attain” (Botsman).

Finally, it is necessary to clarify that this study does not aim to deny the benefits of technology, but to provide a humanizing approach to them. All in all, what is indisputable is that when we are being watched, we make an effort to behave better.

The purpose of this paper is to analyse the moral dilemmas that fitness devices carry with them in Spain, their rate of market penetration, the role of wearable/insideable consumers and the governing regulations on privacy and surveillance associated with this apparently healthy world. This study is a cross-cultural analysis completed with data from Chile, Mexico, the USA, Japan, India, China and Germany. The research group is named C3O (Cross Cultural Cyborg Observatory) and is composed of researchers from the above-mentioned countries.

11:15-12:45 Session 3B: Video Games
Location: Room 0.9
11:15
The Road to Gamification is Paved with Good Intentions: A discussion on gamification’s ethicality

ABSTRACT. During the past few years, gamification has emerged as an ever-growing area of research and new applications. The concept refers to the use of game design elements in non-game contexts, and it has often been applied to improve people’s motivation in otherwise arduous and repetitive tasks. Consequently, ethical questions are prevalent when solutions play on people’s natural playfulness. This study departs from the extant literature, which has focused on either the bright or the dark side of gamified solutions, and focuses on the question of whether gamification itself is ethically justified at all. The ethical discussion of gamification presented in this study is based on Kantian deontology, utilitarian consequentialism and Aristotelian virtue ethics. This study shows that gamifying work can cause more problems than it solves.

11:45
Are Video Games Designed to be Addictive?

ABSTRACT. 2018 began with a shock for gamers worldwide, as their favourite pastime was classified as a disorder by the World Health Organization [1]. A group of scholars has opened a debate regarding the Gaming Disorder proposal, raising arguments such as the “low quality of the research base, the fact that the current operationalization leans too heavily on substance use and gambling criteria, and the lack of consensus on symptomatology and assessment of problematic gaming” [2]. Even though they appeared in the early 1950s, video games did not gain much popularity until the 1970s and 1980s [3], as arcade games and arcade shops rose to fame. Since then, gaming culture has grown rapidly, stretching across various types of platforms, target audiences, and so on. However, with its rise to a multimillion-dollar industry, has the goal of video games shifted from creating a free-time activity to producing low-quality, recycled content as part of a money-making scheme? The most played game to this day is Tetris, developed in 1984. The reason the game is so popular, with no chance of being toppled from the leaderboard any time soon, is its simplicity and the effect caused by playing it for long periods of time. Users who devote long periods to playing Tetris have experienced the Tetris Syndrome/Effect, which causes people to involuntarily visualize shapes and patterns and even to experience dreams related to the game’s patterns [4]. Many video games have been known to cause a similar effect, but not with the same intensity, so the effect lasts only a short time, and some game companies aim to reignite it by releasing more and more content.
Gamers have experienced specific events in their everyday lives that they have seen in video games, such as markers, score tables, in-game tips, background music, in-game sounds, etc. [5]. Depending on the user and the game, the Tetris Effect can occur and can affect the way people do things in their actual lives. It has been noticed that even films and television can have a short-term Tetris Effect: for example, the film Psycho has caused a pathological fear of showers, and an episode of Doctor Who has caused people to be afraid of statues due to their horrifying representation in the series [6]. Some tabletop games have also been known to leave an after-effect, as seen in Dungeons & Dragons players who have categorized people from their daily lives into character sheets with specific abilities and skills [6]. The Tetris Effect is a behaviour observable in actions that occur repeatedly even in the non-gaming world. But is this element implemented in a video game’s mechanics with the sole purpose of making the game more addictive? Do game designers and game companies purposefully add a repetitive mechanic in order to retain their target audience and attract more and more players? This paper focuses on which elements within games trigger the behavioural patterns that cause users to remain engaged in a game for long periods of time. According to Davidow: “Much of what we do online releases dopamine into the brain’s pleasure centres, resulting in obsessive pleasure-seeking behaviour. Technology companies face the option to exploit our addictions for profit. Application providers were simply supplying customers with services that made their products more appealing.” [7] Gaming has developed from a single game per console to the same game on multiple consoles, allowing users to access the game from different devices simply by logging into their user account.
This phenomenon became popular with the rise of Facebook, when social games began to appear, such as FarmVille, in which players were given a virtual farm that they had to maintain with the help of other users. The game reached the peak of its popularity after its release in 2009, with a total of 83.76 million monthly active users [8]. The game maintained a high position on the list of the most played games on Facebook for two years, sustained by regular updates, holiday events and bonuses, until it began its downfall. Do companies purposefully aim to create an addictive relationship between the game and the player? Do designers aim for their games to be addictive, or do users allow the simpler games to rise above the more complex ones? Are players too gullible, attaching themselves too fast and too easily to games that distract them from their surroundings and responsibilities? Are parents to blame for introducing their children to simple video games so that they remain distracted long enough while the parents take care of chores and free-time activities? Do parent-imposed restrictions cause addictive behaviour [9]? Are we to blame the designers, or are we to blame ourselves? Does the rewarding system make gamers more satisfied with their accomplishments in the virtual world than with those in real life? This paper examines all these questions, trying to give a convincing answer to each.

12:15
Come for the game, stay for the cash grab: the ethics of loot boxes, microtransactions, and freemium games

ABSTRACT. One of the biggest current controversies surrounding video games involves loot boxes. A loot box is a virtual container in a game which, when opened, contains one or more in-game items selected randomly from a list of possibilities. On its face this may not seem so controversial: video games have been putting random items in treasure chests for decades. However, loot boxes are typically purchasable for real world currency; they are not obtained solely by defeating enemies in the game or accumulating in-game currency.

The prospect of obtaining in-game items for real money (generally referred to as "microtransactions") has been controversial for years. In 2009, Blizzard Entertainment started selling in-game pets to World of Warcraft players for real money. These pets were purely cosmetic items; they provided players with no advantage in the game. Nonetheless, there was controversy over whether this would open the door to being able to buy in-game gear with real money. While that has not materialized in this game, many single-player games have offered such items for pre-ordering a special edition of a game, so there is some precedent for the fear.

Mobile games often use a financial model in which the game itself is free but players pay real money for in-game items; frequently this takes the form of a currency which can be used to speed up various time-consuming actions in the game. Referred to as “freemium” games, players often grumble about them but do seem to recognize that some kind of payment is necessary for the labor that goes into designing a game. However, the practice of including microtransactions in a game that players have already spent $50 or $60 on has generated much more controversy.

This has come to a head recently with Middle-earth: Shadow of War and Star Wars: Battlefront II, both of which announced that they would contain microtransactions (although the latter had them removed prior to release due to the overwhelmingly negative backlash). A number of jurisdictions have indicated legal interest in whether loot boxes constitute a form of gambling and thus should be subject to similar regulation. Regardless of the ultimate legal decision, this leads to a number of interesting ethical questions.

In this paper I will be investigating the ethics of freemium games, microtransactions, and loot boxes. Two distinctions are relevant here. First, there is a difference between a fixed-reward microtransaction and a random one, such as a loot box. In the former, a player knows exactly what she is purchasing and how much it will cost her. In the latter, a player knows how much she is paying for the loot box, but she does not know what is inside; as such, it is far more difficult to calculate whether the expenditure is worth it. Furthermore, there is a concern that the reward mechanism is addictive, so a player may end up spending money far past the point of reason in pursuit of an item. (With that said, even fixed microtransactions may be problematic – a number of freemium games seem to trade on the sunk cost fallacy; if I have already invested several hundred dollars in defending my city from other players, it is hard to stop. While not quite the same as addiction, there is a similar concern.) Of course, collectible card games function on a similar principle – players do not know what cards will be inside a particular pack they buy – so it is worth considering why loot boxes seem so different to most gamers.

Second, there is a difference between cosmetic items and those which affect gameplay; this is particularly pronounced in multiplayer games, where a player might have an advantage over another through the expenditure of real money. Cosmetic items are not terribly concerning, since they are not required to play a game and give a player no advantage. A player can play League of Legends just as well in the default appearance (or "skin") of a character as he can in a fancy one. While there are still ethical issues concerned with, say, pricing of those skins, they are not markedly different from selling any piece of artwork, whether virtual or physical.

Gameplay-affecting items are a different story. Most multiplayer games trade on the idea of skill: some players do better than others because they are more skilled, and a player can do better by becoming more skilled – in-game effort will be rewarded. The ability to do better by spending money (what gamers derisively call "pay to win") seems to fly in the face of this. It seems unfair.

I am not convinced it is quite this simple, however. Players have long accepted the idea that advancing and becoming skilled in a game requires a lot of play time. At a superficial level, this appears to be a simple trade: spend time in-game and become better at things in the game. However, the ability of players to spend that time is due to factors which are external to the game. A person who is working multiple jobs or who has family commitments may not be able to spend the same amount of time as someone who does not. As such, the idea that paying money in a game is radically different than paying time to play a game falls apart; they both are subject to the real world situation of the player.

Nevertheless, I believe that “pay to win” is still problematic because frequently having money and having time correlate; people who are financially comfortable do not have to work extra jobs and thus have more time. While there is a swath of gamers that this would assist, particularly those with family commitments, it would widen the gap between gamers in the lowest socio-economic brackets and those in higher ones: people who have neither time nor money will be worse off than people who can use one of these factors to advance, and much worse off than people who can use both of these factors to advance. Given that most societies are not as financially mobile as we might like, to a large extent one’s socioeconomic status is involuntary. In multiplayer games, players are ideally on an equal footing – I do not believe the adoption of microtransactions will help this, in general.

Single-player games are somewhat different, since fairness is less of an issue when there is no competition between players. In this case, having optional extras available for purchase is less ethically problematic, so long as the financial model itself is ethical. Star Wars: Battlefront II fell afoul of this because many players felt that a game should not charge both for the base game and to unlock specific characters. This is debatable, since characters have been added in downloadable content for a long time. However, while wishing to sidestep the debate over day-one downloadable content as much as possible, suffice it to say that gamers tend to think that a game should include everything that has been created for it; they are willing to pay extra for future content, but not for what seems to be current content. As such, microtransactions are still a subject for discussion in single-player games, although for different reasons than in multiplayer games.

Partial Bibliography

Evers, E. R. K., van de Ven, N., & Weeda, D. (2015). The Hidden Cost of Microtransactions: Buying In-Game Advantages in Online Games Decreases a Player’s Status. International Journal of Internet Science, 10(1), 20-36.

Georgieva, G., Arnab, S., Romero, M., & Freitas, S. d. (2015). Transposing freemium business model from casual games to serious games. Entertainment Computing, 9-10, 29-41. doi:10.1016/j.entcom.2015.07.003

Grundy, D. (2008). The Presence of Stigma Among Users of the MMORPG RMT: A Hypothetical Case Approach. Games and Culture, 3(2), 225-247.

Hamari, J., Alha, K., Järvelä, S., Kivikangas, J. M., Koivisto, J., & Paavilainen, J. (2017). Why do players buy in-game content? An empirical study on concrete purchase motivations. Computers in Human Behavior, 68, 538-546. doi:10.1016/j.chb.2016.11.045

Hamari, J., & Keronen, L. (2017). Why do people buy virtual goods: A meta-analysis. Computers in Human Behavior, 71, 59-69. doi:10.1016/j.chb.2017.01.042

Hamari, J., & Lehdonvirta, V. (2010). Game design as marketing: How game mechanics create demand for virtual goods. International Journal of Business Science and Applied Management, 5(1), 14-29.

King, C. (2017). Forcing Players to Walk the Plank: Why End User License Agreements Improperly Control Players' Rights Regarding Microtransactions in Video Games. William & Mary Law Review, 58(4), 1365-1401.

Macey, J., & Hamari, J. (2018). Investigating relationships between video gaming, spectating esports, and gambling. Computers in Human Behavior, 80, 344-353. doi:10.1016/j.chb.2017.11.027

Shi, S. W., Xia, M., & Huang, Y. (2015). From Minnows to Whales: An Empirical Study of Purchase Behavior in Freemium Social Games. International Journal of Electronic Commerce, 20(2), 177-207. doi:10.1080/10864415.2016.1087820

Valdes-Benavides, R. A., & Hernandez-Verme, P. L. (2014). Virtual Currencies, Micropayments and Monetary Policy: Where Are We Coming from and Where Does the Industry Stand? Journal of Virtual Worlds Research, 7(3).

11:15-12:45 Session 3C: IT, Civic Life & Political Culture
Location: Room 1.1
11:15
Creating an alternative narrative about gun control: Narrative analysis of #GunControlNow

ABSTRACT. This paper considers the narrative that emerges out of the analysis of tweets related to #GunControlNow. Using nearly 11,000 tweets generated after the 2017 Las Vegas shooting in the USA, the analysis creates a narrative map from the tweets, also called narrative bits (narbs). The narrative map offers a story about the way the tweets show the ongoing relationship between the tweeters and their opinions about the status of guns in American society.

11:45
Civic Social Media Engagement Strategies: Are Political Counter-Memes Influential, Symbolic, and Persuasive?

ABSTRACT. In the wake of the 2016 U.S. election, social media companies such as Facebook and Twitter revealed that the Russian government had used their platforms to spread covert political ads. This study draws on literature related to memetic audience behavior and misinformation and involves the creation of political counter-memes as a way to combat misinformation and propaganda. A qualitative analysis of the counter-meme campaign was conducted to investigate whether the counter-memes were influential, symbolic, and persuasive.

12:15
We've Got the Old Voting Technology Blues

ABSTRACT. By William M. Fleischman and Kathleen Antaki

Keywords: electronic voting devices, election infrastructure, election law, technology and the law

Categories: Applied Computing – Computing in Government – Voting/election technologies; Law, social and behavioral sciences – Law

Corresponding Author: William M. Fleischman. Email: william.fleischman@villanova.edu

Introduction

The 2016 Presidential Election in the United States revealed once again widespread shortcomings in the electronic infrastructure of registration and voting systems. In this paper, we discuss system vulnerabilities that raise questions as to the integrity of national, state, and local elections, and make several recommendations for mitigating them. We concentrate on three aspects of the infrastructure – electronic voting machines; hardware, software, and algorithms involved in maintaining voter registration lists; and the influence of social media in defining the informational landscape of the election process.

Everything pertaining to elections in the U.S. is local. Local control of election equipment, processes, and procedures is jealously guarded. Where federal intervention has occurred due to efforts to exclude or discourage voting by certain groups – primarily African-Americans in the states of the former Confederacy – there has been resistance. Recent lawsuits and political measures have sought to terminate federal oversight of elections in these contexts.

Nevertheless, the bitterly contested aftermath of the 2000 U.S. presidential election provided the precedent for federal legislation – the Help America Vote Act (HAVA) – concerning voting machines and registration procedures binding on all the states. Unfortunately, the legislation was hastily drafted and carelessly implemented, giving rise to problems that have plagued recent elections throughout the country and, in 2016, to suspicions that electoral processes were vulnerable to attack by external agents.

Although our discussion is focused on the United States, the questions raised – tensions between technology and the law; ethical aspects of the development of hardware and software systems with public-welfare critical significance; ensuring the security of these systems; careless or unsupervised application of personalization algorithms on social media leading to problematic or undesirable consequences – are of general relevance.

Electronic Voting Devices

Direct Recording Electronic (DRE) voting machines have a serious flaw. Without any tangible permanent record (on paper, for example) that permits voters to see how their choices have been recorded, there is no way to conduct a credible, independent audit of any election in which the outcome is contested or where there is suspicion that voting machines have malfunctioned in recording and storing the results. The latter has actually happened. In at least one case, owing to the small number of votes cast, the malfunction was confirmed when a sufficient body of voters filed affidavits revealing their actual votes.

In addition, several researchers have demonstrated hacks of DRE machines that can be carried out given ten minutes' physical access (a credible condition, due to the lax chain-of-custody practices frequently observed in advance of an election). These hacks alter the votes in a manner prescribed by the attacker while maintaining consistency among all internal records of individual (manipulated) totals and actual votes cast.

While critics have pointed out the folly of attempting to influence the result of any significant election by altering the function of a single voting machine, a more damaging hack has been demonstrated by Andrew Appel of Princeton University. It exploits a design vulnerability in a dual function cartridge that serves as both an aid to visually impaired voters and as the means of recording the actual vote totals. In this two-stage hack, the changes written to the cartridge of a single voting machine infect the election district machine that aggregates the votes. The virus is then broadcast to the entire election district when this backend computer is used to set up machines for the next election.

Given these defects, how is it that such devices are still being used in U.S. elections? The problems here revolve around legislatures that are either unaware of or resistant to acting on technological developments that argue for changes in laws governing electronic voting systems. In one striking case, persistent litigation led by Appel and Penny Venetis of the Rutgers University School of Law culminated in 2006 in the passage of a state law in New Jersey mandating that every electronic voting machine produce a voter-verifiable paper audit trail (VVPAT) by 2009. Nevertheless, having passed the requirement to equip its DRE machines with VVPAT, the legislature discovered that there were no funds available to implement the measure. Therefore, an act was passed (in 2008) temporarily suspending the VVPAT requirement, a “temporary” suspension that is still in effect in 2017.

Often, questions dealing with deficiencies or vulnerabilities in voting systems land in state courts, where there is a tradition of deference to legislative intent. In Pennsylvania, the use of DRE machines without a voter-verified paper trail has been upheld in recent court decisions that explicitly refer to the intent of the legislature, based on a law authorizing the use of electronic voting devices passed in 1980 and never amended. Deference to the intent expressed in legislation nearly forty years old, oblivious to significant changes in technology, security, and modes of attack in the interim, seems misplaced. In recent jurisprudence concerning the use of novel surveillance devices like the StingRay, lower-court judges familiar with the technology have recognized the need to reinterpret Fourth Amendment protections where troubling innovations in surveillance are involved. Such enlightened action, when legislative remedies are absent or late in coming, might provide a model for similar judicial rulings concerning electronic voting devices.

Electronic Infrastructure for Voter Registration

Although the voting booth is the primary place where voters might experience concerns about the integrity of their vote, the presence of their names on the registration lists maintained by the individual states is the essential precondition for exercising the franchise. An inaccurate registration list can have a far greater effect on the outcome of an election than any single voting machine.

Technology is implicated in the choice and implementation of procedures for maintaining the currency of state registration lists, as well as in the means adopted for ensuring their security. Two 2006 studies, carried out by committees of distinguished computer scientists and public figures and commissioned by the U.S. Public Policy Committee of the ACM and the U.S. National Research Council, considered these matters.

The ACM study notes that “HAVA requires that states authenticate each potential voter by cross-checking with other state databases … . Because other databases can be inaccurate as a result of ambiguous or incorrectly entered data or computer-related problems, wholly automated procedures are risky. Consequently, we recommend that other databases not be used to enroll or de-enroll voters automatically. … [B]ut an appropriate election official should perform any final determination of voter eligibility or ineligibility.” And notification of any change in eligibility should infallibly be provided to the affected individual.

The choice of matching criteria is particularly important – strict criteria result in fewer matches, looser criteria generate a greater number of matches. Depending on the operation – confirming eligibility or purging ineligible voters – these criteria can be used selectively to achieve political ends rather than fulfill the intended civic purpose of registration list accuracy.
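The effect of strict versus loose matching criteria can be illustrated with a small, purely hypothetical sketch (the records, field names, and matching rules below are our own illustration, not taken from the studies discussed):

```python
# Hypothetical voter-roll records; in practice these would come from
# state databases with many more fields and far messier data.
voter_rolls = [
    {"first": "John", "last": "Smith", "dob": "1970-03-01"},
    {"first": "Jon",  "last": "Smith", "dob": "1970-03-01"},
    {"first": "John", "last": "Smith", "dob": "1970-11-12"},
]

candidate = {"first": "John", "last": "Smith", "dob": "1970-03-01"}

def strict_match(a, b):
    # Strict criteria: first name, last name, and full date of birth
    # must all agree exactly.
    return (a["first"], a["last"], a["dob"]) == (b["first"], b["last"], b["dob"])

def loose_match(a, b):
    # Loose criteria: last name and birth year only -- casts a much wider net.
    return a["last"] == b["last"] and a["dob"][:4] == b["dob"][:4]

strict = [r for r in voter_rolls if strict_match(candidate, r)]
loose = [r for r in voter_rolls if loose_match(candidate, r)]

print(len(strict))  # 1 record matches under the strict criteria
print(len(loose))   # all 3 records match under the loose criteria
```

The same candidate record yields one match or three depending on the criteria chosen, which is why the selection of criteria can be used selectively, for confirming eligibility or purging voters, to political ends.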

There are additional opportunities for administrative mischief in the voter registration process. Lack of transparency in the application of automated procedures, selectively closing Department of Motor Vehicle offices where automatic voter registration occurs, imposing new voter identification requirements for absentee or mail-in ballots that are not publicized widely or in a timely manner are a few of the documented strategies for suppressing the vote of selected groups of voters.

Heightened attention to the integrity of states’ voter registration lists has been generated by phishing attacks, apparently by Russian hackers, attempting to penetrate the computers of state officials and employees of private firms responsible for maintaining these lists.

The fear that malicious foreign agents might hack and alter them argues for serious and transparent security measures. Several guides have been prepared for securing the resources of political organizations and campaigns. These would seem a good starting point for the protection of state voter registration lists against external attack.

Nearly twelve years have passed since the publication of the ACM and National Research Council studies. Significant changes in hardware and software for both registration and voting, and changes in the capabilities of adversaries seeking to undermine their integrity, argue for renewed efforts by the ACM to commission, publish, and widely publicize a second study marshalling the insights and recommendations of computer scientists and public policy experts on technological best practices in conducting elections.

Social Media and the Informational Landscape of the Election Process

As Cass Sunstein noted, “… the risks posed by any situation in which thousands or perhaps millions or even tens of millions of people are mainly listening to louder echoes of their own voices … [are] likely to produce far worse than mere fragmentation.” The experience of the 2016 U.S. presidential election has shown that many actors – domestic and foreign, human and bot – have found the means of exploiting this fragmentation through sophisticated social media posts in order to influence the actions of large groups of voters.

A study by several University of Michigan scholars examined election incidents in several states during the 2016 election cycle. Their methods, involving the analysis of observations extracted from Twitter concerning the performance of election officials, suggested an experiment that we conducted during several highly contested off-year statewide elections in New Jersey, Pennsylvania, Virginia, and Alabama during the fall of 2017.

We gathered samples with appropriate temporal and spatial specificity from the Twitter Streaming API and applied keyword filters suggested by events surrounding the various contests to the text fields of these messages. In addition, we inspected all images associated with the Tweets in our samples. Although this was a preliminary step to a larger study, we made several striking discoveries.
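As a rough illustration of the keyword-filtering step described above (this is our own sketch, not the authors' actual pipeline; the keywords, tweet records, and field names are hypothetical placeholders), the text-field matching might look like:

```python
# Illustrative keyword filter over tweets already collected from the
# Streaming API. The keywords would be suggested by events surrounding
# each contest; these are invented examples.
keywords = ["polling place", "changed", "vote by text"]

tweets = [
    {"id": 1, "text": "Reminder: your polling place has changed!", "state": "VA"},
    {"id": 2, "text": "Great turnout at the rally today.", "state": "VA"},
    {"id": 3, "text": "You can vote by text -- just message this number.", "state": "VA"},
]

def matches_keywords(tweet, keywords):
    """Case-insensitive substring match against the tweet's text field."""
    text = tweet["text"].lower()
    return any(kw.lower() in text for kw in keywords)

flagged = [t for t in tweets if matches_keywords(t, keywords)]
print([t["id"] for t in flagged])  # [1, 3]
```

Flagged tweets, together with their images, would then be inspected by hand, as the filter only narrows the sample rather than making the final judgment.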

In the Virginia gubernatorial election, for example, we found repeated references to messages, independently substantiated by news reports, sent to African-American voters informing them a day before the election, incorrectly – and apparently with malicious intent – that their polling place had changed. We also retrieved one of multiple tweets “informing” voters that, if they were concerned about the weather or about their vote not being counted, they could vote for the Democratic candidate by simply sending a text message to a number provided in the tweet. A similar voter deception effort went viral during the 2016 presidential election. The Twitter account that posted the 2017 message was active for nearly three hours while polls were open on election day.

As a result, we have established an archive to catalog evidence of efforts at voter suppression and deception that circulate on Twitter. The goal is to maintain, and make publicly available, a permanent record of what would otherwise be ephemeral messages attempting to interfere with the right to vote. Considering that the public discourse and conversations on social media have been saturated with unsubstantiated accounts of massive voting by ineligible individuals, it seems properly ironic to turn the evidence from social media into a tool for combatting attempts to disenfranchise legitimate voters.

12:45-14:15 Lunch
14:15-15:45 Session 4A: Governance & Voice in IT
Location: Room 0.8
14:15
Ownership and Distributive Justice in Blockchain-based Value Transfers

ABSTRACT. Introduction

While prime examples of blockchain-based applications such as cryptocurrencies (e.g. Bitcoin) increasingly draw the attention of the mainstream media (Heath 2017; Hutton 2017), in the philosophical and political discourse blockchain is mostly conceptualized not on an applicational level, but in a more abstract and inclusive way as a governance-technology. Blockchain in this sense is understood as a technology that “potentially allows individuals and communities to redesign their interactions in politics, business and society at large, with an unprecedented process of disintermediation on large scale, based on automated and trustless transactions” (Atzori 2015, p. 4). Even though the relevant technological features of blockchain are evident, it is a controversial matter how the way blockchain-governance is ordering human activity should be interpreted against the background of political philosophy. Inspired by Winner's (1980) question “Do artifacts have politics?”, one tends to ask: “What politics do these artifacts have?” While authors like Reijers et al. (2016, pp. 139–140) suggest that “any absolute claim of defining a ‘blockchain ideology’” should be the subject of skepticism, two – prima facie incompatible – interpretations seem to prevail in the recent development of the discourse. On the one hand, there is an interpretation – especially advocated in the developers' community – that holds up the participatory character of the distributed ledger technology, whose consensus-driven governance-mechanisms supposedly share significant traits with social contracts (Buterin 2014; Reijers et al. 2016) and are construed as inherently egalitarian due to flat-hierarchy governance-mechanisms (Filippi 2017, p. 68).
On the other hand, there is an interpretation – advocated by proponents and critics alike – focusing mainly on disintermediational aspects and on the scarcity of opportunities for private, state and supra-state institutions to assert themselves by means of blockchain-enabled governance-mechanisms and to perform regulatory tasks. Accordingly, blockchain tends to be described as a libertarian or anarchic governance-technology. Authors like Filippi (2017) recently pointed out that the above-mentioned governance-mechanisms are not the same and in fact are grounded on different governance-levels. Nevertheless, the academic discourse still tends to conflate these distinct sets of governance-mechanisms and persists in applying concepts of political philosophy to blockchain-governance as such. The aim of my paper is to disentangle the arguments concerning the politics behind the blockchain technology by developing a taxonomy of four different types of blockchain-governance, which will then be used to lay bare the differences in the conceptual assumptions underlying the respective arguments.

Links between Political-Philosophy and Blockchain-Governance

Libertarian traits of blockchain-governance are closely connected to libertarian guiding principles such as almost unconditional ownership rights and the limitation of state-regulation. The potential of technologies like blockchain in this context is mainly seen in their ability to replace institution-based “[c]entralized vertical authority” as “the main organizational model in society” with a technology-based “horizontal and distributed diffusion of authority, in which the source of legitimacy are the individuals themselves” (Atzori 2015, p. 7). The (in)famous cryptocurrencies illustrate this process of disintermediation. A ledger-based fiat money system “relies heavily on trust in a few institutions” (Wood, Buchanan 2015, p. 389): banks, which ultimately function as gatekeepers to the monetary system, verify the legitimacy of funds and perform monetary transactions; and states, which control the monetary supply and guarantee the acceptance of the currency. Blockchain-based cryptocurrencies do not. They repeal (the necessity for) these intermediaries (e.g. gatekeepers) or replace them with either technology (e.g. trust, control of the monetary supply) or a technology-enabled community (e.g. verification of transactions). However, even among those who share this perception of blockchain-governance, there is a multitude of different assessments. On the one hand, there are proponents of the technology who promote disintermediation for various reasons and with varying radicality. While more moderate advocates perceive blockchain-governance as just a process of disintermediation in governance-services, without any aspiration to abolish governmental institutions (Atzori 2015, p. 4), less restrained advocates perceive it as a tool to “win a battle in the arms race” with governments in what is abstractly described as a “fight for freedom” (Filippi 2017, p. 60).
Paquet & Wilson (2015) describe the spirit behind the promotion of technologies that enable such alternative modes of coordination as “dissociative anti‐government […] in the way of failing to see its [the government's] value adding contribution”. Critics who do not share this “libertarian dream”, on the other hand, worry instead about a “regulatory nightmare” (Filippi 2014) and a blockchain-enabled “pre-political” society “in which the law of might – or the laws of the market – prevails on common good” (Atzori 2015, p. 23). In contrast to the disparate normative evaluations of these nonetheless acknowledged libertarian characteristics of blockchain-governance, the links to political philosophy in the context of social contract theories appear much more conceptually diverse. They range from mere resemblances between philosophical concepts and decision-making processes defined by the blockchain-protocol, to the justification of the use of blockchain applications (Reijers et al. 2016), up to the identification of game-theoretical assumptions underlying both the blockchain protocol and certain approaches of social contract theories (Buterin 2014). What most of these claims have in common is their focus on consensus-mechanisms defined by the blockchain-protocol, and on consensus-mechanisms for design-decisions on the protocol, as the main modes of blockchain-governance.

Four Types of Blockchain-Governance and Their Reflections in Political Philosophy

I argue that these two understandings of the politics or the political philosophy behind blockchain-governance rest on two different ideas of the level on which blockchain-governance takes place. Filippi (2017) describes one as the “governance-level”, on which the blockchain technology enables new governance-mechanisms for a given society. The other she calls the “infrastructure level”. Notably, what Filippi (2017) considers governance on an infrastructure-level is regarded as the actual blockchain-governance in the blockchain developers' community (Buterin 2017) and is exactly what the majority of the arguments regarding social contracts mentioned above aim at. Here, the focus is on governance-mechanisms within the blockchain-community, which forms the distributed infrastructure. Therefore, I propose to distinguish not between a governance-level and an infrastructural level, but between blockchain-governance on a societal level and blockchain-governance on a community level, in order to disentangle the aforementioned discourse. Furthermore, on both levels – the societal level and the community-level – an additional distinction is conducive. On the societal level this is a distinction between the above-mentioned, supposedly libertarian approach of “governance through blockchain” and a – as yet – less covered “governance of blockchains”, a subset of internet governance focusing on blockchains. These two modes of governance on a societal level ought to be understood as counterparts, each focusing on one side of the reciprocal process of technology shaping human interaction and vice versa. On the community-level, on the other hand, not a contrast but a further grading is necessary. On a lower level there are governance-mechanisms specified by the blockchain protocol that define, e.g.,
how users of a cryptocurrency authenticate one another and validate transactions in a collaborative manner, whereas on a higher level there are governance-mechanisms which are only partly defined by the protocol and deal with decisions on the protocol itself. Hence, higher-level community-governance defines the way lower-level community-governance operates. The distinction between governance on these two levels is necessary because the arguments referring to social contract theories – especially in the context of the justification of blockchain-governance – are to a large extent only applicable to lower-level community-governance, whereas the main discourse in the developers' community deals mostly with higher-level community-governance (Buterin 2017; Zamfir 2017).

Outline of the paper

In my paper I will first present the multiplicity of links that have been drawn between blockchain-governance and political philosophy. Secondly, I will analyse the conceptions of blockchain-governance underlying the respective links and arguments. Thirdly, I will present the further developed taxonomy of the four types of blockchain-governance, which I will use in the fourth and last step to disentangle the discourse by classifying the arguments and links according to their underlying conception of blockchain-governance.

Publication bibliography

Atzori, Marcella (2015): Blockchain Technology and Decentralized Governance. Is the State Still Necessary? In SSRN Journal. DOI: 10.2139/ssrn.2709713.

Buterin, Vitalik (2014): Cryptoeconomic Protocols In the Context of Wider Society. Ethereum Meetup. London, 2014. Available online at https://www.youtube.com/watch?v=S47iWiKKvLA.

Buterin, Vitalik (2017): Notes on Blockchain Governance. Vitalik Buterin's website. Available online at http://vitalik.ca/general/2017/12/17/voting.html.

Filippi, Primavera de (2014): Bitcoin. A regulatory nightmare to a libertarian dream. In Internet Policy Review.

Filippi, Primavera de (2017): In Blockchain we Trust. Vertrauenslose Technologie für eine vertrauenslose Gesellschaft. In Rudolf-Augstein-Stiftung (Ed.): Reclaim Autonomy. Selbstermächtigung in der digitalen Weltordnung. Berlin: Suhrkamp (edition suhrkamp, 2714), pp. 53–81.

Heath, Thomas (2017): Bitcoin is going mainstream. Here is what you should know about it. Edited by The Washington Post. Available online at https://www.washingtonpost.com/news/get-there/wp/2017/12/04/bitcoin-is-going-mainstream-here-is-what-you-should-know-about-it/?utm_term=.b5c63550f5e6.

Hutton, Will (2017): Bitcoin is a bubble, but the technology behind it could transform the world. Edited by The Guardian. Available online at https://www.theguardian.com/commentisfree/2017/dec/24/bitcoin-is-a-bubble-the-technology-behind-could-transform-world.

Paquet, Gilles; Wilson, Christopher (2015): Governance failure and the avatars of the antigovernment phenomena.

Reijers, Wessel; O'Brolcháin, Fiachra; Haynes, Paul (2016): Governance in Blockchain Technologies & Social Contract Theories. In ledger 1, pp. 134–151. DOI: 10.5195/ledger.2016.62.

Winner, Langdon (1980): Do artifacts have politics? In Daedalus, pp. 121–136.

Wood, Gavin; Buchanan, Aeron (2015): Advancing Egalitarianism. In: Handbook of Digital Currency. Elsevier, pp. 385–402.

Zamfir, Vlad (2017): Against on-chain governance. Refuting (and rebuking) Fred Ehrsam's governance blog. Edited by Medium. Available online at https://medium.com/@Vlad_Zamfir/against-on-chain-governance-a4ceacd040ca.

14:45
“…they don’t really listen to people”. Young people’s concerns and recommendations for improving online experiences.

ABSTRACT. The importance of internet platforms and websites behaving ethically towards children and young people has become a prominent issue amongst legislators (e.g. the Children and the Internet inquiry by the UK House of Lords) and among national (e.g. 5Rights, in the UK) and international (Unicef) organizations that protect the rights of children (Third et al 2014; Kidron and Rudkin 2017). For example, the UK government is currently being lobbied by a cross-party campaign to amend its Data Protection bill in an effort to champion children's digital rights and ensure that they are safeguarded in the online world (The Guardian 2017). Exploring young people's concerns around internet transparency is a critical part of the UnBias research project, with some expressing feelings of disempowerment in their internet use: “just realising there is nothing you can do about it.” Others reported how they “…can't really talk to the creator of the website because they don't listen to people”, indicating that some young people feel that they are overlooked and silenced by a digital world that does not take account of their needs and opinions.

In this paper we argue that whilst movements have been made to promote the adoption of a more ethical approach towards the user in the digital world, considerable work still remains to improve young people’s experiences. A recent report on ‘Digital Childhood’ by Kidron and Rudkin (2017) strongly criticised the providers of digital services by arguing that they have failed to consider how the internet may have a detrimental effect on children’s development. Building on the contributions of this report and other recent studies (e.g. Children’s Commissioner 2018; Livingstone and Bulger 2014), we argue that there is a need for industry, companies and the government to recognise both the need but also the voices of children, which are often overlooked, and to take responsibility for how the digital world may be adapted to inform, protect and empower young people.

Drawing on data from the UnBias[1] project's Youth Juries – interactive, youth-led discussions between young people aged 13-17 – this paper will build on the existing, but limited, literature that explores children's experiences of the internet. Fourteen Youth Juries have been conducted, with over 140 young people taking part. The purpose of the Youth Juries was to bring the voices of young people to the fore, by understanding how young people use the internet and gathering a sense of what they believe happens to data that is shared online. The Youth Juries were designed to elicit the thoughts, ideas and recommendations of young people about their experiences of the internet, through facilitated group discussions around scenarios of internet usage that were co-created with age-matched peers (Perez et al 2015; Coleman et al 2017).

The Youth Jury approach uncovered how young people have and use their agency in multiple ways whilst online, for example in verifying information “If I saw it on Facebook and I don’t know if it was true, I would search it up on Google to see if there’s any more articles about it….”, in finding ways to counteract the power of an algorithm “…I can change my IP address and use other browsers so that I can trick the algorithm…”, and in protecting their personal data “…I wouldn’t let anything go on social media that I wouldn’t want anyone to see…” Whilst this is an encouraging indication of how young people may use their agency in a digital context, there were also many ways that the young people expressed feelings of disempowerment in their online experiences.

This paper addresses the concerns that jurors raised in relation to their lack of agency. Expressions of disempowerment permeated the discussions amongst young people, as many alluded to a disparity in power between the platform and the individual “and it’s just scary how much information they have about you, just like you sharing some of your information, because that shouldn’t be how it is, because you should be able to do your own stuff.” Moreover, not only did many jurors point to feeling excluded and disempowered by their online experiences, this was exacerbated by their experiences of online terms and conditions.

The jurors believed that online terms and conditions were not accessible “…they’re so long…, it is standard English but it’s not written in a way that you can easily understand it.” One juror believed that terms and conditions were particularly exclusionary towards young people; “…especially for people our age, they definitely don’t target it towards us.” Some raised concerns that online terms and conditions were not transparent and that users remained wholly unaware of what happens to their information when signing up to applications “them selling of your data to external companies, not actually being consented at all. They should make that more obvious that they’re going to do that.” Other jurors believed that internet platforms and companies deliberately obfuscate their terms and conditions to ensure that individuals sign up, prioritising their profits over any moral or ethical obligation to ensure that users fully understand and consent to how they operate “…companies do it on purpose, they blatantly make them as confusing as possible so people don’t take them up on it.” This is a concerning indication that internet platforms are attracting young people to engage in an environment that contravenes numerous articles of the UN Convention on the Rights of the Child (e.g. Article 42 “knowledge of rights”, Unicef 2010), and UK guidelines for online child safety (UKCCIS 2015).

For some, this has fostered a sense of resignation, for example many jurors said that they accept the user conditions of websites and applications, as specified in the Terms and Conditions, because they wish to use them, despite believing that they were neither accessible nor transparent; “I might read the bottom bit but most of the time I don’t understand it…so I just tick it anyway.” Thus, we argue that the digital world has been complacent in not offering any form of meaningful transparency to their users, and as a consequence this is unethical as it has fuelled a sense of exclusion amongst many young people. However, efforts are under way to ensure that Terms and Conditions are accessible to all, through changes to the General Data Protection Regulation (EUGDPR 2018).

This paper concludes by arguing that it is vital that solutions are put forward that promote and cultivate agency amongst young people. As part of the Youth Jury methodology, we reiterated the value of eliciting the views of young people, and we encouraged them to make recommendations for how the internet could be improved and made to be a more ethical environment, particularly for their age group. As a result, the jurors made many recommendations about how to improve the accessibility and transparency of online platforms, including their terms and conditions, to ensure that meaningful informed consent is achieved “…it should be made fair so that the public can understand at a general level what’s going on when they’re agreeing to the terms and conditions,” and to counteract the sense of powerlessness that many of the young people alluded to when ‘consenting’ to existing conditions of online services. This paper will put forward some of the jurors’ recommendations to help to remedy the young people’s sense of exclusion, and to contribute to fostering a more ethical and empowering internet for all.

[1]UnBias project supported by EPSRC grant EP/N02785X/1 (http://unbias.wp.horizon.ac.uk/)

References

Children’s Commissioner (2018). ‘Life in ‘likes’. Children’s Commissioner report into social media use amongst 8-12 year olds’. Available at: https://www.childrenscommissioner.gov.uk/wp-content/uploads/2018/01/Childrens-Commissioner-for-England-Life-in-Likes.pdf. [Accessed: 05.01.2018].

Coleman, S., Pothong, K., Perez Vallejos, E., & Koene, A. (2017). ‘The Internet on Our Own Terms: How Children and Young People Deliberated About Their Digital Rights’. Available at: http://casma.wp.horizon.ac.uk/wp-content/uploads/2016/08/Internet-On-Our-Own-Terms.pdf. [Accessed: 04.01.2018].

European Union General Data Protection Regulation (2018). ‘GDPR Key Changes’. Available at: https://www.eugdpr.org/the-regulation.html. [Accessed: 05.01.2018].

Kidron, B and Rudkin, A (2017). ‘Digital Childhood. Addressing Childhood Development Milestones in the Digital Environment’. Available at: http://5rightsframework.com/static/Digital_Childhood_report_-_EMBARGOED.pdf. [Accessed: 21.12.2017].

Livingstone, S and Bulger, M (2014). ‘A Global Research Agenda for Children’s Rights in the Digital Age’, Journal of Children and Media, Vol. 8., No. 4., pp317-335.

Perez Vallejos, E., Koene, A., Carter, C.J., Statache, R., Adolphs, S., O’Malley, C., Rodden, T., McAuley, D., Manoff, G., Dowling, R., Pothong, K and Coleman, S (2015). ‘Juries: Acting Out Digital Dilemmas to Promote Digital Reflections’, ETHICOMP 2015, Leicester, 7-9 September 2015, ACM SIGCAS Computers and Society, Vol. 45, No. 3, pp84-90.

The Guardian (2017). ‘Lords push for new regulations to protect children online’. Available at: https://www.theguardian.com/society/2017/nov/18/lords-push-for-children-to-be-protected-against-tech-giants-by-law. [Accessed: 03.01.2018].

Third, A., Belerose, D., Dawkins, U., Keltie, E and Pihl, K (2014). ‘Children’s Rights in the Digital Age: A download from Children Around the World’. Available at: https://www.unicef.org/publications/files/Childrens_Rights_in_the_Digital_Age_A_Download_from_Children_Around_the_World_FINAL.pdf. [Accessed: 21.12.2017].

UK Council for Child Internet Safety (2015). ‘Child Safety Online. A Practical Guide for Providers of Social Media and Interactive Services’. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/487973/ukccis_guide-final__3_.pdf. [Accessed: 05.01.2018].

Unicef (2010). ‘A Summary of the UN Convention on the Rights of the Child’. Available at: https://downloads.unicef.org.uk/wp-content/uploads/2010/05/UNCRC_summary.pdf?_ga=2.170284576.1894284748.1515149992-579898435.1515149992. [Accessed: 05.01.2018].

15:15
Anticipatory Governance and Epistemic Authority in Data driven Environments: Issues in Medical Epistemology

ABSTRACT. for the open track or whatever track you prefer

14:15-15:45 Session 4B: IT, Civic Life & Political Culture
Location: Room 0.9
14:15
Ethical Issues of Attendance and Unintentional Bias in the E-Learning Environment

ABSTRACT. E-learning is considered an established teaching method. Numerous studies investigate the scale of the positive impact of e-learning on the performance of various target groups. For instance, Demian & Morrice (2012) investigate the impact of Virtual Learning Environments (VLEs) on academic performance based upon a survey involving a total of 157 participants. They conclude that the effect varies from negligible to moderate and that a wider range of resources for the VLE could enhance its potential and better aid students in achieving intended learning outcomes. Beevers & John (2011) lay out the lessons learned from deploying SCHOLAR, a wide-scale e-learning program for more than 400 secondary schools and colleges across Scotland, in subjects ranging from science and mathematics to business and languages. The SCHOLAR program is considered to be cost-effective, online and interactive. McGovern et al (2015) find that the availability of online lecture materials does not negatively impact upon lecture attendance. The study also found that students primarily have recourse to online material in the run-up to assessments, and then only for the narrow aspects associated with the assessment. Davies & Graff (2005) shed some doubt on the reported beneficial effects of online participation and interaction. Their research shows that such participation does not necessarily translate into higher grades at the end of the year, with students who participated more frequently not being awarded significantly higher grades. However, students who failed one or more modules did interact online less frequently than students who achieved passing grades.
On the other hand, the authors' findings from surveys and interviews suggest that e-learning capabilities might also have a negative effect, for example by being among the factors for non-attendance, while at the same time positively affecting the engagement of students whose attendance of classroom-based learning is impeded by, for example, financial or family issues. Causal factors for non-attendance, apart from the availability of online learning materials, have been investigated by numerous scholars around the world. Bukoye & Shegunshi (2016) conduct research as a step towards understanding the reasons for students’ non-attendance and the impact of an engaging teaching model in improving student attendance. They do not, however, consider e-learning as one of the possible factors for non-attendance. Quaye & Harper (2015, eds.) address students' engagement widely, again without addressing the impact of e-learning upon attendance. In addition, the authors find that the question of the potential for unintentional bias in the delivery of online lectures for university students studying law has attracted scant attention. This compares with the great deal of attention paid to potential unintentional bias affecting the delivery of online legal services, for example by Greenwood and Hume (2016). This paper discusses the first stage of a planned two-stage research process devoted to (1) addressing the key factors associated with e-learning identified in the international environment as disincentives to physical participation in learning and to unbiased learning, and (2) verifying and identifying from the literature other perceived success factors of e-learning products. The authors address goal (1) by conducting mixed-methods research within the international environment. After running a pilot study, we slightly adjusted some of the survey questions to make them clearer for participants.
First, we held workshops that helped us identify the most important factors regarding non-attendance. We critically compared those findings with data obtained from direct interviews and a questionnaire-based survey conducted at private and public universities and academies in the United Kingdom and Poland. The potential for unintentional bias in the e-learning environment of students studying law is researched through analysis of the literature and comparative critical analysis of learning outcomes from online lectures and from classroom-based learning. E-learning solutions with varying degrees of universality and clarity are examined, including:
• A commercial networking-oriented e-learning platform (www.netacad.com), featuring media-rich e-learning content, practical labs and diverse assessments.
• The Blackboard virtual learning environment and course management system.
• An academy-specific blended-learning solution (https://edux.pjwstk.edu.pl), including, but not limited to, e-learning content comprising mainly presentations, exemplification of uploaded images, and a project repository.
The following research hypotheses (H) were put forward within the first stage of the research process:
H1: Teaching content in the e-learning currently widely used at universities is not researched in terms of its adequacy as a substitute for traditional activities, and thus is not assessed as a factor discouraging participation in traditional classes.
H2: At technical faculties, the negative impact of e-learning on the didactic process is smaller compared to the social and economic sciences.
H3: Factor trends are convergent across countries.
As far as the limitations of the study and future research are concerned, the study was carried out in two European countries, the United Kingdom and Poland.
In order to shed light on the unique determinants of non-participation in classes, taking into account the intensive use of e-learning in other countries, the research should be expanded. Secondly, the contents of this contribution cover the first research objective exclusively, i.e. exploration of the main factors identified in the international environment as obstacles to physical participation in learning, with e-learning considered as a disincentive. Completing the second stage (2) of the research exercise, i.e. verifying and identifying from the literature other perceived success factors of e-learning products, requires further research activities. In this paper we question the implications of online learning for physical classroom attendance and the potential for bias in the delivery of online class material. These questions relate therefore to the normative implications of online learning technologies for society. Critical research methodology facilitates the identification, investigation and prescription of societal issues associated with, for example, technologies; it identifies what needs to be changed and signposts how (Fay, 1987:36). Questions of bias and its implications in online teaching technologies are especially amenable to critical research, since critical research enables understanding of what is really being said (Bondarouk and Ruel, 2004: 2), of the ethical implications of discourse on the wider world, and of how these are achieved (Potter, 1996:3).

Bibliography:
Beevers, C.E. and John, P., 2011. A Case Study: How Scotland Has Leveraged e-Learning to Improve Student Outcomes. Online at: http://scholar.hw.ac.uk/export/sites/scholar/Information_About/Publication_Documents/SCHOLAR_A_Case_Study_for_Gates_Foundation.pdf
Bondarouk, T. and Ruel, H., 2004. Discourse analysis: making complex methodology simple.
Bukoye, O.T. and Shegunshi, A., 2016. Impact of engaging teaching model (ETM) on students’ attendance. Cogent Education, 3(1), p.1221191.
Davies, J. and Graff, M., 2005. Performance in e‐learning: online participation and student grades. British Journal of Educational Technology, 36(4), pp.657-663.
Demian, P. and Morrice, J., 2012. The use of virtual learning environments and their impact on academic performance. Engineering Education, 7(1), pp.11-19.
Fay, B., 1987. Critical Social Science: Liberation and Its Limits. Cornell University Press.
Greenwood, D. and Hume, K., 2016. Legal Ethics and Machine Learning. Lecture to the Social Physics graduate seminar at the MIT Media Lab, December 30th, 2016.
Harper, S.R. and Quaye, S.J., 2015. Making engagement equitable for students in US higher education. Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations, pp.1-14.
McGovern, A., et al., 2015. How Video Lecture Capture Affects Student Engagement in a University Computer Programming Course: Attendance, Video Viewing Behaviors and Student Attitude. Conference paper, the European Educational Research Association, November 9th, 2015.
Potter, J., 1996. Discourse analysis and constructionist approaches: Theoretical background. In: Richardson, J.E. (Ed.), Handbook of qualitative research methods for psychology and the social sciences. Leicester: British Psychological Society.

14:45
Using the application Friendly Schedule on a tablet to promote independence in children with autism spectrum disorder.

ABSTRACT. The prevalence of autism spectrum disorder (ASD) has increased markedly in recent decades. As a society we must be prepared to deal with this problem. The need to start and continue providing evidence-based practices in the field of ASD is evident and growing. The Institute for Child Development (IWRD) offers science-based intervention to children with autism and is the only dissemination site of the Princeton Child Development Institute in Poland. Over two decades of research and clinical experience show that activity schedules are very effective in teaching people with autism many new skills. However, activity schedules in the “traditional” paper version also have some disadvantages – they could lead to stigmatization of students with autism when used in social environments. It is essential to give people with autism spectrum disorder socially acceptable tools which can help them to function more independently, because many of them will require a lifetime of treatment. The intensive development of modern technologies, as well as easy access to various types of mobile devices, inspired us to implement tablets in our treatment. Friendly Schedule is an application for children and youth with autism and related disorders, which was developed as a joint initiative of the Gdansk University of Technology and the Institute for Child Development in Poland. The application was created as a non-profit project so as to be widely available to parents and teachers. When a child with autism begins treatment at IWRD, we first introduce activity schedules in the paper version, presented in book form; when the student has acquired the prerequisite skills, we transfer to the modern version on a tablet. As experienced practitioners, we decided to use the same well-documented methods which were effective for teaching children with autism to follow activity schedules in the paper version.
The data from our research show that manual prompts are very effective in teaching children with autism to follow activity schedules on a tablet. All of our participants learned to use the application Friendly Schedule to complete five tasks independently without any help from adults. In our daily therapy based on applied behavior analysis, we still use the application Friendly Schedule to teach children with autism a variety of new skills, including verbal and social behaviors.

15:15
ICT and the political culture in the information society

ABSTRACT. Please find file attached

14:15-15:45 Session 4C: AI Ethics
Location: Room 1.1
14:15
Social robots and Childcare: Ethical concerns in dehumanizing childrearing

ABSTRACT. How do highly developed technologies affect our lives? There are many occasions to see and use machines and robots equipped with artificial intelligence (AI) in daily life. We achieve great efficiency and derive enjoyment from using them. However, behind these great benefits they bring us classic and novel ethical issues. This study explores the relationship between social robots and parenting, focusing especially on the interaction between children and social robots from an ethical perspective.

1. Life in high-tech society
When the prediction by Frey and Osborne was announced in 2013, it stirred up strong fears of losing jobs in the near future. They pointed out that highly advanced technologies, especially AI technology, would promote rapid innovation in business and that many human workers would be replaced by AI technology and lose their jobs within 10 to 20 years (Frey and Osborne 2013; Frey, Osborne and Citi 2015). After Frey and Osborne’s analysis, a report on employment and technology was also published by the World Economic Forum in 2016 (World Economic Forum 2016). These studies show that people who engage in non-skilled or routine manual work would face the risk of losing their jobs in the future; worse, some jobs might be eliminated entirely. On the other hand, new or more job opportunities would open for workers with high skills or creativity. Life, including both working life and daily life, will be influenced by technologies whether we like it or not.

Once AI takes over our jobs and we have more free time, what will we use that free time for? Some people might expect to have much more time with their families and to take more care of their children. However, AI and high technologies are deployed and equipped for daily use at home, and take over household work and daily chores. Moreover, social robots with AI stay with children, play with them and entertain them at home. In a high-tech society, we take technologies as given commodities and expect to make life more efficient and effective by utilizing them. While AI supports us in many aspects of daily life, injudicious and strong dependence on AI could dehumanize life and evoke a classic yet new ethical dilemma: how we live and what is good/right. This study explores how AI affects daily life from the perspective of computer ethics, focusing especially on parenting with social robots at home.

2. Family robots and friend robots
In June 2015, SoftBank Robotics, a subsidiary of a major telecom company in Japan, released “Pepper” onto the mass market. Pepper is “the first robot designed to live with humans”; it has a human shape and the ability to read/express emotions (Emotion Engine) and to communicate with humans1. Pepper is supposed to be a companion that stays with us, entertains us and makes human life happier, not to help with housekeeping or carry heavy boxes. Bruno Maisonnier, the Aldebaran Robotics CEO responsible for the Pepper project, explained that “(t)he most important role of robots will be as kind and emotional companions to enhance our daily lives, to bring happiness, to surprise us, to help people grow” (Guizzo 2015).

Similarly, Cynthia Breazeal, a roboticist at MIT’s Media Lab, announced that she would launch the social robot JIBO in 2015. JIBO has “skills” to recognize emotions and “is designed as an interactive companion and helper to families, capable of engaging people in ways that a computer or mobile device aren't able to” (Guizzo 2014). Both robots, Pepper and JIBO, are designed to communicate and interact with humans continuously, and the AI equipped on both robots learns about users by constantly updating user profiles and preferences. Eventually, both robots aim to become a member of the family or a friend of their users. Because of this purpose, both robots are expected to have users who have not acquired sufficient media literacy or computer skills, such as small children.

Family robots such as Pepper and JIBO are categorized as “personal service robots” according to the categorization by the International Federation of Robotics (IFR) and the International Organization for Standardization (ISO)2. Generally, a “personal service robot” is used for a personal task, not for a commercial task; examples include automated wheelchairs and personal mobility assist robots. A family robot aims to be recognized as a social existence through communication and interaction with users, rather than by practically helping with users’ daily lives. Therefore, family robots are also called “social robots,” “sociable robots” and “socially intelligent robots” (Breazeal 2002; 2003; Fong et al. 2003; Dautenhahn 2007).

3. Social robots and childcare
Social robots serving as family robots or friend robots generally have three basic functions: an entertainment function (singing, dancing and playing games); a security function (monitoring through a webcam, talking from a distance via the Internet); and the facilitation and revitalization of family communication (providing the family with triggers for conversation) (Asai 2017). Although social robots cannot clean the house or cook food, they can sing a song with children, read a book to children before they go to sleep, or check on children and the house via webcam when parents are away from home. In light of definitions of the “care robot”, social robots could function as caretakers (van Wynsberghe 2016; Vallor 2016).

Social robots have great potential to support parents and families in childrearing. In particular, working mothers in a gendered society might reduce their workload and the stress of taking care of children by using social robots. The monitoring function could give parents a feeling of security while they are away at work and children stay at home alone. Furthermore, if social robots improve their ability to communicate with users and take over more and more childrearing or household tasks in the future, people who currently live in obedience to gender norms might be able to free themselves from their gender roles.

Social robots constantly collect and store enormous amounts of information, connect to cloud data and update their abilities. For users, giving their own information to social robots is necessary to improve the robots’ functions (IBM Japan 2014). Once social robots are recognized as family members by users and stay with them all the time, the robots can gather various kinds of personal information, including sensitive information. While a huge amount of personal information improves the usability of social robots more and more, we need to be aware of the ethical concerns behind it.

There are three typical ethical concerns in operating social robots (Asai 2017).
1) As long as social robots function based on our personal data, there is a risk of breaching privacy or leaking personal information.
2) In order to operate social robots, we need a kind of “robot infrastructure” of cloud AI and robot OS for collecting and analyzing data. Social robots are operated through the collaboration and cooperation of various technologies from various companies. How, and by whom, this robot infrastructure is managed and controlled is critical to protecting our privacy and personal data.
3) Social robots are customized for particular users through interaction and communication with them. Each social robot is made up through the collaboration of robot designers, engineers, vendors, operators and users, and its function is differentiated depending on its users. In this context, is it possible to consider a customized robot as intellectual property? If so, who is allowed to own the robot? And if your customized robot composes a beautiful song or paints a picture nicely, who owns the intellectual property rights to those creations?
When thinking about ethical concerns in the operating process, we need to examine the problems from legal and political aspects as well as from an ethical aspect.

4. Invisible ethical concerns in dehumanizing childrearing
Once ethical problems arise in the operating process, they are recognizable to users. In the worst case, social robots might stop functioning because of such problems. However, more serious ethical problems with social robots for childcare use are hard for users to see and recognize. First of all, as previous research has shown, it is very difficult to be free from the values embedded in designing technology (Friedman et al. 2006; Nissenbaum 2001). Even the kind of picture books the robot reads to children could influence their thoughts and lifestyles.

Second, technologies including social robots are generally utilized to reduce our workload and improve efficiency in daily life. Increasing interaction between children and family robots might decrease communication between children and parents. The typical example is that family robots read a book to children before they go to sleep, instead of parents. Indeed, the absence of parents could be compensated for by social robots. Family robots basically do not say no to children and do not deny them. However, children learn to be independent through experiences of being held and being refused by parents (Okonogi 1992; Winnicott 1988). Having such a faithful companion might interfere with the process of developing children’s independence.

Third, technologies intervene in the childrearing environment and dehumanize or artificialize childrearing. An environment dehumanized by technologies is always reasonable and rational, based on well-calculated and well-programmed algorithms. In reality, however, we are sometimes overwhelmed by an irrational and unreasonable world. We have learned how to deal with and solve problems in perverse situations through experiences since childhood. A dehumanized but well-programmed environment inhibits children from developing the ability to cope with difficult situations when things don't go as they want. When we make a decision, values, independence and problem-solving skills are key elements of making a better decision. In general, technology can be used for dual or multiple purposes, and sometimes it is used for unexpected purposes. Social robots are no exception in this regard. Although the benefits of social robots are remarkable and attractive to us, we need to recognize how we use them and to see how they influence our lives from an ethical perspective.

Note
1. Softbank Robotics, “Robot: Who is PEPPER?” available online: https://www.ald.softbankrobotics.com/en/robots/pepper.
2. International Federation of Robotics (IFR), “Service Robots - Definition and Classification WR 2016,” available online: https://ifr.org/img/office/Service_Robots_2016_Chapter_1_2.pdf; and International Organization for Standardization (ISO), “Robots and robotic devices,” available online: https://www.iso.org/obp/ui/#iso:std:iso:8373:ed-2:v1:en.

References
Asai, R. (2017) “Techno-Parenting,” The Journal of Information and Management, Vol. 37, No. 2, pp. 6-21.
Breazeal, C. (2002) Designing Sociable Robots, MIT Press.
Breazeal, C. (2003) “Toward Sociable Robots,” Robotics and Autonomous Systems, 42, pp. 167-175.
Dautenhahn, K. (2007) “Socially Intelligent Robots: Dimensions of Human-Robot Interaction,” Philosophical Transactions of the Royal Society B, 362, pp. 679-704.
Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003) “A Survey of Socially Interactive Robots,” Robotics and Autonomous Systems, 42, pp. 143-166.
Frey, C.B. and Osborne, M.A. (2013) The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford Martin School, University of Oxford.
Frey, C.B., Osborne, M.A. and Citi (2015) “Technology at Work: The Future of Innovation and Employment,” Citi GPS: Global Perspectives & Solutions, February 2015.
Friedman, B., Kahn, P.H. and Borning, A. (2006) “Value Sensitive Design and Information Systems,” in Zhang, P. and Galletta, D. (Eds.) Human-Computer Interaction and Management Information Systems: Foundations, M. E. Sharpe (republished by Routledge in 2015), pp. 348-372.
Guizzo, E. (2014) “Cynthia Breazeal Unveils Jibo, a Social Robot for the Home,” IEEE Spectrum, posted 16 Jul 2014, available online: http://spectrum.ieee.org/automaton/robotics/home-robots/cynthia-breazeal-unveils-jibo-a-social-robot-for-the-home.
Guizzo, E. (2015) “A Robot in the Family,” IEEE Spectrum, January 2015, pp. 26-29, p. 54.
Nissenbaum, H. (2001) “How Computer Systems Embody Values,” Computer, 34(3), March 2001, pp. 118-120.
Okonogi, K. (1992) Jikoai Ningen, Chikuma shoten (in Japanese).
Vallor, S. (2016) Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford University Press.
van Wynsberghe, A. (2015) Healthcare Robots: Ethics, Design and Implementations, Ashgate Publishing (republished by Routledge in 2016).
Winnicott, D. W. (1988) Human Nature, Free Association Books.
World Economic Forum (2016) Global Challenge Insight Report, The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution, January 2016.

14:45
East Asian values in the information era: the cultural-ethical traditions behind East Asian people’s evaluation of the phenomena happening around them, such as human-robot interaction, privacy-related problems and AI in the information era

ABSTRACT. In this paper, I will examine how Japanese people and East Asian people understand and interpret the phenomena and problems happening around them, such as human-robot interaction, privacy-related problems and AI, in the information era. This is a topic of information ethics, or IIE (intercultural information ethics) in a broad sense, if we follow Rafael Capurro’s explanation, i.e. that information ethics should be based on ontology rather than metaphysics. Capurro says that in the ontological perspective ‘Being itself’ is the major topic, while in the metaphysical perspective ‘beings’ (things, object matters, Vorhandenheit) is the dominant topic (Capurro, 2006). In my view, if we want to talk about life in the information era, this is first of all a topic of ontology, in the sense that people’s life is not the subject of metaphysical discourses, even if people’s life today is surrounded by a great deal of information technology. As we will see later, East Asian people’s views of matters in the information era are under strong influence from their traditional cultural-existential perspectives, so far as my research done in Japan and East Asia shows. This might be understood in various ways: they don’t live in an ‘advanced’ society compared with ‘Western’ people, and so on. But if we follow Capurro’s ideas, ‘Being’, or life, can’t be reduced to the problem of ‘beings.’ In this paper, I want to see East Asian people’s life in the information era from a combined perspective of ontology and ‘metaphysics.’ As we will see later, East Asian people’s life has a strong orientation towards the question, ‘what is a good and virtuous life?’ This question seems to derive from the traditions of Buddhism, Shinto (the indigenous religion of Japan), Confucianism and Taoism, and from a history which puts emphasis on na (norms, the fixed standards of evaluation) rather than jitsu (wealth, happiness in everyday life) (Nobuyuki Kaji, 2013).
This might be a difficult topic, because we almost always think only about ‘beings’ and often forget ‘Being.’ On the other hand, the topic of this paper might provide us with a chance to break out of unquestioned metaphysical presuppositions. Concretely, this paper will deal with the following matters. 1) First, we will see the research findings which I gained from my past and recent research in East Asia on people’s life in the information era, paying attention to East Asian people’s views on material wealth, the ethical meanings of disasters, good human relations, the importance of mutual reliance, criticism of natural selfishness and so on (views on the good and virtuous life). 2) Secondly, we want to know how people’s views on the ‘good and virtuous life’ are related to their views on matters of privacy, human-robot interaction and AI. As we will see later, I found through analysis of my research data that these views are correlated with each other. In my view, this can be interpreted as follows: people’s ontological ways of life determine their views on matters of privacy, human-robot interaction, AI and so on.

Outline of this paper: 1. Japanese and East Asian people have latent cultural-ethical attitudes toward the problems of modernized society. Table 1 shows how East Asian people think about the problems and matters around them in the information era. The figures in this table are based on my previous and recent surveys conducted in Japan and other (East and Southeast) Asian countries and regions. As the figures show very clearly, people in Japan and other Asian countries take a strong interest in the cultural-ethical matters of the information era. (The views used in these surveys were adopted from (a) a content analysis of discussions of the characteristics of Japanese and East Asian cultures by authors such as Kitaro Nishida, Tetsuro Watsuji, Hideo Kobayashi, Bin Kimura, Yujiro Nakamura and others, (b) an examination of the results of my in-depth interviews with Japanese and Asian people, (c) a content analysis of newspaper and magazine reports on Japanese attitudes toward disasters and wars, and (d) psychological scales measuring Japanese personality and values.) What we can learn from this table is that people in Japan and Asian countries are still strongly influenced by cultural-ethical-existential traditions. In fact, Buddhism, Confucianism, Taoism and Shinto (in Japan) seem to exert a strong cultural-ethical power on their attitudes toward life. For example, the percentage of respondents who give a strongly or fairly strongly affirmative answer to the view 'People will become corrupt if they become too rich' (honest poverty) is 80.4% in Japan (2014 HG survey).
This is a surprisingly high figure when we consider that Japanese people have been called 'economic animals.' It shows that Japanese people regard richness in terms of material wealth as far less important than other values such as honesty, mutual reliance and mutual respect, and unification with nature. They seem to prefer a virtuous life based on traditional Buddhist wisdom to material richness, although it is not clear whether they are aware of this tendency in their minds. They appear to be influenced by Buddhism, Confucianism, Shinto and other cultural traditions. A similar tendency is found among other Asian people too: 76.3% in China, 67.3% in South Korea and 91.1% in Taiwan say 'yes' to this view. Similarly, the percentages of people who sympathize with the view 'Occurrences of huge and disastrous natural disasters can be interpreted as warnings from heaven to people' are high in East Asia. Historically speaking, this idea, i.e. that disasters can be regarded as warnings or punishments from Heaven, derives from ancient China of thousands of years ago. It was born in the ancient cultural traditions of China, i.e. the combination of Onmyōdō (the way of Yin and Yang, a mythical divination system based on the Taoist theory of the five elements), Confucianism and esoteric Buddhism (concerning Ten-ken-ron, see Ikutaro Shimizu, 1970). Of course, this does not mean that East and Southeast Asian people deny scientific theories that explain natural disasters scientifically and logically. Their sympathy with this idea seems to be related to a kind of symbolic association.

2. The cultural-ethical attitudes toward the meanings of life in Japan and East Asia envelop people's latent attitudes toward robots, AI, privacy and related topics. The figures in Table 1 show us (or can be interpreted to show) that people in Japan and East and Southeast Asia are still strongly influenced by traditional cultural-ethical views. This is a very interesting and surprising fact when we consider that Japan and the other East Asian countries and regions are among the most highly developed in the world in terms of the diffusion of information technologies and computer-mediated communication tools and devices. More surprisingly, my surveys in Japan and Asian countries show that people's acceptance of the views on 'good and virtuous ways of life' listed in Table 1 has strong or fairly strong correlations with their views on privacy and the ethical problems of robots. Table 2 shows this finding. The figures in Table 2 show that people's views on 'good and virtuous ways of life' (the 'criticism of modern civilization' factor obtained by factor analysis of the views listed in Table 1, i.e. 'People will become corrupt if they become too rich'; 'People have a certain destiny, no matter what form it takes'; 'In our world, there are many things that cannot be explained by science'; 'In today's world, what seems cheerful and enjoyable is really only superficial' and so on) have statistically significant correlations with people's views on robots and privacy. Table 2 presents the finding from the survey in Vietnam, but similar findings emerged from the surveys in other countries too.

3. How can we interpret these findings? They seem to suggest that we, the people of Japan and East Asia, need different terms and schemas of discussion from those 'popular' in the West, such as 'autonomy,' 'responsibility,' and 'individuality,' in order to evaluate the potential ethical discussions on the information society in the East. This turns our eyes toward the importance of reconsidering the cultural-ethical-hermeneutical heritage contained in the texts of authors such as Kitaro Nishida, Tetsuro Watsuji, Hideo Kobayashi, Hiroshi Ichikawa, Motoki Tokieda and others, who were interested in the oneness, or undifferentiatedness, of subject and object, or of direct experience and reflection on one's life (experience). (We need to add the names of authors from other Asian countries to this list; this will be our next task.) Concerning this point, for example, Kitaro Nishida's idea of 'pure experience' is worth reconsidering, in the sense that 'pure experience' is in fact nothing but the wholeness of our experience, our knowledge, our perception and our orientation to action.

Table 1

Table 2

15:15
Looking for the Full Story: Ethical Issues Associated with Session Replay Scripts

ABSTRACT. Introduction

Session replay scripts collect data that are used for analytics. They record all of a user’s interactions with a particular website or application (we’ll use the word “program” to mean both), including mouse movement, keystrokes (which includes information that is entered and then deleted), timing information, and potentially any other information that is available to the program in the context in which it is being run. In some cases the recordings are sent to the owner of the program and in other cases they are sent to third party companies whose stated purpose is to provide analysis. These third parties claim to help website owners correct bottlenecks and identify navigation difficulties in order to improve the user experience on their websites.
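
As a sketch of the kind of event stream such scripts capture, consider the following Python model. It is purely illustrative (the event kinds and field names are ours, not any vendor's actual format), and it shows why text that is typed and then deleted remains in the recording:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    ts: float   # timestamp of the interaction
    kind: str   # "key", "mouse", "focus", ...
    data: str   # e.g. the key pressed or "x,y" coordinates

@dataclass
class SessionRecording:
    events: list = field(default_factory=list)

    def record(self, kind, data):
        self.events.append(Event(time.time(), kind, data))

    def typed_characters(self):
        # Every keystroke is kept, including ones the user later deleted.
        return [e.data for e in self.events if e.kind == "key"]

rec = SessionRecording()
for ch in "secret":                    # the user types "secret" ...
    rec.record("key", ch)
rec.record("key", "<Backspace>")       # ... then deletes part of it
rec.record("mouse", "412,97")

# The final form field no longer contains the deleted text,
# but the recording still does:
print(rec.typed_characters())
```

Because the recording is a raw event log rather than a snapshot of final form state, anything the user ever typed is available to whoever receives the log.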

In the paper we will rely on Helen Nissenbaum’s notion of privacy as contextual integrity to drive our analysis. The analysis will examine two different uses of session replay. In one, we consider the case where the program owner is responsible for the development and deployment of the session replay scripts. The second use is the one mentioned above: a website owner contracts with a third party for the scripts and the data analysis. Here we briefly review Nissenbaum’s theory of Privacy as Contextual Integrity and our initial analyses of the ethics of these two different uses of session replay scripts.

Privacy as Contextual Integrity

According to Helen Nissenbaum (2004), people's activities take place within a context that is regulated by a set of norms that govern and limit the flow of personal information within that context. There are two types of informational norms in Nissenbaum's privacy scheme: (a) norms of appropriateness, and (b) norms of distribution. The first determines whether a given type of personal information is appropriate to divulge within a particular context; the second restricts the flow of information within and across contexts. When either of these norms is "breached," a "violation of privacy occurs" (Nissenbaum, 2004: 125). Nissenbaum's "decision heuristic" provides us with an approach for analyzing privacy concerns, as well as other ethical concerns, in the use of session replay scripts.

Nissenbaum's nine-step analysis comprises five steps that describe the new technology (session replay scripts in our case) and four normative steps that assess the new technology vis-à-vis privacy as contextual integrity.

Session Replay Scripts Without a Third Party

Nissenbaum's fifth step is to locate applicable entrenched informational norms and identify significant points of departure. Thus, we liken the case of a program owner collecting and analyzing session replay information to that of a brick-and-mortar store employing CCTV to monitor the movement of customers through its stores. There are certainly differences; most importantly, the "users" of the physical store are typically anonymous to the store owner. Even if the user of a website does not enter any personally identifying information, the use of cookies allows a website owner to track an "anonymous" individual over time. Application owners often have access to information about their users and about the particular computers on which their browsers run.

Nissenbaum's assessment framework also has one consider the moral and political factors affected by the practice in question and ask how the system or practice directly impinges on the values, goals and ends of the context. In applying these steps in the paper, we will build a case that, for a program that runs on a desktop or laptop computer and is connected to the Internet with a flat-rate connection, the collecting and reporting of session replay information is ethically justified. Further, we suggest that there may even be a case for a program owner being ethically obligated to do this when doing so improves the program's usability for all people and its overall security.

In the paper we will consider other contexts, such as mobile applications and visiting websites on a mobile device. Furthermore, we will analyze programs that have different types of relationships with users. E-commerce, banking, and social media sites each have access to different types of data. These different types of data are important to Nissenbaum's first step, which involves the analysis of session replay in terms of information flows.

Session Replay Scripts With a Third Party

When session replay is used with third party scripts and a third party is responsible for the analysis, there is a clear disruption of information flows. This is a significant conclusion coming out of a recent study by Steven Englehardt in which he discovered that six widely used replay scripts all exposed users’ private moments to varying degrees (Goodin, 2017:224). Rather than the information flowing from user to website owner, it flows from user to website owner and to a third party. In order to maintain the norms of distribution, website owners would have to examine each transaction and exclude private information from the recordings so that no private information was leaked. It is worth looking at the practices of these companies to determine how they use the information that they obtain as part of the session replay process. In this abstract, we briefly look at one company, FullStory.

FullStory (2016) has its own privacy policy that makes clear that its policy becomes subsumed under the policy of the website owner (Customer) that hired FullStory, thus removing itself from direct responsibility for privacy violations.

FullStory collects information on a User under the direction of its Customer, and has no direct relationship with the User whose information it processes. It is important to understand that when a User visits other websites that use the FullStory Services, the FullStory Customer’s privacy policy apply to that information collected instead of this Privacy Policy. FullStory may use your information as permitted by the FullStory Customer in accordance with its privacy policies.

One way it exonerates itself from privacy responsibility is by cautioning customers of its services about inappropriate distribution of content. Its Acceptable Use Policy (2014) states:

•Use and maintain your list of Excluded Elements in the FullStory configuration to ensure that you never record sensitive information related to credit cards, government-issued ids, etc. Even if your website already stores this sort of data for its own functionality, it isn't relevant for FullStory, and so we want to never see it.

•Do not use any information in any way that is inconsistent with a user's intent. For example, if a user begins to fill in an email field in a sign-up form but then does not submit it, emailing the user would not be consistent with her or his intent.

Customers who try to contain the distribution of private information by using automatic redaction are often thwarted either by a lack of understanding about what exactly is being redacted, or by the additional steps they need to take to make redactions happen. "FullStory redacts credit card fields with the 'autocomplete' attribute set to 'cc-number', but will collect any credit card numbers included in forms without this attribute" (Englehardt, 2017). According to Nissenbaum's theory, contextual integrity is breached when the flow of information leaves the published website via the session capture script. When third-party script vendors realized that they were capturing more than browsing modes, they did little except push the burden of responsibility for securing the privacy of users onto the website owners, who were expected to redact all private and sensitive data. Despite the attempts of some website owners to comply, manual redaction is still insecure, according to Englehardt. For example, Englehardt discovered that sensitive information about medical conditions and prescriptions, along with user names, was sent to FullStory because access to earlier verification questions and mouse tracking did not ensure the privacy of the prescription data.
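
The failure mode Englehardt quotes can be illustrated with a small sketch. The attribute-based rule mirrors the quoted FullStory behaviour; everything else (the field names and dictionary layout) is our hypothetical illustration:

```python
def redact(form_fields):
    """Redact a field's value only when its autocomplete
    attribute marks it as a credit card number."""
    out = {}
    for name, fld in form_fields.items():
        if fld.get("autocomplete") == "cc-number":
            out[name] = {**fld, "value": "[REDACTED]"}
        else:
            out[name] = fld   # recorded verbatim
    return out

# One form uses the attribute, the other does not:
tagged   = {"card": {"autocomplete": "cc-number", "value": "4111111111111111"}}
untagged = {"card": {"value": "4111111111111111"}}

print(redact(tagged)["card"]["value"])    # [REDACTED]
print(redact(untagged)["card"]["value"])  # 4111111111111111 -- leaked
```

The point is that the redaction rule depends on markup the website owner may never have added, so an identical card number flows to the third party whenever the attribute is missing.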

This shifting of responsibility is an abdication of responsibility by third party vendors. There is an opportunity here for collaboration between both the website owner and the session replay company to recognize the importance of user privacy and develop best practices for protecting user privacy. Furthermore, the session replay companies may have a responsibility to develop tools that help website owners to consistently implement those best practices.

Conclusion

With session replay there are tradeoffs between privacy, usability, and security. The data gathering is always intrusive, often goes beyond the stated objectives, and sometimes happens without user knowledge or consent. In our paper, we will identify instances in which replay scripts violate both the norms of appropriateness and distribution, and hence violate the privacy of users.

References

Englehardt, S. (2017) No boundaries: Exfiltration of personal data by session-replay scripts. https://freedom-to-tinker.com/2017/11/15/no-boundaries-exfiltration-of-personal-data-by-session-replay-scripts/ November 15, 2017. Accessed December 9, 2017.

FullStory Acceptable Use Policy (2014) https://www.fullstory.com/legal/acceptable-use/ Accessed December 12, 2017.

FullStory Privacy Policy (2016) https://www.fullstory.com/legal/privacy/ Accessed December 12, 2017.

Goodin, D. (2017) No, you’re not being paranoid. Sites really are watching your every move, https://arstechnica.com/tech-policy/2017/11/an-alarming-number-of-sites-employ-privacy-invading-session-replay-scripts/. Accessed December 1, 2017.

Nissenbaum, H. (2004) “Privacy as Contextual Integrity,” Washington Law Review, Vol. 79, No. 1, pp. 119-157.

15:45-16:15 Break and Refreshments
16:15-17:15 Session 5A: Student Track
Location: Room 0.8
16:15
The State of the Responsible Research and Innovation Programme: A Case for Its Application in Additive Manufacturing (3D Print)

ABSTRACT. Since the inauguration of the responsible research and innovation (RRI) framework programme in 2011, RRI has been actively promoted around science and technology communities all over Europe. This article examines some of the issues surrounding RRI, and describes the nature of its application in different fields. It then goes on to make a case for its use in the additive manufacturing industry.

16:45
The path toward an ethics of distributed autonomous organizations (DAOs)

ABSTRACT. See attached paper

16:15-17:15 Session 5B: Women in STEM
Location: Room 0.9
16:15
There’s no such thing as “a woman”: Observations on the Human Brain Project’s approach to equality in Neuroscience and ICT

ABSTRACT. The Human Brain Project

The Human Brain Project (HBP) is a European Commission Future and Emerging Technologies Flagship project funded under the Seventh Framework Programme (FP7) and Horizon 2020. The HBP aims to construct research infrastructure to support developments in computing, medicine, and neuroscience. It is an enormous scientific endeavour with nearly unmatched potential to benefit society, but with that enormity and potential come significant challenges. One of these is ensuring that the many different types of research carried out are conducted in keeping with Responsible Research and Innovation practices (RRI; e.g. Von Schomberg 2011; Stilgoe et al. 2013). Substantial advances have already been made in terms of general compliance with many RRI ideals (Aicardi et al. 2017; Rainey et al. 2018; Stahl et al. 2016).

Gender in the HBP and Research Communities

In an EU policy context, gender has become a fundamental thematic component of Responsible Research and Innovation across the European Research Area. This is intended to address a well-documented gender imbalance, especially in STEM subjects and ICT in particular. If the HBP is taken as an example, the demography of the membership of the twelve Sub-projects (especially the ten "Scientific" Sub-projects) suggests that women in neuro-ICT endeavours are few and far between; that an attenuation takes place between the researcher and PI career stages; and that women in leadership roles are the exception to the rule. Furthermore, administrative tasks and additive roles, such as serving as an Ethics Rapporteur for a Sub-project, are more frequently undertaken by female researchers than by male ones.

This pattern reflects a well-trodden path of established understandings. Common responses to similar trends often consist of variations on: Women take on the “drudgery” or administrative work in academic contexts, to the benefit of male careers (Angervall et al. 2015), partly due to the academic system and social factors; the “leaky pipeline” leading to fewer female leaders exists at least in part because women are assumed to both want a family and to be primarily responsible for it (Carr et al. 2015); and that socialisation from an early age influences these trends (Shapiro et al. 2015). Underlying these responses is the idea that such examples are simply another reflection of institutionalised or systemic sexism. Typical responses to this include some variation of three concepts, summarised as: fix the women; fix the men and/or the community; and fix the system, environment, or infrastructure. Another approach is providing equality of opportunity, however philosophically problematic this may be (e.g. Rosa Dias and Jones 2007; Dworkin 2002).

HBP Gender Objectives and Potential Issues

Three objectives presented in the Horizon 2020 Strategy on Gender Equality are intended to address this: fostering gender balance in Horizon 2020 research teams; ensuring gender balance in decision-making; and integrating gender/sex analysis in research and innovation. These have a legal basis in Regulation (EU) No 1291/2013. On the surface, it appears fair and equitable (and in keeping with the principles of RRI) for women to have equal representation across all research contexts and at all levels. However, addressing the gender imbalance is often justified by the potential impact on the overall programme: equal proportions of women in research teams and decision-making bodies, and their representation in research (for example, collecting data from both male and female brains), are said to produce higher-quality knowledge and economic benefits. In other words, there is a "business case" for ensuring that the representation of women is balanced.

Conceptually, this resembles a version of “Trans-national Business Feminism”, the idea that “investing in women will expand the pool of talented workers and consumers, thereby increasing corporate competitiveness and profitability” (Roberts 2015, 2), but re-shaped for a neo-liberal research framework. The results of approaching equality from the perspective of women as a resource are unlikely to be successful, and may re-entrench negative patterns that discourage the involvement of women in ICT (Roberts 2015).

Another issue is that fundamentally, there is no such thing as “a woman.” The HBP Gender programme reflects Horizon 2020 wording, and currently portrays gender as a singular dimension, which will not address equality issues sufficiently. Women come in a range of ages, national origins, ethnicities, sexual preferences, socio-economic backgrounds, religions, and so forth, to say nothing about fluidity and change in these many aspects (Alfrey and Twine 2017). Decades of work have established that such factors are not mutually exclusive (e.g. Carvalho and Santiago 2010; Crenshaw 1991), and to deny them by omission is to ignore their significant impact and effectively exclude their substantial influence on women’s career trajectories in ICT and research more widely. Intersectionality theory is well-established, and the incorporation of intersectional methodologies has been considered not only in the process of analysis and interpretation of data, but also in the processes of research design and empirical data collection (Windsong 2016).

The HBP, in funding a programme to address gender balance, appears to be making strides toward meeting one of the goals of RRI and adhering to principles enshrined in EU law. At this time, a series of conferences, education opportunities, and training are planned for female members of the HBP (although some of these require applications, which may discourage some women from involvement). Women do benefit from training, flexibility mechanisms, and support structures. Mentorship in particular can have a powerful impact upon women’s careers in ICT research (Flick 2015). However, deployment of these events and initiatives is in process, and the motivation and theoretical approach behind these initiatives is not yet clear. If they are in line with supporting a “business case” for gender balance, this may prove problematic. In light of the lack of consideration for intersectionality in EU policy and thus its apparent omission from consideration in the HBP’s response to the need for gender balance, the implementation of such efforts could potentially worsen the very problem it is intended to solve.

We will attend a series of gender-equality-focused events sponsored by the HBP and, combining these experiences with first-hand observations already gathered at previous HBP conferences, present a critical report on the approach to gender equality currently being taken by the HBP. As a biological scientist who has shifted focus to responsible research and innovation in ICT, the lead author brings a unique disciplinary perspective to examining and reporting on the HBP's approach to equality, particularly with reference to gender and intersectionality.

References

Aicardi, C., Reinsborough, M. and Rose, N., 2017. The integrated ethics and society programme of the Human Brain Project: reflecting on an ongoing experience. Journal of Responsible Innovation, pp. 1-25. DOI: 10.1080/23299460.2017.1331101

Alfrey, L. and Twine, F.W., 2017. Gender-Fluid Geek Girls: negotiating inequality regimes in the Tech industry. Gender & Society, 31(1), pp. 28-50.

Angervall, P., Beach, D. and Gustafsson, J., 2015. The unacknowledged value of female academic labour power for male research careers. Higher Education Research & Development, 34(5), pp. 815-827.

Carr, P.L., Gunn, C.M., Kaplan, S.A., Raj, A. and Freund, K.M., 2015. Inadequate progress for women in academic medicine: findings from the National Faculty Study. Journal of Women's Health, 24(3), pp. 190-199. DOI: 10.1089/jwh.2014.4848

Carvalho, T. and Santiago, R., 2010. New challenges for women seeking an academic career: the hiring process in Portuguese higher education institutions. Journal of Higher Education Policy and Management, 32(3), pp. 239-249.

Crenshaw, K., 1991. Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, pp. 1241-1299.

Dworkin, R., 2002. Sovereign Virtue: The Theory and Practice of Equality. Harvard University Press, Cambridge.

European Commission, Directorate-General for Research & Innovation, 2016. H2020 Programme Guidance on Gender Equality in Horizon 2020, v. 2.0. http://eige.europa.eu/sites/default/files/h2020-hi-guide-gender_en.pdf

Flick, C., 2015. Mentorship in computer ethics: ETHICOMP as a "community mentor" for doctoral and early career researchers. Journal of Information, Communication and Ethics in Society, 13(3/4), pp. 326-345. DOI: 10.1108/JICES-10-2014-0052

Rainey, S., Stahl, B., Shaw, M. and Reinsborough, M., 2018 (forthcoming). Ethics Management and Responsible Research and Innovation in the Human Brain Project. In: R. von Schomberg (ed.), Handbook - Responsible Innovation: A Global Resource. Edward Elgar Publishing Ltd.

Regulation (EU) No 1291/2013 of the European Parliament and of the Council of 11 December 2013 establishing Horizon 2020 - the Framework Programme for Research and Innovation (2014-2020) and repealing Decision No 1982/2006/EC.

Roberts, A., 2015. The Political Economy of "Transnational Business Feminism": Problematizing the Corporate-led Gender Equality Agenda. International Feminist Journal of Politics, 17(2), pp. 209-231.

Rosa Dias, P. and Jones, A.M., 2007. Giving equality of opportunity a fair innings. Health Economics, 16, pp. 109-112. DOI: 10.1002/hec.1207

Shapiro, M., Grossman, D., Carter, S., Martin, K., Deyton, P. and Hammer, D., 2015. Middle School Girls and the "Leaky Pipeline" to Leadership: An Examination of How Socialized Gendered Roles Influences the College and Career Aspirations of Girls Is Shared as well as the Role of Middle Level Professionals in Disrupting the Influence of Social Gendered Messages and Stigmas. Middle School Journal, 46(5), pp. 3-13.

Stahl, B.C., Rainey, S. and Shaw, M., 2016. Managing Ethics in the HBP: A Reflective and Dialogical Approach. AJOB Neuroscience, 7(1), pp. 20-24. DOI: 10.1080/21507740.2016.1138155

Stilgoe, J., Owen, R. and Macnaghten, P., 2013. Developing a framework for responsible innovation. Research Policy, 42(9), pp. 1568-1580. DOI: 10.1016/j.respol.2013.05.008

Von Schomberg, R. (ed.), 2011. Towards responsible research and innovation in the information and communication technologies and security technologies fields. Luxembourg: Publications Office of the European Union. http://ec.europa.eu/research/science-society/document_library/pdf_06/mep-rapport-2011_en.pdf

Windsong, E.A., 2016. Incorporating intersectionality into research design: an example using qualitative interviews. International Journal of Social Research Methodology, pp. 1-13.

16:45
Evaluation Framework for Promoting Gender Equality in Research & Innovation (EFFORTI)

ABSTRACT. Our paper presents how the EU funded H2020 project EFFORTI (Establishing an Evaluation Framework for Promoting Gender Equality in R&I) might contribute to an increase of women in STEM by designing a tailored evaluation framework.

The basis for EFFORTI is the observation that better integration of women into research and innovation systems has an impact on the methodology, quality and relevance of research and innovation results. For decades, policy-makers, companies, higher education institutions and others have implemented numerous measures and strategies to attract more women to STEM. The problem is thus less a lack of good will than a lack of evidence about which types of interventions are most suitable for a given organisational or national context to promote gender equality.

EFFORTI therefore aims to measure progress in the area of gender equality (GE) and research and innovation (R&I) policy, including the stock-taking and further development of tools, methods and criteria to evaluate gender equality policies in national R&I systems. The ultimate aim of EFFORTI is to contribute to better GE policy-making across Europe by analysing a broad range of different GE policy measures with regard to their impacts on gender equality, research, innovation and competitiveness, as well as on the solution of Grand Challenges and the promotion of responsible research and innovation (RRI).

16:15-17:15 Session 5C: AI Ethics
Location: Room 1.1
16:15
Hate speech recognition AI – a new method for censorship?

ABSTRACT. A new term has risen to the center of modern Western discourse: hate speech. In both the US and Europe the term has been used to counter far-right, alt-right and nationalistic speech against immigrants and minorities [1]. The European Commission (EC) has acted to diminish illegal hate speech, which leaves the legal questions to the member states [2]. But what is hate speech? A clear definition has been lacking, and different dictionaries and authorities give different answers. Moreover, what is the illegal hate speech which the EC opposes? That, of course, depends on the legislation of the area in question. Council of Europe Recommendation No. R (97) 20 defines hate speech as follows: "For the purposes of the application of these principles, the term 'hate speech' shall be understood as covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin."

If we read this as it is written, a statistic (e.g. [3], on crimes committed towards and by immigrants) can clearly indicate that people of a certain origin are more likely to commit certain kinds of crimes. This information, as published, can indeed spread and promote xenophobia (fear of the foreign) and is thus hate speech. Are we truly putting research and statistics in the same basket as the KKK agenda or Mein Kampf?

The problem has of course also landed in the remote corner of Europe called Finland. The Finnish police have already increased the number of Virtual Corner Police, also known as the "hate speech police", from 3 to 40 during the last two years (with 48 high-ranking police officers to oversee them).
[4] In addition to overseeing internet activity, these hate speech police try to act as advertisements for the Finnish police force and, seemingly, to influence the political atmosphere of Finland with political statements (see e.g. [5]). Whether or not this is a good practice for a modern society is a different discourse. The government of Finland has started several programs to counter hate speech (see e.g. [6, 7]). In these, hate speech is described as: "Hate speech is the kinds of words, expressions or pictures which spread or advocate hate towards an individual or a group of individuals" [6] and "Hate speech is communication which intentionally violates, reduces or threatens other people" [8]. The concept of hate speech thus seems to vary a lot, even within one country's social programs.

Finnish law does not recognize hate speech, and yet the Finnish police have declared "zero tolerance" on hate speech [9, 10]. This creates a dilemma which may easily lead to the police using this newfound power and mandate in situations where they do not have the support of the law on their side. Although the law does not directly recognize hate speech, there have been several cases ([11, 12, 13, 14]) in which convictions have been based on such principles. For example, in 2012 the Finnish Supreme Court found a Finnish congressman guilty of "breach of the sanctity of religion" and "ethnic agitation". Interestingly, the judgement was based mainly on the alleged hurtful intent and offensive nature of the writing, and noted in decision point 21 that the attempt to prove the factual accuracy of the text does not lessen its slanderous nature. The prosecutor claimed in point 32 that the text was "so-called hate speech that does not enjoy the protection of freedom of speech" [15, 16]. The "hate speech police" have also given guidelines on how to talk about immigration on the Internet (see e.g. [17]).
The problem in the Finnish example is that the examples of “illegal speech” are written in the style of a less educated person, while the “more accepted” ways of speaking in the examples are those typically acquired by more educated persons, who are less likely to criticize immigration in the first place. Thus, according to the police, the only acceptable critique is academic in nature, and the only acceptable critics are those proficient in the art of academic speech and writing. While the hate-dripping ranting of the uneducated on the Internet is indeed annoying and shameful to watch, should it be made illegal? Should we punish people for having opinions and not being able to formulate them in a politically correct manner? Are we creating a new class divide between those who are allowed to express their opinions and those who are not? As an interesting side note, the Finnish police had their own police-only Facebook group where “hate speech” was common; the leader of the hate speech investigation group was also a moderator of that particular group [18]. The Finnish police have also declared that the hate speech police are to target “fake news”. How this is to be done is not certain [19]. But with a mere 40 personnel it is not possible to cover the whole Finnish-speaking Internet of 5,500,000 Finnish speakers. As a solution, researchers at Aalto University (formerly Helsinki University of Technology) have created a hate speech recognition AI which is being trained to locate and report hate speech in blogs and social media. So far it has been used to attempt to find hate speech from the candidates of the 2017 municipal elections [20]. Although the AI still requires a human to monitor and verify its findings, as far as we know of AIs, their learning curves can be exponential. Thus it is only a matter of time before the AI is capable of recognizing the unwanted texts on the internet.
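The internals of the Aalto system are not described here; it uses machine learning rather than word lists. Purely as an illustration of why automated detection produces the false positives and false negatives discussed below, consider a naive keyword filter (hypothetical code, not the research system): flagging is based on surface features of the text, not on its intent or context.

```python
# Illustrative only: a naive keyword filter, NOT the Aalto research system.
# Real systems use trained classifiers, but the failure mode is similar:
# decisions rest on surface features, not on intent or context.

FLAGGED_TERMS = {"invasion", "parasites", "vermin"}  # hypothetical watch list

def is_flagged(text: str) -> bool:
    """Flag a text if it contains any term from the watch list."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A hostile rant is caught...
assert is_flagged("They are vermin, send them back!")
# ...but so is a news report merely quoting it (a false positive)...
assert is_flagged("The politician called immigrants 'vermin' in his speech.")
# ...while a politely phrased dog whistle slips through (a false negative).
assert not is_flagged("We all know what kind of people they are.")
```

A trained classifier shifts the word list to learned statistical features, but whoever selects the training data and labels plays the same role as whoever wrote the watch list above.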
While the AI has only been used “against” politicians, it can also be used against any other group of people, or against all of them. This tool can therefore (and presumably will) be used against those who disagree with whoever has set the parameters for the AI, because the AI itself has no concept of hate or love, only the concept of the task. Hence it seems that Finns have created an AI to seek out unwanted opinions. It also seems clear that there will be both false positives and false negatives, although this is not our main concern. Mill (On Liberty) argued that all opinions need to be expressible, as otherwise we do not know which opinions are good and which are not; only through discourse can the good be separated from the bad. Mill says: “If all mankind minus one, were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.” [21] So, is hate speech an opinion? Yes it is, but it is also an opinion which can harm other people either directly or indirectly. Thus, is it acceptable to normalize hateful opinions? No. But is it acceptable to censor any opinion which one does not want normalized? No, again. Thus, the problem is what to censor, and this, of course, comes back to opinions. Because we cannot draw a clear line, hate speech seems to be like pornography: “what you like is porn, what I like is erotica”. Most speech considered hate speech is considered so because it is unwanted, distasteful, disturbing, even repulsive, just like pornography. Thus, as with pornography, the advice is: if you do not like it, do not watch it, do not read it, do not listen to it. But if it is criminalized it will only get worse, because there are people who are drawn to the illegal, unwanted and disgusting.
Mill’s definition of harmful speech in On Liberty also explicitly excludes offensive speech, as “offence is given whenever an attack is telling and powerful”. Conversely, the Ethical Journalism Network’s definition, which is also used by the Finnish police [20], includes points like the status of the speaker, the reach of the speech, the intent and objectives of the speech, the content and form of the speech, and the economic, social and political climate [22, 23]. Moreover, when the “hunting” of hate speech is automated, the hunt will become more widespread. It will not be used only against politicians or other citizens holding positions of trust, but against every citizen. And even if we accept the presumption that this automation is done only with good intentions in mind (see e.g. Kant), and we can prove that it only does good (or more good than harm, see e.g. Mill), we still have a huge problem: who will use it next, and for what? AI does not have benevolent purposes, as it has no purposes of its own; it only fulfills the purposes of its users. When we automate the search for hate speech, as it is currently defined, we leave a window for future governments, be they populist or not, to use the system to discover opinions other than those it is currently used to find. The AI has no sense of good or evil; it will search for and find whatever it is programmed to find. Thus, we are creating an application which future governments can use to persecute those holding differing opinions. In the full paper we will examine in more depth the idea of hate speech, hate speech recognition AIs, privacy issues, and the possible implications of this kind of technology for human rights.

16:45
The Envelopment Principle for Artificial Intelligence and Robotics

ABSTRACT. General ethical issues like privacy (e.g. Barocas & Nissenbaum, 2014), over-trust, bias (e.g. Friedman & Nissenbaum, 1994), etc. have long been examined by scholars with regard to Artificial Intelligence. The ethics of AI and robotics in specific domains has also received serious scholarly discussion. With AI and robotics moving into our lives in a more explicit way, these existing ethical issues are only amplified and new ethical issues will arise. As a solution to many of these ethical issues, some scholars in the field of machine ethics have argued that AI and robotics should be endowed with moral reasoning capabilities to ensure that these algorithms and robots do not harm human beings or tread on values important to human beings. It is not my purpose here to argue against this solution (see van Wynsberghe and Robbins (forthcoming) for such an argument). Amanda Sharkey (2017) has argued that we should instead ensure that we do not put AI and robots into situations which demand moral competence. I present here a principle to guide the development and implementation of AI and robotics in a way which will prevent the need for moral reasoning capabilities in robotics and AI and better prevent harm to human beings. I call this principle ‘The Envelopment Principle’: robots and AI algorithms should be clearly enveloped within a working environment which is precise with regard to its boundaries, functions, inputs, and outputs.

I borrow the term ‘envelope’ from the field of robotics. “The ‘envelope’ of a robot is the working environment within which it operates or, more precisely, the volume of space encompassing the maximum designed movements of all the robot’s parts” (Floridi, 2011a, p. 228). Luciano Floridi has discussed envelopment as a process which allows robots and AI to be more effective. He argues that driverless cars will be successful insofar as the world has adapted to their limited functionality. He gives the example of robotic dishwashers: “We do not build robots that wash dishes like us, we envelop microenvironments around simple robots to fit and exploit at best their limited capacities and still deliver the desired output” (Floridi, 2011b, p. 113). Dishwashers are effective because they have been properly enveloped within an environment conducive to their operation (a closed box we call a dishwasher). The alternative is a humanoid robot, which would be decidedly ineffective at washing dishes. Floridi is concerned with ensuring that this process of envelopment occurs with our foresight and guidance, to prevent a world which works well with robots and AI but is not desirable to human beings. He is discussing a process which occurs after robots and AI have been put into the world.

Here, I am more concerned with ensuring that robots and AI which are put out into the world are already clearly enveloped. We should not put driverless cars out into the world until we are much clearer on their limitations (i.e. the environments that envelop them in such a way that they are effective). This also means that we should not be putting digital assistants like Google Home and Amazon’s Alexa out into the world without clearly defined functionality.

Using Floridi’s dishwashing robot as an example, we can see two broad sets of issues with regard to non-enveloped robotics and AI. First, the humanoid robot would constantly face novel scenarios in which it would have to make judgments which could result in harm. I would consider myself deeply harmed were such a robot to scrub my new Le Creuset non-stick skillet with an abrasive brush. Add in crystal wine glasses and razor-sharp knives and we can see a few of the many complex decisions such a humanoid robot would encounter. Furthermore, this robot would have to share its environment with humans, which increases the potential for ethical dilemmas and harm to humans.

Second, the task for the robot is ill-defined. “Wash dishes” is not specific enough. This could mean finding dirty dishes throughout a household, washing and drying those dishes, and putting them away. Given a robot with this umbrella task, one could easily envision further tasks which would need to be added on: notifying a human that the soap is running out, sweeping up broken glass, etc. Human users of such a robot may justifiably expect the robot to do things it simply is unable to do. These two sets of issues (harmful judgments and an undefined task) should not occur in robotic and AI systems. The envelopment principle prevents these problems.

The word ‘environment’ in the envelopment principle is construed broadly. It means not only the physical environment in the case of a robot, but also a virtual environment, which refers to the possible inputs (or types of input) in the form of data that the system could encounter. ‘Boundaries’ refers to an algorithm’s or robot’s expected scenarios. For example, AlphaGo expects as an input a Go board with a configuration of white and black pieces. AlphaGo is not expected to be able to suggest a chess move based on an input of a chess board with a configuration of pawns, knights, bishops, rooks, queens, and kings on it. An algorithm playing chess is fine, but it is a different algorithm than AlphaGo.

The envelopment principle also calls for a clearly defined function. This is meant to ensure that all possible outputs are in response to the task given to the AI system. In the AlphaGo example above, the output is a move in the game of Go. We might be shocked by a particular move it makes, but it is nonetheless a legal move in the game of Go. It would be strange if the task of AlphaGo were defined as “not letting the opposing player win” and, instead of making a move, its output were to mess up the board (because it knew there was no chance of winning and this was the only way to ensure that the other player did not win).
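The abstract states the principle only in prose, but the input/output discipline it describes can be sketched in code. A minimal, hypothetical move-suggesting agent for tic-tac-toe (standing in for AlphaGo, whose internals are far more complex) declares its envelope up front: out-of-envelope inputs are rejected rather than guessed at, and every possible output is a legal move for the given board.

```python
# Illustrative sketch of an "enveloped" agent (hypothetical, not AlphaGo):
# the envelope is declared up front, out-of-envelope inputs are rejected,
# and every possible output is a legal move for the given board.

VALID_CELLS = {"X", "O", " "}  # the agent's declared input vocabulary

def suggest_move(board: list) -> int:
    """Return the index of an empty cell on a 3x3 tic-tac-toe board.

    Raises ValueError for any input outside the envelope: wrong size,
    unknown symbols, or a board with no legal move left.
    """
    if len(board) != 9 or not set(board) <= VALID_CELLS:
        raise ValueError("input outside the agent's envelope")
    empty = [i for i, cell in enumerate(board) if cell == " "]
    if not empty:
        raise ValueError("no legal move: board is full")
    return empty[0]  # trivial policy: first empty cell

# An in-envelope input yields a legal move:
board = ["X", "O", " ", " ", " ", " ", " ", " ", " "]
assert board[suggest_move(board)] == " "

# An out-of-envelope input (a chess-like symbol) is rejected, not guessed at:
try:
    suggest_move(["K", "O", " ", " ", " ", " ", " ", " ", " "])
except ValueError:
    pass
```

The design choice worth noting is that the constraint is enforced at the boundary of the agent, not inside its decision logic: the policy itself never needs moral or contextual judgment, because inputs and outputs it cannot handle are excluded by construction.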

This principle, at first glance, appears to be a threat to innovation in robotics and AI. Surely, the objection goes, robotics and AI should face only “light touch regulation” to ensure that we can realize their potential value. As we will see below, much more than this principle is needed to ensure that AI is developed responsibly. However, in response to those who would describe this principle as a threat to innovation, or who would claim, as Elon Musk once put it in response to those expressing concern about autonomous cars, that the principle would “kill people”, I will show that successful AI to date has already followed this principle and is successful in part because it did so. The possibilities for the development of AI which follows this principle seem unlimited and potentially beneficial to society at large. In contrast, current AI systems not following this principle are at best mere gimmicks which have the potential to erode consumer trust, deceive consumers, and, worst of all, harm human beings.

It must be said that this principle is not enough on its own. It says nothing of what tasks should be assigned to robotic and AI systems. It is easy to conceive of a robotic or AI system following this principle which is tasked with creating a superbug, or with killing someone. This question is actively debated in the field of robot ethics and the ethics of AI. Tasks that are deemed unethical for AI systems should therefore not be considered by developers, and the envelopment principle applies only to those tasks deemed ethical.