
08:30-09:30 Session 1: Registration
Location: Main Hall: building D2, base floor
09:30-10:30 Session 2: Plenary: Opening Remarks and Welcome
Location: Aula Magna: building D2, base floor
11:00-12:30 Session 3A: Law
Location: Room H3: building D4, 2nd floor
Threats Of The Internet Of Things In A Techno-Regulated Society

ABSTRACT. Technology has been rapidly changing the way we interact with the world around us. Companies, aiming to meet new consumer demands, are developing products with technological interfaces that would have been unimaginable a decade ago. Automated systems turn on lights and warm meals as you leave work; intelligent bracelets and insoles share with your friends how far you have walked or cycled; sensors automatically warn farmers when an animal is sick or pregnant. These examples are all manifestations of the concept of the “Internet of Things” (“IoT”). There are strong disagreements regarding what IoT stands for; no unanimously accepted definition of the concept exists. Broadly, it can be understood as an interconnected environment of physical objects linked to the Internet through small built-in sensors, creating a ubiquitous computing ecosystem designed to facilitate and introduce functional solutions for daily routines and activities. Even though it might resemble a futuristic scenario, this kind of technology is already part of the present. Bracelet computers, smart watches, health devices, smart houses, cars and smart cities are all manifestations of the “Internet of Things”. Nevertheless, it is still a fairly recent culture, based on the new relations we are forging with machines and interconnected devices. It is estimated that the number of “things” connected to the Internet has already surpassed the number of people, which further confirms this new human-machine relationship. Estimates suggest that by 2020 the number of interconnected objects will exceed 25 billion, and may reach 50 billion smart devices. All this hyperconnectivity and continuous interaction between gadgets, sensors and people points to a rise in the data and logs being produced, stored and processed both virtually and physically.
On one hand, this may produce innumerable benefits for consumers. Interconnected health devices allow constant and efficient monitoring as well as greater interaction between doctor and patient. Automated residential systems will enable users to send messages to their home devices before they even arrive, performing actions such as opening the garage door, turning off alarms, turning on the lights, preparing a hot bath, cooking dinner, playing that special song, and even adjusting the rooms’ temperature. Moreover, what the future holds for IoT is yet to be discovered. On the other hand, this large number of connected apparatuses will accompany us daily and regularly in our everyday life, collecting, transmitting, storing and sharing an enormous amount of data – most of it strictly private and even intimate. With the exponential rise of such devices, we should also pay attention to the potential risks and challenges that this increase may bring to fundamental rights. Those challenges can be investigated through a wide variety of lenses. For example, the new technological scenario is occasioning several changes in the regulation and jurisprudence of consumer law. Nevertheless, despite the variety of areas covered by this discussion, this paper will investigate those challenges primarily through the lens of privacy, freedom of expression and the protection of personal data. Although some of the threats and risks of the IoT scenario are not novel, considering how recent this context of hyperconnectivity is, we are not yet fully conscious of the possible damages that are dramatically enhanced in an IoT environment, nor do we have sufficient legal regulation to avoid losses that could arise from the unclear processes of storage, treatment and sharing of our personal data in an IoT context.
Besides, while we fail to establish an adequate regulatory framework upheld by the law, we are experiencing strong self-regulation by the market, a regulation often effected through code design, which we may call “techno-regulation”. It is crucial to analyze the new legal challenges of this context, which force us to think about an adequate legal framework capable of responding to them. With that in mind, this paper is structured in two main sections. The first introduces the concept of IoT and shows how the focal point of this discussion goes beyond the IoT itself, linking up to the concepts of interconnectivity and Web 3.0. To reflect on the nascency of IoT, it is important to take a step backwards and look carefully into the impacts of (the promise of) hyperconnectivity. That is why this section, even though titled “The Internet of Things”, is not restricted to IoT: it encompasses the development of the Web, showing how the user’s experience has changed in a context of greater interactivity and connectedness. The second section argues for the importance of advancing the law in search of a new regulation, especially in Brazil, that is both adequate to new technologies and fits the new IoT context, preventing a negative scenario in which techno-regulation overrides the regulatory framework based on the rule of law and controls us in an insurmountable way, potentially violating several fundamental rights. Based on a theoretical and constitutional approach to current technological evolution, with particular regard to the Internet of Things and its privacy dimension, the purpose of this preliminary effort is to trigger further reflections about the regulatory challenges posed by greater (inter)connectivity.

The rule of law and EU data protection legislation

ABSTRACT. The rule of law is a philosophical, political and legal concept that has been adopted as a fundamental political and legal principle due to its importance in modern society. The complexity of its understanding results from the conceptual unity of the ethical, political and legal elements which form the basis of a variety of definitions. From the perspective of legal theory, the rule of law is a normative concept indicating the ideal for the construction and functioning of the legal system. In this sense, it is a standard for assessing the quality of law, which from an analytical perspective is considered in contemporary theory as a dynamic system of principles and rules, general and individual, that defines its own internal validity criteria. The article aims to analyse issues related to the development of a new concept of the rule of law that responds to the challenges of modern society. With reference to its requirements, it discusses the recent amendments in EU data protection law following the adoption of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). The rule of law, as a concept of political philosophy and a legal principle adopted in positive law, has different manifestations in national legal systems due to specific political, cultural and legal traditions.
At the same time, the aim within the frames of international law and EU law is to develop a common concept of the rule of law, on the basis of different national traditions, that is adequate to the challenges of modern society: first, the trend towards strengthening the importance of international and transnational law in the context of globalization, which is closely linked with the development of the Internet; and second, the need to develop a definition of the rule of law that is practicable and measurable through specific indicators. With this in view, definitions of the rule of law have been adopted by the UN, the CoE and the EU. The modern concept of the rule of law is aimed not only at limiting arbitrary power, but also at creating legal certainty and predictability and at protecting individual and human rights. The minimum standard of the concept is composed of a formal and a material part. The formal part includes a set of standards and requirements for the quality of law based on the principles of formal justice. Their content is a projection of the main functions of the law: to guide the behavior of individuals and to balance their interests. The material part of the minimum standard requires the protection of individual rights, notably the human rights that are an indispensable element of social justice. In this sense, the concept of the rule of law is aimed at establishing an effectively functioning legal system. The set of standards and requirements for the legal system that forms the formal part of the concept is dynamic. The main requirements that have evolved as mandatory elements of the formal aspect are the requirements for generality, publicity, limits on norms with retrospective effect, clarity, lack of contradiction, feasibility, stability, consistency and compliance with the principle of proportionality.
The formal aspect of the concept of the rule of law also covers requirements for the institutional mechanism of law, such as the separation of powers, limitations on discretionary powers and standards related to judicial procedures. The concept of the rule of law is closely associated with formal justice, the core of which is the idea of equal treatment. Formal justice sets standards both with respect to the legislative process and the content of legal rules, and also as regards the application of the law. The material part of the rule of law concept requires the protection of individual rights and human rights, which are a necessary component of the modern concept of social justice. The perception of human rights as individual rights, not merely as legal principles, would contribute to their effective protection. Against this established framework for the modern concept of the rule of law, the new amendments in EU data protection law are discussed. In a recent report adopted by the Venice Commission, the collection of data and surveillance are considered an area where the application of rule of law standards is of high importance. Fast technological development and the growing use of personal data require the implementation of an effective legal infrastructure in order to deal with the privacy and data security implications that are raised. The article provides an analysis of data protection principles and concepts in view of the rule of law standards, starting from the point that the main aim of data protection regulation is to protect the personal sphere of individuals, on the one hand, and to support the free flow of information and data, on the other. The issues are discussed and conclusions are drawn in the light of the need to ensure a globally consistent regulatory framework in order to protect effectively the rights of the parties concerned.

Company Law and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators
SPEAKER: unknown

ABSTRACT. The paper we propose is the outcome of a workshop organized on 7-8 July in Munich. It shows ways to vest artificially intelligent entities with legal personality under national legal orders as they stand now. More specifically, the paper asks on which national legal order he or she who creates an artificial intelligence (A.I.) would rely so as to render the A.I. capable of acting legally to the greatest extent possible. Which are the best legal vessels national legal orders currently have on offer for making an A.I. as autonomous as possible? The paper is special in that it explores this question from a lege lata perspective. It does not propose ways to change the law, but works with the law as it exists now. Based on Shawn Bayern’s proposal to grant an autonomous A.I. legal personality by hosting it within a US company (a limited liability company, LLC; see S. Bayern, The Implications of Modern Business Entity Law for the Regulation of Autonomous Systems, 7 European Journal of Risk Regulation 2, 297 et sqq.), the paper first examines the British, German, and Swiss legal orders in order to identify the implications of Bayern’s proposal. It explores the potential of the British limited liability partnership, the German “Gesellschaft mit beschränkter Haftung” and the Swiss foundation, and compares them with the US LLC, all with a view to identifying the best way of awarding legal personality to an A.I. In doing so, the paper identifies common elements and issues that arise under all four legal orders, namely the company law problem of creating an entity in which no human being is involved any longer (the so-called no-man company), the transitory nature of the existing legal rules that enable the creation of such a company, and the problem that an A.I. having legal personality could collide with the constitutional guarantee of human dignity, among others. The paper is firmly situated within comparative law. While intellectually stimulating, it is not just academic.
Rather, it seeks to guide creators and owners to the legal order best suited to their needs. At the same time, it instructs policy makers on the potential and the dangers of the existing national legal orders – which allows them to identify the rules to be changed or adopted so as to prevent A.I. from acquiring legal personality under rules which were not originally intended to serve that purpose.

11:00-12:30 Session 3B: Big Data + RRI
Location: Room H4: building D4, 2nd floor
Big Data and Algorithmic Decision-Making: Can Transparency Restore Accountability?

ABSTRACT. Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Can transparency contribute to restoring accountability for such systems? Several objections are examined: the loss of privacy when data sets become public, the perverse effects of disclosure of the very algorithms themselves (‘gaming the system’ in particular), the potential loss of competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms are inherently opaque. It is concluded that transparency is certainly useful, but only up to a point: extending it to the public at large is normally not to be advised. Moreover, in order to make algorithmic decisions understandable, models of machine learning to be used should either be interpreted ex post or be interpretable by design ex ante.

“Speeding Up Engagement” - A Systematic Approach for Making Use of Facebook Comments for Upstream Engagement
SPEAKER: unknown

ABSTRACT. Deliberative activities constitute an essential part of Responsible Research and Innovation (RRI). Within the deliberative dimension of RRI, so-called “upstream engagement” covers activities which try to legitimize, authorize and prioritize research agendas and intentions. Although upstream engagement is an effective approach to include the public in research, its implementation requires time and effort. To address this challenge, we have developed a systematic, Facebook-specific approach for capturing feelings, ideas, options and priorities towards a certain type of technology and integrating them into technology-related research. To illustrate our proposed approach, we have applied it to ‘virtual reality’ and ‘affective technology’ as exemplary cases.

Big Data And Price Discrimination At The Bottom Of The Pyramid
SPEAKER: unknown


ABSTRACT. The proliferation of fast and cheap methods of collecting personal information from smart devices has fostered the increasing use of dynamic pricing in advertising products to consumers. Big data provides detailed personal profiles of shoppers, using tracking technologies and data mining to personalize products and prices. Consumers are grouped according to characteristics and product preferences that may or may not describe the customer accurately. Data brokers attempt to define and categorize a consumer in order to advertise personalized products and predict reactions to dynamic pricing and marketing.

The Internet has unfortunately not eliminated information asymmetries. Consumers use price comparison sites to get the best deal, but retailers use pricing technologies to their advantage by offering different prices to different customers, creating tiered pricing. This practice enables individually tailored pricing through the identification, observation, and tracking of customers. The ability of consumers to remain anonymous is widely debated in the literature and is predominantly assumed to be impossible given the aggregation of data from various sources.


Price discrimination occurs when there are differences in the price of the same good that are not based on cost (DePasquale, 2015), and there are existing mechanisms, such as the Robinson-Patman Act, to protect consumers from price discrimination (Shepard, 1991). Nevertheless, there is evidence in the literature that price discrimination persists in the form of personalized pricing (Miettinen and Stenbacka, 2015).

The use of big data in setting personalized prices can lead to price discrimination in the form of higher prices. For example, Satter (2015) reported that the use of price optimization software in the car insurance sector led to higher insurance costs for consumers. The use of big data for pricing decisions is also affecting areas such as electronics, transportation, and health insurance (Broderick, 2015). Further, price discrimination may occur “per se” rather than on purpose; for example, there is evidence of this in the credit card industry (Murphy and Ott, 1977), rewards programs (Hartmann and Viard, 2005), and pharmaceutical prices (Lichtenberg, 2010), among others.

The literature on price discrimination also presents a case of “per se” price discrimination in marketing oriented towards consumers at the bottom of the pyramid, in what marketers refer to as Bottom of the Pyramid Marketing (Hahn, 2009). Accordingly, marketers aim for affordable prices by working on volume and scale (Pitta et al., 2008), but this can be more expensive in the long run. For example, some authors attribute the cause of such “per se” price discrimination to a paucity of consumer understanding, limitations on product engineering (Mascarenhas et al., 2005), and disparate approaches to corporate social responsibility (Davidson, 2008).

This research aims to provide evidence from the retail environment of “per se” price discrimination at the Bottom of the Pyramid. Accordingly, it will analyze existing data sets of online retail prices using techniques such as web scraping of retail price data (Polidoro et al., 2015), big data providers, and statistical techniques. The population group to be studied is located in the Greater Bridgeport, CT area, which is regarded as a city with a diverse population, including consumers at the Bottom of the Pyramid (Kotler, 2002).

The researchers will attempt to show that “per se” price discrimination exists by comparing average retail prices of consumer goods between two population groups, namely consumers at the Bottom of the Pyramid and middle-class consumers. Furthermore, such discrimination is not a result of specific marketing campaigns or of companies deliberately engaging in price discrimination.
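The group comparison described above can be sketched as a simple two-sample test. The sketch below is purely illustrative: the prices are invented, and the study itself may use different statistical techniques; it assumes prices for the same basket of goods have already been scraped for each area.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing two means with unequal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Hypothetical scraped prices (USD) for the same basket of goods
bop_area = [4.99, 5.49, 5.29, 5.79, 5.19, 5.59]     # Bottom-of-the-Pyramid neighborhood
middle_area = [4.79, 4.99, 4.89, 5.09, 4.69, 5.19]  # middle-class neighborhood

gap = statistics.mean(bop_area) - statistics.mean(middle_area)
print(f"average gap: ${gap:.2f}, Welch t: {welch_t(bop_area, middle_area):.2f}")
```

A large positive t statistic would indicate that the higher average price observed in the Bottom-of-the-Pyramid area is unlikely to be sampling noise; degrees of freedom and a p-value would follow from the Welch-Satterthwaite formula.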


Big data tools enable retailers to see real-time product movement and the effectiveness of advertising as it occurs, rather than measuring campaigns after the fact. Products and services can be tailored and personalized using web browsing behavior (Shiller, 2014:22). For example, using customerization, a redesign of marketing from the perspective of the client, companies can offer highly personalized products in a wide range of categories. These companies are transforming the practice of marketing from being “seller-centric” to being “buyer-centric” (Wind and Rangaswamy, 2001:14). Using information and analytics, organizations can reshape the customer value proposition on three levels: by enhancing, extending or redefining the value of the customer experience (Berman, 2012:19).

Advertisers can target preferred customers with discount coupons and special offers to loyal customers, using electronic coupons that give additional tracking information to the seller. In essence, retailers adjust prices continually after checking competitor strategies using price-comparison bots. A price can differ for every individual shopper at various times during the day – all without the knowledge of the consumer. Consumers do have the right to opt out of tracking, if that option is offered by the data broker and if the consumer is aware and savvy enough to navigate the opt-out function. A few brokers offer customers the ability to view and correct their profiles. Retail data is aggregated using purchase history and loyalty card information to tailor discounts to desirable customers by generating electronic coupons and other discounts. Data brokers also hold data on competitors’ customers, enabling sellers to target non-customers by using their purchase history of competing products. The buyer’s location is also used to identify wealthier customers based on zip code and address, or distance from competitors. This practice can benefit higher-income consumers, who have more options, over lower-income areas with fewer competitors. Ethical issues of individual privacy, lack of transparency and erosion of trust are of concern in a marketplace where economic decisions have social consequences.
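The continual, competitor-driven repricing mentioned above can be illustrated with a toy rule. Everything here is hypothetical (the function name, margins, and prices are invented for illustration); real repricing engines are far more elaborate, but the basic shape is the same: undercut the cheapest observed rival while protecting a minimum margin.

```python
def reprice(own_cost, competitor_prices, min_margin=0.10, undercut=0.01):
    """Toy repricing rule: price 1% below the cheapest competitor,
    but never below cost plus a minimum margin."""
    target = min(competitor_prices) * (1 - undercut)
    floor = own_cost * (1 + min_margin)
    return round(max(target, floor), 2)

# Prices a comparison bot might have scraped from rival retailers
rival_prices = [12.99, 13.49, 12.49]
print(reprice(own_cost=10.00, competitor_prices=rival_prices))
```

Run for every product at frequent intervals against freshly scraped competitor data, even a rule this crude produces the continuously shifting, shopper-invisible prices the paragraph describes.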


Micro-categorization of consumers means that ads are targeted to us directly, but we do not see other ads. Are we accurately portrayed in the subgroup that defines us by our Internet tracks? Real-time bidding for customers makes the online market an uneven playing field by allowing firms to send loyalty points and discounts to some but not others. “Personalization can lead you down a road to a kind of information determinism in which what you have clicked on in the past determines what you see next – a Web history you are doomed to repeat. You can get stuck in a static, ever-narrowing version of yourself – an endless you-loop” (Rosen, 2013:45).

Ethical questions such as the following need to be addressed:

• How to protect competitively sensitive data or other data that should be kept private?
• What are the intellectual property rights attached to data?
• Who owns the data?
• What defines the fair use of data?
• Who is responsible when an inaccurate piece of data results in negative consequences?

Targeted advertising using big data raises ethical issues of third-party usage of personal data, privacy, ownership, and the lack of transparency in how the applications of the main big data players interact with our data.


Berman, S. J. (2012). Digital transformation: opportunities to create new business models. Strategy & Leadership, 40(2), 16-24.

Broderick, M. (2015). What's the price now? Communications of the ACM, 58(4), 21-23.

Davidson, D. K. (2008). When CSR meets BoP: Ethical concerns at the base of the pyramid. Sustainability challenges and solutions at the base of the pyramid, 462-474.

DePasquale, C. (2015). The Robinson-Patman Act and the Consumer Effects of Price Discrimination. The Antitrust Bulletin, 0003603X15602382.

Hahn, R. (2009). The ethical rational of business for the poor–integrating the concepts bottom of the pyramid, sustainable development, and corporate citizenship. Journal of business ethics, 84(3), 313-324.

Hartmann, W. R., & Viard, V. B. (2005, May). Quantity-based price discrimination using frequency reward programs. Conference presentation, Summer Institute in Competitive Strategy, University of California, Berkeley.

Kotler, P. (2002). Marketing places. Simon and Schuster.

Lichtenberg, F. R. (2010). Pharmaceutical price discrimination and social welfare. Capitalism and Society, 5(1).

Mascarenhas, O. A., Kesavan, R., & Bernacchi, M. (2005). Global marketing of lifesaving drugs: an analogical model. Journal of Consumer Marketing, 22(7), 404-411.

Miettinen, T., & Stenbacka, R. (2015). Information Economics and Policy, 33, 56-68.

Murphy, M., & Ott, M. (1977). Retail credit, credit cards and price discrimination. Southern Economic Journal, 43(3), 1303-1312.

Pitta, D. A., Guesalaga, R., & Marshall, P. (2008). The quest for the fortune at the bottom of the pyramid: potential and challenges. Journal of Consumer Marketing, 25(7), 393-401.

Polidoro, F., Giannini, R., Lo Conte, R., Mosca, S., & Rossetti, F. (2015). Web scraping techniques to collect data on consumer electronics and airfares for Italian HICP compilation. Statistical Journal of the IAOS, 31(2), 165-176.

Rosen, J., Marlene Y. (2015), Investment Advisor, 35(9), 76-76.

Satter, M. (2015). Investment Advisor. 35(9), 76-76.

Shepard, A. (1991). Price discrimination and retail configuration. Journal of Political Economy, 99(1), 30-53.

Shiller, B. R. (2014). First-degree price discrimination using big data. Presented at The Federal Trade Commission.

Wind, J., & Rangaswamy, A. (2001). Customerization: The next revolution in mass customization. Journal of Interactive Marketing, 15(1), 13-15.

11:00-12:30 Session 3C: Theory
Location: Room H5: building D4, 2nd floor
Language Matters - Words Matter

ABSTRACT. In his profound and prophetic paper, “On the Impact of the Computer on Society,” Joseph Weizenbaum exhorts us, as computer professionals, to recognize that “[t]he nonprofessional has little choice but to make his attributions to computers on the basis of the propaganda emanating from the computer community and amplified by the press. The computer professional therefore has an enormously important responsibility to be modest in his claims.” [Weizenbaum, 1972] This is advice that on the evidence is sometimes difficult to take. Although a tendency to extrapolate with just a small excess of enthusiasm may be understandable in an environment where researchers are competing for preference in applications for a limited pool of grant money, there are negative consequences when the resulting claims are taken up and broadcast by an unsophisticated or partisan press. These exaggerations seem to be particularly endemic to the discourse among certain researchers working in specific areas of artificial intelligence. The tenor of this discourse, as represented in papers published over a long span of time and, more recently, in public declarations disseminated by video presentations on a variety of platforms and social media, encourages not only unrealistic (and occasionally absurd) projections but also the systematic denigration of the reach and richness of human intelligence and the robustness of human judgment. This devaluation of human capacity seems particularly harmful when it is taken as a commonplace or received wisdom concerning the relations between humans and machines, and the future of humanity itself. 
Among the specific ways in which this underestimation, bordering on contempt, of human capacity is expressed are: the systematic and ahistorical under-appreciation of the robust acuity of human judgment; the implicit conflation of human intelligence with what is derivable through a computation [Warwick, 2007; Moravec, 1998]; the undervaluation of human creativity [Minsky, 1982; Herbert Simon, as recorded in Stewart, 1994]; the disparagement of the richness and expressive value of human language [Warwick, 2007]; and the spurious attribution of human capacity to computer-based systems [Turkle, 2010; more dangerously, Arkin, 2009]. In this paper, I will argue that these exaggerations and distortions echo the warnings articulated by Weizenbaum in his 1972 paper. Moreover, they entrain immediate harms as well as consequences over the longer term that are damaging to individuals and society.

Underestimation of the Acuity of Human Judgment: Long ago, Borning [198?] and Parnas, van Schouwen, and Kan [Parnas, et al, 1990], counseled caution in regard to reliance on fully automated systems that would respond to threats of nuclear attack on the basis of satellite and sensor data. They provided ample evidence of the robustness of human judgment in assessing accurately whether a purported hostile attack detected by these means was real or a spurious artifact of failure of system hardware or software. In the face of these and other indications of the fragility of machine judgment, we have the edifying example of an AI researcher, Ronald Arkin [Arkin, 2009] who insists that it is possible to develop lethal autonomous robotic weapons equipped with an infallible ethical governor, implemented in software, that will, in Arkin’s words, “be more humane than a human combatant” in applying lethal force.

Conflation of Human Intelligence with What Can Be Computed: In a brilliant video clip, once easily available on YouTube, Kevin Warwick contrasts the pitiful limitations of human intelligence with the incomparably superior potential of machine intelligence in these terms: “Humans think about things three dimensionally. That’s fine. That’s what we were evolved to do. A superintelligent machine can think in ten or twenty dimensions. How can you reason? How can you bargain? How can you try and understand how that machine is thinking when it is thinking in dimensions you can’t conceive of?” Trying to interpret this baffling statement, absent any explanation of what is meant by thinking in three, ten or twenty dimensions, it does not seem unreasonable to characterize it as an implicit conflation of “superintelligent” thought with the processes involved in solving optimization problems or systems of equations by means, for example, of operations on higher-order matrices. Many authors writing about strong AI are obsessed with computational power, Moore’s Law, and the MIPS/memory tradeoff. The diagrams relating to the “co-evolution” of machine intelligence and that of living organisms in “When Will Computer Hardware Match the Human Brain?” [Moravec, 1998] are instructive. In this article, the inevitable crossover between the limits of human and machine intelligence is adumbrated, for example, by the fact that in 1996, “a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years.” [Moravec, 1998] Without minimizing the significance of this accomplishment, it still seems relevant to ask, “Boolean algebras? How did these come into being?” We also note that the conjecture itself was the product of a human mind.

Undervaluation of Human Creativity: In his article, “Why People Think Computers Can’t,” Marvin Minsky speculates as to whether machines can be creative: “We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies.” [Minsky, 1982] For Herbert Simon, it’s an open-and-shut case: “Harold Cohen, an English painter at the University of California at San Diego, wanted to understand how he painted, so he tried writing a computer program that could paint in an aesthetically acceptable fashion. This program called AARON has gone through many generations now. AARON today makes some really smashing drawings. I've got a number of them around my house. It's now doing landscapes in color with human figures in them.” [Simon, as quoted in Stewart, 1994] A selection of these paintings is available on the web [see, for example, the external links at Wikipedia, 2016] for the reader to judge whether, beyond staking a limited claim to aesthetic acceptability, there is evidence of creativity in AARON’s output. And again, with respect to musical composition: “Hiller and Isaacson at the University of Illinois used a computer to compose the Illiac Suite and the Computer Cantata. Without identifying the music, I played records of these for several professional musicians, and they told me they found it aesthetically interesting — I didn't say it had to be great music — so that passed my test.” [Simon, as quoted in Stewart, 1994] In this case, time has rendered a judgment for us. The Hiller and Isaacson works remain curiosities, rarely if ever programmed in the five decades since their composition. In the intervening period, there have been recurrent reports of significant progress in machine composition of music in many genres.
When one considers only the works of 20th century composers – Béla Bartók, Alban Berg, Dmitri Shostakovich, Kurt Weill, Charles Ives, Benjamin Britten, John Coltrane, Thelonious Monk, Charles Mingus, among many others – it would seem the Scotch verdict is appropriate: “Not proven.” In the passage cited above, Minsky continues in this vein: “Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create since anything machines can do (most) people think can be explained. To see what’s wrong with that, we must avoid one naïve trap. We mustn’t only look at works our culture views as great, until we first get good ideas about how ordinary people do ordinary things.” [Minsky, 1982] Here, Herbert Simon’s protégé, Hans Moravec, comes to our aid. In a 1990 debate with Joseph Weizenbaum at ZZZZZZZZZ University, Moravec asserted that the human brain is simply 10^11 neurons and we’re soon going to be able to build that in silicon. [This is a close paraphrase of Moravec’s words from the debate at which the author was present.] But whose brain are we talking about? Whose brain are we going to re-create in silicon? If one imagines that Minsky’s “ordinary” person is somehow a generic unit, as Moravec seems to imply, there is limited possibility of discovering anything useful about creativity. If one admits that each individual has an intelligence that is conditioned by heredity, a particular upbringing, and an evolving lifetime of experience, the problem of getting good ideas about how ordinary people do things – both ordinary and far from ordinary – is a good deal more complex than Minsky’s proposed first step on the road to understanding human creativity.

Disparagement of the Expressive Richness of Human Language: On language, we have once again the sovereign and sweeping judgment of Kevin Warwick: "...and as a cyborg, if a human came to see me and it starts making silly noises, a bit like a cow does now. If a cow comes to me and says 'Moo, moo, moo', I'm not going to say, 'Yeah, that's a great idea, I'm going to do what you tell me', so it will be with a human. They'll come in and start making these silly noises we call speech and human language and so on. And with these trivial noises, I'm not going to do those silly things. Why should I? This creature's absolutely stupid in comparison to me." [Warwick, 2007]

Harms Entrained by These Examples of Immodesty: Among the harms we will discuss are, first of all, the upending or reversal of the reliance on human judgment in regard to the application of lethal force and, ultimately, the decision to wage war. Secondly, we note the comfort they provide to the view that a narrow and skills-based public education is sufficient for survival in the world of the twenty-first century. The perspective underlying the proposal advanced by one of my students is very telling. “Wouldn’t it be good,” he asked, “if we could get an implant with all the things we need to know?” Surely there are unhealthy consequences for individuals and society itself that flow from the idea that there’s no real need for an education that nourishes the ability to wonder nor any value in cultivating the sense of wonder at things of great beauty or profundity. And finally, we discuss how this technological triumphalism can induce the dismal state in which human education, activity, and expectations slowly converge on mimicry of what can be done by a machine.


Arkin, Ronald (2009), “Ethical Robots in Warfare,” IEEE Technology and Society Magazine, vol. 28, no. 1, Spring 2009, pp. 30-33.

Borning, Alan (1987), “Computer System Reliability and Nuclear War,” Communications of the ACM, vol. 30, no. 2, February 1987, pp. 112-131.

Minsky, Marvin (1982), “Why People Think Computers Can’t,” AI Magazine, vol. 3, no. 4, pp. 3-15

Moravec, Hans (1998), “When Will Computer Hardware Match the Human Brain?”, Journal of Evolution and Technology, vol. 1, 1998, available on the web.

Parnas, David L., A. J. van Schouwen, and S. P. Kwan (1990), “Evaluation of Safety-Critical Software,” Communications of the ACM, vol. 33, no. 6, June 1990, pp. 636-648.

Stewart, Douglas (1994), “Herbert Simon on Thinking Machines,” transcript of an interview from 1994, first published in Omni Magazine, available on line at http://www.omnimagazine.com/archives/interviews/simon/index.html, last accessed 19 August, 2016

Turkle, Sherry (2010), Alone Together, Basic Books

Warwick, Kevin (2007), Unreasonable Man, YouTube videoclip, available at http://www.youtube.com/watch?v=WesVCmadBkQ&feature=related, last accessed 19 August, 2016

Weizenbaum, Joseph (1972), “On the Impact of the Computer on Society: How Does One Insult a Machine?,” Science, vol. 176, pp. 609-614.

Wikipedia (2016), Harold Cohen (artist) available on line at https://en.wikipedia.org/wiki/Harold_Cohen_%28artist%29

A discussion of Bynum's Metaphysical Explanations of the Information Revolution

ABSTRACT. 1. Introduction

Perhaps the most profound metaphysical claim of the 20th Century was made by Norbert Wiener in an address to the New York Academy of Sciences in autumn of 1946 and repeated two years later in his book Cybernetics:

Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.

Wiener was the first to recognize the physical nature of “environmental information”, as it is sometimes called today. This insight occurred to him while he was advising Claude Shannon on how to mathematically describe and measure the amount of information being carried within telephone wires. Wiener suddenly realized that information is physical and is governed by the second law of thermodynamics. Delighted by this discovery, he walked the halls of his building at MIT, in one of his famous "Wiener walks" (Conway and Siegelman, 2005), telling everyone he met that entropy is a measure of lost information (Rheingold 1985). Information, therefore, plays a significant role in every physical entity and process. This paper focuses upon physical information and explores some ideas intended to explain its nature and to provide a theory or model of its role in ultimate reality. Section 2 explores the implications of Wiener's discovery for our understanding of the nature of the universe. At the end of that section, the "Wienerian" conception of the universe is briefly compared to that of today's physics. The remaining sections of this paper briefly discuss examples of metaphysical ideas concerning the role of physical information in ultimate reality.
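The slogan that entropy measures lost information can be made concrete with Shannon's formula H = -Σ p·log₂(p). The sketch below (our illustration, not part of the abstract) computes it for a few simple distributions, assuming the standard bit-based form of the measure:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits.
    Zero-probability outcomes contribute nothing to the sum."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries one full bit of uncertainty per toss...
fair = shannon_entropy([0.5, 0.5])       # 1.0 bit
# ...while a certain outcome carries none.
certain = shannon_entropy([1.0])         # 0.0 bits
# A biased coin falls in between.
biased = shannon_entropy([0.9, 0.1])     # ~0.469 bits

print(fair, certain, round(biased, 3))
```

The monotone link between spread-out probability and higher entropy is the quantitative core of Wiener's remark: as a system's microstates become more equiprobable, entropy rises and recoverable information about its state is lost.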

2. Wiener's Universe and Today's Physics

Metaphysically, Wiener was a materialist, and his realization that entropy measures lost information provided a new way to understand the nature of physical objects and processes. Indeed, it was a new way, as well, to understand the ultimate nature of the universe. To use today’s language, Wiener's insight was that all physical entities in the world are “informational objects” or “informational processes” — an account of the nature of the universe that is worthy of today's "Information Age"! The very nature of the universe, then, explains why today’s "Information Revolution" has enabled humanity to change the world more quickly, and more profoundly, than any previous technological revolution: The Information Revolution provided scientific theories and tools for analyzing, manipulating, creating and altering physical entities at, what could turn out to be, the deepest level of their being.

Thus, all physical objects and processes can be viewed as patterns of information (data structures) encoded/embodied within an ever-changing flux of matter-energy. Given this view, every physical object or process is part of a creative coming-to-be and a destructive fading-away, as current information patterns — data structures — erode and new ones emerge. This "Wienerian" view of the nature of the universe makes every physical entity a combination of matter-energy and physical information. Even living things, according to Wiener, are informational objects. They store and process physical information in their genes and use that information to create the building blocks of life, such as amino acids, proteins and genes. Indeed, they even use stored information to create new living things; namely, their own offspring. Animals' nervous systems store and process physical information, thereby making their activities, perceptions, and emotions possible. And, like every other physical entity in Wiener’s universe, even human beings can be viewed as informational entities.
Thus, humans are essentially patterns of information that persist through an ongoing exchange of matter-energy. So according to Wiener,

We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves. (Wiener 1954, p. 96) . . .

The individuality of the body is that of a flame […] of a form rather than of a bit of substance. (Wiener 1954, p. 102)

Significant developments in physics since Wiener's time have deepened and extended this "Wienerian" account of reality. Some important physicists have argued that even matter-energy owes its very existence to information. For example, in 1990, an influential paper by Princeton physicist John Wheeler introduced his famous phrase “it from bit” (Wheeler 1990), and thereby gave significant impetus to the development of informational physics. In that paper, Wheeler declared that “all things physical are information theoretic in origin” — that “every physical entity, every it, derives from bits” — that “every particle, every field of force, even the spacetime continuum itself . . . derives its function, its meaning, its very existence” from bits. He predicted that “Tomorrow we will have learned to understand and express all of physics in the language of information.” (Emphasis in the original.) In addition, since 1990 a number of physicists — some of them inspired by Wheeler — have made significant strides toward fulfilling his “it-from-bit” prediction.

3. Everything is a data structure!

Physical information consists of physical data, and it is syntactic, not semantic. But what is a datum? A datum is essentially a difference, so a physical datum is a difference "embodied in", "carried by" — some would say "encoded in" or "registered by" — the matter-energy of a physical being. All the differences embodied within a physical entity, right down to the subatomic quantum differences, constitute the physical structure of that entity; so physical beings are data structures, some of them enormously complex. Some differences are perceivable at the "macro-level", while others are non-perceivable ones at various "micro-levels". Removing differences — "erasing data" — embodied within a physical being erodes the physical structure of that being; and if there is enough erosion, or the right sort of erosion, the being may be significantly altered, damaged, or even destroyed. Physical beings are data structures, but the data they encode are not made of matter-energy — the data are relations, not material objects.

Consider the relations that make a house a house: A house builder may begin with a pile of lumber and a pile of bricks and a pile of metal pipes, and so on. However, piles of building supplies do not constitute a house, because they do not have the form of a house — that is, they do not embody/carry/encode the appropriate relations. If a house builder removes lumber, bricks, pipes, and so on, from the piles, and uses those supplies to build a house (thereby changing the space-time relationships among the various building supplies), that same matter, which initially comprised piles of lumber, bricks, and so on, would then comprise a house. So, it is the form of the house, the pattern of relations, the data structure, not the matter in the building supplies, that makes the house a house. The form of the house is physical, in the sense that it exists in space-time, and it can be observed, measured and scientifically studied. The informational pattern encoded in the house is a physical phenomenon, but it is not matter or energy. So Wiener's statement that "information is information, not matter or energy" is fully consistent with his materialistic view of the ultimate nature of the universe. The remaining sections of this paper briefly discuss ideas of several philosophers — Plato, Randall Dipert, Eric Steinhart, Luciano Floridi, and Terrell Ward Bynum — for providing a metaphysical foundation for a Wienerian theory of the ultimate nature of the universe.
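The house example can be sketched computationally. In the toy model below (our illustration, with invented part and relation names, not drawn from the paper), an object's identity is its parts plus the relations imposed on them, so the same "matter" arranged differently yields a different data structure:

```python
# Toy model: an object is its parts *plus* the relations among them.
# Part names and relation labels here are purely illustrative.

parts = ["brick", "brick", "lumber", "lumber", "pipe"]

# A "pile": the parts are present, but no structural relations hold.
pile_relations = set()

# A "house": the very same parts, now bound by spatial relations.
house_relations = {
    ("lumber", "rests_on", "brick"),
    ("pipe", "runs_through", "lumber"),
}

def structure(parts, relations):
    """Identity, on this view, = matter (parts) + form (relations)."""
    return (tuple(sorted(parts)), frozenset(relations))

pile = structure(parts, pile_relations)
house = structure(parts, house_relations)

assert pile[0] == house[0]   # identical matter
assert pile != house         # distinct data structures
```

The two assertions express the argument's point: the pile and the house agree in every material constituent yet differ as structures, so what distinguishes them is the pattern of relations alone.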


Ball, P. (2002). "The Universe is a Computer". Nature News, doi: 10.1038/ news020527-16.

Bynum, T. W. (2013). "On the Possibility of Quantum Informational Structural Realism", Minds and Machines, 23, DOI 10.1007/s11023-013-9323-5.

Chiribella, G., G. D’Ariano, and P. Perinotti (2011). "Informational Derivation of Quantum Theory". Physical Review A, 84, 012311.

Close, F. (2009). Nothing: A Very Short Introduction. Oxford: Oxford University Press.

Conway, F. and J. Siegelman (2005). Dark Hero of the Information Age: In Search of Norbert Wiener the Father of Cybernetics. New York: Basic Books.

Dipert, R. R. (1997). "The Mathematical Structure of the World: The World as Graph". Journal of Philosophy, 94, 328-358.

Dipert, R. R. (2002). "The Subjective Impact of Computers on Philosophy: Prolegomena to a Computational and Information-Theoretic Metaphysics" in J. H. Moor and T. W. Bynum, Eds., Cyberphilosophy: The Intersection of Philosophy and Computing. A Metaphilosophy Anthology. Oxford, UK: Blackwell, pp. 139-150.

Floridi, L. (2011). The Philosophy of Information. Oxford, UK: Oxford University Press.

Fredkin, E. (2003). "An Introduction to Digital Philosophy". International Journal of Theoretical Physics, 42, 189–247.

Lloyd, S. (2002). Computational Capacity of the Universe. Physical Review Letters, 88, 237901–237904.

Lloyd, S. (2006). Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. New York: Alfred A. Knopf.

Putnam, H. (1975). "What Is Mathematical Truth?" in H. Putnam (Ed.), Mathematics, Matter and Method: Philosophical Papers. Cambridge: Cambridge University Press, pp. 60-78.

Rheingold, H. (2000). Tools for Thought: The History and Future of Mind-Expanding Technology. Revised Edition 2000, Cambridge, MA: MIT Press. Originally published in 1985 by Simon and Schuster, New York.

Seife, Charles (2006). Decoding the Universe: How the New Science of Information is Explaining Everything in the Cosmos, from Our Brains to Black Holes. New York: Viking Penguin.

Shannon, C. E. (1948a) "A Mathematical Theory of Communication", Parts I and II. The Bell System Technical Journal XXVII, 379-423.

Shannon, C. E. (1948b) "A Mathematical Theory of Communication", Parts III, IV and V. The Bell System Technical Journal XXVII, 623-656.

Steinhart, E. (1998). "Digital Metaphysics" in T. W. Bynum and J. H. Moor, Eds. The Digital Phoenix: How Computers are Changing Philosophy. A Metaphilosophy Anthology. Oxford, UK: Blackwell, 1998 pp. 117-134. Revised edition 2000.

Vedral, V. (2010). Decoding Reality: The Universe as Quantum Information. Oxford: Oxford University Press.

Vedral, V. (2011). "Living in a Quantum World" (cover story). Scientific American, 305, 38–43.

Vlastos, G. (2005). Plato's Universe. Parmenides Publishing. Originally published in 1975 by University of Washington Press, Seattle, WA, USA.

Wheeler, John A. (1990). “Information, Physics, Quantum: The Search for Links” in W. H. Zurek, editor, Complexity, Entropy, and the Physics of Information, Redwood City, CA: Addison-Wesley.

Wiener, N. (1948). Cybernetics: or Control and Communication in the Animal and the Machine. Boston, MA: Technology Press.

Wiener, N. (1950/1954). The Human Use of Human Beings: Cybernetics and Society. Houghton Mifflin, 1950. (Second Edition Revised, Doubleday Anchor, 1954.)

Wiener, N. (1966). I Am a Mathematician: The Later Life of a Prodigy. Cambridge, MA: MIT Press.

Zuse, K. (1967). "Rechnender Raum". Elektronische Datenverarbeitung, 8, 336–344.

Zuse, K. (1969). Rechnender Raum, Wiesbaden: Vieweg. English translation: Calculating Space, MIT Technical Translation AZT-70-164-GEMIT, Project MAC, 1970. Cambridge, MA: MIT.

A conversation regarding lethal autonomous weapons

ABSTRACT. A conversation regarding lethal autonomous weapons.

13:45-14:00 Session 4: Plenary: Responsible Industry and ORBIT
Location: Aura Magna: building D2, base floor
14:00-15:00 Session 5: Plenary Host Keynote: Ciro Cattuto
Location: Aura Magna: building D2, base floor
15:30-17:00 Session 6A: Fiction
Location: Room H3: building D4, 2nd floor
Ethical Design Fiction Between Storytelling and World Building
SPEAKER: unknown

ABSTRACT. Up until the point of actual implementation or launch, any design can essentially be seen as fiction — a concept which abductively speculates about a possible future state of the world (Martin 2009): a prototype which might be able to solve a given problem, without the designer being able to conduct a thorough proof of concept. In this setting, design fiction has in recent years become a widely recognised conceptual tool to examine the usability, utility and desirability of a design concept — especially in regard to assessing the possible consequences of advances in information technologies. At the same time, design fiction enables the designer and the industry to create a discourse in which the upcoming design can gain meaning and context and explain the yet unknown.

In this paper we examine how ethical challenges can be approached in and through design fiction. To do so, we analyse several examples of what we see as corporate design fictions. The first is a video by Google, announcing and trying to simulate the possibility of their then work-in-progress ‘Google Project Glass’, which in Lindley’s (2015) words is a ‘vapour fiction’. The second is the decades-old video ‘Apple Knowledge Navigator’, made by Apple to highlight their vision for the next decade of computing in the early 1990s. We use these examples to show two very different approaches to corporate design fiction. Furthermore, we trace how design fiction in corporations can be found even further back, in General Motors’ 1957 commercial ‘The future of the motorized city’, to further support our findings. Our main focus will be on design fiction within a commercial setting, connecting the notion of design fiction to the design process within large corporations. Our three cases are supported by findings from several workshops and student projects which used animation and sketching as the groundwork for creating design fictions as a means to explore the ethical impact of emerging information technology in industry cases, building upon the results from Vistisen, Jensen & Poulsen (2015).

The three main cases differ greatly in the way they present their fictional universe and tell their stories. We use these differences to show how Lindley & Coulton's (2015) definition as well as Lindley’s (2015) design fiction framework can be explored and further developed by incorporating storytelling and ethical challenges and possibilities into the framework.

Lindley & Coulton (2015:210) define design fiction as

“(1) something that creates a story world, (2) (…) something being prototyped within that story world, (3) [doing] so in order to create a discursive space.”

This definition focusses on the world building, the diegesis, of the storyworld. This part of the fiction is useful for explaining the challenges of usability and utility a new design faces. Tolkien (1975:36) explains the affordances of the secondary world as a place “(…) which your mind can enter. Inside it, what he [the producer of the secondary world] relates is ‘true’: it accords with the ontological laws of that world. You therefore believe it, while you are, as it were, inside. The moment disbelief arises, the spell is broken (…).” It is important to notice that disbelief arises from the world’s inconsistency, not from the apparent existence of magical dragons or a futuristic design. The designer who produces a design fiction to help the audience or other stakeholders understand the nature of the designed prototype needs to create a consistent, believable world. This world may be futuristic or resemble the primary world as we know it; still, the prototype has to have an impact in that world to show how the design is meant to be used and how people might experience its use.

This leaves some of the points of utility and desirability unexplored. This can be remedied by using the element of storytelling, with its several important components and structures. Storytelling, as it can be found in the actantial model of Greimas (1996), the hero’s journey as described by Vogler (1998), or the basic narrative curve model, can create a rather different and immersive space of emotions, drama and conflict.

“(…) it is usually story that draws us into a world and holds us there; lack of a compelling story may make it difficult for someone to remain vicariously in a secondary world.” (Wolf, 2012:29)

Design fiction naturally focusses on the bits and parts that make up the story world, yet if the audience is to gain a deeper understanding of and engagement with the prototype, a storyline — and with it plot points, character development and emotions — has to be found in the design fiction as well. As early as 1993, Laurel investigated the use of storytelling as a means to orchestrate response and understanding in her book ‘Computers as Theatre’. One of her main points was the insistence on existential choices. While Laurel was comparing storytelling with computer programs, the idea of existential choices which have to be made by the protagonists within the design fiction still stands. If a prototype has to prove its worth, its utility and desirability, the protagonist has to be faced with ‘real problems’, or what counts as ‘real’ within the secondary world.

This, in turn, is of great importance for our second addition to Lindley & Coulton’s definition, namely ethical considerations based on ontological and discursive ethics as well as the rhetorical pathos, ethos and logos.

The secondary world is part of the rhetorical ethos and logos, creating the suspension of disbelief necessary for the immersion of the audience. To keep the audience immersed and involved, pathos is the next step: emotions are needed to move the audience and to let it investigate the prototype within the world. Depending on what the designer wants the audience to experience, the storytelling and world building need to adapt.

If we want the audience to understand a new design which will change their basic perception of a given topic, we will need a kind of storytelling that shows different perspectives in which the new design will prove itself, or the challenges still evident in the prototype. The main concern for the design fiction should be the ethos aspect: building the audience’s trust in the design, in the secondary world and, through this, in the company developing the design. Furthermore, the corporation needs to respect the audience, preferably by employing user-generated design processes and supporting user-generated content. If the design fiction features existential conflicts and challenges, the designer as well as the audience will be forced to reflect on ethical issues concerning the prototype.

Because of the tentative character of a prototype as well as the hypothetical disposition of the design fiction, the ethical issues have to be addressed through a dialogue between the audience and the designer. Løgstrup’s ontological ethics (1997) as well as Habermas’ discourse ethics (1994) should be the basic ways to attempt a construction of this meeting of possible worlds with the primary world and its inhabitants. While the ethical issues within the secondary world might be solved or framed through Kantian or utilitarian ethics, these approaches are less useful when the designer needs to determine how a given prototype is received and perceived by its potential users.

The contribution of the paper is a new, expanded design fiction framework, which we derive from the analyses of the three main cases and the current discourse of design fiction, and which focuses on the storytelling and ethical issues in proposed design concepts. The following elements are present in the different kinds of ethical stances within the design fiction:

- storyworld (diegesis)
- prototype
- discourse
- ethical issues
- character development
- pathos
- ethos
- logos

*The figure will be shown in the pdf*

Figure 1: Preliminary design fiction framework.

In the final paper we will substantiate the categorizations in the framework by illustrating the quadrants through our case studies of the selected existing design fictions.


Greimas, A. J. (1996). Reflections on actantial models (pp. 76-89). New York: Longman Publishing.

Habermas, J. (1994). Justification and application: Remarks on discourse ethics.

Lindley, J. (2015). A pragmatics framework for design fiction. in The Value of Design Research, 11th European Academy of Design Conference, April 22-24 2015.

Lindley, J., & Coulton, P. (2015, July). Back to the future: 10 years of design fiction. In Proceedings of the 2015 British HCI Conference (pp. 210-211). ACM.

Løgstrup, K. E. (1997). The ethical demand. University of Notre Dame Press.

Martin, R. (2009). The design of business. Harvard Business School Publishing, Massachusetts.

Tolkien, J. R. R. (1975). Tree and leaf; Smith of Wootton Major; The homecoming of Beorhtnoth, Beorhthelm's son. Unwin books.

Vistisen, P., Jensen, T., & Poulsen, S. B. (2015). Animating the ethical demand: exploring user dispositions in industry innovation cases through animation-based sketching. SIGCAS Computers and Society, 45(3), 318-325.

Vogler, C. (1998). The Writer's Journey. Studio City. CA: Michael Wise Productions.

Wolf, M. J. (2012). Building imaginary worlds: The theory and history of subcreation. Routledge.

When AI goes to War: Youth Opinion, Fictional Reality and Autonomous Weapons
SPEAKER: unknown

ABSTRACT. Weaponisation of artificial intelligence (AI) presents one of the greatest ethical and technological challenges of the 21st century and has been described as the third revolution in warfare, after the invention of gunpowder and nuclear weapons. Despite the vital importance of this development for modern society, for legal and ethical practice, and as a technological turning point, there has been little systematic study of public opinion on this critical issue. This interdisciplinary project addresses this gap. Our objective is to analyse what factors determine public attitudes towards the use of fully autonomous weapons. To do this, we put the public at the centre of the policy debate, starting with youth engagement in political and decision-making processes. On the one hand, the international community is concerned that instead of limiting conflict, using autonomous weapons in war will proliferate it. On the other hand, defence departments and the technology sector point to many benefits of using autonomous weapons, which range from limiting military conflict to saving human lives. Instead of taking sides in the debate, our research will contextualize it by inviting young people (18-25 years old) to become part of a youth jury. The aim of the youth juries is not simply to find out what young people think and feel about fully autonomous weapons, but to discover what shapes their thinking; how they come to define certain scenarios as problematic; how they attempt to work together to think through solutions to these problems; the extent to which they are prepared to change their minds in response to discussion with peers or exposure to new information; and how they translate their ideas into practical policy recommendations. This approach has been inspired by the wave of deliberative experiments and initiatives that have been conducted in recent years on topics ranging from healthcare reform and nuclear power to local town plans and community policing.
Deliberative theorists argue that there should be more to public discourse and decision-making than partisan position-taking and the employment of aggressive mechanisms to determine who ‘won’ the argument. They argue that collective judgement benefits from open discussion in which citizens are encouraged to share and contrast their preferences and values with a view to, at least, understanding why they disagree and, at best, working through their differences and seeking common ground. The theoretical assumption behind deliberation is that people are capable of changing their moral, political or behavioural preferences when they encounter compelling reasons and evidence to do so. When it works well, deliberation gives fluidity to democracy and reduces the narrow meanness that is so often associated with the sordid politics of ‘winners’ and ‘losers’. It opens up a space for people to think about the future they want and how they might act collectively in ways that take all actors into account. While there is now a considerable research literature on the normative, epistemic and pragmatic value of public deliberation (Bohman, 1998; Elstub, 2010; Parkinson and Mansbridge, 2012; Steiner, 2012; Coleman, Przybylska and Sintomer, 2015), hardly any systematic research has been conducted on the ways in which young people deliberate. Valuable observational studies have explored how young people talk about political issues (Henn, Weinstein and Forrest, 2005; Blackman, 2007; Ekstrom and Ostman, 2013; Thorson, 2014), but they have not addressed the deliberative questions outlined below (i.e., when AI goes to war). This is not only a gap in the literature but a missed opportunity to learn about the ways in which practical reasoning occurs within a generational group that is often dismissed as lacking sufficient maturity to contribute to public policy. The aim of this research is to observe the deliberative process.
Opinion formation is messy, often framed by competing and even inconsistent values. Supporting young people to think through this messiness is a major aim of the youth jury process. The youth juries are structured with a view to encouraging an atmosphere in which unconstrained deliberation can flourish. It is important for the juries to be noisy and discursive and for jurors to become aware that they are engaged in a process of collective judgement, one that calls for both candour and compromise. From the outset, the idea of being a member of a jury is emphasized, and participants know that they are expected not only to offer ideas about the ethical dilemmas intrinsic to fully autonomous weapons, but to work as a group to think through a set of recommendations that adults in general, and policy-makers and the robotics/AI industry in particular, would feel compelled to take seriously. The evidence presented to the jury will be a combination of multimedia news items presenting plausible – but fictitious – scenarios that will trigger discussions and elicit reflective responses. The jury will be asked to suspend their disbelief and immerse themselves in a series of sketches of fictional scenarios that will initiate the process of deliberation. The jury will consider both problems and future recommendations about the role of AI in military conflict. The scenarios will feature, among other examples, the Harpy fully autonomous drone, which can choose its own target (e.g., a mobile anti-aircraft battery) within a predefined area. There is no ability to abort the mission. If civilians are used as a human shield, the weapon will ignore them and strike the target anyway. If the drone cannot find the target before its fuel is gone, it dives to a predefined location. A different scenario will feature Rescue Action, a disaster response charity from London that uses prediction algorithms for real-time tracking of people and vehicles on the ground.
The algorithm's calculations are transmitted to fully autonomous drones that operate without human guidance but act in accordance with international humanitarian law, significantly increasing rescue success rates and relieving suffering and distress among people affected by armed conflict as well as by hazards, risks and major natural disasters across the globe. These scenarios will trigger discussion and debate among jury members, who will be confronted with dilemmas and plausible risks, including drones being uncontrollable in real-world environments and subject to design failure as well as hacking, spoofing and manipulation by adversaries. Jury members will be confronted with dilemmas such as: who is accountable if one of these drones does not function as planned? The developers of the guidance systems? The programmers? The person or entity that launches it? The manufacturer? After working with young adults, we will expand our project to include a wider demographic sample and will begin to examine how the use of robotics in war is changing public perceptions of military conflict. This study is unique because young people are often marginalized and excluded from public debate. The value of this research lies simultaneously in its contribution to the emerging field of fully autonomous weapons and in generating recommendations that can influence government policy-makers, industry chiefs and public discourse. This study is vital for a critical understanding of youth perceptions of AI in armed conflicts and their implications for future policy and industry decisions. This project will provide industry stakeholders with a roadmap of the factors that determine public opinion about autonomous weapons and help them frame their research and position their products. Our findings will also contribute to select committee inquiries, such as the recently launched investigation into robotics and artificial intelligence by the Science and Technology Committee in Parliament. 
Finally, our research will inform the general public as well as bringing youth opinion into the debate about AI and military conflict. The study is funded with £5K by The Digital Economy Crucible 2016, an EPSRC-funded leadership programme organised by Cherish-DE at Swansea University.

References Blackman, Shane J. "'Hidden ethnography': Crossing emotional borders in qualitative accounts of young people's lives." Sociology 41.4 (2007): 699-716. Bohman, James. "Survey article: The coming of age of deliberative democracy." Journal of Political Philosophy 6.4 (1998): 400-425. Coleman, Stephen, Przybylska, Anna and Sintomer, Yves (eds.) Deliberation and Democracy: Innovative Processes and Institutions. Warsaw Studies in Politics and Society, Volume 3 (2015). Ekström, Mats, and Johan Östman. "Family talk, peer talk and young people’s civic orientation." European Journal of Communication 28.3 (2013): 294-308. Elstub, Stephen. "The third generation of deliberative democracy." Political Studies Review 8.3 (2010): 291-307. Henn, Matt, Mark Weinstein, and Sarah Forrest. "Uninterested youth? Young people's attitudes towards party politics in Britain." Political Studies 53.3 (2005): 556-578. Parkinson, John, and Jane Mansbridge. Deliberative Systems: Deliberative Democracy at the Large Scale. Cambridge University Press, 2012. Steiner, Jürg. The Foundations of Deliberative Democracy: Empirical Research and Normative Implications. Cambridge University Press, 2012. Thorson, Kjerstin. "Facing an uncertain reception: young citizens and political interaction on Facebook." Information, Communication & Society 17.2 (2014): 203-216.

Superheroes on Screen: Real Life Lessons for Security Policy Debates
SPEAKER: unknown

ABSTRACT. Introduction Security policy, whether computer security or societal security, creates the sharp end of ethical decisions in their application to the real world. Laws and procedures set much of the scene for the consequences of these policies, but the people empowered to implement the policies also play a part in their outcomes. These policies are often justified, and sometimes even presented, in narrative form. Arendt (see Disch (1993)) and Žižek both stressed the importance of stories in how we understand the world: “The experience that we have of our lives from within, the story we tell ourselves about ourselves, in order to account for what we are doing, is fundamentally a lie – the truth lies outside, in what we do.” (Žižek, 2008). For decades superheroes have formed a significant part of the modern cultural referents of many societies. In the past decade, the superheroes of the Marvel and DC comics have become some of the most broadly watched stories not just in the US and UK (where most of their writers originate) but around the world. The power of superheroes and of the villains they battle can be (and, this paper argues, is) used to represent (via analogies or metaphors) the choices presented to politicians and voters about how powerful actors in the real world can be understood: their motivations, their goals, their methods, their limits and the intended and unintended consequences of their actions. The power of the superheroes often represents (directly or indirectly) the power conferred by technology: Superman's X-ray vision versus the full-body scanners now in use in many airports. One of the most powerful individuals in the world, President Obama, used Batman metaphors in discussions with senior security advisers (Goldberg, 2016). 
In this paper the lens of the humanities scholar, and in particular the method of close reading of texts, is applied to three recent televisual and cinematic renditions of superheroes: Daredevil (Season 2) from Netflix and Marvel Studios; Captain America – Civil War from Marvel Studios; and Batman v Superman from DC and Warner Brothers. A key element of each of these is the pitting of hero against hero and the relationship of the superhero to the law. Key scenes from each piece are used as data to develop key questions about the security policies of states, and in particular the question of how voters and democratic institutions/politicians can deal with powerful institutional or individual actors: those clearly within the law, those skirting the edges of the law, those breaking the law with “good” intent and those breaking the law for their own ends. Daredevil: Vigilantes, Supervillains and the Law

The series Daredevil fully articulates the triangulated and vexed relationship that exists between the law, the vigilante superhero and the supervillain. Daredevil focuses on the blind superhero Matt Murdock. He is a seemingly contradictory hero — at once a vigilante by night and a lawyer by day; the series therefore hinges upon the vigilante’s conflicted relationship with the law. It asks: how does the vigilante come to resemble the criminals that both he and the legal apparatus fight against? The series also contains figures that resonate with Daredevil, providing further insight into the above triangulation. These include the vigilante Punisher (who, in a quest for vengeance, is willing to kill criminals he deems irredeemable) and the mob boss Wilson Fisk, otherwise known as the Kingpin. The latter supervillain uses the law to his advantage so that he might exonerate himself and consolidate power. Our analysis focuses upon scenes of interrogation. Within the realm of physical and psychological torture, Daredevil brings the viewer to inhabit a space outside the law’s borders. The series shifts the presentation of such scenes, accentuating increasingly grotesque detail to highlight the potential monstrosity of unbounded extra-legal action. However, within presentations of bloodless interrogation, the series also suggests the monstrosity that the law hides. A dialogue between lawyer Murdock and the supervillain Kingpin in prison shows how the law might be exploited as a legitimizing tool by criminal actors. The section also employs Bruce Schneier’s writings on outliers and defectors from established social norms to articulate the amorphous distinctions between which individuals and institutions hold legitimacy. 
It also cites Walter Benjamin’s “Critique of Violence”, wherein he describes “the great criminal” who challenges the law by proposing an alternative legal structure, one that undermines the state’s monopoly over violence and the validity of its command of such violence. Captain America – Civil War: Power's Subordination to Process This second section covers the question of how extralegal (superheroic) power could be controlled and brought under the confines of the law. Civil War wrestles with the processes behind this subordination and shows how superheroic acts can be discursively made licit or illicit. The Marvel Cinematic Universe, of which the film is a part, has emphasized the intertwining of these vigilantes’ interests with those of the state. The super team, the Avengers, stands as an alliance of disparate vigilantes which ostensibly has the approval of the US state. Dissension within the Avengers' ranks is sparked by the collateral damage of these heroes’ past actions, which forces the protagonists to wrestle with the need to control their power through mechanisms of civilian oversight. The heroes on opposite sides of the debate include Captain America (a “super soldier” produced by the military in World War II, only to grow ever more disillusioned with the state and its post-9/11 realist policy) and Iron Man (an industrialist-turned-vigilante-turned willing arm of the executive branch). To understand the tensions between these two sides, this section analyses scenes presenting the heroes’ sometimes callous response to collateral damage as well as moments of debate between the various protagonists. In order to provoke a dialogue about the need for oversight, the US Secretary of State forces the Avengers to watch a montage of the devastation that they have left in their wake during their previous fights with supervillains. 
Some of the superheroes watch impassively, some turn away and some are visibly affected, disquietingly raising questions about the lack of empathy of such empowered actors, which the film itself compares to weapons of mass destruction. This presentation of suffering inspires Tony Stark to express one of the core concerns of Civil War: “If we can't accept limitations, we're boundary-less, we're no better than the bad guys.” We thus frame this debate as central to understanding how these spectacular productions frame at once the allure of unbounded extralegal power and the danger it poses. Batman v Superman: Excessional Power Levels, Fear, Trust and Security

Batman v Superman pits the two titular heroes against each other. It goes beyond the previous texts in how it presents the effect of an agent outside the context defined by the triangulation of the law, the vigilante superhero and the supervillain. In short, Batman v Superman stands as a superheroic study of power and impotence. In the conflict between the human Batman and the god-like alien Superman, the film explores the feeling of powerlessness that a super-powered force might foster, not only within a mortal hero like Batman but within the state as well. By looking at such scenes and their resonances with the discourse of the Bush administration following the 9/11 attacks and in the lead-up to the Iraq War, the section asks how such an entity exposes the frailty of existing structures of power, forcing them to respond pre-emptively. How do members of the triangle contextualize the out-of-context problem for their own ends? How might the out-of-context problem thus be abused? In examining these questions, the section employs Schneier’s writing on the world-sized web and the problem of the unknowable threat and the responses such a menace might provoke. It also considers theorists such as Naomi Klein on the Shock Doctrine (exploiting catastrophe), Benjamin (1986) on divine violence that comes from outside the law, and authors on the state of exception such as Carl Schmitt. Conclusion: Changing Our View on Real-World Vigilantism Our paper ultimately demonstrates how the superhero (particularly the vigilante version) helps to elucidate difficult questions of power and control within the sociology of security. Our conclusion relates the fictional superhero to real-world figures enacting civil disobedience (Snowden, a vigilante exposing the machinations of the state; the vigilantes emerging within Mexico’s Drug War) and gestures to questions that merit further exploration, including mercy vs. justice as well as the possibilities and limits of empathy. It ends with a reading from Batman v Superman, wherein the latter stands before Congress. A bomb explodes. Superman does not sense the bomb. As the setting burns around him, the untouched hero reflects upon and regrets the limits of superheroic power. Such is the vitality of these stories, which our paper underlines. These superhero tales offer the possibility of seeing expansive power critically—of luxuriating within omnipotence while also gleaning the possibility, as well as the need, for limits, be they ethical or legal.

References Benjamin, W. (1986). Critique of Violence. In: Reflections: Essays, aphorisms, autobiographical writings. Disch, L. J. (1993) More Truth Than Fact: Storytelling as Critical Understanding in the Writings of Hannah Arendt. Political Theory 21(4), pp. 665-694. Goldberg, J. (2016). The Obama Doctrine. The Atlantic April 2016. (http://www.theatlantic.com/magazine/archive/2016/04/the-obama-doctrine/471525/, accessed 12th August 2016) Žižek, S. (2008). Violence: Six Sideways Reflections. Picador.

15:30-17:00 Session 6B: Robots
Location: Room H4: building D4, 2nd floor
Robots in Society: Evaluating Implications for Well-Being
SPEAKER: Philip Brey

ABSTRACT. The field of roboethics has been very active in recent years, yielding many new studies of ethical issues in robotics (Lin et al., 2011; Tzafestas, 2015). Very little of the existing literature, however, has focused on the topic of well-being and the corresponding question of how robots in society may positively or negatively affect the quality of life. In this essay, this question takes centre stage. My focus will be on developments in the next twenty to thirty years, as envisioned in various studies and reports that project a “robot revolution” occurring over the next few decades. I will assume that the mainstream predictions will come true, and that there will be a vast increase in the quantity and quality of robots used in society, in different sectors, including healthcare, the service industry, entertainment, the home, and others. Moreover, I will assume, in this context, a proliferation of social robots with advanced perceptual, motor and speech abilities, although with artificial intelligence that usually does not match human intelligence. A large part of the contribution that robots are believed to be able to make to the quality of life lies in improvements to the lives of the elderly and people with disabilities and medical needs. Care robots, medical robots, assistive robots and other robots used in care contexts are believed to be able to make major contributions to the quality of life of these categories of people (Dahl & Kamel Boulos, 2014; Broekens et al., 2009). These beliefs are no doubt valid. However, in this paper I want to move away from care and medical contexts and consider other effects on well-being, those that apply to all people. This is not to deny that a complete, overall assessment of the future impact of robots on well-being would also require consideration of these more specific applications. 
My approach is to consider three dimensions of the quality of life on which robots can be expected to have the most impact, positive or negative. They are (1) the quality of everyday life; (2) the quality and availability of work; (3) the quality of social relationships. As I will argue in the paper, these partially overlapping dimensions are vital components of overall quality of life. I will approach each dimension using a multi-faceted conception of well-being that I have developed in earlier work [reference to own work not given due to double-blind peer review]. This approach considers components of well-being, such as friendship, pleasure, achievement, autonomy and self-respect, that are typically found, in both empirical and philosophical approaches, to be important to well-being, and considers them in relation to the three dimensions of well-being that I take to be most important in analyzing the impacts of robots on well-being. My approach does not entail an embrace of objective list theories of well-being, since these components are presented only as empirical generalizations that usually hold, not as normatively valid, universal components of well-being. I will also draw from empirical research into the psychological and social impacts of robots on humans, including research that concerns quality-of-life aspects. First, regarding the quality of everyday life, I will investigate how robots, particularly service and social robots, may positively and negatively impact home life, leisure, and life in the public sphere. This is a complex undertaking, because there will be many such impacts (Sabur, 2015; Breazeal, 2015). I will impose some restrictions on my analysis, however. First, I will consider in general terms how robots taking over domestic work may positively or negatively influence the quality of life: what are the possible advantages and disadvantages? 
Will the gains in time and reduced effort outweigh the disruptive effects of robots in the home? Second, I will consider how robots may become new playmates and service providers in leisure time, and will consider general ways in which leisure activity may be positively and negatively affected. Third, I will consider how robots may take up roles in the public sphere, from law enforcement agent to clerk, and will consider some advantages and disadvantages of such roles. Moving on to the second dimension, regarding the quality and availability of work, I will investigate, using empirical studies and projections (Frey & Osborne, 2013; Ford, 2015), to what extent robots are likely to replace workers in different sectors, and how they may change the nature of work by enabling situations in which workers use robots as tools or relate to them as co-workers. I will consider prevailing theories of the importance of work for well-being, and of the quality of work in relation to well-being (Blustein, 2008). Then I will assess how the introduction of robots in the workplace may both positively and negatively affect well-being. I will consider impacts both from a macro-perspective (a rearrangement of work tasks between humans and machines, and the resulting economic effects on humans) and from a micro-perspective (how the tasks and interactions of the remaining workers may change when they work with robots). Third, regarding the quality of social relationships, I will investigate, again using empirical research as a starting point (Turkle, 2011; Markowitz, 2015; Sauppé & Mutlu, 2015), the effect of robotics on the quality of human relationships, and the ensuing consequences for well-being. Theories of well-being often agree that good human relationships are of vital importance to well-being. Robots, however, appear to substitute human-robot interactions, and possible ensuing relationships, for human interactions and relationships. 
This raises a number of questions. First, what types of human relationships and interactions are threatened with being partially replaced by interactions with robots? Second, to what extent do these substitutions undermine human relationships? Third, can robots also promote new and better interactions and relationships between humans, for example by saving precious time for more social interaction, by teaching humans to become better at social interaction, or by mediating relationships between humans? Fourth and finally, to what extent can a relationship with a robot be a genuinely social relationship that embodies some of the qualities of relationships between humans and that can have a positive effect on the quality of life? Based on the analysis, I will arrive at a preliminary assessment of the overall potential effects of the introduction of robots in society on human well-being. Obviously, this assessment can only be tentative and partial, because it does not consider all dimensions of and impacts on well-being in a comprehensive way, and it has to rest on a number of assumptions about how robots will be developed and used in the future. In the concluding section, I will assess the limitations of the present study and look forward to how more extensive studies can be performed that take into account a broader set of criteria for well-being, and that distinguish between a more diverse set of scenarios for the future development and use of robots in society. I will also put forward some proposals for how well-being can be taken into account in a systematic way in the design of robots [reference to own work not given due to double-blind peer review] and in the development of policies for robots and their deployment in society.

References   Lin, P., Abney, K., & Bekey, G.A. (2011). Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: The MIT Press. Tzafestas, S.G. (2015). Roboethics: A Navigating Overview (Intelligent Systems, Control and Automation: Science and Engineering). New York, NY: Springer. Dahl, T.S., & Kamel Boulos, M.N. (2014). Robots in Health and Social Care: A Complementary Technology to Home Care and Telehealthcare? Robotics, 3(1), 1-21. Broekens, J., Heerink, M., Rosendal, H. (2009). Assistive social robots in elderly care: a review. Gerontechnology, 8(2), 94-103. Sabur, R. (2015). Can robots make your life easier? We look at 14 of the best. The Telegraph. http://www.telegraph.co.uk/technology/advice/11863483/Can-robots-make-your-life-easier-We-look-at-14-of-the-best.html Breazeal, C. (2015). 5 Ways Robots Will Change Your Life In The Future. Computemidwest.com http://computemidwest.com/news/5-five-ways-robots-will-become-personal-in-the-future/ Frey, C.B. & Osborne, M.A. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation? http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future. New York, NY: Basic Books. Blustein, D.L. (2008). The Role of Work in Psychological Health and Well-Being: A Conceptual, Historical, and Public Policy Perspective. American Psychologist, 63(4), 228–240. Turkle, S. (2011). Alone Together. New York, NY: Basic Books. Markowitz, J. (Ed.) (2015). Robots that Talk and Listen: Technology and Social Impact. Berlin: Walter de Gruyter GmbH & Co KG. Sauppé, A., & Mutlu, B. (2015). The Social Impact of a Robot Co-Worker in Industrial Settings. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), 3613-3622, New York, NY: ACM.

Ethics of information education for living with robots
SPEAKER: unknown

ABSTRACT. The development of AI/robots and communication tools is drastically changing our lifestyle. A smooth interface between human and machine must be developed not only on the side of the machine but also on the side of human beings. Guidelines for education and requirements for developing systems, both technological and social, are needed for a sound life with AI/robots. This is because a guideline provides a method of sharing ways of thinking in emergencies and other situations that allow insufficient time for consideration, when individual agents might otherwise reach conclusions and behave differently according to their own value systems. This paper will begin with the classic point on computer ethics by Moor (1985) that coherent policies need a coherent conceptual system. Guidelines are a presentation of a conceptual system: information education for human beings should include the social role of guidelines as a method of explicitly stating and sharing moral assumptions in a community. Failure to emphasize this leads to social confusion about the “correct” usage of information systems. We illustrate the point with a short historical survey of information education in Japan. We will then argue that information education for human beings should be complemented by philosophy and ethics. Human rights and human dignity are key notions of modern society, yet they are not fully covered in any subject in the whole educational curriculum set by the Japanese government. Without an emphasis on citizenship education, the word “moral” is vaguely used to mean uncritical adaptation to existing social restrictions. Such a misleading design of the national curriculum has actually led to confusion in classrooms and in society, and indirectly causes incidents online. Moreover, the lack of emphasis on human rights in education deprives students of an understanding of the social role of guidelines and mutual agreement. A vacuum in the conceptual framework in fact entails a vacuum of policies. 
We would like to emphasize that this is not just the case in Japan. Technological singularities are expected to bring a vacuum in the conceptual framework. Some forecast that high-speed computing with huge amounts of storage will produce machines superior to human beings in every aspect of life. The notions of agency, action, free will, responsibility and personality may be updated in line with such social change. Nobody in the world has experienced any comparable change. A vacuum of policies will be inevitable. We must, however, prepare for the situation by creating a guideline to fill the vacuum. Finally, we step forward to claim that adequate guidelines should implement the notion of fairness, as well as the notions above, in a form that is readable and intelligible for both human beings and machines. Machines must “understand” these social concepts: like us human beings, they should try not to create a digital divide among human beings, regardless of gender, race, nationality, economic situation, disability or other social circumstances. Human beings need a theory to make the world intelligible to humans, whereas the theories used inside computers are not necessarily intelligible to human beings. This perplexing situation appeared in the case of AlphaGo. Human beings at first did not understand why the system chose a specific move at a given stage of a game. It maximizes the probability of winning the game, with calculations based on machine learning over thirty million games. We human beings rely on flesh. We need rest and nourishment, and we in fact act on chemical reactions in our bodies without realizing what is going on behind our judgements. We do not rely on reasoning alone, deductive or inductive. Remember that Alan Turing rightly pointed out that there are various ways of thinking and understanding even among human beings. We are not obliged to assume that computers think like human beings. His words have come true. 
The notions of action, agency, free will, responsibility, goodness and, most importantly, fairness should be formulated in a form that is readable and intelligible for machines. These notions should be coherent with our human-readable versions, but the machine-readable versions may take a totally different form, since the internal structure of machines is unlike ours. To formulate such notions explicitly, conceptual analysis of the notions on the basis of a philosophical theory of society will give the boundary conditions of the formulation. Vocabularies for characterizing their logical features should be investigated in philosophical logic, with examination of and feedback to ethical theories to guarantee coherency with the human-readable versions. We are still in the process of implementing the utilitarian notions of “better” and “best” on machines (Murakami, 2005), and any attempt at a machine-readable notion of “fairness” should draw on that experience. Such a reformation of the basis of social philosophy may lead to a drastic restructuring of philosophical theories in the feedback process. We might find implicit, inherent incoherency in the existing philosophical theories of society. Even without such incoherency, philosophical theories need to be updated if the assumptions of our society are to be rewritten by technological advance. We would like to claim that the guideline should offer (1) a set of assumptions to be protected from update and (2) a preference for choosing a new set of assumptions. Assumptions that realize human rights and human dignity, for example, are to be protected. In future education, both human beings and machines should learn the updated conceptual framework in the readable way. Both sides should behave under the same assumptions about our society. Thus AI and human beings need to find a way to live together. Social rules with mutual agreement in communities are the key to creating a common ground. 
Current technology is, however, as yet unable to model the full idea of fairness and the other social notions. Philosophical analysis and ethical theory will play the main role in modelling a full-fledged version. Social AI will not come into the real world without the humanities.
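As a purely illustrative sketch (not part of the authors' proposal), the two requirements stated above, a protected core of assumptions that no update may remove and a preference for choosing among candidate new assumption sets, might be encoded in a machine-readable form as follows. All names here (`Guideline`, `protected`, `prefer`, the example assumption strings) are hypothetical placeholders, not an established formalism:

```python
# Hypothetical sketch of a guideline as (1) a protected core of assumptions
# that every update must preserve, and (2) a preference over candidate
# assumption sets, per the two requirements stated in the abstract.

class Guideline:
    def __init__(self, protected, assumptions, prefer):
        # The protected core always remains part of the current assumptions.
        self.protected = frozenset(protected)
        self.assumptions = frozenset(assumptions) | self.protected
        self.prefer = prefer  # scoring function over candidate assumption sets

    def update(self, candidates):
        # Keep only candidates that preserve every protected assumption ...
        admissible = [c for c in candidates
                      if self.protected <= frozenset(c)]
        if not admissible:
            return self.assumptions  # no admissible update: keep current set
        # ... and adopt the most preferred admissible candidate.
        self.assumptions = frozenset(max(admissible, key=self.prefer))
        return self.assumptions

# Toy usage: human dignity is protected; updates may revise everything else.
g = Guideline(
    protected={"respect_human_dignity"},
    assumptions={"humans_drive_cars"},
    prefer=len,  # toy preference: prefer the richer assumption set
)
result = g.update([
    {"respect_human_dignity", "machines_drive_cars", "shared_liability"},
    {"machines_drive_cars"},  # inadmissible: drops the protected core
])
```

The point of the sketch is only structural: whatever richer logic eventually models fairness, a guideline in the authors' sense separates what may never be updated from how the remaining assumptions are chosen.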

References Moor, J.H. (1985) What is Computer Ethics? Metaphilosophy 16(4), 266-275. Murakami, Y. (2005) Utilitarian Deontic Logic. AiML 5, 211-230. Tatsumi et al. (2015) The Information Ethics Education in our Future. SSS 2015.

When HAL Kills, Stop Asking Who’s to Blame
SPEAKER: Minao Kukita

ABSTRACT. Daniel Dennett once asked the question ‘When HAL kills, who’s to blame?’ and suggested that it is possible to blame an artificial intelligent system with higher-order intentionality (which HAL seems to have), i.e., the ability to reflect on, think about, or have desires concerning its own mental states, such as beliefs, desires, and so on [1]. Dennett explored the theoretical possibility that an artificial intelligent system can qualify as a responsible agent, but today this question has gained practical importance, not because artificial systems have acquired a certain level of higher-order intentionality, but because they will kill.

Car manufacturers across the world are now competing to develop self-driving systems, with Ford recently announcing that its fully autonomous cars, with no steering wheels or gas pedals, will be in mass production within five years. While the United Nations Convention on Certain Conventional Weapons has debated lethal autonomous weapons systems for the past few years, Israel Aerospace Industries recently disclosed its semi-autonomous uninhabited military vehicle called ‘RoBattle.’ Both autonomous cars and lethal autonomous weapons systems are likely to cause serious damage to those who are not engaged with them or who are not supposed to be affected by them. One important question in deploying autonomous systems in open situations, where they interact with indefinitely many people, concerns who will be held responsible for the behaviour of an autonomous system, especially when it has led to unexpected damage. Car accidents are inevitable, and we cannot eliminate the possibility that, in warfare, non-combatants are unintentionally killed. Deploying autonomous systems in transportation or warfare will make it difficult to identify who is responsible for the damage.

In ethics, responsibility has traditionally been ascribed only to a human or a group of humans. It has been thought that only agents who are capable of intentionally choosing their actions can be held responsible, and that only humans are capable of this, or at least so it has seemed. However, the recent development of ICT has revealed a limitation of this old conception of responsibility, because our recognition, decision-making and action are increasingly supported and influenced by technological artefacts, a fact that makes it difficult to identify who (or what) is really responsible for the consequences of one’s action. Some ethicists are paying attention to the reduced sense of responsibility in a society where our actions are mediated by computers or other complex artefacts (see, for example, Nissenbaum [2]). The difficulty becomes more salient as technological artefacts acquire greater autonomy.

Self-driving cars will be greatly beneficial, for they will reduce the number of accidents and the amount of energy consumed, mitigate traffic jams, and enable those who cannot drive to travel by car. However, concerns about reduced or lost responsibility might get in the way of implementing self-driving systems. It is therefore worth considering what will, or should, become of our concept of and practice concerning responsibility in the age of autonomous machines.

In this talk, we will first examine what our traditional conception of responsibility consists of, and how that conception, the conditions for attributing it, and the practice concerning it are challenged by emerging technologies that produce autonomous agents. We will also consider Dennett’s claim about the responsibility of artificial intelligence [1]. We will then ask how the concept of responsibility has to change in a society where autonomous machines and humans coexist. Here we will argue that it is important to separate blameworthiness from responsibility. We will object to Dennett that even if an artificially intelligent system has higher-order intentionality and satisfies none of the exculpatory conditions he considered, it is useless to blame it, though it will make sense to regard it as responsible for the damage.

Our focus will be on the original role or function for which the conception of responsibility evolved. We assume, along with Joshua Greene [3], that our morality evolved because it helped our ancestors to cooperate with one another and thus increased the chances of their society’s survival. If so, the same will hold true for our practice of using the concept of responsibility. With the concept of responsibility, we encourage one another to do good and discourage one another from doing harm. This is why we have developed and maintained the practice concerning responsibility. However, emerging technologies make it difficult for the conception of responsibility to fulfil this original function. Therefore, if responsibility is to continue to do the same job, rather than merely continue to be the same, we have to revise it.

We will suggest that it would be not only useless but also costly to search for some individual to blame when an accident happens mainly as a result of the actions of a complex artificial autonomous system, or of interactions among such systems. No humans are responsible for it, but blaming machines serves no purpose, since blaming makes sense only if the blamed party feels guilt and changes its future actions. Instead, we should think more of the responsibility of the manufacturers, or even of society, to compensate for the damage and to improve the systems in order to prevent future harmful events.

It may be hard not to blame anyone when something goes wrong. Our emotions are wired so that we want to search for the culprit of a mishap and to blame and punish him or her. It will help to recognise that although this disposition must have been advantageous in the past, when no artificial agents with high autonomy existed, it may not continue to be so in the future.

Finally, we will discuss the responsibility gap created by lethal autonomous weapons systems (LAWS). One reason people oppose them is that we cannot identify who is responsible for the war crimes they commit (see, for example, Sparrow [4]). Our suggestion might seem to diminish the force of this objection. However, we will claim that LAWS remain unethical even on our revised conception of responsibility.

[1] Dennett, D. C. 1997. “When HAL Kills, Who’s to Blame? Computer Ethics,” in HAL’s Legacy: 2001’s Computer as Dream and Reality, D. G. Stork (ed.), Cambridge, MA: MIT Press, pp. 351-365.
[2] Nissenbaum, H. 1994. “Computing and Accountability,” Communications of the Association for Computing Machinery, 37(1): 72-80.
[3] Greene, J. 2014. Moral Tribes: Emotion, Reason, and the Gap between Us and Them, New York, NY: Penguin Press.
[4] Sparrow, R. 2007. “Killer Robots,” Journal of Applied Philosophy, 24(1): 62-77.

15:30-17:00 Session 6C: Cyborg Ethics
Location: Room H5: building D4, 2nd floor
Cyborg Ethics: wearables to insideables. An international study
SPEAKER: unknown

ABSTRACT. Wearables are electronic devices incorporated into clothing and accessories that interact with the user. Today, we are seeing the first wearables (e.g., smart watches, smart glasses, and fitness trackers). In the coming years, we will begin to incorporate insideables: electronic devices implanted in the human body for non-medical reasons that interact with the user to increase innate human capacities such as mental agility, memory, or physical strength, or to give us new ones, such as the remote control of machines. Our research focuses on this emerging technology. We are conducting a study on the acceptance of wearables and insideables.

The revolution in ICT continues to move forward. From the huge computers built after World War II, we moved to desktop computers ten years later, laptop computers from the 1980s, and smartphones and tablets today. But it appears that the miniaturization process is not over yet. We are now introducing wearables (smartwatches, smart glasses, and fitness-tracking bands are existing examples), and in the coming years we will incorporate insideables. Technological implants, or insideables, are electronic devices incorporated into the human body. They can be used for different purposes: insideables can correct physical disabilities or health problems with a therapeutic orientation, but they can also augment innate human abilities such as mental agility, memory, or physical strength. Our track focuses on perceptions of the use of insideables from an international perspective, analyzing ethical attitudes and values along a cross-cultural dimension. We will explore research questions such as the following:

• Is there any relationship (correlation) between personal values and attitudes towards insideables?
• What benefits of insideables are perceived from personal and social perspectives?
• What problems of insideables are perceived from personal and social perspectives?
• What ethical challenges of insideables are perceived from personal and social perspectives?
• What are the ethical differences between wearables and insideables?

In this paper we will focus on the analysis of an international survey to be completed by higher education students in the following countries: Spain, Japan, China, Mexico, Chile, Denmark, the USA and India. The survey is based on an adapted TAM model and incorporates new dimensions. The constructs included are: Intention to Use insideables, based on the Venkatesh and Davis (2000) TAM2 scale; Performance Expectancy, Effort Expectancy, Social Influence, Hedonic Motivation and Facilitating Conditions of insideables, based on the Venkatesh, Thong, & Xu (2012) UTAUT2 scale; Perceived Risk, based on Faqih (2016), drawing on the scales of Shim et al. (2001); Ethical Awareness of insideables, based on the Reidenbach and Robin (1988, 1990) Multidimensional Ethics Scale (MES), which includes Moral Equity, Relativism, Egoism, Utilitarianism and Contractualism; and Innovativeness of insideables, measured using three items adapted from Goldsmith and Hofacker (1991) by Juaneda et al. (2016). We used an 11-point scale (0 to 10) (Dawes, 2008; Van Beuningen et al., 2014).


Faqih, K. M. (2016). An empirical analysis of factors predicting the behavioral intention to adopt Internet shopping technology among non-shoppers in a developing country context: Does gender matter? Journal of Retailing and Consumer Services, 30, 140-164.
Goldsmith, R. E., & Hofacker, C. F. (1991). Measuring consumer innovativeness. Journal of the Academy of Marketing Science, 19(3), 209-221.
Juaneda-Ayensa, E., Mosquera, A., & Murillo, Y. S. (2016). Omnichannel customer behavior: Key drivers of technology acceptance and use and their effects on purchase intention. Frontiers in Psychology, 7:1117, 1-11.
Reidenbach, R. E., & Robin, D. P. (1988). Some initial steps toward improving the measurement of ethical evaluations of marketing activities. Journal of Business Ethics, 7(11), 871-879.
Reidenbach, R. E., & Robin, D. P. (1990). Toward the development of a multidimensional scale for improving evaluations of business ethics. Journal of Business Ethics, 9(8), 639-653.
Shim, S., Eastlick, M. A., Lotz, S. L., & Warrington, P. (2001). An online prepurchase intentions model: The role of intention to search. Journal of Retailing, 77(3), 397-416.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186-204.
Venkatesh, V., Thong, J., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Quarterly, 36(1), 157-178.

Exploring the Implication of New Emerging Technologies: Case Study in USA and India.
SPEAKER: Shalini Kesar

ABSTRACT. The term cyborg (short for "cybernetic organism") refers to a being with both organic and biomechatronic body parts; it was coined in 1960 by Manfred Clynes and Nathan S. Kline. The cyborg, a cybernetic organism combining the biological and the technological, has started gaining attention, and wearable technology, one part of this field, has been an ongoing research area since the 1970s. Google Glass and fitness bands are examples of popular wearable technologies that businesses are investing in manufacturing. There is no doubt that this technological revolution promises benefits in sectors ranging from health to transportation. However, every technology brings uncertainty about its implications for people and for society in general. This is particularly true in the area of security. This paper is part of on-going research conducted in collaboration with experts focusing on the USA, India, Japan, Spain, and Mexico. The research involves both qualitative and quantitative data collection via surveys in the countries mentioned above. The author of this paper focuses on two countries, namely the USA and India. The survey will be designed to incorporate questions that take into account the collaborators' areas of expertise. In addition to the survey, face-to-face interviews will be conducted by the author in her countries of focus, which will help to overcome bias in the data collection. This paper discusses the progress of research that analyzes people's perceptions of such technologies along with their security and ethical implications. The findings will provide rich insight into emerging wearable technologies in the context of managing security breaches. As studies and statistics show, every new technology brings negative consequences; the increasing occurrence of cybercrime across the globe is evidence of this problem.
The USA and India are no exception. The three security categories in question are: cyber breaches; bodily injury; and technology errors and omissions. With this in mind, this research will contribute to finding solutions for managing the misuse of wearable emerging technologies. The year 2014 was named in many studies and reports the "Year of the Wearable." Consequently, different countries experienced an explosion of new wearable products, with giant electronics companies investing in these new technologies. For example, the global wearables market is expected to reach a value of $19 billion in 2018, more than ten times its value five years prior. The facts and statistics below highlight the emergence of these technologies. In a 652-page report by Signals and Systems Telecom (2016), SNS Research estimates that wearable device shipments will grow at a CAGR of 29% between 2016 and 2020. By 2020, wearable devices will represent a market worth $40 billion with over 240 million annual unit shipments.

Wearable Technology in the USA

Studies of wearable technologies clearly indicate that their use will continue to increase. The figures below summarize various reports and estimated values (in US dollars) of investment in such technologies.

Overview:
• Forecasted wearable device market value for 2018: $12,642m
• Forecasted unit shipments of wearables worldwide for 2018: 111.9m
• Share of respondents interested in medical devices that transmit data: 38%

Smartwatches & Smart Glasses:
• Share of U.S. consumers interested in buying a smartwatch: 40%
• Number of Pebble smartwatches shipped in the U.S.: 29,975 units
• Shipments of smart watches worldwide: 1.23m units
• Google Glass annual sales forecast for 2018: 21,148,611 units
• Share of respondents who would not consider buying and wearing Google Glass: 59%

Healthcare & Fitness:
• Remote cardiac monitoring services forecast for 2016: $867m
• Shipments of healthcare wearables worldwide: 13.45m units
• Shipments of fitness gadgets worldwide: 43.8m units
• Plan to purchase a smart watch: 35%

Wearable Technology in India

A recent article, "Wearable Technology Market - Devices Revenue of $13 Billion+ for Mobile Operators by 2020" (2016), highlights how rapidly businesses are investing in this technology in a country with a population of over 1 billion. Another article, "India wearable market hits 400,000 units in Q116 with fitness bands the key" (2016), reports that, according to the IDC Worldwide Quarterly Wearable Device Tracker, "the wearable market in India was largely reliant on fitness bands, with 87.7% of market share in the first quarter of 2016 while smart wearables, devices that run on third-party applications, had market share of about 12.3%. IDC also reported that overall wearable sales during the quarter totaled a little over 400,000 units". In the same report, the research director for IDC India pointed out: "The wearable devices have become immensely popular in the past one year, and more players are expected to make an entry into the market in both the basic and smart wearable categories. The expected launch of affordable smartwatches in the second half of 2016 could see a rapid growth in the share of smart devices." Further, Intel expects India to offer big opportunities for wearable technologies. A recent online Future Day conference (2015) clearly highlights the recognized importance of these emerging technologies, and studies and conferences on emerging cyborg research have taken place in India. This paper aims to make a significant contribution to on-going research in which a group of experts explores the implications of wearable technologies in different countries. The findings will add rich insight into this emerging technology.

References
Singh, Arvind (2015): http://indiafuturesociety.org/online-future-day-conference-2015-india/ - February 27.
Clynes, M., & Kline, N. (1960). "Cyborgs and Space," Astronautics.

Cyborg enhancement - the authenticity debate revisited
SPEAKER: Anne Gerdes

ABSTRACT. This short paper outlines the pro and con positions in the cyborg enhancement debate. Taking its point of departure in the notion of situated cognition, it illustrates how opponents' and proponents' positions toward cyborg technology raise different moral issues concerning the meaning of the value of authenticity: the real me versus the enhanced "better me". Does being true to ourselves entail making use of our unique human capacity for creativity, meaning that doing our best ought to involve enhancement? Or is the value of authenticity incompatible with cyborg enhancement, because enhancement will inevitably be followed by alienation? Having outlined the landscape, I set out to explore arguments for both positions.