
09:00-10:30 Session 13A: DP PANEL: Exploring the outstanding questions leading to the emergence of GDPR certification
Location: DZ-1
Exploring the emergence of GDPR and Cyber security certification

ABSTRACT. Both the GDPR and the draft EU Cyber Security Act introduce certification as a transparency mechanism. Although data protection and cyber security are closely interlinked topics (e.g. via article 32 GDPR - Security of processing), the regulatory approaches to certification under the GDPR and the Cyber Security Act differ substantially. This could result in significant (legal) uncertainty and inefficiencies for technology providers, controllers, processors and data subjects/consumers. This expert panel will engage in a lively discussion to explore the issue.

This panel will discuss the following:  

1. Neither GDPR certification nor certification under the Cyber Security Act is mandatory. What is the legal effect of each certification?

2. Conformity assessment is an essential part of the certification process. Who will conduct the audits? Should audits be left to the market or should government bodies be involved as well? What are the pros and cons? Which standards will be applied? 

3. How will government actors (DPAs, the EDPB, CSIRTs, etc.) and market actors (certification bodies, auditors, etc.) interact in rolling out certifications under both regulatory approaches?

4. While certification may be somewhat new in privacy, it is not new in information security. What can we learn and leverage from information security certification?

5. What is the extraterritorial impact of CSA and GDPR certifications? How do they affect organisations operating outside the EU?

6. How do we envision certification under the European Cybersecurity Act and under the GDPR to co-exist in practice? How can we avoid unnecessary duplication of work? How do we avoid incompatibilities?

7. Could privacy and cyber security certification (in the future?) also be a part of the CE-marking under the EU New Approach? What are the opportunities and limitations?

09:00-10:30 Session 13B: DIGITAL CLEARINGHOUSE TRACK: Data, access and standardisation
Location: DZ-4
Data of Public Undertakings – Towards a Common Framework

ABSTRACT. Public undertakings generate a considerable amount of valuable data in the course of performing services of general interest, e.g. data on traffic-flows, timetables, locations, electricity grids. The contribution discusses the innovation-related EU legal framework on access and re-use of such data. It focuses on the interplay between the recast PSI Directive, competition law, information access laws, and public service obligations, and intends to add to the broader debate on the public-private interface in a data-driven economy.

Data Standardization: Portability and Interoperability in an Interconnected World

ABSTRACT. Data standardization is key to facilitating and improving the use of data. Absent data standardization, a “Tower of Babel” of different databases may be created, limiting synergetic knowledge production. Based on interviews with data scientists, this paper identifies three main technological obstacles to data portability and interoperability: metadata uncertainties, data transfer obstacles, and missing data. It then explores whether market-led standardization initiatives can be relied upon to increase welfare, and evaluates the role governmental-facilitated standardization should play, if at all.

Data, Innovation and Transatlantic Competition in Finance: The Case of the Access to Account Rule

ABSTRACT. Technological innovation is transforming the structure of the retail banking sector. Traditional business models are facing a rapid disruption process led by the emergence of FinTech companies. In order to offer payment initiation services and account information services, third party providers need to access customer accounts. The EU has taken the lead in the transition by providing, within the revised Payment Services Directive, a sector-specific portability regime (the access-to-account rule) expressly aimed at fostering competition.

09:00-10:30 Session 13C: HEALTH & ENVIRONMENT: New Regulatory approaches to Energy and Health
Location: DZ-5
The sense and scope of the protection of cyber-consumers in the French legal system: Insights from mobile wellness applications

ABSTRACT. French consumer law is based on the premise of protecting the consumer, considered as the party to a contract concluded with a professional.

This same premise informs legal doctrine's reflection on the protection of the cyber-consumer, i.e. a consumer defined by the particularity of the environment in which he consumes: the online context.

However, an analysis of the structure of consumer law and of doctrinal discourse reveals the emptiness of this premise; it also reveals the inability of the law to protect the consumer in the “consumer society”.

In addition, the emergence of the “exposure society”, which results in particular from the use of mobile applications, adds new risk factors for consumers. In considering these factors, the focus is on the growing inability of the law to provide effective protection in the online context.

Based on the example of wellness applications (fitness and nutrition), the existence of new control systems and power mechanisms will be outlined. Indeed, accessible on mobile phones, which have become an extension of the user's body, these applications govern the physical and mental health of consumers.

This contribution supports the idea that in a world where data is the cornerstone of a competitive global economic policy, (Cyber-)consumer protection seems utopian. This protection resembles a reassuring discourse that allows the “spectacle of consumption” to continue; and this, in favour of a structural power of financial capital that influences the neoliberal State. For users of mobile applications in general and wellness apps in particular, the weakness is the spiral of consumption, which is maintained by the need for self-exposure. However, the recognition of this fundamental weakness, which is intrinsic to the economic system and which the law admits without overcoming, makes it possible to revise the postulate of consumer law in order to construct a legal discourse in accordance with the role the law may (or wants to) play.

SAFETY? SECURITY? TWO CULTURES? Rearticulating safety and security cultures in critical infrastructures through the lens of co-production

ABSTRACT. Contemporary technological societies are faced with an increasing number of crises, such as environmental catastrophes, technological and industrial crises, and terrorist attacks (Bijker, Hommels, & Mesman, 2014). These crises may have unintentional human or natural causes, may be rooted in intentional and malevolent acts, or comprise a mix of motivations and behaviors (Khripunov & Kim, 2008). Particularly vulnerable to these growing threats are critical infrastructures such as the energy sector and nuclear power plants. In order to prevent and mitigate the risks confronting them, these infrastructures have over time developed measures to increase first and foremost their safety, and subsequently their security. Research analyzing the implementation of those measures in critical infrastructures is typically split into two separate domains: safety culture and security culture. As a consequence, no stabilized and comparable definitions of safety and security have been developed. Nor is it clear how the two concepts relate to one another, and whether they can coexist, as is often assumed by institutional regulatory and policy bodies (e.g. International Atomic Energy Agency, 2016b) and some authors (Gandhi & Kang, 2013; Reniers, Cremer, & Buytaert, 2011). We may hence ask: how do safety and security cultures interact? Which synergies and discrepancies do they entail? What may be the impact of their articulation on risk mitigation? To address these questions, this paper provides a first-of-its-kind systematic literature review of the concepts of safety culture and security culture in critical infrastructures. It highlights several lacunae, such as a certain fuzziness among definitions due to ontological contradictions in conceptions of safety and security cultures. It also stresses the non-integration of technological and procedural elements as active elements of safety and security cultures.
In order to overcome the identified pitfalls, it suggests mutually informed and comparable safety and security cultures definitions that incorporate technological, procedural and human aspects and mobilize vulnerability and resilience approaches. Building on this theoretical endeavor, it proposes an integrated model of safety and security cultures that paves the way for empirical research within critical infrastructures.

09:00-10:30 Session 13D: IP TRACK: Property, Industrial Property, and Innovation
Location: DZ-6
The role of courts in anti-innovative patent enforcement

ABSTRACT. Patent scholars increasingly worry about the adverse effects injunctions can have on competition and innovation. Given patent law’s purpose of fostering innovation, these concerns appear reasonable. At the same time, courts seem poorly situated to assess the consequences of an injunction in any given case. My paper explores this conundrum and investigates (i) how patent courts can evaluate the consequences of injunctions for competition and innovation; and (ii) whether it is desirable that they do so on a case-by-case basis.

Mind the Gap

ABSTRACT. Our research concerns the property status of digital files stored in the cloud. We argue that such files do not currently constitute property under the law of England and Wales, which does not recognize possession of intangible items. This can lead to gaps in the rights and remedies available to both users and providers of cloud services, since issues like access to files will be governed mainly by the terms and conditions of cloud contracts, which are often highly restrictive.

09:00-10:30 Session 13F: AI AND RESPONSIBILITY TRACK: AI and responsibility in practice
Location: DZ-8
Dutch big data practices for prevention: measuring the (un)reliability of citizens?

ABSTRACT. Dutch journalistic and academic debates on preventative profiling and the societal implications of such practices often tie into debates on similar issues in the US or UK, where big data-led risk assessment systems in recent years became daily reality. Some prominent cases are the COMPAS case (as reported by ProPublica [1]) and the NYPD's crime-forecasting system CompStat (cf. USA Today [2]). These were recently joined by China's social credit scoring system as another external point of reference for the Dutch debate.

While well-known examples from other continents do trigger relevant concerns, not using local cases may create the impression that a) these practices are less current here, and b) protection of citizens against their implications is better taken care of by the EU's data protection regime. We think this impression diverts critical studies from assessing differences between regimes, and impedes discussion on how to hold organisations accountable for designing and deploying such systems.

While Dutch preventative big data systems may have a shorter history, their implications promise to be life-long for the affected citizens. We therefore think that the Dutch debates should focus more on the ethical, legal and societal implications of these systems.

Some particular examples of predictive practices within the Dutch public sector deserve critical attention. We discuss semi-automated risk assessment systems for Dutch youth (ProKID [3]) in more depth, and reflect on the so-called System Risk Indication (SyRI) system [4], and the predictive policing system CAS (“Calamity Anticipation System” [5], currently being implemented in all Dutch police districts).

Reasons given in the debate for the growing deployment of such systems often relate to the vast potential of big data-led correlations. Notably, such correlations are regularly presented as direct solutions to societal problems such as anti-social behavior, delinquency, unemployment and related issues. A rhetorical shutdown of criticism often follows, when any opposition to these systems becomes framed as opposition to solving those societal problems. Following this logic, the need to render citizens increasingly ‘transparent’ through an exponential growth in data about them is presented as a vital necessity for measuring the reliability of citizens. Yet with such growth in data and possibilities for correlation, the opacity of data flows simultaneously increases, with severe consequences for risk-profiled citizens, such as framing them as unreliable. Moerel and Prins argue that such consequences occur because “without knowing the reasons behind a systematically identified correlation, [one] will treat the chances presented by these correlations as facts” [6].

These systems are primarily focused on risks, meaning that they are framed in terms of negative expectations, often derived from external factors that cast a shadow over a certain individual or situation. This comes with a particular dynamic: an expansion of the risk profile of an individual to their social and physical environment, and also along their entire life-time. These systems have a bias towards the negative in the sense that the system has no place for recording improvements or dis-association with risk factors.

We argue that these practices establish a gradual erosion of the transparency and accountability relationship between Government and Citizen. [7] While transparency used to serve democracy by enabling citizens to check upon the reliability of their government, the practices we describe show a shift to the converse. The very design and use of preventative big data systems as tools to measure the reliability of citizens by government agencies tends to frame citizens as being unreliable.

[1] Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016). Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica.
[2] Giacalone, J. L. & Vitale, A. S. (2017). When policing stats do more harm than good. USA Today.
[3] La Fors-Owczynik, K. & Valkenburg, G. (2015). Risk Identities: Constructing Actionable Problems in Dutch Youth. In Van der Ploeg & Pridmore (eds), Digitizing Identities: Doing Identity in a Networked World. Routledge Studies in Science, Technology and Society.
[4] See a.o. Algorithm Watch, “High Risk Citizens”, https://algorithmwatch.org/en/high-risk-citizens/ (4 July 2018); or Privacy Barometer, “De transparante burger” (The transparent citizen), https://www.privacybarometer.nl/maatregel/85/De_transparante_burger (16 September 2014).
[5] Cf. Rutger Rienks (Dutch Police), Predictive Policing (report, 2015), https://issuu.com/rutgerrienks/docs/predictive_policing_rienks_uk.
[6] Moerel, L. & Prins, C. (2016). Privacy voor de Homo Digitalis: Proeve van een nieuw toetsingskader voor gegevensbescherming in het licht van big data en Internet of Things. In Homo Digitalis. Den Haag: Nederlandse Juristen Vereniging.
[7] Cf. Richards & King (2013): Transparency Paradox.

Corporate responsibility, empathic technology and the (expected) invasion of privacy: A media analysis of Facebook’s AI suicide prevention program

ABSTRACT. In this paper, we present a media analysis of Facebook’s recently implemented AI program by which it aims to detect users who are at risk of suicide. We examine media articles, YouTube videos, and the comments from readers/watchers on the content, and conduct an inductive analysis of the hopes and concerns of commentators (in the broad sense). Much of the media reporting has been (cautiously) optimistic, with some commenters proclaiming that this is ‘medically very important’. They argue that this is, and should be, part of Facebook’s responsibility as one of the biggest social media platforms worldwide. Some go as far as referring to this as creating technology that is ‘more human’, able to pick up on ‘the subtle nuances of language’, and used for the ‘better of humanity’. However, other media reports as well as the comments from the audience were much more mixed. Previous literature has looked into concerns of both privacy and safety in relation to suicide prevention and AI as used by the Samaritans (a not-for-profit organisation in the UK), and indeed privacy and safety were two major concerns for much of the media reporting and, especially, the readers’ comments in our study as well. However, other comments focused on Facebook (as a private corporation) itself, with readers asserting this was the reason they were not on Facebook in the first place, or had left Facebook, and that people who continue to use Facebook should expect such things to happen, thus seeing it as a negative development, but one that was the responsibility of people ‘stupid’ enough to use Facebook. In the conclusion, we reflect on what these developments may mean for how mental illness and suicidal behaviour are seen and experienced, how they are acted upon and, particularly, who should act upon them.

09:00-10:30 Session 13G: BOTLEG PANEL 1: Legitimacy of Public-Private Partnerships
Location: DZ-3
PANEL BotLeg I: Public-private actions against botnets: issues of legitimacy and governance
PRESENTER: Bert-Jaap Koops

ABSTRACT. Security and safety are public policy goals, with a key responsibility for governments to safeguard these. However, in many areas, governments are not in a position to sufficiently ensure security or safety by themselves—they are dependent on assistance from private parties in governing a sector to achieve public policy goals. Public-Private Partnerships have emerged over the past decades as a practical necessity and a potential solution to governance challenges. At the same time, these partnerships raise questions of legitimacy, since legality, accountability, and checks and balances are not a given when governance is partially, and not always transparently, outsourced to private actors and PPPs.

In this panel – the first of two panels discussing findings of the NWO-funded BotLeg project “Public-private actions against botnets: establishing the legal boundaries” – we will discuss general issues of legitimacy and governance of involving private actors in three sectors, involving different public policy objectives: cybersecurity, humanitarian aid, and food safety. Speakers will discuss the legitimacy of public-private partnerships, the conditions for the execution of public tasks by private actors, the distribution of responsibilities, and associated questions of accountability and liability.

The first context is combatting botnets, which facilitate many forms of cyber-attacks, as a key challenge in cybersecurity. A wide set of anti-botnet strategies, including pro-active strategies and public-private co-operation, is needed to detect and dismantle botnets. We will discuss the need for involving private actors, the challenges of distributing responsibilities among the entire spectrum of actors in the field according to their capabilities, and reflect on what legitimacy entails in this context. 

The second context is data partnerships in the humanitarian sector, involving international organizations and specialist technology firms. Humanitarian organizations are encountering enormous challenges in managing, integrating, and analyzing data from global operations, while facing mounting donor pressure to create efficiencies in operations, reduce costs, and counter fraud. Data partnerships are viewed as a mode of achieving solutions, but in the humanitarian context these raise critical questions about the legitimacy of actors, the lack of agency among beneficiaries, and other novel governance challenges.

The third context is food safety. In a context of increasingly globalized food supply chains, a growing concentration of market power among food retailers, and a perceived lack of capacity among national governments to regulate food safety, private schemes have developed to become a central governance instrument to deal with the systemic risk of food safety outbreaks. These private schemes, both national and transnational, possess a wealth of data on industry compliance and risk. Governments around the world are seeking to enrol the schemes in their enforcement policies to bolster their own capacities. While the resulting partnerships may make the deployment of public resources in the field more efficient, the arrangements also trigger important considerations of legitimacy and accountability.


  • Bert-Jaap Koops & Bart van der Sloot: Legitimacy of Public-Private Partnerships in cybercrime
  • Aaron Martin: Legitimacy of Public-Private Partnerships in international humanitarian programs
  • Paul Verbruggen: Legitimacy of Public-Private Partnerships in food safety


10:30-11:00 Coffee Break
11:00-11:45 Session 14: Keynote 5: Prof. Geert van Calster

Keynote Health and Environment

Location: DZ-2
Too clever by half? What the regulation of AI might want to learn from environmental law?

ABSTRACT. My talk will do what it says on the tin: I will discuss some of the core suggestions currently being made for the regulation of artificial intelligence. I will then test these against the lessons we may or may not have learnt from environmental law specifically, and from the regulation of new technologies in general.

11:45-13:15 Session 15A: DP TRACK: DP by design
Location: DZ-3
Personal data management and privacy management: barriers and stepping stones
PRESENTER: Nitesh Bharosa

ABSTRACT. In the wake of the General Data Protection Regulation (GDPR) there is increasing interest in providing individuals more control over their personal data. Yet, the concept of personal data management is poorly studied. What does personal data management actually mean from an individual perspective? And what is needed in order to enable personal data management in a society? This paper investigates these questions. Drawing on a case study in the financial domain, we provide a more focused understanding of the current situation (without personal data management) and a scenario with personal data management. We propose that the following components are needed in order to facilitate personal data management on a large scale: (1) easy-to-use, high-assurance electronic IDs, (2) personal data spaces that allow for secure storage and qualified interactions with data, (3) data specifications (standardisation of syntax, semantics and structure) allowing for the automated processing (without manual rekeying or conversion) of data exchanged between systems, (4) remotely accessible tooling/features (e.g. data processing and analysis), (5) technical interfaces (APIs) for information sharing (posting and retrieving data, including consent) that can be used by all actors across multiple financial domains, (6) support for organisations that want to use the previously mentioned components, and (7) a cross-domain public-private governance that steers the development and adoption of the previously stated components. This paper concludes with a discussion of pathways for developing these components and facilitating personal data management in practice.

Improving privacy choice through design: How designing for reflection could support privacy self-management
PRESENTER: Arnout Terpstra

ABSTRACT. In today's society online privacy is primarily regulated by two main regulatory systems: (command-and-control) law and notice and consent (i.e. agreeing to terms of agreement and privacy policies). Both systems preclude reflection on privacy issues by the public at large and restrict the privacy debate to the legal and regulatory domains. However, from a socio-ethical standpoint, the general public needs to be included in the privacy debate in order to make well-informed decisions and contribute to the law-making process. Therefore, we argue that privacy regulation must shift from a purely legal debate and simple one-time yes/no decisions by 'data subjects' to public (debate and) awareness and continuous reflection on privacy and privacy decisions by users of IT systems and services. In order to allow for this reflective thinking, individuals need to (1) understand what is at stake when interacting with digital technology, (2) have the ability to reflect on the consequences of their privacy decisions, and (3) have meaningful controls to express their privacy preferences. Together, these three factors could provide for knowledge, evaluation and choice within the context of online privacy. In this article, we elaborate on these factors and provide a design-for-privacy model that introduces friction as a central design concept that stimulates reflective thinking and thus restores the privacy debate within the public arena.

11:45-13:15 Session 15B: DP TRACK PANEL: Data Subjects as Data Controllers
Location: DZ-5
PANEL: Data Subjects as Data Controllers
PRESENTER: Michèle Finck

ABSTRACT. Data Subjects as Data Controllers

The GDPR in essence establishes a binary distinction between the data subject and the data controller as two separate legal entities. On the one hand, the Regulation primarily envisages situations where the data subject directly or indirectly provides personal data to a data controller. On the other hand, the data controller is assumed to determine the means and purposes of personal data processing and to subsequently carry out related processing activities (alone or together with others) independently of the data subject.

This, of course, is a common scenario in many contexts. Yet, it is also becoming increasingly clear that the binary distinction between the data subject and the data controller does not hold up in many other contexts. One well-known problem area is cloud computing, where it is often unclear if a cloud provider is merely a data processor; however, newer, allegedly more privacy-protective technologies are raising still more issues. Our panel will discuss and compare such scenarios, explaining, examining and comparing instances where new data governance models and technological evolutions challenge the GDPR’s binary divide between the data subject and the data controller.

We would present four different papers that draw attention to this issue respectively in relation to databoxes in the smart home context (Lilian Edwards), Apple (Michael Veale), Personal Information Management Systems (Nicolo Zingales) and blockchain technologies (Michèle Finck).

In these various scenarios the data subject, at least to some degree, contributes to the determination of the means and purposes of data processing - a role carried out by data controllers under the GDPR. This leads us to examine whether the data subject herself should be qualified as a data controller. Relatedly, we will also discuss the applicability of the household exemption in light of relevant case law and the reformulation of the corresponding recital in the GDPR. Particular attention will also be paid to the concept of joint controllers in light of the regulatory guidance and case law on joint controllers, particularly the seminal ruling in Wirtschaftsakademie Schleswig-Holstein, which indicates that in at least some of the scenarios discussed data subjects are likely to be joint controllers.

Lilian Edwards will examine databoxes. Databox is what is sometimes known as a Personal Data Container, but can also conceptually be described as an operating system for a home user’s personal data. The aim of Databox is to enable a home user to make use of services normally delivered via sharing their personal data with an external service provider, without giving that data away. This has obvious advantages given the prevalent distrust in the data-sharing economy and the lack of control and oversight generally experienced by users of consumer cloud services such as social media, data aggregators, price comparison engines, switching sites and the like. Serious legal questions arise, however, as to whether Databox is ever or often a data controller or even a processor; whether the user is the sole data controller; whether the domestic purposes exemption applies and, if so, to whom; and whether this conceptual framework underpinning the GDPR really scales to privacy-preserving infrastructure such as Databox at all. The shift from a world of products to services back to something that is effectively neither will also be interrogated.

Michael Veale will consider how firms are increasingly seeking to shed the label of ‘data controller’ in relation to data they hold by locking down systems in ways which privilege confidentiality, but limit control. He will draw on a case study from an ongoing investigation he triggered against Apple Inc. relating to its refusal to provide access to recordings and transcripts collected in connection with the Siri voice assistant. He will then look ahead to the situation some companies appear to be moving towards: one where they determine how data is used and transformed in software and hardware they control at the design stage, but do not continuously centralise or change the purposes of such processing. In these cases, companies build large, data-driven infrastructures whilst trying to ‘bind their hands’ using new technologies, such as privacy-preserving computation and federated computation. Does the GDPR anticipate such approaches? Does the traditional definition of data controller hold up in this situation, and if not, where are the tensions in a world with a complex mix of decentralised computing but centralised design?

Nicolo Zingales will focus on Personal Information Management Systems (PIMS), which offer an architecture for centralised storage, management and permissioned sharing of personal data on an individual’s device. He will discuss the data protection implications of choices made in the design and governance of this architecture, in particular the type of encryption chosen, the degree of openness of the ecosystem to third party applications, and the instructions provided by PIMS to their users and business partners. Based on the results of an empirical analysis of the terms of service and practices of a selected sample of PIMS, it will be shown that a common thread for these companies is the attempt to avoid the qualification of data controller for activities beyond mere storage, in particular by delegating responsibilities onto users and third party applications. Additionally, innovative governance mechanisms (including trusts and distributed decision-making power) are adopted to separate the operational side from the rule- and policy-making side, thus reducing the level of influence of PIMS on the 'effective means' of processing. The presentation will conclude by reviewing the rationale for the recent expansion of the concept of joint controllership, making the case for a principled and scalable approach towards the obligations of providers of critical infrastructures such as PIMS.

Finally, Michèle Finck will present a paper that examines the data subject and data controller roles in the context of blockchain technologies. Whereas there are multiple points of tension between the GDPR and blockchains, determining the identity of the data controller in these networks might be the hardest to resolve. This paper briefly introduces blockchain technologies and illustrates that, particularly in public and permissionless systems, there simply isn’t one legal entity that determines both the means and purposes of data processing. This leads to an examination of the notion of joint controllers and its application to such contexts, considering in particular that agreements allocating responsibilities cannot easily be concluded between parties that do not know one another. The paper further critically engages with the French DPA’s recent guidance on the application of the GDPR to blockchains, in which it was argued firstly that data subjects are in at least some circumstances also data controllers in a blockchain network, and secondly that the household exemption applies where individuals engage in such networks in a personal capacity.

We hope that this panel will allow us to further develop our respective research projects and identify common themes and problems. We also hope that it will be of interest to many participants in the conference, as it offers insights into less well-known methods of data governance and processing and engages with provisions and concepts of the GDPR that are of general interest to anyone working in this area.

11:45-13:15 Session 15C: AI AND RESPONSIBILITY TRACK: Regulating AI
Location: DZ-6
Assessing Legal Liability for Harm by Fully Autonomous AI

ABSTRACT. In 1981, I published the first-ever academic article on the legal ramifications of AI, suggesting that as AI becomes more autonomous, we could follow the historical path of how “less-than-human” sentient beings (who cause harm) were treated, e.g. the “goring ox”, slaves, women, children/minors, the cognitively challenged, and servants/agents. Today, we are close to fully autonomous AI (henceforth: aAI), and in certain narrow knowledge fields we are already there – a result of the neural net approach to training AI through “Deep Learning”. However, this has not only brought a huge functional increase in aAI abilities, but has also introduced a different, unexpected problem: the difficulty, or outright impossibility, of understanding precisely how such aAI makes its decisions. When things go wrong – i.e. when aAI decisions cause harm – it will be harder to assign responsibility and liability.

My proposed paper offers several future legal, economic and quasi-public institutional solutions for assessing autonomous aAI harm. Several researchers (see “Source Materials” below) have raised some of the issues involved and offered general analyses, but none has yet proposed a detailed legal/institutional structure for the phenomenon. 1) Spread liability for injurious decisions among the manufacturer, the seller (if different), and the buyer/user. 2) Decrease manufacturer liability with the passage of time and use, while increasing that of the end user. 3) Establish a three-level system of institutional regulation, testing and approval for each aAI: A- Light: in-house testing through the R&D process, with strict internal record-keeping according to governmentally approved protocols (non-compliance would entail almost automatic liability, i.e. no need for the injured party to prove “negligence”). B- Medium: outsourced, independent, computer-simulated, multiple-scenario (e.g. driving in the city, on the highway, in a storm, etc.) testing by governmentally pre-approved testing “labs”, with a financial firewall between the aAI developer and the independent tester. C- Heavy: government regulator testing the aAI system/product in the field, either a) through an expanded Patent Office performing cost/benefit analysis (see sub-paragraph “vi” below), with relevant hiring/training to upgrade or supplement Patent Office personnel, or alternatively b) through the establishment of a Federal Technology Agency (FTA) with three-stage testing: first, checking the programming; second, computer simulations to determine possible unexpected “glitches” (harmful or at least unexplainable AI decisions); third, actual testing in the field.

Suggested addenda: I- Governmental organizations (e.g. the military) would have to perform at least “B” and preferably “C”. II- The greater the likelihood of loss of human life (depending on the field), the more stringent the pre-test. Thus, the “approved labs”, Patent Office or FTA could determine whether A, B or C testing is called for, occasionally combining two levels: e.g. the inventing/manufacturing company could perform “A” and deliver its results to the Patent Office along with the patent application, and the Patent Office could then decide whether “A” was sufficient or whether “B” or “C” was necessary, with an ultimate right of appeal to some independent, quasi-judicial body such as the U.S. Patent Trial and Appeal Board (in Europe, the Legal Board of Appeal). III- If the above procedures and protocols are followed, the government could establish an industry-financed (small tax) fund for “no-fault” compensation to injured parties. This differs from the “retributive” or “compensation” approach normally used in liability cases, and is similar to “no-fault” car insurance. IV- If the law does not require the above testing protocols – or for aAI manufacturers who do not seek a patent – we could legislate differential liability: 1- aAI that seeks the above “protocol” approval would automatically be put in the “descending liability” format (parag. 1 above), or immediately into “no-fault”; 2- those not following the “protocol” would bear greater liability (e.g. a heavier burden of proof that everything was done to avoid aAI harm) and a mandated longer warranty period. V- Any sale of an aAI decision-making system must clearly explain whether it can be “turned off” and whether (and to what extent) the aAI can be asked to explain its actions/decisions. VI- Differentiate between civil (financial harm) and criminal (loss of life or limb) liability with regard to the above paragraphs (protocols, stringency, burden of proof, etc.).

Caveats & Challenges (to be noted briefly): A) aAI that undergoes gradual “upgrades”/improvements by the manufacturer: to what extent does this change the above levels of liability and/or procedures? The same question arises if the “upgrade” is performed by the end user (open source code). B) How should “autonomous” AI be defined, as compared to non-autonomous AI on the one hand and “independent” AI on the other? (See “D” below.) C) This paper does not attempt to deal with the ethical ramifications of aAI – a wider, and even more complex and challenging, issue area. D) This paper will not speculate on, or address, future AI having “sentience” (independent moral agency).

Selected, Relevant Source Materials

Asaro, Peter M. (2015). “The Liability Problem for Autonomous Artificial Agents”: http://peterasaro.org/writing/Asaro,%20Ethics%20Auto%20Agents,%20AAAI.pdf (Paper also delivered at: AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA. March 21–23, 2016).

Brożek B., Jakubiec M. (2017). “On the Legal Responsibility of Autonomous Machines,” Artificial Intelligence and Law, vol. 25(3), pp. 293–304.

du Preez, Derek (2018). “EU to debate artificial intelligence regulation and legal issues,” diginomica: https://diginomica.com/2018/03/12/eu-debate-artificial-intelligence-regulation-legal-issues

Hage J. (2017). “Theoretical Foundations for the Responsibility of Autonomous Agents,” Artificial Intelligence and Law, vol. 25(3), pp. 255–271.

Hall, Brian (2017). “Top 5 Legal Issues Inherent In AI and Machine Learning”, TraverseLegal: https://www.traverselegal.com/blog/top-5-legal-issues-inherent-in-ai-and-machine-learning/

Karliuk, Maksim (2018). “The Ethical and Legal Issues of Artificial Intelligence,” RIAC: http://russiancouncil.ru/en/analytics-and-comments/analytics/the-ethical-and-legal-issues-of-artificial-intelligence/

Lehman-Wilzig, Sam (1981). “Frankenstein Unbound: Toward a Legal Definition of Artificial Intelligence,” Futures: The Journal of Forecasting and Planning, vol. 13, no. 6 (Dec. 1981), pp. 442–457: https://www.researchgate.net/publication/278394289_Frankenstein_unbound

Pagallo, Ugo (2018). “Apples, oranges, robots: four misunderstandings in today’s debate on the legal status of AI systems,” Philosophical Transactions of the Royal Society A: www.rsta.royalsocietypublishing.org

Stankovic, Mirjana; Gupta, Ravi; Rossert, Bertrand Andre; Myers, Gordon I.; and Nicoli, Marco (2017). “Exploring Legal, Ethical and Policy Implications of Artificial Intelligence” (White Paper), LJD: https://www.researchgate.net/profile/Mirjana_Stankovic2/publication/320826467_Exploring_Legal_Ethical_and_Policy_Implications_of_Artificial_Intelligence/links/59fbf3d4a6fdcca1f2930ad3/Exploring-Legal-Ethical-and-Policy-Implications-of-Artificial-Intelligence.pdf

Organising the regulation of algorithms: comparative legal lessons

ABSTRACT. This contribution situates itself within the scholarship on the regulation of algorithms and machine learning. The literature has highlighted various challenges this technology poses to data protection law, including the adequacy of safeguards, the law’s material scope, and the range of rights at stake (e.g. Hartzog, 2017; Hildebrandt, 2015; Gellert, 2018; Wachter, 2019).

The present contribution is carried out within the ERC INFO-LEG project and focuses on the challenges surrounding the notion of personal data. As Purtova (2018) has convincingly demonstrated, there is a real risk that in a world of machine learning everything becomes information, and every piece of information becomes personal data, thereby transforming data protection law into “the law of everything”.

This contribution tries to address the issues associated with the ever-expanding scope of the legal notion of personal data. It does so by resorting to, and developing, the concept of the “organising notion”, which serves as a heuristic and conceptual tool for an exercise in comparative law. What does it mean to understand data protection as organised around the notion of personal data? Further, how is information defined and operationalised as an organising notion in other informational legal frameworks, such as cybersecurity? Are there any useful lessons to be learned as far as the regulation of machine learning from a data protection law perspective is concerned?

11:45-13:15 Session 15E: AI AND RESPONSIBILITY TRACK PANEL: Workshop on AI, Robotics and Legal Responsibility in the Age of Big Data
Location: Dz-1
PANEL: Workshop on AI, Robotics and Legal Responsibility in the Age of Big Data
PRESENTER: Ugo Pagallo

ABSTRACT. AI, Robotics and Big Data are intertwined and converging, and will drastically influence business models, legal institutions, social communities and facilities in the digital age. A collection of everyday physical smart systems equipped with microchips, sensors and wireless communications capabilities, connected to the internet and to each other, will receive, collect and send myriads of user data, track activities and interact with other devices, in order to provide more efficient services tailored to users’ needs and desires. The near future will bring us more complex, multi-task intelligent devices that use AI and predictive algorithms to make decisions while relying on external distributed data sources. As the scope of intelligent agents’ activities broadens, it is important to ensure that the designers, producers, manufacturers and/or end-users of such complex technological systems will be held legally responsible, and that these systems will not make irrelevant, counter-productive, harmful or even unlawful decisions. As the intensity and magnitude of this technological revolution are still not fully understood, the law may struggle to evolve quickly enough to address the challenges it raises. Setting a legal framework which ensures an adequate level of protection of personal data and the other individual rights involved, while providing an open and level playing field for businesses to develop innovative data-based services, is a challenging task. It requires examining how the relationship between human beings and digital technologies affects the role of legal responsibility and social accountability in the governance and regulation of AI, robotics and predictive algorithms. The research therefore concerns how the needs of data protection, business interests and social issues can best be accounted for by law. This research has to be explored from a multidisciplinary perspective ranging across law, economics, social science, computer science and robo-ethics.


Deadline: 1 November 2018. Conference: 15-17 May 2019, Tilburg University.

Organized by the University of Turin. Chair: Ugo Pagallo, University of Turin; Moderator: Massimo Durante, University of Turin.

Panel (70 minutes): Names of the speakers will be provided

Posters (20 minutes): Paola Aurucci, San Raffaele Hospital, Center for Advanced Technology in Health and Wellbeing, Milan; Jacopo Ciani Sciolla, University of Turin

11:45-13:15 Session 15F: BOTLEG PANEL 2: Public-private actions against botnets: the case of DDoS attacks
Location: DZ-8
PANEL Botleg II: Public-private actions against botnets: improving law and practice
PRESENTER: Bert-Jaap Koops

ABSTRACT.  Combatting botnets, which facilitate many forms of cyber-attacks, is a key challenge in cybersecurity. The classic crime-fighting approach of prosecuting perpetrators and confiscating crime tools fails here: botnets cannot be simply 'confiscated', and law-enforcement's reactive focus on prosecuting offenders is ill-suited to deal effectively with botnet threats. A wider set of anti-botnet strategies, including pro-active strategies and public-private co-operation, is needed to detect and dismantle botnets. Public-private anti-botnet operations, however, raise significant legal and regulatory questions: can data about (possibly) infected computers be shared among private parties and public authorities? How far can private and public actors go in anti-botnet activities? And how legitimate are public-private partnerships in which private actors partly take up the intrinsically public task of crime-fighting? 

In this panel – the second of two panels discussing findings of the NWO-funded BotLeg project “Public-private actions against botnets: establishing the legal boundaries”– we will discuss legal opportunities and legal obstacles for private actors to engage in botnet mitigation at different stages of the botnet lifecycle. We will zoom into the case of fighting DDoS attacks and discuss what they are, how they can be mitigated, especially with respect to the IoT-powered DDoS attacks, and discuss what actions law enforcement authorities are taking to address this issue. 

This panel features invited speakers from diverse backgrounds – and points of view – united in their conviction that mitigating botnets and DDoS attacks is a shared effort. The debate goes beyond the boundaries of criminal law and the confines of data protection to discuss the broader regulatory landscape. It revisits matters of intermediary liability in face of cybercrime and questions the norms of product liability in the age of the IoT.  

11.45-11.50 Bert-Jaap Koops      – Word of welcome

11.50-12.15 Karine e Silva          – Legal bottlenecks in botnet mitigation: a transatlantic overview

12.15-12.30 Jair Santanna          – DDoS attacks: what they are and how they can be mitigated

12.30-12.45 Cristian Hesselman – IoT-powered DDoS attacks and how they can be mitigated (e.g., SPIN)

12.45-13.00 Floor Jansen            – The police’s fight against DDoS attacks

13.00-13.15 Q&A and discussion


13:15-14:15 Lunch Break
14:15-15:00 Session 16: Keynote 6: Dr. Seda Guerses

Keynote: Justice and the Data Market

Location: DZ-2
Beyond Privacy? Protective Optimization Technologies

ABSTRACT. In the 90s, software engineering shifted from packaged software and PCs to services and clouds, enabling distributed architectures that incorporate real-time feedback from users. In the process, digital systems became layers of technologies metricized under the authority of objective functions. These functions drive the selection of software features, service integration, cloud usage, user interaction and growth, customer service, and environmental capture, among others. Whereas information systems focused on the storage, processing and transport of information and on organizing knowledge (with associated risks of surveillance), contemporary systems leverage the knowledge they gather not only to understand the world, but also to optimize it, seeking maximum extraction of economic value through the capture and manipulation of people's activities and environments. The ability of these optimization systems to treat the world not as a static place to be known, but as one to sense and co-create, poses social risks and harms such as social sorting, mass manipulation, asymmetrical concentration of resources, majority dominance, and minority erasure. In the vocabulary of optimization, these harms arise from choosing inadequate objective functions. In this talk, I will provide an account of what we mean by optimization systems, detail their externalities, and make a proposition for Protective Optimization Technologies.

15:00-16:30 Session 17A: DP TRACK: Risk-based approach and impact assessment
Location: DZ-3
Capturing licence plates: police participation apps from an EU data protection perspective

ABSTRACT. In October 2017 a Pokémon Go-like smartphone app called ‘Automon’ was revealed as one of several new initiatives to increase the public’s contribution and engagement in police investigations in the Netherlands. Automon is designed in the form of a game that encourages participants to photograph license plates to find out whether a vehicle is stolen. Participants in the game score points for each license plate photographed, and if a vehicle is indeed stolen they may also qualify for a financial reward. In addition, when someone reports that a vehicle has recently been stolen, game participants in the vicinity receive a push notification and are tasked with searching for that specific vehicle and license plate.

This paper studies the example of the Automon app and contributes, from a legal point of view, to the existing debate on crowdsourced surveillance and the involvement of individuals in law enforcement activities. It analyses for the first time the lawfulness of initiatives that proactively require the involvement of individuals in law enforcement activities, and confronts them with the data protection standards of the European Union (EU). It concludes that the Automon app design fails to comply with the new standards and that any new legal intervention to regulate the field must be introduced at EU level.

Human rights in personal data processing: An analysis of the French and UK approach

ABSTRACT. The current technological and social scenario, characterised by the presence of increasingly complex innovations and data-intensive systems, forces us to reflect on how to address data protection in this state of transition towards a society increasingly shaped by automated data processing. This urges developers and policy makers to develop risk analysis and risk management models that go beyond the traditional focus on data quality and security, and take into account the impact of data processing on human rights and fundamental freedoms. The main challenge in the design of these broader assessment models concerns the outline of a general paradigm of values to be used as a benchmark in the assessment process. From this perspective, the main goal of this research paper is to determine whether, and to what extent, data protection authorities take into account human rights at large, both in their decisions and in the guidelines they provide. In carrying out this analysis, the paper focuses on the approach adopted by the French and UK data protection authorities. These authorities show two different ways of addressing these issues, since the French authority is mainly centred on case law and the ICO on general guidance. Although the documents adopted differ in nature, which affects the extent and elaboration of their references to human rights, they show a plurality of rights and freedoms – other than the right to privacy and the right to data protection – taken into account by data protection authorities. This result therefore confirms the need to develop a broader impact assessment model which considers all the human rights and fundamental freedoms likely to suffer prejudice in the context of a given processing of personal data.

Detecting new approaches for a Fundamental Rights Impact Assessment to Automated Decision-Making
  • Article 35(3)(a) of the General Data Protection Regulation (GDPR) obliges a controller to carry out a Data Protection Impact Assessment (‘DPIA’) in the event of automated decision-making (‘ADM’). An assessment of potential fundamental rights impacts is part of the DPIA. 
  • Companies see great and promising profits in ADM. However, research among companies indicates that legal uncertainty exists as regards the interpretation of the GDPR, including the provisions relevant to ADM and to the execution of a DPIA.
  • DPIAs have not yet been broadly applied in practice. The objective of the author is to detect a way forward where a company intends to use ADM, by offering a practical tool including a sliding scale of potential impacts and corresponding mitigating measures, so as to contribute to compliance with the GDPR and fundamental rights.
  • The impact assessment is based on four benchmarks: i) establishing the fundamental right(s) at stake, ii) detecting risks at the data capture and data analytics stages, iii) establishing who benefits from the use of personal data in the ADM, and iv) establishing who is in control of the data sharing during and after the ADM.
  • By responding to these benchmarks a controller would develop a fundamental rights impact assessment. The objective is to identify the risk level, indicating the type of measures an accountable controller should consider so as to achieve appropriate risk management. The proposed approach should help foster compliant, fair, transparent and trustworthy ADM.
15:00-16:30 Session 17B: DIGITAL CLEARINGHOUSE TRACK: Personalisation, microtargeting and online content monitoring
Location: DZ-4
The Regulation of Online Political Microtargeting in Europe

ABSTRACT. This paper examines how political microtargeting is regulated in Europe, and the strengths and weaknesses of that regulation. The paper examines the question from three perspectives, namely data protection law, freedom of expression, and sector-specific rules for political advertising. The paper analyses the interplay of the different legal regimes, and assesses whether these regimes leave important issues unaddressed. We also explore whether regulation should be amended to mitigate the risks of microtargeting.

From parallel tracks to overlapping layers: GDPR and e-Commerce Directive towards convergence of legal regimes

ABSTRACT. Legal regimes are increasingly overlapping in the information society. The development of new technologies has encouraged the emergence of new businesses challenging the horizontal and vertical separation between different markets. Focusing on the framework of the Digital Single Market, one can observe the challenging convergence of the system of the e-Commerce Directive and that of data protection. From originally parallel tracks, the two systems will likely overlap, raising new issues and questions about their relationship.

If one wonders where this story started, the answer probably lies in the EU legal framework at the beginning of this century. Indeed, Directive 2000/31/EC (the “e-Commerce Directive”) expressly clarified that its scope of application does not include questions relating to information society services covered by Directives 95/46/EC and 97/66/EC. This system applied until the entry into force of Regulation (EU) 2016/679, also known as the General Data Protection Regulation (“GDPR”). The GDPR has not only reviewed the EU data protection legal framework, ensuring a high degree of uniformity between Member States’ legislation, but has also eliminated the traditional separation between the system of the e-Commerce Directive and that of the Data Protection Directive. The GDPR clarifies that its application should not affect the rules provided for by the e-Commerce Directive, in particular those regarding ISP liability.

As a result of this potential overlap, one of the main issues concerns the extension of the “safe harbour” regime to third-party conduct violating data protection rules. This extension would likely encourage online intermediaries to check and monitor this kind of online content as well, in order to avoid liability arising from their awareness of it, with potential chilling effects on freedom of expression. Although the GDPR would likely extend the possibility of applying the safe harbour exemption to the field of data protection, the limits to the scope of the e-Commerce Directive remain in force. Moreover, the safe harbour regime would apply only to third-party content. Indeed, the extension of this regime should not be read as an exemption from liability for the unlawful processing of personal data performed directly by online intermediaries.

Another issue to take into consideration is the increasing control which online intermediaries exercise over their online spaces, a trend driven mainly by the evolution of the algorithmic society. Since algorithms allow hosting providers to play a more active role in processing data and performing online content management activities, the safe harbour extension to third-party data protection violations risks blurring the notions of ‘data controller’ and ‘Internet service provider’, affecting the application of the rules in the fields of data processing and ISP liability. The well-known Google Spain case highlighted this issue even before the adoption of the GDPR. In this scenario, the main question is whether the increasing use of algorithms and AI technologies will transform online intermediaries into data controllers due to their active participation in processing activities.

From an EU perspective, this work underlines the main challenges deriving from this new scenario in which the data protection and e-Commerce Directive regimes are converging. Moreover, it proposes solutions highlighting the benefits deriving from the convergence of legal systems.

Personalised pricing and EU law

ABSTRACT. According to the OECD (2018:6), price personalisation is the ‘practice of price discriminating final consumers based on their personal characteristics and conduct, resulting in each consumer being charged a price that is function – but not necessarily equal – to his or her willingness to pay’. With the development of big data and algorithmic pricing, personalised prices are a growing policy concern, and it is thus timely to analyse how EU law regulates the issue. Our paper aims to review the main EU rules applicable to personalised pricing, their conditions of application and their effects, and to make policy recommendations to increase the consistency and effectiveness of those rules.

The first section of the paper briefly introduces the issue by situating price discrimination within the different degrees of discrimination developed in economic theory, and by analysing its pervasiveness and economic impact.

The second section of the paper deals with the transparency rules applicable to personalised prices. It examines EU consumer protection rules, in particular the Directive on unfair commercial practices, the Directive on misleading and comparative advertising and the Directive on consumer rights, and analyses under which conditions those Directives regulate personalised prices and with which effects. The section then turns to the General Data Protection Regulation and how it may apply when the personalisation of prices relies on personal data.

The third section of the paper deals with prohibition rules. It first addresses the anti-discrimination rules and explains the limits on the criteria that may be used to personalise prices. It then turns to competition law and explains under which circumstances personalised prices can be prohibited as an exploitative abuse.

Finally, the fourth section of the paper concludes with an evaluation of the consistency and effectiveness of EU rules regarding personalised prices and, on that basis, makes some policy recommendations. This section shows that the different rules can act as substitutes, in particular the consumer protection and data protection rules. However, we show that the rules are mainly complementary and hence reinforce each other; in particular, the transparency rules increase the effectiveness of the anti-discrimination rules. It is therefore key that all EU rules are enforced in a coherent manner. This requires that the different enforcement authorities (such as consumer protection agencies, data protection authorities and antitrust agencies) cooperate closely at the national and EU levels, as proposed in the Digital Clearing House. This section also recommends, in the case of personalised prices, the introduction of an obligation to explain to the consumer how the price is determined and the main parameters used in its determination.

15:00-16:30 Session 17C: IP TRACK PANEL: To Decentralize Everything, Hash Here: Blockchains, Intellectual Property and Data Protection
Location: Dz-1
PANEL: To Decentralize Everything, Hash Here: Blockchains and Information Law
PRESENTER: Balazs Bodo

ABSTRACT. Blockchain is the latest technological hype to promise disruptive decentralization, especially as it relates to information goods and services. While one of the goals of decentralized technologies is to operate without the need to adhere to existing legal systems, blockchains face challenges of compatibility with these systems so as to facilitate wider adoption. This panel examines these challenges from the perspective of information law, focusing on their treatment under the legal regimes of copyright, data protection and competition.

15:00-16:30 Session 17E: AI AND RESPONSIBILITY TRACK PANEL: Explaining Responsibly
Location: DZ-8
PANEL : Explaining Responsibly
PRESENTER: Aviva de Groot

ABSTRACT. Panel proposal for track "AI, Robotics and Responsibility"

The interdisciplinary scholarly discourse on 'the right to explanation' of AI-infused decision making processes goes beyond the GDPR's sphere of application, as it addresses understandability needs that are recognized on a global scale. Processes of analyzing, profiling, and predicting human behaviour support decisions in all sectors of society, from credit scoring to fraud detection to decisions on who to hire - or even arrest.

The increasing complexity of the technologies used, industry intricacies, and network effects all add to the inscrutability of these applications and the challenges of assessing them, at individual as well as societal levels. The decreasing awareness of ubiquitous automation processes running in the background of people's lives raises additional concerns. It is increasingly noted that issues of obscurity cannot be 'explained away', or explained at all at an individual level. While most agree there is a pressing need to make these systems safe, fair, and 'democratically understandable', there seems to be, at least temporarily, some competition between those who argue for scrutability at higher levels and those researching individual explanatory potential.

In the meantime, in theory and in practice, different approaches and methodologies towards 'explainable AI 2.0' are being designed and tested. The GDPR functions as a catalyst, as controllers already need to comply with requirements for explainability. Explanations should be understandable and meaningful. The latter term is precisely what triggers the above-mentioned competition, as it is far from self-evident what a ‘meaningful’ explanation is. What counts as an honest, time-stamped translation of a complex and dynamic computational process? Who gets to decide what that is? Can explanations be misused to obfuscate abuse of power?

In the absence of commonly understood and accepted evaluative standards, it is hard to assess the benefits, usefulness and pitfalls of these developing explanatory methodologies. This conundrum might prompt us to stop talking about 'responsible explanations' and instead speak of 'explaining responsibly'. As a field of research, it needs to be interdisciplinary: law, philosophy, data science, the cognitive sciences, STS and the humanities each have valuable theory and experience to bring to the table.

This panel provides such a table, and aims to start the discussion in acknowledgment of the seemingly irreconcilable, acute needs for both individual explanations and high level governance strategies.

Confirmed panelists: Reuben Binns, Michael Veale, Martijn van Otterlo, Rune Nyrop. The panel will be presented and chaired, and the discussion hosted, by Aviva de Groot, Sascha van Schendel and Emre Bayamlıoğlu.

15:00-16:30 Session 17F: AI AND RESPONSIBILITY TRACK: Addressing responsibility concerns about AI through the practice approach
Location: DZ-5
PANEL: Addressing responsibility concerns about AI through the practice approach
PRESENTER: Merel Noorman

ABSTRACT. The current momentum in the development of AI has revived debates about the loss of control over these technologies and obfuscation of human responsibility. Increasingly opaque, networked and autonomous technologies have led some to suggest that our existing ways of distributing responsibility, including for example determining legal liability, will soon no longer suffice. How can we evaluate such a claim? And if it is indeed the case that the established ways of distributing responsibility no longer suffice, how can we address this problem?

One way of addressing the problem is by looking at responsibility as a set of social practices. These practices involve the accepted ways of evaluating actions, holding others to account, blaming or praising, and conveying expectations about obligations and duties. They can be forward- and backward-looking and pertain to various kinds of responsibility, such as accountability, role responsibility, legal responsibility, and moral responsibility. Conceiving of responsibility as a set of practices places the focus on the shared understanding of what is expected, what is owed, and what the likely consequences of failure are within a sociotechnical network. It draws attention to the formal and informal mechanisms and strategies used to ascribe responsibility, such as the laws, policies, procedures, organizational rules, and social norms that promulgate and enforce responsibility.

The concept of responsibility practices raises questions that require descriptive and normative analyses: how do the discourses on AI and AI technologies come into conflict with established responsibility practices, and how is responsibility understood within these practices? What is it about these technologies that makes the application of particular laws, protocols or norms problematic? Where and how do people (re)negotiate how responsibility is understood or how it is ascribed? In what way do these negotiations affect the design of the technology? And, not least, how can we intervene in negotiations about the distribution of responsibility to ensure that human beings will continue to be responsible for the behavior of AI technologies?

During this panel we will discuss how this practices approach can help us to think about responsibility concerns raised about AI.