TILTING2019: TILTING PERSPECTIVES 2019 – REGULATING A WORLD IN TRANSITION
PROGRAM FOR THURSDAY, MAY 16TH

09:00-10:30 Session 7A: DP TRACK: Future and alternatives for data protection
Location: DZ-3
09:00
Diagnosis: Regulatory Disconnection - time to reorganize data protection law?

ABSTRACT. In this paper, we make the case for the necessity of fundamentally reorganizing data protection law, where the reorganization has to be of a foundational character and concern the very boundary concepts around which the law is constructed. We then show one possible approach to redesigning data protection around new boundary concepts. We start by explaining the need for such fundamental redesign. We argue that the impasse of data protection law, caused among other things by the broad material and personal scope of data protection law as well as the highly intensive data protection obligations, constitutes what Brownsword calls a ‘regulatory disconnect’: the technology as viewed by the regulation does not match the present state of technology or its uses. When new data technologies are perceived as posing risks, the automatic reaction is to fit them under the umbrella of data protection, since the current main organizing notion of ‘personal data’ implies a binary application of the law. A consequence of this has been the extension of ‘personal data’ in an attempt to re-establish regulatory connection. It is argued that the concept of ‘personal data’ is failing to establish legal boundaries, leading to data protection regulating everything. Even if this is not the case, the fact remains that the boundaries of this legal domain, as well as legal certainty for the actors involved, are threatened. Data protection regulation is said to be technology neutral, and diagnosing the issue in terms of regulatory disconnect may seem counter-intuitive. Yet technology and its applications seem to stretch data protection law to the limit: a widening range of situations constitutes processing of personal data, and all data might soon be considered personal. Similarly, the roles and responsibilities within data protection law are still based on scenarios in which it is relatively easy to distinguish between a controller and a processor, while recent case law shows otherwise (e.g. Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH). Whenever regulatory disconnect takes place, interpreters face a choice between re-connecting the law through purposive interpretation and leaving the issue to the regulators. Until now, the mismatch in data protection has been addressed through both of these approaches: the Court of Justice has developed case law on the material and personal scope of data protection (e.g. Nowak, Google Spain), and the Data Protection Directive has been updated to the General Data Protection Regulation. Yet the central concepts around which legal protection is organized (e.g. personal data, controller) remain the same, subject to purposive interpretation by the courts. Hence, the regulatory disconnect has been addressed by ‘more-of-the-same’ law, with additional and more effective obligations and a broadened scope. We will demonstrate that a more far-reaching intervention by the regulators is thus necessary. Finally, we will argue that while many approaches to such reorganization are possible, one possible strategy would be to look at how legal protection against harms is organized in other areas of law, to build a model of such protection, including a high-level overview of the boundary concepts used in other domains of law, and to use this model and these boundary concepts for the redesign of legal protection against data-driven harms. This part of the paper will formulate a research agenda for the search for new organizing notions for future-proof protection of people against data-processing harms.

09:30
Trust and the Smell of Data: a Postphenomenological Experiment
PRESENTER: Esther Keymolen

ABSTRACT. Increasingly, everyday life is being moulded by smart and networked artefacts. We connect with friends through social media platforms, we fetch our news on Twitter, we find our way in the city by following the instructions of navigation apps. Generally, these technologies are designed in such a way that they are easy to use, perform excellently, and seamlessly fit our daily activities. No manual needed. All in all, their “ready-at-hand-ness” makes them seemingly trustworthy. However, from research we also know that the convenience and usefulness we experience with our smart devices is only half of the story. Behind the interface, and therefore beyond our first-hand phenomenological experience, data is being leaked and shared in a complex network of actants. This makes users of networked technology vulnerable in different ways. They may lose their (informational) privacy and consequently run the risk of being manipulated and having their action space curtailed. Current strategies to make users more aware of their data-vulnerability are to inform them via terms and conditions and privacy policies (text-based) or to make use of icons and other visible cues (icon-based). These strategies have in common that they appeal to the visual perception of end-users and require, to a certain extent, the rational and conscious processing of information. Among other problems, these strategies bring information overload and information fatigue, negatively impacting their success. This paper brings together insights from postphenomenology and experimental psychology to explore other bodily perception strategies to raise data-awareness. We will refer to these strategies as mediation tactics. The theoretical starting point is that implicit and tacit knowledge is already present in our experience of the world, guiding our interactions. Smell is one facet of this tacit knowledge. Smell informs us on a very instinctive level that something is potentially dangerous (e.g. the smell of decomposing bodies or of decayed food). Because of its strong warning function, smell has also been artificially added (for example, to gas) in order to warn people. In collaboration with the Smell of Data project of artist and designer Leanne Wijnsma, we have conducted a small-scale experiment to see whether adding a warning smell to networked artefacts that leak data might change the trust perception of users. The hypotheses are that (1) perceptions of trust are shaped by the technologies at hand, and (2) if we alter the design of the technology, the perception of trust will change as well. The concept of multistability (the openness of artefacts to incorporate different meanings/identities) will be the starting point of this experiment. The aim of this research is twofold: we want to test whether changing the design of the interface by adding smell indeed has an effect on the trust perception of users (empirical stance), and we want to use these results to flesh out key concepts in postphenomenology, in particular the interaction between micro and macro perceptions (conceptual stance). The first findings of this research will be presented.

10:00
Data protection through work floor democracy: The German and Dutch example

ABSTRACT. This presentation examines the role of collective labour law in the protection of personal data of employees gathered by employers in the context of work activities, specifically the role of works councils in this process. Works councils are institutionalized representative organs that are meant to protect employee interests at the company level. This paper will focus on the German and Dutch iterations of the works council: the ‘Betriebsrat’ and the ‘ondernemingsraad’. In these countries, such a work floor democracy needs to give its consent prior to technological work floor surveillance. This paper will examine what these rights entail. With work floor surveillance becoming ever more present, the need for fast, flexible, yet effective regulation grows more relevant. With the debate swinging from self-regulation to national or supranational laws, the works council represents a possible alternative option.

09:00-10:30 Session 7B: DP PANEL: Workshop: The Practical Implementation of the EU Data Strategy
Location: DZ-4
09:00
Workshop: The Practical Implementation of the EU Data Strategy

ABSTRACT. The EU has over the past years created a data strategy in the form of a number of related legal instruments. The adoption of the GDPR has had great impact, but the PSD2 Directive and the NIS Directive are also of significant influence. In the short term, the set will be completed by the new ePrivacy Regulation and the Regulation on the free flow of non-personal data. Once complete, the legal framework will cover privacy and data protection in general, data protection in online contexts and marketing, reuse of financial data, data in critical infrastructures (including cloud services), and non-personal data. The different directives and regulations show that the EU is trying to facilitate, or stimulate, economic benefits from data, whilst protecting the privacy rights of individuals and ensuring the security of networks. So the EU has an overarching strategy, or vision, on the value and benefits of data use. The broad scope also implies that a range of professionals is needed for a proper implementation. Not only lawyers, but also information security specialists, application developers, and the users of applications and systems in all parts of an organization have to be involved. Nevertheless, it seems that in practice interdisciplinary collaboration is still not the default approach. Depending on the main addressees of the different legal instruments, either the legal department, the IT department, or, for instance, the marketing department is designated to take care of the implementation. Common distinctions between departments, and in particular the traditional distance between lawyers and techies, remain in place and may even be strengthened. The question that arises is how to counter this phenomenon and how to ensure that value is created in an effective, respectful, and economically valuable manner. The proposal for this workshop is to discuss the different possible approaches towards organizing compliance. Our position is that more attention should be paid to the EU data strategy in a broad sense, stressing the commonalities and connections between the different legal instruments as adopted and prepared by the EU. This will result in the embedding of strategic viewpoints on data in the general strategy of organizations. In addition, for data-intensive organizations, this should be translated into a specific data strategy. Ultimately, an overall strategy can be formulated based on a number of ethical viewpoints that are inspired by the general principles that form the foundations of the EU legislative documents. These principles can be translated into organization-specific norms that direct the do’s and don’ts of the organization as a whole and of the different departments in particular. The aim of the workshop is to bring together stakeholders and representatives from commercial companies, policy, and academia to discuss the approach set out above. Participation of lawyers, technologists, and policy makers is encouraged.

The workshop will be led by Arnold Roosendaal (PhD, LLM) and Nitesh Bharosa (PhD). Arnold is director at Privacy Company and Smart Data Company and has extensive experience in advising companies, governments, and non-profit organizations on implementing EU legislation in their business and strategies. Nitesh is head of research & development at Smart Data Company and Senior Research Fellow at Delft University of Technology. He has worked on multiple large-scale data exchange initiatives and has advised several organisations on how to develop and execute an agile data governance strategy.

09:00-10:30 Session 7C: IP TRACK: Data, Platforms and IP rights
Location: DZ-6
09:00
Towards an optimal regulation for innovative markets? An example of data sharing under PSD2

ABSTRACT. Almost invisible yet crucial for the proper functioning of the economy, retail payment systems are undergoing a rapid process of technological change driven by innovation in ICT. Despite promises of further economic growth through spurring e- and m-commerce, the innovative services based on new technologies pose novel security risks and raise questions about consumer protection. Unsurprisingly, personal data is omnipresent, both in innovation and in regulatory policy debates addressing the ongoing processes of change. Crucial as a resource for further innovation, its capacity to give a decisive competitive advantage brought it into the limelight of the revised regulatory framework under Directive (EU) 2015/2366 (PSD2). The revised payment services directive, enacted to tackle regulatory gaps and legal uncertainty resulting from the innovation-driven changes in the payment landscape, imposes an obligation of data sharing on market incumbents vis-à-vis innovative newcomers. Commission Delegated Regulation (EU) 2018/389 (RTS SCA) elaborates the technical and operational details of data sharing – the core component of what became known as ‘open banking’. With less than one year until the implementation of the core provisions of the regulation, the adopted framework raises a number of concerns over its potential impact on competition and further innovation in the sector.

The paper examines the regulatory framework of the EU retail payments industry as an innovation policy instrument containing elements of both supply- and demand-side innovation policy measures. The paper analyses the assumptions underlying the choice and formulation of regulatory objectives and scrutinises the mechanisms employed to achieve a balance between the often competing objectives of PSD2. The systematic approach undertaken in the paper, combining the analysis of the new rules and institutional architecture from the regulatory theory perspective with insights from the innovation theory literature, contributes to a wider debate on how to optimise the regulation of markets for innovation.

09:30
The Interface between Big Data & Intellectual Property

ABSTRACT. This paper reviews the impact on the intellectual property system of Artificial Intelligence tools used in connection with ‘Big Data’, mostly in the form of Text and Data Mining (TDM), and vice versa. The paper discusses the obstacles that copyright protection of material included in big data datasets might pose, the role of limitations and exceptions (L&Es) for TDM, and the compatibility of such L&Es with the three-step test. The paper then considers the impact of AI/TDM on patentability and whether parts of AI/TDM systems may themselves be patentable. The paper then turns to the data exclusivity right in pharmaceutical and chemical data (such as clinical trial datasets) and ends with a discussion of the application of trade secret and confidential information law to AI/TDM.

10:00
European Regulation of Mobile Platforms
PRESENTER: Ronan Fahy

ABSTRACT. This paper examines how two proposed pieces of EU legislation, the ePrivacy Regulation and the Regulation on Platform Fairness, will apply to smartphone ecosystems. In particular, the paper analyses the potential tension points between the proposed rules and the underlying policy objectives of safeguarding privacy in electronic communications and the functioning of the digital economy in the emerging era of platform governance. We conclude with recommendations on how to address these issues in the respective regulations.

09:00-10:30 Session 7D: JUSTICE AND DATAMARKET TRACK: Social effect of brokerage
Location: DZ-7
09:00
Profiling and Chilling Effects: Exploring the Behavioral Mechanisms
PRESENTER: Shruthi Velidi

ABSTRACT. Problem Description: In recent years, the number of digital traces that individuals leave has grown exponentially. Given the rapid adoption of digital services and applications across all domains of life, from health and business to politics, education and leisure, our lives have become increasingly datafied (Van Dijck, 2014). The digital traces we leave include both voluntary data from online participation, for example through likes and posts on social media, and involuntary data generated as a by-product of our online activities (Micheli, Lutz, & Büchi, 2018). The latter can also encompass sophisticated metadata, for example about the time of access and location of the user as well as about the device used.

Taken together, personal data and metadata create an extremely detailed picture or profile of a person. This is especially the case when data from different sources are combined, for example when Google buys purchasing records from Mastercard to measure the success of search advertising (Bergen & Surane, 2018). Such data is increasingly used to make predictions about individuals, for example about their buying behavior, their creditworthiness (i.e., the likelihood that someone will default on a loan), their job performance (e.g., AI systems such as HireVue that assess candidate videos based on language and facial cues) and their criminal activity (e.g., the COMPAS recidivism algorithm). This targeted aggregation of data from multiple sources associated with an individual is termed profiling.

The General Data Protection Regulation (GDPR) defines profiling as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.” According to the GDPR, the data subject should be informed of such profiling and should receive meaningful information about the logic involved, as well as the significance and the envisaged consequences of such profiling.

However, how can data subjects be informed about the extent and nature of the potential consequences of profiling if they are currently unaware of these practices? While recent research on profiling has focused on opaqueness and the lack of oversight of the “logic involved” (Citron & Pasquale, 2014), and on the existence of a right to explanation of how algorithms and systems work (Wachter, Mittelstadt, & Floridi, 2017), an important aspect that has received less attention is the “envisaged consequences” of such profiling. Indeed, while governments and private actors increasingly mine data to maximize efficiency and accuracy, such efforts might lead to invasive and discriminatory practices, as they are typically built on biased datasets (Noble, 2018), and these require further exploration (Custers, Calders, Zarsky & Schermer, 2013).

Content and Methods of our Contribution: In this article, we build upon recent studies of the legal consequences of profiling (Custers, Calders, Zarsky & Schermer, 2013) by investigating the potential chilling effects of autonomous profiling systems from a multi-disciplinary perspective. Our goal is to scrutinize the issue of profiling with respect to pathways for chilling effects and conformity. After introducing the topic of profiling, we identify and discuss what chilling effects these systems might produce. Typically, profiling is linked to discrimination (e.g. racial profiling) and manipulation (e.g. voter manipulation), but it is also linked to lesser-known aspects such as homophily (or the so-called ‘filter bubble’) as well as the consequences of the procrustean design of such systems.

For the full article, we carry out an in-depth multi-disciplinary review of the relevant literature. The literature review includes an overview of the key theories in privacy and surveillance studies that relate to profiling as well as a summary of important empirical insights. We then connect profiling practices with chilling effects and elaborate on why profiling might have chilling effects that differ from other forms of surveillance, especially due to its predictive aim.

The literature review serves to develop hypotheses on the chilling effects of profiling practices as well as on more traditional surveillance technology (social media monitoring). We propose to test these hypotheses using an experimental vignette study in different application areas of profiling such as the job market and credit checks (e.g., for a new rental contract or mortgage). In each experimental vignette study, we will manipulate the type of surveillance (non-profiling surveillance vs. profiling; or non-profiling surveillance vs. simple profiling vs. complex profiling) and test the effect on intended behavioral change in the sense of chilling or alteration of online communication, interactions, and activities.

Our conclusions will report the results of the empirical analysis and contextualize our findings within the broader literature of data protection, privacy, and surveillance. This will include recommendations on how chilling effects that are associated with profiling technology could be reduced and how data controllers can provide meaningful information about the envisaged consequences of their profiling activities.

References: Bergen, M., & Surane, J. (2018, August 30). Google and Mastercard cut a secret ad deal to track retail sales. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2018-08-30/google-and-mastercard-cut-a-secret-ad-deal-to-track-retail-sales

Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33.

Custers, B., Calders, T., Zarsky, T., & Schermer, B. (2013). The way forward. In Discrimination and Privacy in the Information Society (pp. 341-357). Berlin, Heidelberg, DE: Springer.

Micheli, M., Lutz, C., & Büchi, M. (2018). Digital footprints: an emerging dimension of digital inequality. Journal of Information, Communication and Ethics in Society, online first. doi: 10.1108/JICES-02-2018-0014

Noble, S. (2018). Algorithms of oppression. New York, NY: NYU Press.

Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197-208.

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.

09:30
Nudging the sharing of data in a fair way
PRESENTER: Michele Loi

ABSTRACT. Privacy self-management that confers absolute control on the data subject leads to cognitive overload in most people. In order to promote autonomy, it must be focused and choices have to be simplified. This is a conceptual contribution to Taylor’s unified framework for data justice (Taylor 2017). We focus on two questions: 1. What nudges (default rules, attention-enhancing or attention-selecting features, etc.) are necessary in a fair choice architecture for data transactions? 2. What nudges are impermissible in a fair choice architecture?

09:00-10:30 Session 7E: AI AND RESPONSIBILITY TRACK: AI, responsibility and the GDPR
Location: DZ-8
09:00
Right to Explanation and Algorithmic Transparency in EU Member States’ Legislation and Beyond

ABSTRACT. The aim of this intervention is to analyze the very recently approved national laws of the Member States that have implemented the GDPR in the field of automated decision-making (prohibition, exceptions, safeguards). All national legislation has been analyzed; in particular, nine Member States’ laws address the case of automated decision-making, providing specific exemptions and relevant safeguards, as envisaged by Article 22(2)(b) of the GDPR (Belgium, the Netherlands, France, Germany, Hungary, Slovenia, Austria, the United Kingdom, Ireland). The approaches are very diverse: the scope of the provision can be narrow (just automated decisions producing legal or similarly significant effects) or wide (any decision with detrimental impact), and even the specific safeguards proposed are very diverse. One relevant point is to understand whether the national extension of the scope of Article 22 is compatible with the EU treaties. However, just a few states guarantee a right to legibility/explanation of algorithmic decisions (France and Hungary), even considering the accountability of algorithms; other states (Ireland and the United Kingdom) emphasize the importance of a transparent and effective mechanism for contesting automated decisions (e.g. notification, explanation of why a contestation has not been accepted, etc.); some other states provide just the three safeguards mentioned in Article 22(3): the subject’s right to express his or her point of view, the right to obtain human intervention, and the right to contest the decision. Interestingly, in some Member States, when automated decisions are taken by public authorities the law provides wider requirements: e.g., French law lists stricter conditions to respect, in particular in terms of the use of sensitive data, accountability for the legibility of algorithmic decisions, and respect for administrative procedures. The difference between public and private automated decision-making needs to be taken seriously into account, even at the European level. References for further research are the modernized version of CoE Convention 108 on algorithmic decisions and non-EU regulation of automated decisions, e.g. the Brazilian data protection law and the Californian data protection law.

09:30
Problematising ‘Algorithmic Accountability’: Why Shift the Analysis from Function to Form?

ABSTRACT. The paper I propose seeks to problematize the idea of ‘algorithmic accountability’ through legal rights in the context of pre-emptive machine learning algorithms. Starting from the assumption that, at its core, ‘accountability’ is a relation between social subjects, the paper proceeds by problematizing the subject of such accountability, both in terms of the subject for whom accountability is sought (the relevant algorithmic subject) and the subject through whom accountability is sought (the relevant legal rights subject). The contextual focus of the paper lies specifically on pre-emptive machine learning algorithms and the rights-based regulatory regime of the GDPR.

I develop this problematisation through a comparative approach by outlining, in the first part of the paper, the sites of discourse (literature, resistances) which question the accountability of pre-emptive machine learning algorithms on the one hand and of legal rights on the other. Rather than working with a pre-defined notion of accountability, I seek to discern through such a comparison what precisely these discourses mean when they assert that pre-emptive machine learning algorithms or modern legal rights are unaccountable. This obviously speaks to unaccountability for power. But what sort of power, and how?

To make sense of this, in the second part of the paper I use Michel Foucault’s understanding of power as governmentality to illustrate that machine learning algorithms and legal rights exercise power not merely by enforcing certain outcomes, but also through the generation of knowledge(s) about the algorithmic or rights subject. Such knowledge(s) in fact produce the rights subject through the exercise of the power technique of ‘subjectivation’. However, such subjectivity remains deeply contested, and it is this power to ‘subjectivate’ that the discourses asserting the unaccountability of pre-emptive machine learning algorithms or rights put in doubt. Using these insights on power and subjectivity, the paper argues that ‘accountability’ (what it means or should mean, and how it should be operationalised) cannot be responsibly gleaned without a deep dive into how subjectivation operates in the context of algorithms and through modern legal rights. I use this as an opportunity to focus on the GDPR and identify the rights subjects therein.

The last part of the paper concludes by making a case for the study of subjectivation in both pre-emptive machine learning (P-EML) algorithms and GDPR rights through an interrogation of the form/structure of both P-EML algorithms and modern rights. The argument here is for a shift in the analysis of algorithmic accountability from mere issues of the functional legitimacy of P-EML algorithms and legal rights under the GDPR to a focus on the particular form of these algorithms and rights which enables such functioning.

09:00-10:30 Session 7F: GENERAL PANEL: Technology Law as a Coherent Field?
Location: DZ-1
09:00
PANEL: Technology Law as a Coherent Field?
PRESENTER: Michael Guihot

ABSTRACT. There is an expanding literature that seeks to explore the relationship between technology, law and regulation. Those approaching the area for the first time can quickly become lost in a thicket of different new technologies, applications of those technologies, the benefits, threats and risks associated with each technology, and the plethora of regulatory approaches to balancing those risks and benefits. Perhaps because of this, some have argued that technology law lacks coherence; that is, that it doesn’t ‘hang together’, or that it lacks a professional consensus on a coherent internal organization of materials. Other areas of law that are now more formally recognized, such as Health Law and Environmental Law, suffered the same existential misgivings in their early development. Is it time for Technology Law to similarly join the canon of law subjects?

This panel will begin the discussions through which we might achieve a professional consensus on a coherent organization of the parts of Technology Law. Those discussions must address the appropriateness and timeliness of such a classification process and, if technology law does cohere, the thread along which it might do so. Other questions might include: what organizing classification system might best encapsulate the seemingly disparate topics under the rubric of Technology Law? Would another rubric (e.g., ICT/Internet Law, Information Law, or Technology Regulation) be better suited to bring coherence? What further research needs to be carried out in order to map the limits of the field?

10:30-11:00 Coffee Break
11:00-11:45 Session 8: Keynote 2: Prof. Lee Bygrave

Keynote Data Protection

Location: DZ-2
11:00
DP:=PDF

ABSTRACT. In this keynote address Lee revisits basic tenets of European data protection law as they have evolved over the last 50 years. He argues that the latest iteration of European data protection law revolves predominantly around three normative prongs, abbreviated as ‘p’, ‘d’, and ‘f’. He further argues that this pdf-triad (the meaning of which will be revealed in the address) is not radically different from the normative thrust of previous iterations of data protection law, but is the result of a gradual, largely harmonious evolution of norms. Nonetheless, particular elements of the triad were only faintly visible in past decades. Lee also considers the utility of the triad in ensuring that technological-organisational developments do not deleteriously affect human rights and freedoms. He suggests that while the triad cannot, in practice, adequately meet all of the myriad challenges thrown up by technological-organisational change, it has a flexibility that such change, at least in principle, cannot easily ‘outrun’. And, in the hands of activist data protection authorities or, perhaps more importantly, in the hands of a judiciary that is sympathetic to the safeguarding of fundamental human rights, the triad is a potentially powerful regulatory tool, both now and for the foreseeable future.

11:45-13:15 Session 9A: DP TRACK: Data protection and privacy in law enforcement
Location: DZ-3
11:45
Radical Visibility

ABSTRACT. Police departments in the United States — and elsewhere — have been quietly outfitting their officers with body-worn cameras for a number of years, partly in response to a host of image management problems generated by the rise of citizen journalism and the recent surge in media attention to police-involved violence. The mass adoption of body cameras is a specific manifestation of a larger phenomenon in which police departments are driving and shaping local criminal justice, surveillance, and information policies in the absence of much directly applicable legal regulation of police surveillance in the United States — a phenomenon sometimes called policymaking by procurement. However, unlike many police surveillance technologies, national civil liberties organizations have also supported body-worn camera adoption as a means to protect communities from police misconduct—or at least to document aberrant police behavior. Thus, the media, the police, and civil society have constructed an image of body cameras as something different from typical police surveillance technology — as something more desirable and empowering of democratic civilian oversight of the police.

However, now that body cameras have actually hit the streets, police departments and state legislatures across the United States are forced to grapple with questions about when officers should (or should not) activate their cameras, how body-worn camera footage ought to be treated under state FOI laws — and who ought to have access to these recordings. Unlike CCTV and other forms of more static surveillance, these roving cameras record inside private homes and during sensitive police contacts with people suffering from homelessness, mental health issues, and domestic violence. Importantly, when state FOI law requires police departments to disclose this sort of footage under the guise of state transparency (as is currently the case in multiple U.S. states), the increased visibility of private individuals can easily become the collateral damage of our transparency regime — a form of collateral visibility.

In this paper, I present findings from empirical research with police officers and others involved in body-worn camera deployment within three municipal police agencies in the United States and discuss how these findings can inform future debates about privacy, data protection, and access to information laws. Along the way, I outline how the affordances of surveillance-enabling technologies (such as body-worn cameras and smartphones), in combination with the affordances of new media platforms (like YouTube and Facebook) and broad public disclosure laws, interact to create radical new forms of secondary visibility for the police as well as for those with whom they interact during their work (bystanders, victims, witnesses, subjects, etc.).

12:15
The Wiretapping of Things

ABSTRACT. Law enforcement necessitates some form of control over, and access to, what individuals in society are doing. Enforcement agencies might require real-time access to various types of data to detect and prevent crimes, arrest suspects, and eventually use the acquired data as evidence. Throughout history, the state has sought access to communication technologies that would advance the investigation, or even the suspicion, of criminal activities — whether the telegraph, mail, or telephones, to name a few examples. Such potential access greatly increased with the invention of the public internet, and has recently expanded through what is termed the Internet of Things (IoT). Real-time access to IoT devices could reveal an individual's location, any information they conveyed in the vicinity of the device, their images and videos, their heart rate, and more. While these practices could be vital for enforcement in the digital age, the wiretapping of things is also highly intrusive and could greatly jeopardize civil rights and liberties such as the right to privacy. This Article examines the normative and pragmatic implications of law enforcement in light of new communication technologies, mainly the IoT. It questions, inter alia, whether the current legal framework of wiretapping, designed for telephones, is relevant to new technological developments. Could the state even pragmatically gain access to IoT devices in real time — and "wiretap" them? Do the current warrant requirements for such action change in light of IoT capabilities? How should policymakers balance the potential need to acquire such data against safeguarding the right to privacy? This Article approaches these and other related timely questions by analyzing the current legal framework that governs the lawful interception of data in transit and at rest. Upon discussing and evaluating whether access to IoT devices is practical and desirable under the current legal framework, the Article concludes that policymakers must reconsider the current regulatory framework for wiretapping, as it is ill-suited to properly protect civil rights and liberties.

12:45
In search of Privacy Action Points: Contemporary Software Engineering Practice and Privacy by Design

ABSTRACT. Software engineering has been undergoing massive shifts with the move from shrink-wrapped software to service-oriented architectures, from waterfall to agile methods, and from personal computers to cloud infrastructures. These shifts have reconfigured software engineering practices, with great consequences for whether and how privacy by design will be applied. This empirical study reports on contemporary software engineering practices with an eye to ways of integrating privacy by design into everyday activities. In particular, the objective is to discover which moments in agile development are opportune for a team or individual to apply privacy by design, and which moments are stacked against taking such privacy actions.

The reported results are based on a participatory study with three application development teams active in migrating a multinational corporation to a public cloud infrastructure. During the study, we paid attention to how teams organize their everyday development activities; how they negotiate development priorities with the larger corporation of which they are part; and when pain points trumped the ability of developers to address privacy in everyday software development. The outcome is used to explore the idea of "privacy action points": moments in the software engineering process which are opportune for taking privacy by design decisions and turning those decisions into implementations. The study is an important starting point for gathering knowledge on how best to integrate privacy by design into engineering processes in current-day service ecosystems.

11:45-13:15 Session 9B: DP TRACK: The concept of personal data/ Data protection and economics and trade perspectives
Location: DZ-4
11:45
Personal data transfers in international trade and EU law: A tale of two "necessities"

ABSTRACT. Technological advances have made cross-border trade increasingly dependent on personal data and its flows across borders. In several cases, notably in the European Union, law restricts international personal data flows to a degree that is arguably beyond what is permitted under trade liberalization commitments. Under the lens of trade law, the EU’s regulatory autonomy to control transfers of personal data is structured by a general exception at the core of which is the so-called “necessity test”. Conversely, according to the EU Charter of Fundamental Rights, protection of personal data is a fundamental right, and therefore transfers of personal data can comply with trade commitments that derogate from fundamental rights only if the derogation passes another test, known as “strict necessity”. While “trade law necessity” requires that restrictions on personal data flows be least trade restrictive, the EU Charter mandates that liberalization of such flows should be least fundamental rights restrictive. This article shows how a simultaneous application of trade law and EU Charter “necessities” to EU restrictions on transfers of personal data creates a Catch-22 situation for the EU, and proposes ways out of this compliance deadlock.

12:15
Trade Secrets, models & personal data: conflict in waiting?
PRESENTER: M.R. Leiser

ABSTRACT. Our presentation addresses a potential conflict between data protection law and commercially sensitive models. We do so through the lens of Machine Learning models that contain personal data, and of models whose value depends on the personal data used to train them. The latter conflict with data subject rights thanks to the expansion of the scope of personal data by the CJEU. We examine the issue from a technical and legal perspective before positing how to resolve conflicts between the GDPR, Article 16 CHoFR, and ML models.

12:45
Economic Drivers Behind “The Law of Everything”

ABSTRACT. The increasing datafication of life has led to the question whether the current definition of “personal data” could eventually turn European data protection law, most notably the General Data Protection Regulation, into a “law of everything”. Whether this will indeed be the case surely depends on a variety of factors such as how exactly the definitions will be interpreted, but also which particular technological developments will materialize.

However, this paper aims to take a different perspective by analyzing one of the underlying forces behind the increasing datafication: the economic incentives stemming from the fact that more informed decision-making tends to lead to better decisions – albeit from the perspective of the decision-maker, and not necessarily that of others or even of society as a whole.

The general argument of the paper is as follows. There are two levels on which a rational decision-maker, i.e. one that wants to make the best possible decisions given the circumstances, may want to contribute to increasing datafication based on a cost-benefit analysis.

First, on the level of individual decisions: as long as the decision-maker expects the benefits from gathering more information to outweigh the cost (monetary, but also in terms of time and effort) associated with gathering this information, we would expect the decision-maker to gather more information. Second, on the level of making it cheaper to gather additional information: if one wants to gather information but finds that current technologies are still too expensive, one may want to invest in making information gathering cheaper (or better while keeping cost at the same level).

Combining these two effects then leads to increasing datafication, to which there may be no boundaries if the cost of processing a marginal unit of information becomes negligible. The further this development goes, i.e. the smaller the cost of improving decisions by using information becomes, the more things could rationally become the focus of data collection and analysis, until eventually everything becomes digitally available. A minimal formalization of this dynamic is sketched below.
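One way to make the argument precise (the notation is illustrative, not taken from the paper): let $q$ be the units of information a decision-maker already holds, $B(q)$ the expected benefit of the decision made with that information, and $c$ the marginal cost of acquiring and processing one further unit. A rational decision-maker keeps gathering information while

\[
\mathbb{E}\big[B(q+1)\big] - \mathbb{E}\big[B(q)\big] > c .
\]

The second-level effect described above is precisely an investment that lowers $c$ over time; in the limit $c \to 0$, the stopping condition is never met and datafication has no natural boundary.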

However, universal datafication alone would not yet render European data protection law a “law of everything”. For this to happen, all this data would also need to fall within the definition of “personal data”, understood as any information relating to an identified or identifiable natural person. Having established the potential for everything becoming data, the paper will continue to show that it follows from statistical principles that virtually any standard of identifiability will eventually be met. A similar argument based on statistical principles can then be made to show that the chance that any given piece of information relates to an identified or identifiable person will approach certainty; an illustrative calculation follows below. It is worth noting that the arguments put forward here are valid even without any particular incentives for increased identifiability of people. Such incentives exist without doubt, e.g. in the form of the endeavor of businesses to increasingly target consumers and to customize products as well as prices. While these incentives would further accelerate the above-sketched development, they are not a necessary condition for this argument.
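A back-of-the-envelope illustration of the identifiability claim (my own sketch, under the simplifying assumption of statistically independent attributes; not a calculation from the paper): if each recorded attribute $i$ carries $h_i$ bits of identifying information, the expected number of people in a population of size $N$ who match a full record of $n$ attributes is roughly

\[
\mathbb{E}[\text{anonymity set size}] \approx N \cdot 2^{-\sum_{i=1}^{n} h_i},
\]

which drops below 1, meaning the record is expected to be unique, as soon as $\sum_{i=1}^{n} h_i > \log_2 N$ (about 33 bits for the world population). As datafication keeps adding attributes, any fixed identifiability standard is therefore eventually crossed.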

In conclusion, the paper shows that there are fundamental economic incentives that should be expected to eventually render everything personal data, and hence to render data protection law a “law of everything”. But this is not to say that there is no possibility of preventing this from happening. On the contrary: if (regulatory) ways can be found to render it increasingly costly to process personal data, we should expect the aforementioned development to come to a stop before literally everything becomes personal data. Such ways could consist of a very high-intensity compliance regime, hefty fines in case of non-compliance with data protection principles, or similar instruments that render it more difficult to legally process vast amounts of data. However, the level at which this would be desirable and, further, which effects this may have on potential competition policy goals, are beyond the scope of this paper.

11:45-13:15 Session 9C: HEALTH & ENVIRONMENT TRACK: Energy and data
Location: DZ-5
11:45
Blockchain for the Energy Transition: Rule of Code versus Rule of Law?

ABSTRACT. The potential of Blockchain technology in achieving a low-carbon energy transition is getting increasing attention at the national, European, and international levels. From a legal perspective, this can be seen, for example, in the literature on energy law and economics (Lavrijssen & Carrillo Parra, 2017; Butenko, 2018), in a recent International Energy Agency report (IEA, 2017), and in the European Parliament resolution of 3 October 2018 on distributed ledger technologies. Indeed, Blockchains offer the possibility to develop P2P disintermediated markets where citizens, industry and regulators can interact (Aste, Tasca & Di Matteo, 2017), and are thus seen as a paradigm of decentralization, which also represents an instance of institutional evolution and economic coordination (Davidson et al., 2018). For these reasons, Blockchain seems particularly suitable to drive the “4D Revolution” that is affecting the electricity sector under the new so-called “EU Winter Package”: Decarbonization, Digitalization, Decentralization, and Democratization. While the potential of Blockchain for retail markets and for making P2P trading a tangible possibility has been examined extensively in the literature, an in-depth analysis of Blockchain’s ability to solve the so-called “Energy Trilemma” that affects the electricity sector and justifies public intervention in a liberalized market is still missing. Therefore, considering the interaction between legal and technological innovation in advancing the energy transition (Zillman et al., 2018), the capability of Blockchain to achieve energy security, equity, and environmental sustainability will be explored, starting from the assumption that an analysis of “technological advances in energy is incomplete without detailed attention to the potential applicability of Blockchain technology in energy governance” (Truby, 2018). The paper is organized as follows: after a brief description of the main technological and social innovations that will impact the electricity market in the next few years and that were first recognized in the “EU Winter Package”, I will outline the legal innovations that the proposal introduces and the “regulatory disconnections” (Butenko, 2016) that still exist, also considering the grey areas of regulation created by emerging technologies. Special emphasis will be given to Blockchain’s capability to substitute for incentives for renewable energy generation and to ensure a real integration between energy and climate policy, both generally and specifically between the energy and transport sectors. What role could Blockchain play in adapting existing support schemes, thus minimizing market distortion? How could Blockchain be used to trace the renewable origin of the electricity fed into an electric vehicle? I conclude that regulation by Blockchain is not necessarily a “Battle for Supremacy between the Code of Law and Code as Law” (Yeung, 2018), since the Code could also be a resource at the disposal of traditional institutions, as the recent European Parliament resolution and the European Blockchain Partnership demonstrate. In particular, I argue that Blockchain could embody the deep rationale of the Winter Package. It allows a market-oriented and decentralized approach, based on horizontal relationships between active and passive consumers in a demand-supply mechanism, but at the same time it creates an “entry point” for traditional regulators through smart contracts (e.g. by ensuring fairness and quick interventions in case of market distortion). After all, Blockchains are a paradigm of decentralization, which not only really empowers prosumers and local energy communities, but also represents a new form of coordination between market and public policy.

References:

Aste, T., Tasca, P., & Di Matteo, T. (2017) ‘Blockchain Technologies: The Foreseeable Impact on Society and Industry’, Computer, vol. 50, no. 9, pp. 18-28.

Butenko, A. (2016) ‘Sharing Energy: Dealing with Regulatory Disconnection in Dutch Energy Law’, Eur. J. Risk Regul., vol. 7, no. 4, pp. 701-716.

Butenko, A. (2018) ‘User-centered Innovation in EU Energy Law: Market Access for Electricity Prosumers in the Proposed Electricity Directive’, Oil, Gas & Energy Law (OGEL), vol. 16, no. 1, available at: https://www.ogel.org/article.asp?key=3732.

Davidson, S. et al. (2018) ‘Blockchains and the economic institutions of capitalism’, Journal of Institutional Economics, pp. 1-20.

IEA (2017) ‘Digitalization and Energy’, available at: https://www.iea.org/publications/freepublications/publication/DigitalizationandEnergy3.pdf.

Lavrijssen, S., & Carrillo Parra, A. (2017) ‘Radical Prosumer Innovations in the Electricity Sector and the Impact on Prosumer Regulation’, Sustainability, vol. 7, no. 9, pp. 1207-1228.

Truby, J. (2018) ‘Book review – Research Handbook on EU Energy Law and Policy, Rafael Leal-Arcas, Jan Wouters. Edward Elgar Publishing, Northampton, MA (2017)’, Energy Research & Social Science, no. 42, pp. 11-12.

Yeung, K. (2018) ‘Regulation by Blockchain: The Emerging Battle for Supremacy between the Code of Law and Code as Law’, Modern Law Review (forthcoming), available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3206546.

Zillman, D. et al. (eds) (2018) ‘Innovation in Energy Law and Technology: Dynamic Solutions for Energy Transitions’, Oxford: Oxford University Press.

12:15
Smartening up while keeping safe? Advances in smart metering and data protection in EU law

ABSTRACT. Achieving the European Union’s (EU) low-carbon objectives requires significant structural changes in the European energy markets. In particular, the increasing share of renewable electricity in the overall electricity mix necessarily involves improving the ability of the electricity system to withstand intermittency. One of the key solutions to do so is by activating the demand side in a way that enables a more flexible balancing between demand and supply.

In practice, initiatives and technological solutions to activate the demand side are well exemplified by smart meters, which collect and communicate information on the consumption of electricity. The data gathered through smart metering is central to shifting and reducing demand peaks to better address the needs of a more intermittent electricity system. In the existing legal setting, EU energy law only encourages the rollout of smart meters, but the proposed legal changes in the 2016 Winter Package are considerably more binding.

The proposed legal changes to the existing sector-specific electricity legislation recognize that the opportunities for utilizing the consumer data collected via smart metering are significant and of fundamental importance in creating more flexibility in the electricity market. However, the extensive utilization of this data invites questions on data privacy and data protection. The general datafication of European societies and the attendant concerns have been addressed in the recently adopted General Data Protection Regulation (GDPR), which applies to the processing of personal data and, as such, is relevant for the collection and utilization of smart metering data.

There is an apparent conflict between the objectives embedded in EU data protection law on the one hand and EU energy law on the other. EU energy law addresses smart metering and the resulting data as means to achieve a more sustainable electricity system, whereas EU data protection rules are focused on data security issues and the protection of privacy. The proposed paper examines the cross-section between data protection and energy law in the EU. The focus of the legal analysis is on the GDPR and the proposed rules in the Winter Package, which are systematically analyzed in an effort to identify and interpret the existing and proposed legal framework that governs the collection and utilization of smart metering data.

11:45-13:15 Session 9D: IP TRACK: Shifting Norms in Copyright Law
Location: DZ-6
11:45
Copyright in the Digital Era: Abandoning the Property-Based Model of Protection

ABSTRACT. Depending on who you ask in the academic world, you may be told either that copyright law is doing just fine or that it is irreparably broken. There is a wealth of literature to suggest that copyright law is failing to adapt to modern notions of creative expression and dissemination. Online piracy of everything from music to film, television shows and software feels like an unstoppable force that is constantly two steps ahead of those charged with preventing it. New iterations of creative expression such as sampled music, appropriation art, and even memes, falling under the umbrella of “remix culture”, pose difficult questions for the law about what types of uses should and should not be protected, irrespective of our ability to stop them. Conversely, just as technological developments have led to more infringements, they have also brought about new ways of protecting and enforcing copyrights, such as digital rights management and online filtering technology.

This paper will side with the group skeptical of the efficacy of copyright law in the modern world, both in enforcing itself and in representing modern ideals of how expression should be protected. The technologies developed to counter infringement create just as many problems as they solve while remaining, at best, marginally effective. While user-generated content represents a new form of cultural exchange that should be supported by the law, piracy does not. Yet both are similar in that they remain uncontrollable in the internet age and represent lost revenue for content owners. Therefore, the key similarity shared by these two issues is their potential solution.

This paper will serve as a critique of the current property-based model for copyright protection across the globe. It will argue that the digitisation of cultural expressions has rendered the property model impotent, given the dissolving ability of creators to protect and control their works as if they were property. It will argue that the ideal solution is to shift copyright law from a right in property to a system of economic rights. In doing so, it will discuss two potential models: one of complete restructuring and abandonment of the current property-based model, and another where tools of economic rights, namely compulsory licenses, are used to temper the existing model and hybridise it with the more extreme economic-rights version. It will describe how such systems would look based on legal and private-sector precedents, how they could be adopted, and the implications of each on an international scale.

12:15
Ownership norms and creative works in anonymous online communities: a conflict of values with copyright law?

ABSTRACT. This paper explores the relationship between copyright law and anonymous online communities where creative works are shared. Applying qualitative online methods, it focuses on the communities’ and individual creators’ approaches to ownership and control of works, and the values that underpin these ownership norms. It does this in order to better understand the online creative environment and engage with debates about copyright reform.

12:45
Selective Copyright Enforcement: the Difference between a Fan and a Pirate

ABSTRACT. In the political discourse, right holders and politicians agree that effective copyright enforcement at high levels of protection is essential for digital markets in cultural products to flourish. Opposition to this comes from user organisations and a small number of politicians, but the mainstream discourse of strict copyright protection survives. However, copyright is enforced inconsistently in practice; in particular, certain types of infringing behaviour are not only tolerated but, in some cases, facilitated by the right holders. This pattern cannot be explained by enforcement resources or the lack thereof. The gap between pushing for high levels of protection while at the same time facilitating infringement exists in most jurisdictions but is most pronounced in the Japanese content sector surrounding comics, cartoons and videogames, where international appeal, a flourishing domestic market and high levels of fan contributions directly interact.

Based on a series of background interviews with academics, right holders, practicing lawyers and users in Japan, this paper will show that copyright enforcement has little to do with copyright infringement. First, the main value of copyright law today lies not in enforcement but in the protection of closed environments maintained by DRM provisions. Rather than targeting infringers, right holders actively steer users towards preferred ways of using their works. Notably, these environments are both more restrictive and broader than what copyright law permits. Secondly, identifying a pirate is not done on the basis of the law: it is not the infringing act which matters. Infringement is both tolerated and sometimes even actively encouraged; instead, non-copyright-relevant factors decide when the right holder will take action. The result is a system of selective copyright enforcement, leading to inconsistent enforcement and, in turn, to issues with copyright's legitimacy and innovation. It is argued here that at least digital copyright needs to be reassessed comprehensively in two ways. First, the notion of market harm has to be taken seriously by the legislator. Secondly, and following from this, copyright provisions need to be examined according to their usefulness, amending or removing them as required.

11:45-13:15 Session 9E: JUSTICE AND DATAMARKET PANEL: Access rights as a research tool
Location: DZ-7
11:45
PANEL: Access rights as a research tool
PRESENTER: René Mahieu

ABSTRACT. Panel: Justice and the data market. Title: Access rights as a research tool.

Description: This panel will discuss the use of subject access requests as a critical method for researching data markets and brokers. Data markets and brokers form the backbone of the datafication of society, yet they are opaque on a technical, juridical as well as economic level. The transparency obligations in the GDPR, and in particular the right of access, may be used as a tool to reveal these veiled practices. The right of access is already being used in this way, for example by researcher David Carroll to shed light on the data practices of Facebook and Cambridge Analytica, and by digital human rights organizations to shed light on the practices of data brokers. In this panel we look at the existing practices of using the right of access as a tool for critical research, discuss the methodological aspects of using this right for research, and ask what researchers can learn from and contribute to digital rights organizations and journalists who are using this tool.

Organizers: Hadi Asghari, Joris van Hoboken and René Mahieu
Moderators/chairs: Joris van Hoboken and Hadi Asghari
Panelists:
Researcher: René Mahieu (doing PhD research on whether the right of access is effective in practice, looking in particular at how this individual right can have a collective function)
Researcher: Jef Ausloos (University of Amsterdam; wrote "Shattering One-Way Mirrors. Data Subject Access Rights in Practice")
Researcher: Frederike Kaltheuner (DATACTIVE or Privacy International Data Exploitation Program) or Stefania Milan (DATACTIVE)
Researcher: Aaron Martin (Tilburg University)
Media: Saar Slegers (independent journalist who used the right of access to report on her own personal data trail; by tracing the data trail from a commercial letter she received from an unknown sender, she found her way into the opaque world of data brokers)
NGO: Rejo Zenger (Bits of Freedom; has been pioneering the use of the right of access for investigative purposes since 2009)

11:45-13:15 Session 9F: AI AND RESPONSIBILITY TRACK: Algorithmic regulation
Location: DZ-8
11:45
Disjunctive Temporal Forms: Fairness, Bias, and Pre-Emptive Algorithmic Decision-Making Systems

ABSTRACT. Impartial decision-making is a fundamental principle of public law, but, when it comes to algorithmic bias, public law scholars are noticeably silent. Where bias is reasonably perceived to exist, a reviewing court may overturn an institutional decision and return it to the decision-making authority to decide again, with an impartial, open-minded decision-maker at the helm. For the technologists who create assistive or pre-emptive decision-making software, however, bias is inescapable. By their very design, these systems are “biased” or partisan towards a particular result. This feature is actively critiqued by socio-legal scholars, who demonstrate how algorithmic systems materialize and perpetuate bias towards privileged groups and against marginalized individuals. Given that impartial decision-making is a core tenet of public law, this paper asks why algorithmic decision-making systems have thus far escaped the scrutiny of legal scholars. In doing so, it draws on examples from Canadian, Australian, and British child welfare and social assistance programs and on common law principles that specify conditions for fair, unbiased decision-making.

This paper suggests that the silence among public law scholars might be explained by the different notions of time underlying legal principles and algorithmic decision-making systems. To support this theory, it draws on interdisciplinary law and temporality scholarship to articulate two contrasting decision-making "times" or temporal shapes: the linear, progressive time of impartial decision-making, which moves from pre-hearing procedures, to hearing from affected individuals, to weighing options and, finally, to articulating a decision; and the zig-zagging time of algorithmic decision-making systems, which draw on the past (as represented by data) to pre-empt a future risk today. In doing so, it also questions the novelty of this zig-zagging temporal form, linking it to the risk-based decision-making structures embedded within particular social welfare regimes. The paper then considers how legal scholars might respond to the temporal form of algorithmic decision-making systems, and how these responses might assign responsibility among the many actors who today co-produce institutional decisions.

12:15
Legality and Democratic Deliberation in Black Box Policing

ABSTRACT. The case law surrounding the European Convention on Human Rights establishes certain qualitative requirements of legality for legitimate restrictions of Convention rights. Legal rules should, for instance, be accessible, clear and foreseeable, and should limit the discretionary power of the government when they imply a rights limitation. These values are all challenged by the increased reliance on digital technologies in the sphere of governing, whereby the practical effect of law can become mediated and applied through machine learning algorithms, artificial intelligence, or opaque surveillance tools, resulting in a lack of foreseeability and the extension of discretionary legal spaces. These concerns become particularly poignant in the field of policing, which is characterised by a contradictory combination of, on the one hand, areas of extensive discretion and, on the other, strict rules relating to certain investigatory measures. Injecting emerging technologies into this area of law creates a risk of 'black box policing', with far-reaching implications both for the qualitative legality of policing methods and for the possibility of democratic deliberation regarding policing mandates. This contribution aims to outline the issues black box policing may imply from a constitutional and rule-of-law point of view, and to analyse constructively how legality, as a component of the rule of law, can be translated into a context of emerging technology, algorithmic decision-making and artificial intelligence in order to re-establish the implicit connections between the rule of law, democracy, and individual autonomy.

12:45
How to regulate machine-learning algorithms that optimize the law-making itself?

ABSTRACT. For quite some time, algorithms, and especially machine-learning ones, have been pervading more and more aspects of society. Legal scholarship has consequently been analyzing the disruptions this brings and has attempted to figure out how the law could regulate the use of algorithms. The legal 'sector' has itself been affected by algorithms: in particular, the activities traditionally exerted by attorneys-at-law, judges and law enforcement agencies are increasingly subject to algorithmic computation. But the reach of algorithms has been taken a step further: algorithms, and especially machine-learning ones with their ever-learning capabilities, are credited with the ability to optimize law-making itself.

The law is classically described as impersonal in the sense that it is based on categorizations of diverse factual situations to which some legal regime is then applied. Machine-learning algorithms are credited with the ability to optimize the making of the law by tailoring, and constantly re-tailoring, the rules according to the subject-matter of the law: this development has been described as the (algorithmic) personalization of the law. The rationale lies in the optimization of legal categorizations that would better, and ever better, fit the diversity of factual situations. Algorithmic law-making would avoid the classical problem of under- and over-inclusiveness of the law. This also obviously brings about specific risks and challenges. The law classically projects onto regulatory subject-matters, but this new development paradoxically amounts to the opposite: the law turns out to be the subject-matter of the optimizing algorithm. Legal scholarship has only started to touch upon the legal impact of algorithmic personalization of the law and on what regulatory tools can be designed to respond to the identified challenges. It therefore appears necessary to analyze different situations of algorithmic personalization of the law in order to better apprehend this new development.

This paper contributes to this analysis through the specific case of railway asset maintenance. Railway assets are safety-sensitive, so their management, and in particular their maintenance, is subject to strict and detailed regulation. The paper analyzes the impact of algorithmic personalization of maintenance regulation by means of machine learning-based predictive maintenance. On the one hand, maintenance needs can be addressed more accurately, which would result in increased safety; it also helps to come to terms with the high complexity of the applicable regulation. On the other hand, algorithmic personalization of the law inherently implies delegation, and often privatization, of law-making, which itself needs to be regulated. It also raises the question of how to delineate the norms whose making shall be delegated to the algorithm from those which shall remain "ground norms". Ultimately, such questioning leads more fundamentally to reassessing the rationale of the law. In this regard, a systemic approach proves necessary to fully grasp the value and usefulness of the (initial) regulation.
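To make the contrast concrete, the following minimal sketch (in Python; the asset fields, risk formula and thresholds are invented for illustration, whereas real predictive maintenance relies on models trained on sensor data) juxtaposes an impersonal fixed-cycle inspection rule with a personalized, risk-based one of the kind the abstract describes:

```python
# A sketch contrasting an impersonal rule ("inspect every 90 days") with
# algorithmically personalized maintenance scheduling. Illustrative only:
# the risk function below stands in for a trained ML model.
from dataclasses import dataclass


@dataclass
class Asset:
    asset_id: str
    days_since_inspection: int
    vibration_level: float  # proxy for sensor-derived condition data


def rule_based_due(asset: Asset) -> bool:
    """Impersonal rule: every asset is inspected on the same fixed cycle."""
    return asset.days_since_inspection >= 90


def personalized_due(asset: Asset, risk_threshold: float = 0.5) -> bool:
    """Personalized rule: inspect when the predicted failure risk is high.

    Healthy assets are inspected less often (avoiding over-inclusiveness)
    and degraded ones sooner (avoiding under-inclusiveness).
    """
    risk = min(1.0, asset.days_since_inspection / 365 + asset.vibration_level)
    return risk >= risk_threshold


fleet = [Asset("switch-01", 40, 0.6), Asset("switch-02", 120, 0.05)]
for a in fleet:
    print(a.asset_id, "rule:", rule_based_due(a),
          "personalized:", personalized_due(a))
```

The fixed rule under-inspects the degraded asset and over-inspects the healthy one; the personalized rule tailors the schedule per asset, which also illustrates why the effective "norm" now lives inside the model rather than in the written regulation.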

11:45-13:15 Session 9G: GENERAL PANEL: A General Framework for Identifying Technology-Driven Legal Disruption: the Case of Artificial Intelligence
Location: Dz-1
11:45
PANEL: A General Framework for Identifying Technology-Driven Legal Disruption: the Case of Artificial Intelligence

ABSTRACT. Rationale

Artificial intelligence has been predicted to disrupt the ordinary functioning of society across a broad array of sectors. The resulting impact of AI upon the law is thus both direct and indirect, and can be viewed at three different levels of severity. The first is the granular level of discrete decision-making by individual policymakers or designers; the second involves the constitutional level of core values and the institutions which guarantee those values at the societal level; and the third concerns the existential level of the grand futuristic challenges that the potential future advent of highly capable AI poses to humanity at large. Taken together, the challenges introduced by AI are likely to trigger seismic shifts in the legal and regulatory landscape.

This poses a multifaceted and messy problem for framing regulatory responses to AI. While the challenges are introduced by a tight cluster of digital technologies, the legal disruptions that cascade from AI are difficult to organise, manage and respond to. This workshop aims to set out the rationale for establishing a focused, dynamic, conceptual framework built around the concept of legal disruption, and situates this proposal in preceding debates over the creation of distinct legal fields for new technologies (e.g. the debate over ‘cyberlaw’), and relating to the orientation and approach towards robotics regulation.

Workshop method

The proposed model elaborated in this workshop aims to set out the potential trajectories for regulatory initiatives targeting AI, and the impacts of its development and deployment in society.

As a methodological framework, this approach allows us to look at the negative externalities caused by new technologies, of which AI is but one example, and at the efforts made to regulate these in a changing legal world. Based on and applicable to various legal disciplines (Constitutional Law, Legal Theory, Medical Law, Tax Law, Governance), this framework can be applied to different new technologies in order to better assess and conduct research on the causes and consequences of legal disruption, as well as on the current or predictable effects of regulatory efforts targeting the new technologies' disruptive effects. Understanding the nature and effects of the legal disruption at hand is necessary in order to determine whether existing regulation is applicable, and whether our existing legal concepts are equipped to deal with the new technology.

This first step allows us to connect the different levels of legal-regulatory impact precipitated by AI in a coherent manner, as well as to elaborate upon how a shift away from looking at AI as an external precipitating hazard, and a re-focusing on the components of exposure and vulnerability, might factor into regulatory responses targeting the societal impact of AI.

Workshop set-up

During this 90-minute workshop, we develop the basis of the framework we are creating. In addition to developing and clarifying the conceptual framework, the workshop also applies two case studies of legal disruption, one on the use of AI in medicine and the other on blockchain and taxation regimes, as a way of testing its relevance and usefulness. We will actively engage with the audience in order to discuss the model and its applications, improve it, find its limits, and extend it to other cases.

Specific Presentations and layout:

  1. General presentation of the framework: Hin-Yan LIU, An Introduction to the General Framework for Identifying Technology-Driven Legal Disruption.
  2. Causes and visualisation of disruption: (1) Léonard VAN ROMPAEY, AI Legal Disruption as Distantiation: Symptomatic and Systemic Effects; (2) Michaela LEXER, AI and Medical Law.
  3. Debate
  4. Taking an international perspective: (1) Matthijs Michiel MAAS, Disrupting international law through 'transformative artificial intelligence'? Development, displacement, destruction; (2) Luisa SCARCELLA, Blockchain-related Challenges for Tax Authorities.
  5. Debate
  6. Break
  7. Looking at future perspectives: John DANAHER, Artificial Intelligence and the Constitutions of the Future.
  8. Debate
  9. Building interactions through group questions: applying the framework to the participants' own research. Where does the model break? How can it be refined?
13:15-14:15 Lunch Break
14:15-15:00 Session 10: Keynote 3: Prof. Alexandre de Streel

Keynote Digital Clearinghouse

Location: DZ-2
14:15
Redesigning regulation for digital platforms

ABSTRACT. Alexandre de Streel is Professor of Law at the University of Namur where he is the director of the Research Centre in Information, Law and Society (CRIDS). His research focuses on regulation and competition law in network industries. Alexandre is also a Joint Academic Director at the Centre on Regulation in Europe (CERRE) in Brussels, and a member of the Scientific Committee of the Florence School of Regulation at the European University Institute. Alexandre regularly advises international organisations (including the European Commission, European Parliament, OECD, EBRD) and he is an Assessor (member of the decisional body) at the Belgian Competition Authority.

15:00-16:30 Session 11A: DP TRACK: Data protection, equality and non-discrimination
Location: DZ-3
15:00
Calculating the citizen: the role of equality in automated decision-making in Dutch law enforcement

ABSTRACT. The use of automated decision-making in law enforcement is increasing. Many automated decision-making systems target the lower socioeconomic classes or other vulnerable groups in society. The creation and implementation of these systems, even as decision-support systems, sort people into categories, which can reinforce inequalities in society. In my research I carry out three case studies of automated decision-making in enforcement under Dutch administrative and criminal law to investigate this social sorting. The first case study concerns the Systeem Risico Indicatie system, or SyRI. SyRI is designed to detect social security fraud by processing and analysing data in pre-established projects: collaborations between administrative bodies and organizations. The second case study concerns the system developed by the private company Totta, which has been implemented in several municipalities and is specifically designed to detect benefit fraud. The third case study concerns the system ProKid (Plus), which makes a risk assessment of every child that comes into contact with the Dutch police; if necessary, action is taken by 'Bureau Jeugdzorg'. All three systems have a big impact on the citizens involved: they are subject to far-reaching investigation, benefits can be put on hold, and children can be monitored by several authorities. I analyse these case studies from the perspective of the principle of equality: equal treatment and procedural fairness. Preliminary findings suggest that equality is not taken into account when using automated decision-making. As the (empirical) research is in progress, I hope Tilting Perspectives gives me the opportunity to get feedback on this work-in-progress.

15:30
Understanding vs. Accounting for Automated Systems: The Case of the Seattle Surveillance Ordinance
PRESENTER: Michael Katell

ABSTRACT. A key challenge to assigning or assuming responsibility for automated systems is understanding them. This is true for practitioners, such as system designers and engineers, but also for the operators of systems employed in civic spheres. We provide insights from an ethnographic study of government officials and community activists involved in the drafting and implementation of the Seattle Surveillance Ordinance, one of several local laws enacted in the U.S. in recent years to require accountability from police departments and other municipal agencies using surveillance technologies. These policy-making efforts provide real-world case studies of attempts to render algorithmic and information systems accountable to public oversight in the absence of comprehensive national policies, as is the case in the U.S. In the ordinances we reviewed, the process of assuming or assigning responsibility for surveillance technology begins with city employees, who are tasked with reporting to the public and to elected officials about a municipality's use of surveillance technologies. They are expected to demonstrate detailed understanding of the features and functions of the technologies under review, including the full extent of their algorithmic capabilities. We find that the mental models of artificial intelligence employed by city employees do not correspond with the actual features of the systems they are tasked to evaluate, leading to failures in the identification of machine learning and other automated technologies within particular artifacts. To address this gap in understanding, we suggest that surveillance regulations include provisions to make the potential harms of a system's algorithmic components more legible to political and community stakeholders, and thereby enable them to more effectively assign or assume responsibility for the use and social effects of automated surveillance systems. We situate this policy-making approach within contemporary narratives and critiques of algorithmic transparency.

15:00-16:30 Session 11B: DIGITAL CLEARINGHOUSE TRACK: Blockchain and enforcement
Location: DZ-4
15:00
Enhanced KYC-AML Compliance via Distributed Ledgers

ABSTRACT. The speed at which new regulation has been evolving in the aftermath of the Great Financial Crisis of 2008 has brought about multiple regulatory challenges for financial institutions. Complying with Know Your Customer (KYC) regulation requires a large amount of data, typically processed manually and in an error-prone fashion. Moreover, institutions operate in different jurisdictions with diverse KYC requirements. We argue that by utilising blockchain technology we can create a KYC regulatory technology framework which is more transparent, efficient and secure.
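As a rough illustration of the kind of framework the abstract envisages, the sketch below (hypothetical identifiers and fields; a plain dictionary stands in for the distributed ledger, and a real system would add signatures, consensus and access control) shows how one institution's completed KYC verification could be recorded as an attestation that other institutions reuse instead of repeating the manual checks:

```python
# Sketch of sharing KYC attestations across institutions via a common
# registry. Illustrative only: the dict below stands in for a ledger.
import datetime
import hashlib


def customer_key(national_id: str) -> str:
    """Derive a registry key from a customer identifier without storing it raw."""
    return hashlib.sha256(national_id.encode()).hexdigest()


kyc_ledger: dict = {}  # shared and append-only in a real deployment


def attest_kyc(national_id: str, verifier: str, jurisdiction: str) -> None:
    """Record that `verifier` has completed KYC checks for this customer."""
    kyc_ledger[customer_key(national_id)] = {
        "verifier": verifier,
        "jurisdiction": jurisdiction,
        "verified_on": datetime.date.today().isoformat(),
    }


def kyc_on_record(national_id: str) -> bool:
    """Another institution checks for an existing attestation and reuses it."""
    return customer_key(national_id) in kyc_ledger


attest_kyc("IT-ABC123", verifier="Bank A", jurisdiction="IT")
print(kyc_on_record("IT-ABC123"))  # Bank B avoids repeating the manual checks
```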

15:30
Crypto-assets under European financial law. Where norms' scope and enforcement's reach stop.

ABSTRACT. This paper highlights regulatory and enforcement issues in the context of blockchain-based crypto-assets. To this end, the work explores which legal instruments are applicable under European law. Based on this analysis, it finds that regulatory issues derive from the unsuitability of existing legal classifications to capture the fluid nature of crypto-assets, while enforcement shortcomings mainly arise from: a) the decentralised governance and business models of platforms; b) the non-incorporation of entities; and c) opportunities for regulatory arbitrage, such as the geographical relocation of entities and "technological displacement".

15:00-16:30 Session 11C: HEALTH & ENVIRONMENT: Climate change and data
Location: DZ-5
15:00
Blockchain and climate finance law: A perpetual synergy or a one-way street?

ABSTRACT. Climate change is a critical and imminent global challenge necessitating urgent mitigation and adaptation actions, which require financial flows. The International Energy Agency estimates the cost of the global transition at $90 trillion by 2030. An emerging area of law, termed 'climate finance law', predominantly examines the role of law in mobilising and leveraging (i.e. generating) the needed finance. It does not, however, yet focus on the two further phases of channelling and spending climate finance, which are crucial for effective approaches to climate finance. Blockchain technology is a tool that can support all three phases of climate finance, but its fit with climate finance law has not yet been explored. A blockchain is a distributed ledger on which data can be permanently stored so that it is open, verifiable, and cannot be modified. This paper assesses the role of blockchain applications to climate finance in supporting climate finance law, as well as the converse: the role of climate finance law in supporting blockchain technology for climate finance. In doing so, the paper sets out the benefits and limits of blockchain in the context of climate finance law, and concludes that there can be a useful synergistic relationship between the two. The wider contribution of this paper is therefore the investigation of one recent innovative technology, and its potential impact on and implications for the law.
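The tamper-evidence property described above (data that is open, verifiable, and cannot be modified) can be illustrated with a minimal hash-chained ledger; the records and amounts below are invented, and a real blockchain additionally involves distribution across nodes, consensus and signatures:

```python
# Minimal sketch of the append-only, tamper-evident property of a ledger.
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash of a block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, record: dict) -> None:
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})


def verify(chain: list) -> bool:
    """Recompute the hash links; any retroactive edit breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True


chain: list = []
append_block(chain, {"disbursement": "adaptation grant", "amount_usd": 1_000_000})
append_block(chain, {"disbursement": "mitigation loan", "amount_usd": 250_000})
assert verify(chain)

chain[0]["record"]["amount_usd"] = 5_000_000  # tampering with a stored entry...
assert not verify(chain)                      # ...is detectable on verification
```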

15:30
Digital Technologies for Land Use, Land Use Change and Forestry Reporting under the Paris Agreement - A Game Changer for the Better?

ABSTRACT. This paper explores whether large-scale adoption of emerging digital technologies in the land use, land use change and forestry (LULUCF) sector will improve reporting under the UN Framework Convention on Climate Change (UNFCCC) Paris Agreement. Emerging digital technologies, especially the Internet of Things (IoT) and blockchain, are increasingly identified with strategies to achieve a low-carbon, sustainable and fairer future. Given the urgency of reducing GHG emissions across the LULUCF sector globally to keep the rise in temperature within the 1.5°C Paris Agreement target, the promise of emerging digital technologies must not be hollow.

The reporting guidelines for LULUCF under the UNFCCC are notoriously complex. Parties to the 2015 Paris Agreement adopted a Rulebook at the UNFCCC Conference of the Parties in November 2018 with only slightly revised reporting guidance, compared to that used under the Kyoto Protocol, for the Agreement's Implementation Framework under Articles 13-14. Reports on LULUCF policies to reduce greenhouse gas (GHG) emissions, such as bioenergy carbon capture and storage (BECCs), reducing emissions from deforestation and degradation (REDD+), and afforestation and reforestation (A/R), will continue to provide inaccurate accounts.

The paper draws on two case studies, forestry conservation and BECCs, to evaluate the potential legal and policy implications of increased dependency on IoT and blockchain for LULUCF reporting under the Paris Agreement. The paper concludes by arguing that widespread incorporation of digital technologies will lead to a significant transformation of reporting processes under the Paris Agreement, as well as under other multilateral environmental agreements such as the UN Convention on Biological Diversity and the UN Convention to Combat Desertification. Yet the outcomes will be no less contentious than existing reporting procedures.

16:00
Secondary use of sensitive data in observational studies: the impact of the Italian GDPR implementation law on retrospective biomedical research

ABSTRACT. Over the past twenty-five years, the digital revolution has made it possible for healthcare facilities and health agencies to use massive digital databases for the storage of administrative and health service data gathered from routine clinical practice (Stendardo et al., 2013). The availability of such a large group of heterogeneous datasets, coupled with advances in computing power, is driving researchers to create sophisticated algorithms for the analysis of pre-existing data valuable to medical research (not previously combinable through matching techniques) in order to look for patterns, correlations, and links of potential significance (Mostert et al., 2016). Extracting meaningful information from this flood of data is a challenge, but it holds unparalleled potential for observational and epidemiological studies (Thiese, 2014). These studies are often retrospective, meaning that they are based on the reuse of previously collected sensitive data (Thiese, 2014); their analysis through web-based data mining tools is having a revolutionary impact on epidemiological research (Salathé et al., 2012) and pharmacovigilance, and facilitates certain studies, such as those on rare diseases (Woodw, 2013).

Yet whilst the possibilities for innovative research springing from large-scale reuse and linkage of health and genomic data continue to expand, developments in IT and the reutilization of sensitive data have led to increasing concern about data protection (Mostert et al., 2016). The new Regulation "on the protection of natural persons with regard to the processing of personal data and on the free movement of such data", also known as the GDPR, recognizes the benefits of research carried out by re-using information contained in large databases (Recital 29) and acknowledges that the "informed consent or anonymization paradigm" may hamper data-intensive medical research. In order to reconcile the often-competing values of data protection and innovation, the GDPR carves out numerous derogations for "historical or scientific purposes", allowing researchers to avoid restrictions on secondary uses of sensitive data (Article 5(1)(b), Recital 50). However, these derogations depend on appropriate safeguards set up by the data controller. Article 89(1) specifies that one way for the controller to comply with the new legal framework is the use of "pseudonymization" techniques combined with further "technical and organizational measures", such as the principles of privacy-by-design and privacy-by-default and the presence of a data protection officer (Bolognini and Bistolfi, 2017). However, the possible impact of the GDPR on current observational studies in medical research is only now becoming clear, since in this field it allows Member States extensive discretionary powers (Pagallo, 2017): for instance, Article 9(4) of the GDPR allows Member States to introduce further safeguards with regard to the processing of genetic data, biometric data or data concerning health. At the same time, when it comes to the research exemption, Article 89(2) allows Member States to enact derogations from a number of rights otherwise afforded to data subjects under the GDPR. With this in mind, the paper restricts its focus of analysis to the Italian legal framework, where the processing of health data for research purposes was, before the GDPR, governed by a regulation that is, in many respects, stricter than those existing in other EU countries (Piciocchi et al., 2017). In September 2018, the Italian Council of Ministers approved Legislative Decree n. 101/2018, which aims to harmonize the Italian Privacy Code and other national laws with the European General Data Protection Regulation. In line with the GDPR, Italian law keeps research in a privileged position and allows the reuse of personal and sensitive data for research purposes, without consent, in specific circumstances. After analysing the novelties introduced by Legislative Decree n. 101/2018 on the processing of sensitive data for medical research purposes, the paper aims to show how the GDPR's delegation of powers back to the national legal systems of the Member States entails a number of critical drawbacks: a complex stratification of different inputs and rules (EU law, national law, orders issued by authorities, and soft law, which need to be integrated with ethical principles (Floridi et al., 2018), political strategies and practical solutions); risks of forum shopping in H2020 EU projects; and a fragmentation of law that may hamper the free flow of personal data and, consequently, the progress of medical research, especially when compared to other EU countries.
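For illustration, the sketch below shows one common pseudonymization technique of the kind contemplated by Article 89(1): replacing direct identifiers with a keyed hash (HMAC), so that records remain linkable for research while re-identification requires a separately held key. The field names and key handling are hypothetical:

```python
# Minimal sketch of pseudonymization via keyed hashing of direct identifiers.
# Illustrative only: in practice the key is held separately by the controller
# under further technical and organizational measures.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-controller"  # hypothetical key management


def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without the key, the pseudonym cannot be linked back to the patient,
    yet records for the same patient remain linkable across datasets.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()


record = {"patient_id": "IT-123456", "diagnosis": "E11", "year": 2018}
research_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(research_record)
```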

References:

Bolognini L., Bistolfi C. (2017) "Pseudonymization and impacts of Big (personal/anonymous) Data processing in the transition from the Directive 95/46/EC to the new EU General Data Protection Regulation". Computer Law & Security Review, 33(2), pp. 171-181, doi: 10.1016/j.clsr.2016.11.002.

Floridi L., Luetge C., Pagallo U. et al. (2018) "Key Ethical Challenges in the European Medical Information Framework". Minds and Machines, doi: 10.1007/s11023-018-9467-4.

Green M.D., Freedman D.M., Gordis L. (2000) "Reference guide on epidemiology". Reference manual on scientific evidence (Federal Judicial Center). Available: http://www.fjc.gov/public/pdf.nsf/lookup/sciman06.pdf/file/sciman06.pdf. Accessed 29 June 2012.

Knoppers B.M. (2014) "International ethics harmonization and the global alliance for genomics and health". Genome Med, 6(2), p. 13, doi: 10.1186/gm530.

Mittelstadt B.D., Floridi L. (2016) "The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts". Sci Eng Ethics, 22(2), pp. 303-41, doi: 10.1007/s11948-015-9652-2.

Mostert, M., Bredenoord, A.L., Biesaart, M.C. and van Delden. J. J., (2016) “Big Data in medical research and EU data protection law: challenges to the consent or anonymise approach”. Eur J Hum Genet, 24(7), pp. 956-60, doi: http://dx.doi.org/10.1038/ejhg.2015.239.

Pagallo U. (2017) "The Legal Challenges of Big Data: Putting Secondary Rules First in the Field of EU Data Protection". European Data Protection Law Review, 3(1), pp. 36-46. Available: http://hdl.handle.net/2318/1640445.

Piciocchi C., Ducato R., Martinelli L. et al. (2017) "Legal issues in governing genetic biobanks: the Italian framework as a case study for the implications for citizen's health through public-private initiatives". J Community Genet, pp. 1-14, doi: 10.1007/s12687-017-0328-2.

Salathé M., Bengtsson L., Bodnar T.J., Brewer D.D., Brownstein J.S., Buckee C. et al. (2012) "Digital Epidemiology". PLoS Comput Biol, 8(7): e1002616, doi: 10.1371/journal.pcbi.1002616.

Stendardo A., Preite F., Gesuita R., Villani S., Zambon A., SISMEC "Observational Studies" working group (2013) "Legal aspects regarding the use and integration of electronic medical records for epidemiological purposes with focus on the Italian situation". Epidemiology Biostatistics and Public Health, 10(3): e8971, doi: 10.2427/8971.

Thiese M.S. (2014) "Observational and interventional study design types; an overview". Biochem Med (Zagreb), 24(2), pp. 199-210, doi: 10.11613/BM.2014.022.

van Ommen G.J., Törnwall O., Bréchot C., Dagher G., Galli J., Hveem K., Landegren U., Luchinat C., Metspalu A., Nilsson C., Solesvik O.V., Perola M., Litton J.E., Zatloukal K. (2015) "BBMRI-ERIC as a resource for pharmaceutical and life science industries: the development of biobank-based Expert Centres". Eur J Hum Genet, 23(7), pp. 893-900, doi: 10.1038/ejhg.2014.235.

Legal References:

General Data Protection Regulation (EU) 2016/679. Available in English: http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf

WP29, Opinion 5/2014 on anonymization techniques, WP 216, 10 April 2014. Available in English: http://ec.europa.eu/justice/dataprotection/article29/documentation/opinionrecommendation/files/2014/wp216_en.pdf

Italian Personal Data Protection Code. Legislat. Decree no. 196 of 30 June 2003. Available in English: http://www.garanteprivacy.it/home_en/italian-legislatio.

15:00-16:30 Session 11D: IP TRACK PANEL: European Data Economy and Regulation of Data
Location: DZ-6
15:00
PANEL: European Data Economy and Regulation of Data
PRESENTER: Martin Husovec

ABSTRACT. This session will explore the relationship between intellectual property rights, including databases with public sector information (PSI), competition law, Artificial Intelligence, the Internet of Things (IoT) and Big Data. The session will examine the recent proposal to revise the PSI Directive, the EC's evaluation of the PSI and Database Directives, the provision on text and data mining in the proposal for a directive on copyright in the digital single market, the so-called 'data economy package' and the proposal for a regulation on the free flow of non-personal data, as well as a possible data producer right and a right of access to data.

Panelists: Martin Husovec, Estelle Derclaye, Inge Graef, Lorenzo Dalla Corte, and others

15:00-16:30 Session 11E: AI AND RESPONSIBILITY TRACK: AI, human rights and responsibility
Location: DZ-7
15:00
A Robot in Every Home. Automated Care-Taking and the Constitutional Rights of the Patient in an Aging Population
PRESENTER: Andrea Bertolini

ABSTRACT. With populations rapidly aging and welfare costs increasing, many countries consider robotics a potential solution for providing care to senior citizens. Applications, often referred to as social or care robots, are believed to be more efficient and cost-effective than human carers, in particular considering the anticipated technological advancement that should allow the deployment of «a robot in every home» (Gates 2007). Such solutions, however, need to be discussed within the existing legal and ethical framework, primarily as it emerges from European constitutional traditions. The final aim is to determine whether the use of robots in the care of senior citizens is legitimate, upon which conditions, and when intended to pursue which ends. This will also allow us to identify guiding principles for the design of such applications and for the definition of the functions they ought to serve. The article therefore intends to (i) discuss how the right to care is defined in some European legal systems in light of existing international treaties and constitutional traditions. To this end, three countries are selected, the United Kingdom, Sweden and Italy, exemplifying three alternative welfare systems that coexist in Europe (Esping-Andersen 1990, Rhodes & Mény 1998). More specifically, the comparative legal analysis will underline the petition of principle emerging from such legal frameworks, contrasting it with its enactment in terms of services offered to senior citizens in light of national legislation. In that way it will both define the theoretical right to care and determine what is understood as a reasonable standard of care in practice (Szebehely & Trydegård 2012). The article then (ii) describes the status of current technological advancement and research, focusing on the kinds of services that existing and future (yet realistic on a mid-term horizon) applications might offer. Those applications, in particular, differentiate the provision of services, ranging from administering medical treatments to helping with the completion of daily tasks, from social interaction and entertainment. Attention is devoted to the elder's perception of the human-machine interaction. Finally, it (iii) discusses how such applications and functions influence the legal framework, international, constitutional and regulatory, and whether they positively or negatively affect the rights of elderly people, both in their theoretical connotation and in their practical application. To this end it also discusses the ethical framework pursuant to which such an assessment ought to occur, primarily whether a purely utilitarian perspective suffices, merely measuring the level of services offered and the different performance of human and artificial carers. The individual services identified under (ii) will be considered to determine whether they conform to existing constitutional values or rather challenge, and potentially violate, them. The notion of a right to cure will be differentiated from that of a right to care (Calzo 2018), reflecting the distinction between physical well-being and social interaction.

References:
Gates, W.H. (2007). A robot in every home. Scientific American.
Esping-Andersen, Gøsta (1990). The three worlds of welfare capitalism. Princeton University Press.
Rhodes, Martin & Mény, Yves (1998). The future of European welfare, a new social contract?. St. Martin's Press.
Szebehely, Marta & Trydegård, Gun-Britt (2012). Home care for older people in Sweden: a universal model in transition. Health and Social Care in the Community.
Calzo, Antonello Lo (2018). Il diritto all'assistenza e alla cura nella prospettiva costituzionale tra eguaglianza e diversità. Associazione Italiana dei Costituzionalisti.

15:30
Automated journalism and the Freedom of Information: Ethical and Juridical Problems of the AI in the Press Field

ABSTRACT. Technological changes have deeply influenced journalism and the press: from the competition of new media and the challenges of Web 2.0 to the creation of a new way of producing news, i.e., automated journalism. Among the different notions for the use of AI in the press field (automated journalism, robot journalism, news-writing bots, algorithmic journalism), this paper prefers the wording "automated journalism", since it seems to describe this practice best and is the term most used by the scholars who have studied the topic. Automated journalism is the use of AI, i.e., software or algorithms, to automatically generate news stories without any contribution from human beings, apart from that of the programmers who developed the algorithm. In an article produced by AI, the algorithm collects and analyses data and finally writes the piece of news. Automated journalism can operate in two different ways: by producing the news without any journalist's intervention in writing and publication, or by "cooperating" with a journalist, who can be tasked with supervising the operations or improving the article with his or her considerations. The mode of operation of automated journalism is deeply connected with access to and the availability of structured data, which are needed to generate news articles. This paper aims to analyze the ethical and juridical problems of automated journalism, in particular looking at the freedom of information and focusing on the issue of liability and responsibility. From a legal point of view, the analysis embraces the European concept of the freedom of information and media regulation, looking at the ECHR and EU legal systems and the Italian one. The first part of the paper explores the fields of media output in which automated journalism, as currently developed, could produce innovations, as well as the issue of data utilization. The second part analyzes the legal and ethical problems of automated journalism by looking at liability and responsibility and at best practices concerning data use. The main issues are: Who is or should be responsible or liable for a piece of journalism created by AI? Is it necessary to think about new forms of liability or responsibility for programmers? What forms of regulation of this phenomenon should be developed (law, ethical codes)? In the final remarks, some solutions and guidelines are proposed in light of the problems highlighted in the paper.
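For a concrete sense of the simplest form of automated journalism, the sketch below fills a fixed template with structured data; the match data and wording are invented, and production systems use far richer natural-language-generation pipelines:

```python
# Minimal sketch of template-based news generation from structured data,
# the simplest form of automated journalism. All data here is invented.
def generate_report(match: dict) -> str:
    """Fill a fixed template with structured match data."""
    template = (
        "{home} beat {away} {home_goals}-{away_goals} on {date}. "
        "{scorer} scored the decisive goal."
    )
    return template.format(**match)


match_data = {
    "home": "Home FC", "away": "Away United",   # hypothetical teams
    "home_goals": 2, "away_goals": 1,
    "date": "12 May 2019", "scorer": "A. Rossi",  # hypothetical player
}
print(generate_report(match_data))
```

Even this trivial pipeline raises the abstract's core question: if the generated sentence were defamatory or inaccurate, responsibility could plausibly attach to the programmer, the data supplier, or the publisher.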

16:00
Artificial Intelligence and Privacy: An Exploration Through Five Encounters

ABSTRACT. This paper explores the way in which artificial intelligence impacts the conditions for privacy. It does so by untangling both notions and contrasting different takes on each of them. Artificial intelligence, understood as a new phase in the deployment of data-intensive computing (optimization), creates some clear tension points with privacy. This paper provides more clarity about these tension points by contrasting developments in the production of AI with specific approaches to theorizing, regulating and engineering privacy. This allows us to foreground specific questions at the intersection of AI and privacy through the following five 'encounters' between privacy and AI: • Can privacy, as a right to be let alone, continue to exist in a world powered by AI, and under what conditions could it inform a right to refuse AI? • What are the implications for data privacy regulation of the project to make AI 'fair, transparent and accountable'? • What are the possibilities for AI to help ensure privacy in terms of contextual integrity, for instance through intelligent agents? • What are the limitations of protecting privacy, in terms of autonomy, when people are subjected to optimization regimes? • What new forms of and approaches to privacy may be needed in view of the challenges posed by AI?

15:00-16:30 Session 11F: PANEL: State surveillance and privacy
Location: Dz-1
15:00
PANEL: Privacy in the times of bulk state surveillance and law enforcement access to citizens' data
PRESENTER: Eleni Kosta

ABSTRACT. This panel will focus on privacy protection in the times of bulk state surveillance and law enforcement access to citizens' data. A little more than one year after the Law Enforcement Directive (LED – Directive 2016/680) came into force on 5 May 2018, the panel will critique the impact of several high-profile incidents featuring actors within the scope of the LED, including the "lessons learned" from Police Scotland's deployment of cyber tech, the "failure" in deploying facial-recognition CCTV cameras in Glasgow, and the "fallout" from news that police in England and Wales are asking rape victims, as part of their investigations, for consent to access their mobile phones. Taken together, these incidents reveal how law enforcement authorities are struggling to abide by their new data processing obligations. The panel will further discuss the work of civil society on the Dutch 2017 Intelligence and Security Services Act and on policing of the internet and digital investigation. The panel discussion will then move beyond continental Europe and focus on bulk communications data collection and use in the United Kingdom under the UK Investigatory Powers Act, given the two cases pending before the CJEU and the ECtHR respectively. Finally, the panel will concentrate on the fact that Dublin has grown into a hub for internet firms, which means that user data is increasingly subject to Irish law, and will present findings that Irish law and practice fail to meet the requirements of the ECHR and the Charter of Fundamental Rights in a number of ways, particularly in relation to legal basis, transparency and voluntary disclosures.

16:30-17:00 Coffee Break
17:00-17:45 Session 12: Keynote 4: Prof. Virginia Dignum

Keynote AI and Responsibility

Location: DZ-2
17:00
Responsible Artificial Intelligence

ABSTRACT. As Artificial Intelligence (AI) systems increasingly make decisions that directly affect users and society, many questions arise across social, economic, political, technological, legal, ethical and philosophical domains. Can machines make moral decisions? Should artificial systems ever be treated as ethical entities? What are the legal and ethical consequences of human enhancement technologies, or cyber-genetic technologies? How should moral, societal and legal values be part of the design process? In this talk, we look at ways to ensure ethical behaviour by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. We will in particular focus on the ART principles for AI: Accountability, Responsibility, Transparency.