09:00 | Diagnosis: Regulatory Disconnection - time to reorganize data protection law? ABSTRACT. In this paper, we will make the case for the necessity of fundamentally reorganizing data protection law, where the reorganization has to be foundational in character and concern the very boundary concepts around which the law is constructed. We will then move on to show one possible approach to redesigning data protection around new boundary concepts. We will start by explaining the need for such a fundamental redesign. We argue that the impasse of data protection law, caused among other things by the broad material and personal scope of data protection law as well as the highly intensive data protection obligations, constitutes what Brownsword calls a ‘regulatory disconnect’: the technology as viewed by the regulation does not match the present state of technology or its uses. When new data technologies are perceived as posing risks, the automatic reaction is to fit them under the umbrella of data protection, since the current main organizing notion of ‘personal data’ implies a binary application of the law. A consequence of this has been the extension of ‘personal data’ in an attempt to re-establish regulatory connection. It is argued that the concept of ‘personal data’ is failing to establish legal boundaries, leading to data protection regulating everything. Even if this is not the case, the fact remains that the boundaries of this legal domain, as well as legal certainty for the actors involved, are threatened. Data protection regulation is said to be technology neutral, and diagnosing the issue in terms of regulatory disconnect may seem counter-intuitive. Yet technology and its applications seem to stretch data protection law to the limit: a widening range of situations constitutes the processing of personal data, and all data might soon be considered personal. Similarly, the roles and responsibilities within data protection law are still based on scenarios where it is relatively easy to distinguish between a controller and a processor, while recent case law shows otherwise (e.g. Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH). Whenever regulatory disconnect takes place, interpreters face a choice between re-connecting the law through purposive interpretation or leaving the issue to the regulators. Until now, the mismatch in data protection has been addressed through both of these approaches: the Court of Justice has developed case law on the material and personal scope of data protection (e.g. Nowak, Google Spain), and the Data Protection Directive has been updated to the General Data Protection Regulation. Yet the central concepts around which legal protection is organized (e.g. personal data, controller) remain the same, subject to purposive interpretation by the courts. Hence, the regulatory disconnect has been addressed by ‘more-of-the-same’ law, with additional and more effective obligations and a broadened scope. We will demonstrate that a more far-reaching intervention by the regulators is thus necessary. Finally, we will argue that while many approaches to such reorganization are possible, one possible strategy would be to look at how legal protection against harms is organized in other areas of law, to build a model of such protection, including a high-level overview of the boundary concepts used in other domains of law, and to use this model and these boundary concepts for the redesign of legal protection against data-driven harms.
This part of the paper will formulate a research agenda for the search for new organizing notions for a future-proof protection of people against data-processing harms. |
09:30 | Trust and the Smell of Data: a Postphenomenological Experiment PRESENTER: Esther Keymolen ABSTRACT. Increasingly, everyday life is being moulded by smart and networked artefacts. We connect with friends through social media platforms, we fetch our news on Twitter, we find our way in the city by following the instructions of navigation apps. Generally, these technologies are designed in such a way that they are easy to use, perform excellently, and seamlessly fit our daily activities. No manual needed. All in all, their “ready-at-hand-ness” makes them seemingly trustworthy. However, from research we also know that the convenience and usefulness we experience with our smart devices is only half of the story. Behind the interface, and therefore beyond our first-hand phenomenological experience, data is being leaked and shared in a complex network of actants. This makes users of networked technology vulnerable in different ways. They may lose their (informational) privacy and consequently run the risk of being manipulated and of having their action space curtailed. Current strategies to make users more aware of their data-vulnerability are to inform them via terms and conditions and privacy policies (text-based) or to make use of icons and other visible cues (icon-based). These strategies have in common that they appeal to the visual perception of end-users and require, to a certain extent, the rational and conscious processing of information. Among other things, these strategies bring along information overload and information fatigue, negatively impacting their success. This paper brings together insights from postphenomenology and experimental psychology to explore other bodily perception strategies to raise data-awareness. We will refer to these strategies as mediation tactics. The theoretical starting point is that implicit and tacit knowledge is already present in our experience of the world, guiding our interactions. Smell is one facet of this tacit knowledge. Smell informs us on a very instinctive level that something is potentially dangerous (e.g. the smell of decomposing bodies or of decayed food). Because of its strong warning function, smell has also been artificially added, for example to gas, in order to warn people. In collaboration with the Smell of Data project of artist and designer Leanne Wijnsma, we have conducted a small-scale experiment to see whether adding a warning smell to networked artefacts that leak data might change the trust perception of users. The hypotheses are that (1) perceptions of trust are shaped by the technologies at hand, and (2) if we alter the design of the technology, the perception of trust will change as well. The concept of multistability (the openness of artefacts to incorporate different meanings/identities) will be the starting point of this experiment. The aim of this research is twofold: we want to test whether changing the design of the interface by adding smell indeed has an effect on the trust perception of users (empirical stance), and we want to use these results to flesh out key concepts in postphenomenology, in particular the interaction between micro and macro perceptions (conceptual stance). The first findings of this research will be presented. |
10:00 | Data protection through work floor democracy: The German and Dutch example ABSTRACT. This presentation examines the role of collective labour law in the protection of personal data of employees gathered by employers in the context of work activities, specifically the role of works councils in this process. Works councils are institutionalized representative organs that are meant to protect employee interests at the company level. This paper will focus on the German and Dutch iterations of the works council: the ‘Betriebsrat’ and the ‘ondernemingsraad’. In these countries, the works council needs to give consent prior to the introduction of technological work floor surveillance. This paper will examine what these rights entail. With work floor surveillance becoming ever more present, the need for fast, flexible, yet effective regulation grows more relevant. With the debate swinging from self-regulation to national or supranational laws, the works council represents a possible alternative. |
09:00 | Workshop: The Practical Implementation of the EU Data Strategy PRESENTER: Arnold Roosendaal ABSTRACT. The EU has over the past years created a data strategy in the form of a number of related legal instruments. Great impact is seen from the adoption of the GDPR, but the PSD2 Directive and the NIS Directive are also of significant influence. In the short term, the set will be completed by the new ePrivacy Regulation and the Regulation on the free flow of non-personal data. Once complete, the legal framework will cover privacy and data protection in general, data protection in online contexts and marketing, reuse of financial data, data in critical infrastructures (including cloud services), and non-personal data. The different directives and regulations show that the EU is trying to facilitate, or stimulate, economic benefits from data, whilst protecting the privacy rights of individuals and ensuring the security of networks. So the EU has an overarching strategy, or vision, on the value and benefits of data use. The broad scope also implies that a range of professionals is needed for a proper implementation. Not only lawyers, but also information security specialists, application developers, and the users of applications and systems in all parts of an organization have to be involved. Nevertheless, it seems that in practice interdisciplinary collaboration is still not the default approach. Depending on the main addressees of the different legal instruments, either the legal department, the IT department, or, for instance, the marketing department is designated to take care of the implementation. Common distinctions between departments, and in particular the traditional distance between lawyers and techies, remain in place and may even be strengthened. The question that arises is how to counter this phenomenon and how to ensure that value is created in an effective, respectful, and economically valuable manner. The proposal for this workshop is to discuss the different possible approaches towards organizing compliance. Our position is that more attention should be paid to the EU data strategy in a broad sense, stressing the commonalities and connections between the different legal instruments as adopted and prepared by the EU. This will result in the embedding of strategic viewpoints on data in the general strategy of organizations. In addition, for data-intensive organizations, this should be translated into a specific data strategy. Ultimately, an overall strategy can be formulated based on a number of ethical viewpoints that are inspired by the general principles that form the foundations of the EU legislative documents. These principles can be translated into organization-specific norms that direct the do’s and don’ts of the organization as a whole and of the different departments in particular. The aim of the workshop is to bring together stakeholders and representatives from commercial companies, policy, and academia to discuss the approach set out above. Participation of lawyers, technologists, and policy makers is encouraged. The workshop will be led by Arnold Roosendaal (PhD, LLM) and Nitesh Bharosa (PhD). Arnold is director at Privacy Company and Smart Data Company and has extensive experience in advising companies, governments, and non-profit organizations in implementing EU legislation into their business and strategies. Nitesh is head of research & development at Smart Data Company and Senior Research Fellow at Delft University of Technology.
He has worked on multiple large-scale data exchange initiatives and has advised several organisations on how to develop and execute an agile data governance strategy. |
09:00 | Towards an optimal regulation for innovative markets? An example of data sharing under PSD2 ABSTRACT. Almost invisible yet crucial for the proper functioning of the economy, retail payment systems are undergoing a rapid process of technological change driven by innovation in ICT. Despite promises of further economic growth through spurring e- and m-commerce, the innovative services based on new technologies pose novel security risks and raise questions about consumer protection. Unsurprisingly, personal data is omnipresent, both in innovation and in regulatory policy debates addressing the ongoing processes of change. Crucial as a resource for further innovation, its capacity to give a decisive competitive advantage brought it into the limelight of the revised regulatory framework under Directive (EU) 2015/2366 (PSD2). The revised payment services directive, enacted to tackle regulatory gaps and legal uncertainty resulting from the innovation-driven changes in the payment landscape, introduces a data-sharing obligation on market incumbents vis-à-vis innovative newcomers. Commission Delegated Regulation (EU) 2018/389 (RTS SCA) elaborates the technical and operational details of data sharing, the core component of what became known as ‘open banking’. With less than one year until the implementation of the core provisions of the regulation, the adopted framework raises a number of concerns over its potential impact on competition and further innovation in the sector. The paper examines the regulatory framework of the EU retail payments industry seen as an innovation policy instrument containing elements of both supply- and demand-side innovation policy measures. The paper analyses the assumptions underlying the choice and formulation of regulatory objectives and scrutinises the mechanisms employed to achieve a balance between often competing objectives under the PSD2. The systematic approach undertaken in the paper, combining the analysis of the new rules and institutional architecture from the regulatory theory perspective with insights from the innovation theory literature, contributes to a wider debate on how to optimise the regulation of markets for innovation. |
09:30 | The Interface between Big Data & Intellectual Property ABSTRACT. This paper reviews the impact of Artificial Intelligence tools used in connection with ‘Big Data’, mostly in the form of Text and Data Mining (TDM), on the intellectual property system, and vice versa. The paper discusses the obstacles that copyright protection of material included in big data datasets might pose, the role of limitations and exceptions (L&Es) for TDM, and the compatibility of such L&Es with the three-step test. The paper then considers the impact of AI/TDM on patentability and whether parts of AI/TDM systems may themselves be patentable. It then turns to the data exclusivity right in pharmaceutical and chemical data (such as clinical trial datasets) and ends with a discussion of the application of trade secret and confidential information law to AI/TDM. |
10:00 | European Regulation of Mobile Platforms PRESENTER: Ronan Fahy ABSTRACT. This paper examines how two new pieces of proposed EU legislation, the ePrivacy Regulation and the Regulation on Platform Fairness, will apply to smartphone ecosystems. In particular, the paper analyses the potential tension points between the proposed rules and the underlying policy objectives of safeguarding privacy in electronic communications settings and the functioning of the digital economy in the emerging era of platform governance. We conclude with recommendations on how to address these issues in the respective regulations. |
09:00 | Profiling and Chilling Effects: Exploring the Behavioral Mechanisms PRESENTER: Shruthi Velidi ABSTRACT. Problem Description: In recent years, the amount of digital traces that individuals leave has grown exponentially. Given the rapid adoption of digital services and applications across all domains of life, from health and business to politics, education and leisure, our lives have become increasingly datafied (Van Dijck, 2014). The digital traces we leave include both voluntary data from online participation, for example through likes and posts on social media, and involuntary data generated as a by-product of our online activities (Micheli, Lutz, & Büchi, 2018). The latter can also encompass sophisticated metadata, for example about the time of access and location of the user as well as about the device used. Taken together, personal data and metadata create an extremely detailed picture or profile of a person. This is especially the case when data from different sources are combined, for example when Google buys purchasing records from Mastercard to measure the success of search advertising (Bergen & Surane, 2018). Such data is increasingly used to make predictions about individuals, for example about their buying behavior, their creditworthiness (i.e., the likelihood that someone will default on a loan), their job performance (e.g., AI systems such as HireVue that assess candidate videos based on language and facial cues) and their criminal activity (e.g., the COMPAS recidivism algorithm). This targeted aggregation of data from multiple sources associated with an individual is termed profiling. The General Data Protection Regulation (GDPR) defines profiling as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.” According to the GDPR, the data subject should be informed of such profiling and should receive meaningful information about the logic involved, the significance of, and the intended consequences of such profiling. However, how can the data subject be informed about the extent and nature of potential consequences of profiling if they are currently unaware of these practices? While recent research on profiling has focused on its opaqueness, the lack of oversight of the “logic involved” (Citron & Pasquale, 2014), and the existence of a right to explanation of how algorithms and systems work (Wachter, Mittelstadt, & Floridi, 2017), an important aspect that has received less attention is the “envisaged consequences” of such profiling. Indeed, while governments and private actors increasingly mine data to maximize efficiency and accuracy, such practices might lead to invasive and discriminatory outcomes, as they are typically built on biased datasets (Noble, 2018), which require further exploration (Custers, Calders, Zarsky & Schermer, 2013). Content and Methods of our Contribution: In this article, we build upon recent studies of the legal consequences of profiling (Custers, Calders, Zarsky & Schermer, 2013) by investigating the potential chilling effects of autonomous profiling systems from a multi-disciplinary perspective. Our goal is to scrutinize the issue of profiling with respect to pathways for chilling effects and conformity.
After introducing the topic of profiling, we identify and discuss what chilling effects these systems might produce. Typically, profiling is linked to discrimination (e.g. racial profiling) and manipulation (e.g. voter manipulation), but it is also linked to lesser-known aspects such as homophily (or the so-called ‘filter bubble’) as well as the consequences of the procrustean design of such systems. For the full article, we carry out an in-depth multi-disciplinary review of relevant literature. The literature review includes an overview of the key theories in privacy and surveillance studies that relate to profiling as well as a summary of important empirical insights. We then connect profiling practices with chilling effects and elaborate why profiling might have chilling effects that differ from other forms of surveillance, especially due to its predictive aim. The literature review serves to develop hypotheses on the chilling effects of profiling practices as well as of more traditional surveillance technology (social media monitoring). We propose to test these hypotheses using an experimental vignette study in different application areas of profiling such as the job market and credit checks (e.g., for a new rental contract or mortgage). In each experimental vignette study, we will manipulate the type of surveillance (non-profiling surveillance vs. profiling; or non-profiling surveillance vs. simple profiling vs. complex profiling) and test the effect on intended behavioral change in the sense of chilling or alteration of online communication, interactions, and activities. Our conclusions will report the results of the empirical analysis and contextualize our findings within the broader literature on data protection, privacy, and surveillance. This will include recommendations on how chilling effects associated with profiling technology could be reduced and how data controllers can provide meaningful information about the envisaged consequences of their profiling activities. References: Bergen, M., & Surane, J. (2018, August 30). Google and Mastercard cut a secret ad deal to track retail sales. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2018-08-30/google-and-mastercard-cut-a-secret-ad-deal-to-track-retail-sales Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33. Custers, B., Calders, T., Zarsky, T., & Schermer, B. (2013). The way forward. In Discrimination and Privacy in the Information Society (pp. 341-357). Berlin, Heidelberg: Springer. Micheli, M., Lutz, C., & Büchi, M. (2018). Digital footprints: An emerging dimension of digital inequality. Journal of Information, Communication and Ethics in Society, online first. doi: 10.1108/JICES-02-2018-0014 Noble, S. (2018). Algorithms of Oppression. New York, NY: NYU Press. Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197-208. Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99. |
09:30 | Nudging the sharing of data in a fair way PRESENTER: Michele Loi ABSTRACT. Privacy self-management that confers absolute control on the data subject leads to cognitive overload in most people. In order to promote autonomy, it must be focused and choices have to be simplified. This is a conceptual contribution to Taylor’s unified framework for data justice (Taylor 2017). We focus on two questions: 1. What nudges (default rules, attention-enhancing/selecting features, etc.) are necessary in a fair choice architecture for data transactions? 2. What nudges are impermissible in a fair choice architecture? |
09:00 | PANEL: Technology Law as a Coherent Field? PRESENTER: Michael Guihot ABSTRACT. There is an expanding literature that seeks to explore the relationship between technology, law and regulation. Those approaching the area for the first time can quickly become lost in a thicket of different new technologies, applications of those technologies, the benefits, threats and risks associated with each technology, and the plethora of regulatory approaches to balancing those risks and benefits. Perhaps because of this, some have argued that technology law lacks coherence; that is, that it doesn’t ‘hang together’, or that it lacks a professional consensus on a coherent internal organization of materials. Other areas of law that are now more formally recognized, such as Health Law and Environmental Law, suffered the same existential misgivings in their early development. Is it time for Technology Law to similarly join the canon of law subjects? This panel will begin the discussions through which we might achieve a professional consensus on a coherent organization of the parts of Technology Law. Those discussions must address the appropriateness and timeliness of such a classification process and, if technology law does cohere, the thread along which it might do so. Other questions might include: what organizing classification system might best encapsulate the seemingly disparate topics under the rubric of Technology Law? Would another rubric (e.g., ICT/Internet Law, Information Law, or Technology Regulation) be better suited to bring coherence? What further research needs to be carried out in order to map the limits of the field? |
Keynote Data Protection
11:45 | Personal data transfers in international trade and EU law: A tale of two "necessities" ABSTRACT. Technological advances have made cross-border trade increasingly dependent on personal data and its flows across borders. In several cases, notably in the European Union, law restricts international personal data flows to a degree that is arguably beyond what is permitted under trade liberalization commitments. Under the lens of trade law, the EU’s regulatory autonomy to control transfers of personal data is structured by a general exception at the core of which is the so-called “necessity test”. Conversely, according to the EU Charter of Fundamental Rights, protection of personal data is a fundamental right, and therefore transfers of personal data can comply with trade commitments that derogate from fundamental rights only if the derogation passes another test, known as “strict necessity”. While “trade law necessity” requires that restrictions on personal data flows be least trade restrictive, the EU Charter mandates that liberalization of such flows should be least fundamental rights restrictive. This article shows how a simultaneous application of trade law and EU Charter “necessities” to EU restrictions on transfers of personal data creates a Catch-22 situation for the EU, and proposes ways out of this compliance deadlock. |
12:15 | Trade Secrets, models & personal data: conflict in waiting? PRESENTER: M.R. Leiser ABSTRACT. Our presentation addresses the potential conflict between data protection law and commercially sensitive models. We do so through the lens of Machine Learning models that contain personal data, and models whose value depends on the personal data used to train them. The latter conflicts with data subject rights thanks to the expansion of the scope of personal data by the CJEU. We examine these tensions from a technical/legal perspective before positing how to resolve conflicts between the GDPR, Article 16 CHoFR, and ML models. |
12:45 | Economic Drivers Behind “The Law of Everything” ABSTRACT. The increasing datafication of life has led to the question of whether the current definition of “personal data” could eventually turn European data protection law, most notably the General Data Protection Regulation, into a “law of everything”. Whether this will indeed be the case surely depends on a variety of factors, such as how exactly the definitions will be interpreted, but also which particular technological developments will materialize. However, this paper aims to take a different perspective by analyzing one of the underlying forces behind the increasing datafication: the economic incentives stemming from the fact that more informed decision-making tends to lead to better decisions, albeit from the perspective of the decision-maker and not necessarily others or even society as a whole. The general argument of the paper is as follows. There are two levels on which a rational decision-maker, i.e. one that wants to make the best possible decisions given the circumstances, may want to contribute to increasing datafication based on a cost-benefit analysis. First, on the level of individual decisions: as long as the decision-maker expects the benefits from gathering more information to outweigh the cost (monetary, but also in terms of time and effort) associated with gathering this information, we would expect the decision-maker to gather more information. Second, on the level of making it cheaper to gather additional information: if one wants to gather information, but finds that current technologies are still too expensive, one may want to invest in making information gathering cheaper (or better while keeping cost at the same level). Combining these two effects then leads to increasing datafication, to which there may be no boundaries if the cost of processing a marginal unit of information becomes negligible. The further this development goes, i.e. the smaller the cost of improving decisions by using information becomes, the more things could rationally become the focus of data collection and analysis, until eventually everything becomes digitally available. However, universal datafication alone would not yet render European data protection law a “law of everything”. For this to happen, all this data would also need to fall within the definition of “personal data”, understood as any information relating to an identified or identifiable natural person. Having established the potential for everything becoming data, the paper will continue to show that it then follows from statistical principles that virtually any standard of identifiability will eventually be met. A similar argument based on statistical principles can then be made to show that the chance that any given piece of information relates to an identified or identifiable person will approach certainty. It is worth noting that the arguments put forward here are valid even without any particular incentives for increased identifiability of people. Such incentives exist without doubt, e.g. in the form of the endeavor of businesses to increasingly target consumers and customize products as well as prices. While these incentives would further accelerate the above-sketched development, they are not a necessary condition for this argument. In conclusion, the paper shows that there are fundamental economic incentives that should be expected to eventually render everything personal data and hence to render data protection law a “law of everything”.
But this is not to say that there is no possibility of preventing this from happening. On the contrary: if (regulatory) ways can be found to render it increasingly costly to process personal data, we should expect the aforementioned development to come to a stop before literally everything becomes personal data. Such ways could consist of a very high-intensity compliance regime, hefty fines in case of non-compliance with data protection principles, or similar instruments that render it more difficult to legally process vast amounts of data. However, the level at which this would be desirable and, further, which effects this may have on potential competition policy goals, is beyond the scope of this paper. |
11:45 | PANEL: Access rights as a research tool PRESENTER: René Mahieu ABSTRACT. Panel: Justice and the data market. Title: Access rights as a research tool. Description: This panel will discuss the use of subject access requests as a critical method for researching data markets and brokers. Data markets and brokers form the backbone of the datafication of society, yet they are opaque on a technical, juridical as well as economic level. The transparency obligations in the GDPR, and in particular the right of access, may be used as a tool to reveal these veiled practices. The right of access is already being used in this way, for example by researcher David Carroll to shed light on the data practices of Facebook and Cambridge Analytica, and by digital human rights organizations to shed light on the practices of data brokers. In this panel we look at the existing practices of using the right of access as a tool for critical research. We discuss the methodological aspects of using this right as a tool for research, and ask what researchers learn from, and contribute to, digital rights organizations and journalists who are using this tool. Organizers: Hadi Asghari, Joris van Hoboken and René Mahieu. Moderator/chair: Joris van Hoboken and Hadi Asghari. Panelists: Researcher: René Mahieu (doing PhD research on the question of whether the right of access is effective in practice, looking in particular at how this individual right can have a collective function). Researcher: Jef Ausloos (University of Amsterdam; wrote “Shattering One-Way Mirrors. Data Subject Access Rights in Practice”). Researcher: Frederike Kaltheuner (DATACTIVE or Privacy International Data Exploitation Program) or Stefania Milan (DATACTIVE). Researcher: Aaron Martin (Tilburg University). Media: Saar Slegers (an independent journalist who used the right of access as a tool to report on her own personal data trail; by tracing the data trail from a commercial letter she received from an unknown sender, she found her way into the opaque world of data brokers). NGO: Rejo Zenger (works at Bits of Freedom; has been pioneering the use of the right of access for investigative purposes since 2009). |
11:45 | PANEL: A General Framework for Identifying Technology-Driven Legal Disruption: the Case of Artificial Intelligence PRESENTER: Leonard Van Rompaey ABSTRACT. Rationale: Artificial intelligence has been predicted to disrupt the ordinary functioning of society across a broad array of sectors. The resulting impact of AI upon the law is thus both direct and indirect, and can be viewed at three different levels of severity. The first is the granular level of discrete decision-making by individual policymakers or designers; the second involves the constitutional level of core values and the institutions which guarantee those values at the societal level; and the third concerns the existential level of the grand futuristic challenges posed to humanity at large by the potential future advent of highly capable AI. Taken together, the challenges introduced by AI are likely to trigger seismic shifts in the legal and regulatory landscape. This poses a multifaceted and messy problem for framing regulatory responses to AI. While the challenges are introduced by a tight cluster of digital technologies, the legal disruptions that cascade from AI are difficult to organise, manage and respond to. This workshop aims to set out the rationale for establishing a focused, dynamic, conceptual framework built around the concept of legal disruption, and situates this proposal in preceding debates over the creation of distinct legal fields for new technologies (e.g. the debate over ‘cyberlaw’) and over the orientation and approach towards robotics regulation. Workshop method: The proposed model elaborated in this workshop aims to set out the potential trajectories for regulatory initiatives targeting AI, and the impacts of its development and deployment in society. As a methodological framework, this approach allows us to look at the negative externalities caused by new technologies (of which AI is but one example) and by the efforts made to regulate these in a changing legal world. Based on and applicable to various legal disciplines (Constitutional Law, Legal Theory, Medical Law, Tax Law, Governance), this framework can be used on different new technologies in order to better assess and conduct research on the causes and consequences of legal disruption, as well as the current or predictable effects of regulatory efforts targeting the new technologies’ disruptive effects. Understanding the nature and the effects of the legal disruption at hand is necessary in order to determine whether existing regulation is applicable, or whether our existing legal concepts are equipped to deal with the new technology. This first step allows us to connect the different levels of legal-regulatory impacts precipitated by AI in a coherent manner, as well as to elaborate upon how a shift away from looking at AI as an external precipitating hazard, and a re-focusing on the components of exposure and vulnerability, might factor into regulatory responses targeting the societal impact of AI. Workshop set-up: During this 90-minute workshop, we develop the foundations of the framework that we are creating. In addition to developing and clarifying the conceptual framework, the workshop also applies two case studies of legal disruption (one on the use of AI in medicine, the other on blockchain and taxation regimes) as a way of testing its relevance and usefulness.
We actively engage with the audience in order to discuss this model and its applications, improve it, find its limits, and extend it to other cases. Specific presentations and layout:
|
Keynote Digital Clearinghouse
14:15 | Redesigning regulation for digital platforms ABSTRACT. Alexandre de Streel is Professor of Law at the University of Namur where he is the director of the Research Centre in Information, Law and Society (CRIDS). His research focuses on regulation and competition law in network industries. Alexandre is also a Joint Academic Director at the Centre on Regulation in Europe (CERRE) in Brussels, and a member of the Scientific Committee of the Florence School of Regulation at the European University Institute. Alexandre regularly advises international organisations (including the European Commission, European Parliament, OECD, EBRD) and he is an Assessor (member of the decisional body) at the Belgian Competition Authority. |
15:00 | Calculating the citizen: the role of equality in automated decision-making in Dutch law enforcement ABSTRACT. The use of automated decision-making in law enforcement is increasing. Many of these automated decision-making systems target the lower socioeconomic classes, or other vulnerable groups in society. The creation and implementation of these systems, even as decision-making support systems, sorts people into categories, which can reinforce inequalities in society. In my research I carry out three case studies of automated decision-making in enforcement under Dutch administrative and criminal law, to investigate this social sorting. The first case study of my research is the Systeem Risico Indicatie system, or SyRI. SyRI is designed to detect social security fraud through processing and analysing data in pre-established projects: collaborations between administrative bodies and organizations. The second case study concerns the system developed by the private company Totta, which is implemented in several municipalities. This system is specifically designed to detect benefit fraud. The third case study concerns the system ProKid (Plus). ProKid (Plus) makes a risk assessment of every child who comes into contact with the Dutch police. If necessary, action is taken by ‘Bureau Jeugdzorg’. All three systems have a big impact on the citizens involved: they are the subject of far-reaching investigation, benefits can be put on hold, and children can be monitored by several authorities. I analyse these case studies from the perspective of the principle of equality: equal treatment and procedural fairness. Preliminary findings suggest that equality is not taken into account when using automated decision-making. As the (empirical) research is in progress, I hope Tilting Perspectives gives me the opportunity to get feedback on this work in progress. |
15:30 | Understanding vs. Accounting for Automated Systems: The Case of the Seattle Surveillance Ordinance PRESENTER: Michael Katell ABSTRACT. A key challenge to assigning or assuming responsibility for automated systems is understanding them. This is true for practitioners, such as system designers and engineers, but also for the operators of systems employed in civic spheres. We provide insights from an ethnographic study of government officials and community activists involved in the drafting and implementation of the Seattle Surveillance Ordinance, one of several local laws that have been enacted in the U.S. in recent years to require accountability from police departments and other municipal agencies using surveillance technologies. These policy-making efforts provide real-world case studies of efforts to render algorithmic and information systems accountable to public oversight in the absence of comprehensive national policies, as is the case in the U.S. In the ordinances we reviewed, the process of assuming or assigning responsibility for surveillance technology begins with city employees, who are tasked with reporting to the public and to elected officials about a municipality’s use of surveillance technologies. They are expected to demonstrate detailed understandings of the features and functions of the technologies under review, including the full extent of their algorithmic capabilities. We find that the mental models of artificial intelligence employed by city employees do not correspond with the actual features of the systems they are tasked to evaluate, leading to failures in the identification of machine learning and other automated technologies within particular artifacts. To address this gap in understanding, we suggest that surveillance regulations include provisions to make the potential harms of a system’s algorithmic components more legible to political and community stakeholders, and thereby enable them to more effectively assign or assume responsibility for the use and social effects of automated surveillance systems. We situate this policy-making approach within contemporary narratives and critiques of algorithmic transparency. |
15:00 | PANEL: European Data Economy and Regulation of Data PRESENTER: Martin Husovec ABSTRACT. This session will explore the relationship between intellectual property rights, including databases with public sector information (PSI), competition law, Artificial Intelligence, the Internet of Things (IoT) and Big Data. The session will examine the recent proposal to revise the PSI directive, the EC’s evaluation of the PSI and database directives, the provision on text and data mining in the proposal for a directive on copyright in the digital single market, the so-called ‘data economy package’ and the proposal for a regulation on the free flow of non-personal data, as well as a possible data producer right and access-to-data right. Panelists: Martin Husovec, Estelle Derclaye, Inge Graef, Lorenzo Dalla Corte and others |
15:00 | A Robot in Every Home. Automated Care-Taking and the Constitutional Rights of the Patient in an Aging Population PRESENTER: Andrea Bertolini ABSTRACT. With populations rapidly aging and welfare costs increasing, many countries consider robotics a potential solution for providing care to senior citizens. Applications, often referred to as social- or care-robots, are believed to be more efficient and cost-effective than human carers, in particular considering the anticipated technological advancement that should allow the deployment of «a robot in every home» (Gates 2007). Such solutions, however, need to be discussed within the existing legal and ethical framework, primarily as emerging from European constitutional traditions. The final aim is to determine whether the use of robots in the care of senior citizens is legitimate, upon which conditions, and when intended to pursue what ends. This will also allow us to identify guiding principles for the design of such applications, and to work towards the definition of the functions they ought to serve. The article therefore intends to (i) discuss how the right to care is defined in some European legal systems in light of existing international treaties and constitutional traditions. To this end, three countries are selected, the United Kingdom, Sweden and Italy, exemplifying three alternative welfare systems that coexist in Europe (Esping-Anderson 1990, Rhodes & Mèny 1998). More specifically, the comparative legal analysis will underline the petition of principle emerging from such legal frameworks, contrasting it with its enactment in terms of services offered to senior citizens, in light of national legislation. That way it will both define the theoretical right to care and determine what is understood as a reasonable standard of care in practice (Szebehely & Trydegard 2011). The article then (ii) describes the status of current technological advancement and research, focusing on the kinds of services that existing and future (yet realistic within a mid-term horizon) applications might offer. These applications, in particular, differentiate between the provision of services (ranging from administering medical treatments to helping in the completion of daily tasks) and social interaction and entertainment. Attention is devoted to the perception of the human-machine interaction by the elder. Finally, the article (iii) discusses how such applications and functions influence the legal (international, constitutional and regulatory) framework, and whether they positively or negatively affect the rights possessed by elderly people, both in their theoretical connotation and in their practical application. To this end it also discusses the ethical framework pursuant to which such an assessment ought to occur, primarily whether a purely utilitarian perspective suffices, merely measuring the level of services offered and the different performance of human and artificial carers. The single services identified under (ii) above will be considered to determine whether they conform to existing constitutional values or rather challenge them, eventually violating them. The notion of a right to cure will be differentiated from that of a right to care (Calzo 2018), reflecting the distinction between physical well-being and social interaction. References: Gates, W. H. (2007). A robot in every home. Scientific American. Esping-Anderson, Gosta (1990). The three worlds of welfare capitalism. Princeton University Press. Rhodes, Martin, & Mèny, Yves (1998).
The future of European welfare: A new social contract? St. Martin's Press, Inc. Szebehely, Marta & Trydegard, Gun-Britt (2012). Home care for older people in Sweden: A universal model in transition. Health and Social Care in the Community. Calzo, Antonello Lo (2018). Il diritto all’assistenza e alla cura nella prospettiva costituzionale tra eguaglianza e diversità. Associazione Italiana Dei Costituzionalisti. |
15:30 | Automated journalism and the Freedom of Information: Ethical and Juridical Problems of the AI in the Press Field ABSTRACT. Technological changes have deeply influenced journalism and the press: from the competition of new media and the challenges of Web 2.0 to the creation of a new way to produce news, i.e., automated journalism. Among the different notions used for AI in the press field (automated journalism, robot journalism, news-writing bots, algorithmic journalism), this paper prefers the wording “automated journalism”, since it seems to describe this type of journalistic practice better and is more widely used by the scholars who have studied the topic. Automated journalism is the use of AI, i.e., software or algorithms, to automatically generate news stories without any contribution of human beings, apart from that of the programmers who developed the algorithm. An article produced by AI is an article in which the algorithm collects and analyses data and finally writes a piece of news. Automated journalism can operate in two different ways: by producing the news without the journalists’ intervention in writing and publication, or by “cooperating” with a journalist who can be deputized to supervise the operations or to improve the article with his or her considerations. The mode of operation of automated journalism is deeply connected with the access to and availability of structured data, which are needed to generate news articles. This paper aims to analyze the ethical and juridical problems of automated journalism, in particular looking at the freedom of information and focusing on the issues of liability and responsibility. From a legal point of view, the analysis shall embrace the European concept of the freedom of information and media regulation, looking at the ECHR and EU legal systems as well as the Italian one. The first part of the paper shall explore the media outputs in which automated journalism, as currently developed, could produce innovations, together with the issue of data utilization. The second part shall analyze the legal and ethical problems of automated journalism by looking at liability and responsibility as well as best practices concerning data use. The main issues are: Who is or should be responsible or liable for a piece of journalism created by AI? Is it necessary to think about new forms of liability or responsibility for programmers? What forms of regulation of this phenomenon should be developed (law, ethical codes)? In the final remarks, some solutions and guidelines shall be proposed in light of the problems highlighted in the paper. |
16:00 | Artificial Intelligence and Privacy: An Exploration Through Five Encounters PRESENTER: Joris van Hoboken ABSTRACT. This paper explores the way in which artificial intelligence impacts the conditions for privacy. It will do so by untangling both notions and contrasting different takes on each of them. Artificial intelligence, understood as a new phase in the deployment of data-intensive computing (optimization), creates some clear tension points with privacy. This chapter provides more clarity about these tension points by contrasting developments in the production of AI with specific approaches to theorizing, regulating and engineering privacy. This will allow us to foreground specific questions at the intersection of AI and privacy through the following five ‘encounters’ between privacy and AI: • Can privacy, as a right to be let alone, continue to exist in a world powered by AI, and under what conditions could it inform a right to refusal of AI? • What are the implications for data privacy regulation of the project to make AI 'fair, transparent and accountable'? • What are the possibilities for AI to help ensure privacy in terms of contextual integrity, for instance through intelligent agents? • What are the limitations of protecting privacy, in terms of autonomy, when people are subjected to optimization regimes? • What new forms and approaches to privacy may be needed in view of the challenges posed by AI? |
15:00 | PANEL: Privacy in the times of bulk state surveillance and law enforcement access to citizens’ data PRESENTER: Eleni Kosta ABSTRACT. This panel is going to focus on privacy protection in the times of bulk state surveillance and law enforcement access to citizens’ data. A little more than one year after the Law Enforcement Directive (LED, Directive 2016/680) came into force on 5 May 2018, the panel will critique the impact of several high-profile incidents featuring actors under the scope of the LED, including the “lessons learned” from Police Scotland’s deployment of cyber tech, the “failure” in deploying facial-recognition CCTV cameras in Glasgow, and the “fallout” from news that police in England and Wales are asking rape victims for consent to access their mobile phones as part of their investigations. Taken together, the incidents reveal how law enforcement authorities are struggling to abide by their new data processing obligations. The panel will further discuss the work of civil society on the Dutch 2017 Intelligence and Security Services Act and on policing on the internet and digital investigation. The panel discussion will then move outside continental Europe and will focus on bulk communications data collection and use in the United Kingdom under the UK Investigatory Powers Act, given the two cases pending before the CJEU and the ECHR respectively. Finally, the panel will concentrate on the fact that Dublin has grown into a hub for internet firms, which means that user data is increasingly subject to Irish law, and will present findings that Irish law and practice fail to meet the requirements of the ECHR and the Charter of Fundamental Rights in a number of ways, particularly in relation to legal basis, transparency and voluntary disclosures. |
Keynote AI and Responsibility