09:00 | Exploring the emergence of GDPR and Cyber security certification PRESENTER: Alex Li ABSTRACT. Both the GDPR and the draft EU Cybersecurity Act introduce certification as a transparency mechanism. Although data protection and cyber security are closely interlinked topics (e.g. via Article 32 GDPR on security of processing), the regulatory approaches to certification under the GDPR and the Cybersecurity Act differ substantially. This could result in significant (legal) uncertainty and inefficiencies for technology providers, controllers, processors and data subjects/consumers. This expert panel will engage in a lively discussion to explore the issue, addressing the following questions:
1. Certification under the GDPR and under the Cybersecurity Act is not mandatory. What is the legal effect of each certification?
2. Conformity assessment is an essential part of the certification process. Who will conduct the audits? Should audits be left to the market, or should government bodies be involved as well? What are the pros and cons? Which standards will be applied?
3. How will government actors (DPAs, the EDPB, CSIRTs, etc.) and market actors (certification bodies, auditors, etc.) interact in rolling out certifications under both regulatory approaches?
4. While certification may be somewhat new in privacy, it is not new in information security. What can we learn and leverage from information security certification?
5. What is the extraterritorial impact of CSA and GDPR certifications? How does it affect organisations operating outside the EU?
6. How do we envision certification under the European Cybersecurity Act and under the GDPR co-existing in practice? How can we avoid unnecessary duplication of work? How do we avoid incompatibilities?
7. Could privacy and cyber security certification in the future also become part of the CE marking under the EU New Approach? What are the opportunities and limitations? |
09:00 | Data of Public Undertakings – Towards a Common Framework ABSTRACT. Public undertakings generate a considerable amount of valuable data in the course of performing services of general interest, e.g. data on traffic flows, timetables, locations, and electricity grids. The contribution discusses the innovation-related EU legal framework on access to and re-use of such data. It focuses on the interplay between the recast PSI Directive, competition law, information access laws, and public service obligations, and intends to add to the broader debate on the public-private interface in a data-driven economy. |
09:30 | Data Standardization: Portability and Interoperability in an Interconnected World PRESENTER: Michal Gal ABSTRACT. Data standardization is key to facilitating and improving the use of data. Absent data standardization, a “Tower of Babel” of different databases may be created, limiting synergetic knowledge production. Based on interviews with data scientists, this paper identifies three main technological obstacles to data portability and interoperability: metadata uncertainties, data transfer obstacles, and missing data. It then explores whether market-led standardization initiatives can be relied upon to increase welfare, and evaluates the role governmental-facilitated standardization should play, if at all. |
10:00 | Data, Innovation and Transatlantic Competition in Finance: The Case of the Access to Account Rule PRESENTER: Giuseppe Colangelo ABSTRACT. Technological innovation is transforming the structure of the retail banking sector. Traditional business models are facing a rapid disruption process led by the emergence of FinTech companies. In order to offer payment initiation services and account information services, third-party providers need to access customer accounts. The EU has taken the lead in this transition by providing, within the revised Payment Services Directive, a sector-specific portability regime (the access to account rule) expressly aimed at fostering competition. |
09:00 | The sense and scope of the protection of cyber-consumers in the French legal system: insights from mobile wellness applications ABSTRACT. French consumer law is based on the premise of consumer protection, with the consumer considered as the party to a contract concluded with a professional. The same premise informs legal doctrine on the protection of the cyber-consumer, i.e. a consumer defined by the particular environment in which they consume: the online context. However, an analysis of the structure of consumer law and of doctrinal discourse reveals the emptiness of this premise; it also reveals the inability of the law to protect the consumer in the “consumer society”. In addition, the emergence of the “exposure society”, resulting in particular from the use of mobile applications, adds new risk factors for consumers. In considering these factors, the focus is on the growing inability of the law to provide effective protection in the online context. Based on the example of wellness applications (fitness and nutrition), the existence of new control systems and power mechanisms will be outlined. Accessible on mobile phones, which have become an extension of the user's body, these applications govern the physical and mental health of consumers. This contribution supports the idea that in a world where data is the cornerstone of a competitive global economic policy, (cyber-)consumer protection seems utopian. This protection resembles a reassuring discourse that allows the “spectacle of consumption” to continue, in favour of a structural power of financial capital that influences the neoliberal State. For users of mobile applications in general and wellness apps in particular, the weakness is the spiral of consumption, which is maintained by the need for self-exposure. Recognising this fundamental weakness, which is intrinsic to the economic system and which the law admits without overcoming, makes it possible to revise the postulate of consumer law in order to construct a legal discourse in accordance with the role the law may (or wants to) play. |
09:30 | SAFETY? SECURITY? TWO CULTURES? Rearticulating safety and security cultures in critical infrastructures through the lens of co-production PRESENTER: Michiel Van Oudheusden ABSTRACT. Contemporary technological societies face an increasing number of crises, such as environmental catastrophes, technological and industrial crises, and terrorist attacks (Bijker, Hommels, & Mesman, 2014). These crises may have unintentional human or natural causes, may be rooted in intentional and malevolent acts, or may comprise a mix of motivations and behaviors (Khripunov & Kim, 2008). Particularly vulnerable to these growing threats are critical infrastructures such as the energy sector and nuclear power plants. To prevent and mitigate the risks confronting them, these infrastructures have over time developed measures to increase first and foremost their safety, and subsequently their security. Research analyzing the implementation of those measures in critical infrastructures is typically split into two separate domains: safety culture and security culture. As a consequence, no stabilized and comparable definitions of safety and security have been developed. Nor is it clear how the two concepts relate to one another, and whether they can coexist, as is often assumed by institutional regulatory and policy bodies (e.g. International Atomic Energy Agency, 2016b) and some authors (Gandhi & Kang, 2013; Reniers, Cremer, & Buytaert, 2011). We may hence ask: how do safety and security cultures interact? Which synergies and discrepancies do they entail? What may be the impact of their articulation on risk mitigation? To address these questions, this paper provides a first-of-its-kind systematic literature review of the concepts of safety culture and security culture in critical infrastructures. It highlights several lacunae, such as a certain fuzziness among definitions due to ontological contradictions in conceptions of safety and security cultures. It also stresses the non-integration of technological and procedural elements as active elements of safety and security cultures. To overcome the identified pitfalls, it suggests mutually informed and comparable definitions of safety and security cultures that incorporate technological, procedural and human aspects and mobilize vulnerability and resilience approaches. Building on this theoretical endeavor, it proposes an integrated model of safety and security cultures that paves the way for empirical research within critical infrastructures. |
09:00 | The role of courts in anti-innovative patent enforcement ABSTRACT. Patent scholars increasingly worry about the adverse effects injunctions can have on competition and innovation. Given patent law’s purpose of fostering innovation, these concerns appear reasonable. At the same time, courts seem poorly situated to assess the consequences of an injunction in any given case. My paper explores this conundrum and investigates (i) how patent courts can evaluate the consequences of injunctions for competition and innovation; and (ii) whether it is desirable that they do so on a case-by-case basis. |
09:30 | Mind the Gap PRESENTER: Johan David Michels ABSTRACT. Our research concerns the property status of digital files stored in the cloud. We argue that such files do not currently constitute property under the law of England and Wales, which does not recognize possession of intangible items. This can lead to gaps in the rights and remedies available to both users and providers of cloud services, since issues like access to files will be governed mainly by the terms and conditions of cloud contracts, which are often highly restrictive. |
09:00 | PANEL BotLeg I: Public-private actions against botnets: issues of legitimacy and governance PRESENTER: Bert-Jaap Koops ABSTRACT. Security and safety are public policy goals, with a key responsibility for governments to safeguard them. However, in many areas, governments are not in a position to sufficiently ensure security or safety by themselves—they depend on assistance from private parties in governing a sector to achieve public policy goals. Public-Private Partnerships have emerged over the past decades as a practical necessity and a potential solution to governance challenges. At the same time, these partnerships raise questions of legitimacy, since legality, accountability, and checks and balances are not a given when governance is partially, and not always transparently, outsourced to private actors and PPPs. In this panel – the first of two panels discussing findings of the NWO-funded BotLeg project “Public-private actions against botnets: establishing the legal boundaries” – we will discuss general issues of legitimacy and governance of involving private actors in three sectors with different public policy objectives: cybersecurity, humanitarian aid, and food safety. Speakers will discuss the legitimacy of public-private partnerships, the conditions for the execution of public tasks by private actors, the distribution of responsibilities, and associated questions of accountability and liability. The first context is combatting botnets, which facilitate many forms of cyber-attack and are a key challenge in cybersecurity. A wide set of anti-botnet strategies, including pro-active strategies and public-private co-operation, is needed to detect and dismantle botnets. We will discuss the need for involving private actors, the challenges of distributing responsibilities among the entire spectrum of actors in the field according to their capabilities, and reflect on what legitimacy entails in this context. The second context is data partnerships in the humanitarian sector, involving international organizations and specialist technology firms. Humanitarian organizations are encountering enormous challenges in managing, integrating, and analyzing data from global operations, while facing mounting donor pressure to create efficiencies in operations, reduce costs, and counter fraud. Data partnerships are viewed as a mode of achieving solutions, but in the humanitarian context these raise critical questions about the legitimacy of actors, the lack of agency among beneficiaries, and other novel governance challenges. The third context is food safety. In a context of increasingly globalized food supply chains, a growing concentration of market power among food retailers, and a perceived lack of capacity among national governments to regulate food safety, private schemes have developed to become a central governance instrument to deal with the systemic risk of food safety outbreaks. These private schemes, both national and transnational, possess a wealth of data on industry compliance and risk. Governments around the world are seeking to enrol the schemes in their enforcement policies to bolster their own capacities. While the resulting partnerships may make the deployment of public resources in the field more efficient, the arrangements also trigger important considerations of legitimacy and accountability.
Keynote: Health and Environment
11:45 | Personal data management and privacy management: barriers and stepping stones PRESENTER: Nitesh Bharosa ABSTRACT. In the wake of the General Data Protection Regulation (GDPR) there is increasing interest in providing individuals with more control over their personal data. Yet the concept of personal data management is poorly studied. What does personal data management actually mean from an individual perspective? And what is needed to enable personal data management in a society? This paper investigates these questions. Drawing on a case study in the financial domain, we provide a more focused understanding of the current situation (without personal data management) and a scenario with personal data management. We propose that the following components are needed to facilitate personal data management on a large scale:
(1) easy-to-use, high-assurance electronic IDs;
(2) personal data spaces that allow for secure storage and qualified interactions with data;
(3) data specifications (standardisation of syntax, semantics and structure) allowing for the automated processing, without manual rekeying or conversion, of data exchanged between systems;
(4) remotely accessible tooling and features (e.g. data processing and analysis);
(5) technical interfaces (APIs) for information sharing (posting and retrieving data, including consent) that can be used by all actors across multiple financial domains;
(6) support for organisations that want to use the previously mentioned components; and
(7) a cross-domain public-private governance that steers the development and adoption of these components.
The paper concludes with a discussion of pathways for developing these components and facilitating personal data management in practice. |
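As a hedged illustration of component (5), the sketch below shows what a consent-gated sharing interface for a personal data space could look like. All class and method names are hypothetical and not drawn from the paper.

```python
# Minimal sketch of a consent-gated personal data space API.
# Hypothetical names throughout; not a standard or the paper's design.
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class PersonalDataSpace:
    """Stores a subject's data; releases it only with recorded consent."""
    owner: str
    _records: Dict[str, dict] = field(default_factory=dict)
    _consents: Dict[str, Set[str]] = field(default_factory=dict)  # record id -> consented parties

    def post(self, record_id: str, payload: dict) -> None:
        self._records[record_id] = payload

    def grant_consent(self, record_id: str, party: str) -> None:
        self._consents.setdefault(record_id, set()).add(party)

    def retrieve(self, record_id: str, party: str) -> dict:
        # Retrieval is refused unless the owner granted consent to this party.
        if party not in self._consents.get(record_id, set()):
            raise PermissionError(f"{party} has no consent for {record_id}")
        return self._records[record_id]


space = PersonalDataSpace(owner="alice")
space.post("income-2024", {"gross": 42000})
space.grant_consent("income-2024", "mortgage-broker")
print(space.retrieve("income-2024", "mortgage-broker"))  # released with consent
```

The design point the paper gestures at is that consent travels with the data interface itself, rather than being a one-off legal formality handled elsewhere.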
12:15 | Improving privacy choice through design: How designing for reflection could support privacy self-management PRESENTER: Arnout Terpstra ABSTRACT. In today's society, online privacy is primarily regulated by two main regulatory systems: (command-and-control) law and notice-and-consent (i.e. agreeing to terms of agreement and privacy policies). Both systems inhibit reflection on privacy issues by the public at large and restrict the privacy debate to the legal and regulatory domains. However, from a socio-ethical standpoint, the general public needs to be included in the privacy debate in order to make well-informed decisions and contribute to the law-making process. Therefore, we argue that privacy regulation must shift from a purely legal debate and simple one-time yes/no decisions by 'data subjects' to public debate and awareness and continuous reflection on privacy and privacy decisions by users of IT systems and services. To allow for this reflective thinking, individuals need to (1) understand what is at stake when interacting with digital technology, (2) have the ability to reflect on the consequences of their privacy decisions, and (3) have meaningful controls to express their privacy preferences. Together, these three factors could provide for knowledge, evaluation and choice within the context of online privacy. In this article, we elaborate on these factors and provide a design-for-privacy model that introduces friction as a central design concept that stimulates reflective thinking and thus restores the privacy debate to the public arena. |
11:45 | PANEL: Data Subjects as Data Controllers PRESENTER: Michèle Finck ABSTRACT. The GDPR in essence establishes a binary distinction between the data subject and the data controller as two separate legal entities. On the one hand, the Regulation primarily envisages situations where the data subject directly or indirectly provides personal data to a data controller. On the other hand, the data controller is assumed to determine the means and purposes of personal data processing and to subsequently carry out related processing activities (alone or together with others) independently of the data subject. This, of course, is a common scenario in many contexts. Yet, it is also becoming increasingly clear that the binary distinction between the data subject and the data controller does not hold up in many other contexts. One well-known problem area is cloud computing, where it is often unclear if a cloud provider is merely a data processor; however, newer, allegedly more privacy-protective technologies are raising still more issues. Our panel would examine and compare instances where new data governance models and technological evolutions challenge the GDPR's binary divide between the data subject and the data controller. We would present four different papers that draw attention to this issue respectively in relation to databoxes in the smart home context (Lilian Edwards), Apple (Michael Veale), Personal Information Management Systems (Nicolo Zingales) and blockchain technologies (Michèle Finck). In these various scenarios the data subject, at least to some degree, contributes to the determination of the means and purposes of data processing - a role carried out by data controllers under the GDPR. This leads us to examine whether the data subject herself should be qualified as a data controller. Relatedly, we will also discuss the applicability of the household exemption in light of relevant case law and the reformulation of the corresponding recital in the GDPR. Particular attention will also be paid to the concept of joint controllers in light of the regulatory guidance and case law on joint controllers, particularly the seminal ruling in Wirtschaftsakademie Schleswig-Holstein, which indicates that in at least some of the scenarios discussed data subjects are likely to be joint controllers. Lilian Edwards will examine databoxes. Databox is what is sometimes known as a Personal Data Container but can also conceptually be described as an operating system for a home user's personal data. The aim of Databox is to enable a home user to make use of services normally delivered via sharing their personal data with an external service provider, without giving away their data in the process. This has obvious advantages given the prevalent distrust in the data-sharing economy and the lack of control and oversight generally experienced by users of consumer cloud services such as social media, data aggregators, price comparison engines, and switching sites. Serious legal questions arise, however, as to whether Databox is ever or often a data controller or even a processor; whether the user is the sole data controller; whether the domestic purposes exemption applies and, if so, to whom; and whether this conceptual framework underpinning the GDPR really scales to privacy-preserving infrastructure such as Databox at all.
The shift from a world of products to services back to something that is effectively neither will also be interrogated. Michael Veale will consider how firms are increasingly seeking to shed the label of ‘data controller’ in relation to data they hold by locking down systems in ways which privilege confidentiality, but limit control. He will draw on a case study from an ongoing investigation he triggered against Apple Inc. relating to their refusal to provide access to recordings and transcripts collected in connection with the Siri voice assistant. He will then look ahead to the situation some companies appear to be moving towards: where they determine how data is used and transformed in software and hardware they control at the design stage, but do not continuously centralise or change the purposes of such processing. In these cases, companies build large, data-driven infrastructures whilst trying to ‘bind their hands’ using new technologies, such as privacy-preserving computation and federated computation (a minimal sketch of federated computation follows this abstract). Does the GDPR anticipate such approaches? Does the traditional definition of data controller hold up in this situation, and if not, where are the tensions in a world with a complex mix of decentralised computing but centralised design? Nicolo Zingales will focus on Personal Information Management Systems (PIMS), which offer an architecture for centralised storage, management and permissioned sharing of personal data on an individual’s device. He will discuss the data protection implications of choices made in the design and governance of this architecture, in particular the type of encryption chosen, the degree of openness of the ecosystem to third-party applications, and the instructions provided by PIMS to their users and business partners. Based on the results of an empirical analysis of terms of service and practices of a selected sample of PIMS, it will be shown that a common thread among these companies is the attempt to avoid qualification as a data controller for activities beyond mere storage, in particular by delegating responsibilities onto users and third-party applications. Additionally, innovative governance mechanisms (including trusts and distributed decision-making power) are adopted to separate the operational side from the rule- and policy-making side, thus reducing the level of influence of PIMS on the 'effective means' of processing. The presentation will conclude by reviewing the rationale for the recent expansion of the concept of joint controllership, making the case for a principled and scalable approach towards the obligations of providers of critical infrastructures such as PIMS. Finally, Michèle Finck will present a paper that examines the data subject and data controller roles in the context of blockchain technologies. Whereas there are multiple points of tension between the GDPR and blockchains, determining the identity of the data controller in these networks might be the hardest to resolve. The paper briefly introduces blockchain technologies and illustrates that, particularly in public and permissionless systems, there simply isn’t one legal entity that determines both the means and purposes of data processing. This leads to an examination of the notion of joint controllers and its application to such contexts, considering in particular that agreements allocating responsibilities cannot easily be concluded between parties that do not know one another.
The paper further critically engages with the French DPA’s recent guidance on the application of the GDPR to blockchains, which argued firstly that data subjects are in at least some circumstances also data controllers in a blockchain network, and secondly that the household exemption applies where individuals engage in such networks in a personal capacity. We are hoping that this panel will allow us to further develop our respective research projects and identify common themes and problems. We are also hoping that it would be of interest to many participants in the conference, as it offers insights into less well-known methods of data governance and processing and engages with provisions and concepts of the GDPR that are of general interest to anyone working in this area. |
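Michael Veale's scenario turns on techniques such as federated computation, where a provider shapes processing at design time but never centralises the raw data. As a hedged illustration of the pattern (a toy sketch under assumed names, not material from the panel), one round of federated averaging:

```python
# One round of federated averaging: devices compute local model updates;
# only aggregated parameters reach the provider, raw data never leaves
# the device. Toy illustration only.
from statistics import fmean

def local_update(weights: list[float], local_data: list[float]) -> list[float]:
    """Toy 'training' step: nudge each weight toward the local data mean."""
    target = fmean(local_data)
    return [w + 0.1 * (target - w) for w in weights]

def federated_average(updates: list[list[float]]) -> list[float]:
    """Server-side aggregation of parameter vectors, without raw data."""
    return [fmean(column) for column in zip(*updates)]

global_weights = [0.0, 0.0]
device_datasets = [[1.0, 2.0], [3.0, 5.0], [2.0, 2.0]]  # stays on each device
updates = [local_update(global_weights, d) for d in device_datasets]
global_weights = federated_average(updates)
print(global_weights)
```

The legal puzzle the panel raises maps onto the code's structure: the provider writes `local_update` and `federated_average` (design-stage control) yet only ever sees the averaged output.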
11:45 | PANEL: Workshop on AI, Robotics and Legal Responsibility in the Age of Big Data PRESENTER: Ugo Pagallo ABSTRACT. AI, robotics and Big Data are intertwined and converging, and will drastically influence business models, legal institutions, social communities and facilities in the digital age. A collection of everyday physical smart systems, equipped with microchips, sensors, and wireless communications capabilities and connected to the internet and to each other, will receive, collect and send myriads of user data, track activities and interact with other devices, in order to provide more efficient services tailored to users’ needs and desires. The near future will bring more complex and multi-task intelligent devices that use AI and predictive algorithms to make decisions while relying on external distributed data sources. As the scope of intelligent agents’ activities broadens, it is important to ensure that designers, producers, manufacturers and/or end-users of such complex technological systems will be held legally responsible and will not make irrelevant, counter-productive, harmful or even unlawful decisions. As the intensity and magnitude of this technological revolution are still not fully understood, the law may struggle to evolve quickly enough to address the challenges it raises. Setting a legal framework which ensures an adequate level of protection of personal data and other individual rights involved, while providing an open and level playing field for businesses to develop innovative data-based services, is a challenging task. This requires examining how the relationship between human beings and digital technologies affects the role of legal responsibility and social accountability in the governance and regulation of AI, robotics and predictive algorithms. The research therefore concerns how the needs of data protection, business interests and social issues can best be accounted for by law, and has to be explored from a multidisciplinary perspective spanning law, economics, social science, computer science and robo-ethics.
Program: deadline 1 November 2018; conference 15-17 May 2019, Tilburg University; organized by the University of Turin.
Chair: Ugo Pagallo, University of Turin; Moderator: Massimo Durante, University of Turin.
Panel (70 minutes): names of the speakers will be provided.
Posters (20 minutes): Paola Aurucci, San Raffaele Hospital, Center for Advanced Technology in Health and Wellbeing, Milan; Jacopo Ciani Sciolla, University of Turin. |
11:45 | PANEL BotLeg II: Public-private actions against botnets: improving law and practice PRESENTER: Bert-Jaap Koops ABSTRACT. Combatting botnets, which facilitate many forms of cyber-attacks, is a key challenge in cybersecurity. The classic crime-fighting approach of prosecuting perpetrators and confiscating crime tools fails here: botnets cannot simply be 'confiscated', and law enforcement's reactive focus on prosecuting offenders is ill-suited to deal effectively with botnet threats. A wider set of anti-botnet strategies, including pro-active strategies and public-private co-operation, is needed to detect and dismantle botnets. Public-private anti-botnet operations, however, raise significant legal and regulatory questions: can data about (possibly) infected computers be shared among private parties and public authorities? How far can private and public actors go in anti-botnet activities? And how legitimate are public-private partnerships in which private actors partly take up the intrinsically public task of crime-fighting? In this panel – the second of two panels discussing findings of the NWO-funded BotLeg project “Public-private actions against botnets: establishing the legal boundaries” – we will discuss legal opportunities and legal obstacles for private actors to engage in botnet mitigation at different stages of the botnet lifecycle. We will zoom in on the case of fighting DDoS attacks, discuss what they are and how they can be mitigated, especially with respect to IoT-powered DDoS attacks, and discuss what actions law enforcement authorities are taking to address this issue. This panel features invited speakers from diverse backgrounds – and points of view – united in their conviction that mitigating botnets and DDoS attacks is a shared effort. The debate goes beyond the boundaries of criminal law and the confines of data protection to discuss the broader regulatory landscape. It revisits matters of intermediary liability in the face of cybercrime and questions the norms of product liability in the age of the IoT.
11.45-11.50 Bert-Jaap Koops – Word of welcome
11.50-12.15 Karine e Silva – Legal bottlenecks in botnet mitigation: a transatlantic overview
12.15-12.30 Jair Santanna – DDoS attacks: what they are and how they can be mitigated
12.30-12.45 Cristian Hesselman – IoT-powered DDoS attacks and how they can be mitigated (e.g., SPIN)
12.45-13.00 Floor Jansen – The police’s fight against DDoS attacks
13.00-13.15 Q&A and discussion
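For context on the mitigation techniques the speakers address, the sketch below illustrates one generic building block of home-network defences against IoT-powered DDoS: per-device traffic-anomaly detection. It is a hypothetical illustration, not the actual design of SPIN or of any speaker's system.

```python
# Flag a home-network device whose outbound packet rate jumps far above
# its own recent baseline - a common precursor to quarantining a hijacked
# IoT device. Generic sketch; thresholds and names are assumptions.
from collections import defaultdict, deque

WINDOW = 5          # traffic samples kept per device
THRESHOLD = 10.0    # flag when the rate exceeds 10x the device's baseline

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(device: str, packets_per_sec: float) -> bool:
    """Record a traffic sample; return True if the device looks hijacked."""
    samples = history[device]
    baseline = sum(samples) / len(samples) if samples else None
    samples.append(packets_per_sec)
    return baseline is not None and packets_per_sec > THRESHOLD * baseline

for rate in [3, 4, 3, 5, 4, 900]:      # final sample: sudden DDoS-like burst
    if observe("smart-camera", rate):
        print(f"smart-camera anomalous at {rate} pps: quarantine candidate")
```

The legal questions the panel raises sit exactly at this point: detecting and quarantining a neighbour's infected device means processing data about (possibly) infected computers.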
Keynote: Justice and the Data Market
15:00 | Capturing licence plates: police participation apps from an EU data protection perspective PRESENTER: Jonida Milaj-Weishaar ABSTRACT. In October 2017 a Pokémon Go-like smartphone app called ‘Automon’ was revealed as one of several new initiatives to increase the public’s contribution and engagement in police investigations in the Netherlands. Automon is designed as a game that prompts participants to photograph licence plates to find out whether a vehicle is stolen. Participants score points for each licence plate photographed, and if a vehicle is indeed stolen they may also qualify for a financial reward. In addition, when someone reports that a vehicle has recently been stolen, game participants in the vicinity receive a push notification and are tasked with searching for that specific vehicle and licence plate. This paper studies the example of the Automon app and contributes, from a legal point of view, to the existing debate on crowdsourced surveillance and the involvement of individuals in law enforcement activities. It analyses for the first time the lawfulness of initiatives that proactively require the involvement of individuals in law enforcement activities and confronts them with the data protection standards of the European Union (EU). We argue that the Automon app design fails to comply with these standards and that any new legal intervention to regulate the field must be introduced at EU level. |
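The abstract describes the app's mechanics in enough detail to reconstruct the basic data flow. As a hedged illustration (a hypothetical reconstruction, not Automon's actual code), a minimal sketch of the processing steps the paper scrutinises:

```python
# Hypothetical reconstruction of the described data flow: a participant
# photographs a licence plate, the app checks it against a stolen-vehicle
# registry, awards points, and pushes alerts to nearby participants.
STOLEN_REGISTRY = {"XX-123-Y"}          # plates reported stolen (sample data)
scores: dict[str, int] = {}

def report_plate(user: str, plate: str) -> bool:
    """Each scan is itself personal data processing: plate + user + location."""
    scores[user] = scores.get(user, 0) + 1   # points per photographed plate
    return plate in STOLEN_REGISTRY          # a hit may yield a reward

def notify_nearby(plate: str, users_in_vicinity: list[str]) -> None:
    for u in users_in_vicinity:
        print(f"push to {u}: look for recently stolen vehicle {plate}")

if report_plate("player1", "XX-123-Y"):
    print("hit: player1 qualifies for a reward")
notify_nearby("ZZ-999-A", ["player2", "player3"])
```

Even this toy version makes the paper's concern visible: every scan links a licence plate (personal data about the vehicle's keeper) to a participant and a place, outside any law-enforcement processing regime.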
15:30 | Human rights in personal data processing: An analysis of the French and UK approach ABSTRACT. The current technological and social scenario, characterised by increasingly complex innovations and data-intensive systems, forces us to reflect on how to address data protection in this transition towards a society increasingly shaped by automated data processing. This urges developers and policy makers to devise risk analysis and risk management models that go beyond the traditional focus on data quality and security and take into account the impact of data processing on human rights and fundamental freedoms. The main challenge in the design of these broader assessment models concerns the outline of a general paradigm of values to be used as a benchmark in the assessment process. From this perspective, the main goal of this research paper is to determine whether, and to what extent, data protection authorities take into account human rights at large, both in their decisions and in the guidelines they provide. In carrying out this analysis, the paper focuses on the approaches adopted by the French and UK data protection authorities. These authorities show two different ways of addressing these issues, since the French authority is mainly centred on case law and the ICO on general guidance. Although the documents adopted differ in nature, which affects the extent and elaboration of their references to human rights, they show a plurality of rights and freedoms – other than the right to privacy and the right to data protection – taken into account by data protection authorities. This result therefore confirms the need to develop a broader impact assessment model that considers all the human rights and fundamental freedoms likely to suffer prejudice in the context of a given processing of personal data. |
16:00 | Detecting new approaches for a Fundamental Rights Impact Assessment to Automated Decision-Making ABSTRACT.
15:00 | The Regulation of Online Political Microtargeting in Europe PRESENTER: Ronan Fahy ABSTRACT. This paper examines how political microtargeting is regulated in Europe, and the strengths and weaknesses of that regulation. The paper examines the question from three perspectives, namely data protection law, freedom of expression, and sector-specific rules for political advertising. The paper analyses the interplay of the different legal regimes, and assesses whether these regimes leave important issues unaddressed. We also explore whether regulation should be amended to mitigate the risks of microtargeting. |
15:30 | From parallel tracks to overlapping layers: GDPR and e-Commerce Directive towards convergence of legal regimes ABSTRACT. Legal regimes increasingly overlap in the information society. The development of new technologies has encouraged the development of new businesses challenging the horizontal and vertical separation between different markets. Focusing on the framework of the Digital Single Market, it is possible to underline the challenging convergence of the system of the e-Commerce Directive and that of data protection. From originally parallel tracks, the two systems will likely overlap, raising new issues and questions about their relationship. Anyone wondering where this story started would probably find an answer by looking at the EU legal framework at the beginning of this century. Indeed, Directive 2000/31/EC (the “e-Commerce Directive”) expressly clarified that its scope of application does not include questions relating to information society services covered by Directives 95/46/EC and 97/66/EC. This system applied until the entry into force of Regulation (EU) 2016/679, also known as the General Data Protection Regulation (“GDPR”). The GDPR has not only reviewed the EU data protection legal framework, ensuring a high degree of uniformity between Member States’ legislation, but has also eliminated the traditional separation between the system of the e-Commerce Directive and that of the Data Protection Directive. The GDPR clarifies that its application should not affect the rules provided for by the e-Commerce Directive, in particular those regarding ISP liability. As a result of this potential overlap, one of the main issues concerns the extension of the “safe harbour” regime to third-party conduct violating data protection rules. This extension would likely encourage online intermediaries to check and monitor such content as well in order to avoid liability arising from their awareness of it, with potential chilling effects on freedom of expression. Although the GDPR would likely extend the possibility of applying the safe harbour exemption to the field of data protection, the limits to the scope of the e-Commerce Directive are still in force. Moreover, the safe harbour regime would apply only to third-party content; the extension of this regime should not be considered an exemption from liability for the unlawful processing of personal data performed directly by online intermediaries. Another issue to take into consideration is the increasing control which online intermediaries exercise over their online spaces, a trend driven mainly by the evolution of the algorithmic society. Since algorithms allow hosting providers to play a more active role in processing data and performing online content management activities, the safe harbour extension to third-party data protection violations could risk blurring the notions of ‘data controller’ and ‘Internet service provider’, affecting the application of the rules in the field of data processing and ISP liability. The well-known Google Spain case highlighted this issue even before the adoption of the GDPR. In this scenario, the main question is whether the increasing use of algorithms and AI technologies will transform online intermediaries into data controllers due to their active participation in processing activities.
From an EU perspective, this work underlines the main challenges deriving from this new scenario in which the data protection and e-Commerce Directive regimes are converging, and proposes solutions highlighting the benefits that this convergence of legal regimes can bring. |
16:00 | Personalised pricing and EU law PRESENTER: Alexandre de Streel ABSTRACT. According to the OECD (2018:6), price personalisation is the ‘practice of price discriminating final consumers based on their personal characteristics and conduct, resulting in each consumer being charged a price that is function – but not necessarily equal – to his or her willingness to pay’. With the development of big data and algorithmic pricing, personalised prices are a growing policy concern. It is thus timely to analyse how EU law regulates the issue. Our paper aims to review the main EU rules applicable to personalised pricing, their conditions of application and their effects, and to make policy recommendations to increase the consistency and effectiveness of those rules. The first section of the paper briefly introduces the issue by situating price discrimination within the different degrees of discrimination developed by economic theory, analysing its pervasiveness and its economic impact. The second section of the paper deals with the transparency rules applicable to personalised prices. It deals with EU consumer protection rules, in particular the Directive on unfair commercial practices, the Directive on misleading and comparative advertising and the Directive on consumer rights, and analyses under which conditions those Directives regulate personalised prices and with which effects. The section then deals with the General Data Protection Regulation and how this Regulation may apply when the personalisation of prices relies on personal data. The third section of the paper deals with the prohibition rules. It deals with the anti-discrimination rules and explains the limits on the criteria that may be used to personalise prices. The section then deals with competition law and explains under which circumstances personalised prices can be prohibited as an exploitative abuse. Finally, the fourth section of the paper concludes with an evaluation of the consistency and effectiveness of EU rules regarding personalised prices and, on that basis, makes some policy recommendations. This section shows that the different rules can be substitutes, in particular consumer protection and data protection rules. However, we show that the rules are mainly complementary and hence reinforce each other; in particular, the transparency rules increase the effectiveness of the anti-discrimination rules. It is therefore key that all EU rules are enforced in a coherent manner. This requires that the different enforcement authorities (such as consumer protection agencies, data protection authorities and antitrust agencies) cooperate closely at the national and EU level, as is proposed in the Digital Clearing House. This section also recommends, in the case of personalised prices, the introduction of an obligation to explain to the consumer how prices are determined and the main parameters used for their determination. |
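The first section situates personalised pricing within the degrees of price discrimination from economic theory. As a compact reminder of that standard textbook taxonomy (not taken from the paper itself):

```latex
% Degrees of price discrimination (standard taxonomy; illustration only).
\begin{align*}
  \text{1st degree (perfect):}\quad & p_i = w_i
      && \text{each consumer pays their willingness to pay } w_i,\\
  \text{2nd degree:}\quad          & p = p(q)
      && \text{price varies with the quantity or version purchased},\\
  \text{3rd degree:}\quad          & p_g \ \text{for } i \in g
      && \text{one price per observable group } g.\\[2pt]
  \text{Personalised pricing:}\quad & p_i = f(x_i)
      && \text{a function of individual data } x_i,\ \text{approximating } w_i.
\end{align*}
```

This matches the OECD definition quoted above: the personalised price is a function of, but not necessarily equal to, each consumer's willingness to pay.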
15:00 | PANEL: To Decentralize Everything, Hash Here: Blockchains and Information Law PRESENTER: Balazs Bodo ABSTRACT. Blockchain is the latest technological hype to promise disruptive decentralization, especially as it relates to information goods and services. While one of the goals of decentralized technologies is to operate without the need to adhere to existing legal systems, blockchains face challenges of compatibility with these systems so as to facilitate wider adoption. This panel examines these challenges from the perspective of information law, focusing on their treatment under the legal regimes of copyright, data protection and competition. |
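For readers less familiar with the underlying technique, the tamper-evidence that hash-linking provides is what makes blockchains attractive for decentralization and, at the same time, awkward for legal regimes built on alteration and erasure. A minimal, hypothetical sketch (not from the panel's materials):

```python
# Minimal hash chain: each block commits to its predecessor's hash, so
# altering or erasing any earlier record breaks every later link.
# Illustration of blockchain-style tamper-evidence; not a real ledger.
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

chain = []
prev = "0" * 64                      # genesis value
for payload in ["tx: alice->bob", "tx: bob->carol"]:
    prev = block_hash(prev, payload)
    chain.append((payload, prev))

# Tampering with the first payload changes its hash, which no longer
# matches what the second block committed to:
tampered = block_hash("0" * 64, "tx: alice->eve")
print(tampered == chain[0][1])       # False: the chain exposes the edit
```

This is the structural reason the panel's data protection strand exists: a record that cannot be changed without detection sits uneasily with rectification and erasure rights.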
15:00 | PANEL: Explaining Responsibly PRESENTER: Aviva de Groot ABSTRACT. Panel proposal for the track "AI, Robotics and Responsibility". The interdisciplinary scholarly discourse on 'the right to explanation' of AI-infused decision-making processes goes beyond the GDPR's sphere of application, as it addresses understandability needs that are recognized on a global scale. Processes of analyzing, profiling, and predicting human behaviour support decisions in all sectors of society, from credit scoring to fraud detection to decisions on whom to hire - or even arrest. The increasing complexity of the technologies used, industry intricacies, and network effects all add to the inscrutability of these applications and the challenge of assessing them, at the individual as well as the societal level. The decreasing awareness of ubiquitous automation processes in the background of people's lives raises additional concerns. Increasingly, it is noted that issues of obscurity cannot be 'explained away,' or explained at all on an individual level. While most agree there is a pressing need to make these systems safe, fair, and 'democratically understandable,' there seems to be, at least temporarily, some competition between those who argue for scrutability at higher levels and those researching individual explanatory potential. In the meantime, in theory and practice, different approaches and methodologies towards 'explainable AI 2.0' are being designed and tested. The GDPR functions as a catalyst, as controllers already need to comply with requirements for explainability. Explanations should be understandable and meaningful. The latter term precisely triggers the above-mentioned competition, as it is far from self-evident what a 'meaningful' explanation is. What counts as an honest, time-stamped translation of a complex and dynamic computational process? Who gets to decide what that is? Can explanations be misused to obfuscate abuse of power? In the absence of commonly understood and accepted evaluative standards it is hard to assess the beneficence, usefulness and pitfalls of these developing explanatory methodologies. This conundrum might inform us to stop talking about 'responsible explanations' and instead speak of 'explaining responsibly.' As a field of research, it needs to be interdisciplinary. Law, philosophy, data science, the cognitive sciences, STS and the humanities each have valuable theory and experience to bring to the table. This panel provides such a table, and aims to start the discussion in acknowledgment of the seemingly irreconcilable, acute needs for both individual explanations and high-level governance strategies. Confirmed panelists: Reuben Binns, Michael Veale, Martijn van Otterlo, Rune Nyrup. The panel will be presented and chaired, and the discussion hosted, by Aviva de Groot, Sascha van Schendel and Emre Bayamlıoğlu. |
15:00 | PANEL: Addressing responsibility concerns about AI through the practice approach PRESENTER: Merel Noorman ABSTRACT. The current momentum in the development of AI has revived debates about the loss of control over these technologies and the obfuscation of human responsibility. Increasingly opaque, networked and autonomous technologies have led some to suggest that our existing ways of distributing responsibility, including for example determining legal liability, will soon no longer suffice. How can we evaluate such a claim? And if the established ways of distributing responsibility indeed no longer suffice, how can we address this problem? One way of addressing the problem is by looking at responsibility as a set of social practices. These practices involve the accepted ways of evaluating actions, holding others to account, blaming or praising, and conveying expectations about obligations and duties. They can be forward- and backward-looking and pertain to various kinds of responsibility, such as accountability, role responsibility, legal responsibility, and moral responsibility. Conceiving of responsibility as a set of practices places the focus on the shared understanding of what is expected, what is owed, and what the likely consequences of failure are within a sociotechnical network. It draws attention to the formal and informal mechanisms and strategies used to ascribe responsibility, such as laws, policies, procedures, organizational rules, and social norms that promulgate and enforce responsibility. The concept of responsibility practices raises questions that require descriptive and normative analyses: how do the discourses on AI and AI technologies come into conflict with established responsibility practices, and how is responsibility understood within these practices? What is it about these technologies that makes the application of particular laws, protocols or norms problematic? Where and how do people (re)negotiate how responsibility is understood or how it is ascribed? In what way do these negotiations affect the design of the technology? And how can we intervene in negotiations about the distribution of responsibility to ensure that human beings will continue to be responsible for the behavior of AI technologies? During this panel we will discuss how this practice approach can help us think about the responsibility concerns raised about AI. |