TILTING2019: TILTING PERSPECTIVES 2019 – REGULATING A WORLD IN TRANSITION
PROGRAM FOR WEDNESDAY, MAY 15TH

09:30-10:15 Session 2: Opening Keynote: Prof. Karen Yeung
Location: DZ-2
09:30
‘Law, Regulation & Technology’: Prospects and pitfalls for a fledgling field
10:15-10:45 Coffee Break
10:45-12:15 Session 3A: DP TRACK: Looking outside Europe: comparative law lessons
Location: DZ-3
10:45
Diverging Legal Development in the Regulation of Data Localization in Southeast Asia

ABSTRACT. Data take on growing importance with the explosion in the volume and variety of data collected and processed with ever greater precision. Technological abilities to collect, aggregate, analyze, store, and transfer data across borders have also improved thanks to advances in big data analytics, cloud computing, and artificial intelligence. As data become increasingly shared and exchanged on a global scale, their relationship to international trade strengthens. Practically no company can increase its market access, excel in global competition, or take part in international trade without the ability to transfer data across borders. While freer data flows are integral to promoting trade and economic development, states often adopt measures to restrict data flows outside national borders in pursuit of various policy objectives.

This paper examines how states in emerging markets and developing regions have responded to regulatory challenges arising from cross-border data access, storage, and transfer at the intersection of global trade, data protection, and the promotion of other vital state interests such as national security and a free Internet. It focuses specifically on how the law and regulation of data localization have developed in Southeast Asia – the region where two conflicting policy stances on data localization compete to determine adequate standards for governing data and information.

While the nature and scope of data localization measures vary across countries, there has been a surge of national laws in Southeast Asia introducing such measures and significantly restricting cross-border data transfers. Compared to similar data laws in jurisdictions outside the ASEAN (Association of Southeast Asian Nations) region, their approaches show the most rigid forms of restrictions on cross-border data transfers. Except for Singapore, the national laws of all other states in the region that introduce data localization measures in one way or another adopt far-reaching, highly restrictive local processing and storage requirements. These national stances are incompatible with, or in direct conflict with, the international standards that have been established as trade agreements have become a key factor in data protection regulation. These multilateral trade initiatives have created rules to ensure freer cross-border data transfers and reduce “digital protectionism.” In most countries across the ASEAN region, there is a clear national-international divide in the legal approaches to data localization measures. This research analyzes the implications of this diverging legal development. It also evaluates the possibility of major domestic law changes that could reconcile the differing regulatory approaches developed in this region and beyond.

11:15
How can the experience of the data breach notification obligations in the law of the United States be beneficial for the interpretation of the new personal data breach notification obligations pursuant to GDPR?

ABSTRACT. Following the enactment of the bill California S.B. 1386 in 2002, also known as the California data security breach notification law, the majority of US states adopted some form of mandatory data breach notification legislation. The experience with this type of legal instrument in the legal systems of the United States over the last decade and a half may therefore serve as a valuable case study for the general data breach notification obligation under Articles 33 and 34 of the General Data Protection Regulation 2016/679. For most data controllers, the data breach notification obligation is a new requirement that presents a fresh challenge for the monitoring of internal processes. Taking into consideration the proximity of the legal systems, similar economic realities, technological development and social values, the substantial record of data breach notification practice in the United States holds sizeable potential for analysis and an opportunity for the comparative transfer of relevant conclusions to help the implementation of the newly established GDPR general data breach notification obligation. There are, on the other hand, unavoidable differences in the conceptual framework of this instrument between the American and European approaches, as well as, to some degree, within the complex legal structure of the United States itself. Understanding these limitations is therefore an essential part of the analysis and affects the conclusion about the overall potential value of the lessons that can be learned. Beyond the historical development of the instrument, its interpretation and application are of special interest to this contribution in light of the current challenges related to the boom of Internet of Things solutions and the influence this technological change has, or may have, on the purpose and function of the data breach notification obligation.

10:45-12:15 Session 3B: DP TRACK: Data protection rights and the right to data protection (A)
Location: DZ-4
10:45
Market inalienability of personal data and incomputability of the self in the age of digital imbalance

ABSTRACT. The research aims to show how the concept of market inalienability, applied to personal data, can mitigate the power imbalance faced by individuals in the EU data-driven market. Unfair imbalance in the digital market is based on several elements: the implicit trade in personal data; inferences/predictions of personal data; the nudging of behaviours and mental manipulation of consumers. All these fields are strongly interrelated: considering the huge market of the data-driven economy, the challenge is to re-balance the protection of individuals, both in terms of their right to informational self-determination and their market power. Two interrelated solutions are proposed here: personal data protection as market inalienability of personal data, and privacy as the incomputability of the self (Hildebrandt, 2018) in the data-driven economy. In particular, the theories on market inalienability (Radin et al.) from the Law and Economics perspective should be analysed through the lens of EU data protection law and other relevant secondary laws. The main research question is therefore addressed through two sub-questions. The first is whether and under which conditions the EU personal data protection framework guarantees an ‘inalienability’ framework for the personal data of data subjects, and what the limits of this framework are. The consequent sub-question aims to show that inalienability alone is not sufficient to rebalance the position of individuals in the digital market, but should be read in combination with the ‘incomputability’ of the self, leading to mental privacy and informational self-determination against market manipulation.

11:15
Turning Data Protection Upside Down and Placing It on Its Feet
PRESENTER: Jörg Pohle

ABSTRACT. Digitization has long since deeply penetrated all areas of society. It has changed the way we live, work and communicate, how we publicly deliberate and make decisions, how we do business and spend leisure time. Information and decision-making processes in everyday life, private businesses, public administrations, the judiciary, and even politics are becoming increasingly automated and tendentially industrialized. While much of this development is driven by economic interests and rationalities, it is often technical, especially computational, criteria that determine the specific mode of the datafication of social actors, events, and processes. We then see the world with which we interact through the lens of how it was datafied. Or, to paraphrase Niklas Luhmann’s famous remark on mass media: “whatever we know about society, or indeed about the world in which we live, we know through digital technology.” This very mode of both capitalist and technocratic datafication of things and social actors, events and processes, and its subsequent automation, creates, amplifies and stabilizes power asymmetries for the benefit of those in control of the design and use of IT systems as well as of the datafication, its underlying modelling assumptions and the data generated through it. Control over these means of information and decision production confers the power to influence or even control individual, collective and institutional stakeholders and their communications, decisions and actions.

Against the backdrop of these developments, many conceptual assumptions and distinctions underlying the very theory of data protection, but also its implementation in data protection laws, are outdated, if not flawed from the very beginning. Making a canonical distinction between personal and non-personal data is simply a conceptual leftover from an earlier debate on private secrets that never made sense when addressing the individual or societal consequences of modern information processing. The limitation of protection to natural persons was equally flawed from the very beginning, simply mirroring one of the many origins of the legal debate on data protection: personality rights. Last but not least, the same essentially holds true for the distinction between data and processes, both regarding data protection law’s limited focus on data and the equally oversimplified focus on “algorithms” in the algorithm debate: processes (there are more processes than just algorithms, both from a mathematical perspective, e.g. heuristics, and from a social perspective, e.g. administrative procedures and organizational decision programs) produce data, data drive processes – data and processes are to a large extent mutually substitutable.

It is therefore much more productive, both for analyzing and for intervening, to conceptualize data protection beyond (the many different and essentially contested concepts of) privacy, private life, personality rights, the (monadic) individual and personal data, and, as the progressive data protection debate in the 1970s did, to focus on the very problem of “putting the world into a computer system”. As the 1970s debate made clear, the starting point for any analysis of the individual and social effects of modern, automation-supported, increasingly automated and tendentially industrialized information processing must be the real information processing processes, practices and techniques in organizations and society. Data protection is then the flip side of modern information processing, the set of precautions to prevent undesirable consequences of information processing, with “undesirable consequences” defined as those consequences of information processing that run counter to the goals of society, goals that we have set ourselves, such as in our constitutions or in the EU Charter of Fundamental Rights. It is not about privacy; it is essentially about the social control of technology. The aim is to prevent rationality shifts and distortions in favor of those who design or use technology, as organizations threaten the separation between social subsystems or fields (with their specific contexts, characteristics and their own inherent logic) that characterizes modern, functionally differentiated society, because they subjugate these to their own organizational logic. And last but not least, it is important to prevent a loss of contingency for those affected by organizations that design decision architectures and pre-structure possibilities for action, disposing in the present of the future on the basis of the past, as reflected in the underlying information that was often generated by the organizations themselves, thus tying in with the past and blocking possibilities for those affected to decide and act differently in the future.

This contribution presents an analytical framework for analyzing, but also intervening in, processes of modellification and inscription. Modellification is both the process and the product of modelling, i.e. the representation of the world in information systems and the world’s subsequent substitution by the model. It is essentially based on modelling assumptions and on data, variable and parameter selection decisions, which raises the question of control over the modellification process, i.e. the decisions about which actors, events, states or processes are analyzed or excluded from the analysis, how they are measured and quantified, and how they are categorized, classified and related to already existing information, i.e. models. These models are then inscribed into technical systems, either as information to be processed or as procedures, software libraries or programs to process information. While the modelling of the world is highly dependent on the modellers’ perspective, interests, aims and purposes, the models themselves are easily copyable, both in their form as data and as software, and they are widely copied, transmitted and reused. Those who reuse data and programs take on, consciously or unconsciously, the inscriptions carried with them. It is therefore necessary to uncover and question what is inscribed into the data and the systems, and how, in order to be able to address these inscriptions as a source of information power and to intervene in the design and use of these systems.

11:45
A Right to a Rule - On the essence and rationale of the fundamental right to personal data protection

ABSTRACT. There is not, to this day, a univocal, authoritative conception of what constitutes the right to data protection, nor of its constitutive traits, rationale, and essence. EU personal data protection is quite a young right, and its emergence as a standalone right is a recent phenomenon within the European legal framework. The rise of data protection to the status of fundamental right has also been fairly peculiar: data protection is a sui generis fundamental right in that its content derives from the preceding legislation that regulated personal data processing at the national, international, and Union levels. Seminal national legislation, a number of international instruments, and EU secondary regulation chronologically preceded data protection’s formal emergence as a standalone right. Before the Charter of Fundamental Rights of the European Union (hereinafter the Charter), rather than a separate right, data protection was often framed as a facet of the right to privacy, or as an intermediary tool aiming at the protection of overlying rights, such as self-determination and human dignity. It has been written that “the connotations associated with ‘data protection’ have shifted repeatedly and substantially, and further defining the term turned out to be a futile if not tautological quest” (Mayer-Schönberger 1997), which sounds like the kind of journey one ought to embark on for the sheer joy derived from the pursuit of knowledge. It has also been held, more recently, that distinguishing between privacy and data protection would be of marginal utility, as the two rights would be part of the same system: privacy as a substantive right (the game) and data protection as a procedural right (the game’s rules), protecting the same array of rights and freedoms (Hijmans 2016). I disagree, to some extent: I believe that data protection is evolving away from privacy into something very distinct, albeit still connected. EU data protection has become a largely procedural sui generis fundamental right, which emerged as a response to technological development, on the one hand, and to the growing importance of secondary data protection legislation, on the other; a societal stance towards how personal data processing has been shaping the modern world. This paper thus seeks to offer a modern interpretation of the rationale and essence of the right to personal data protection. It is methodologically grounded in a historical, doctrinal, and jurisprudential analysis of the right to data protection and of the elements that make it distinct from the right to privacy. The paper’s main purpose is to contribute to the doctrinal debate that still surrounds data protection and its blurry boundaries with the right to privacy.

10:45-12:15 Session 3C: PANEL for Digital Clearinghouse
Location: DZ-1
10:45
Striving for effective control in digital rights enforcement: a dialogue between data protection, competition and consumer law 
PRESENTER: Nicolo Zingales

ABSTRACT. Individual control over personal data is increasingly used as a lodestar for enhanced rights protection in the digital environment: not only in data protection law, but also in competition and consumer law. Yet significant differences exist in how these parallel regimes strive to ensure the effectiveness of control, in particular in the presence of market dominance and the exploitation of behavioral biases. The panel will discuss shortcomings in current enforcement practices, and lessons that can be learned from cross-disciplinary dialogue.

10:45-12:15 Session 3D: IP TRACK: Data Sharing, Ownership and Governance
Location: DZ-6
10:45
The Chilling Effects of Governance-by-Data on Innovation
PRESENTER: Michal Gal

ABSTRACT. Governance-by-data seeks to take advantage of the bulk of data collected by private firms to make law enforcement more efficient. So far, the literature has generally overlooked the implications of such dual use of data for data markets and data-driven innovation. In this Essay, we argue that governance-by-data may create chilling effects that could distort data collection and data-driven innovation, thereby potentially reducing innovation and harming welfare.

11:15
You Don’t Own Your Tractor: Redefining Ownership in the Internet of Things

ABSTRACT. The growth of the Internet of Things (IoT)—Internet-connected software embedded within physical products—has the potential to fundamentally shift traditional conceptions of ownership. IoT manufacturers have the capacity, through their ownership of the software’s copyright and restrictive licensing agreements with their customers, to impose rules governing their IoT goods, even after purchase. This licensing model, common with digital content, now governs IoT products as a form of post-purchase regulation, as those who own the copyright can govern the use of the product. The following research question guides this paper: how does the shift toward the licensing model affect the regulation of IoT goods, and with what consequences for ownership and data governance? This paper argues that at the core of these regulatory efforts is control over data, both in the form of proprietary software and the data collected and generated by IoT products. To make this argument, the paper examines how companies that own the IoT’s software control knowledge through intellectual property laws, especially copyright, and through the ubiquitous surveillance of their customers. Situating itself in critical data studies, the paper draws upon interviews with policymakers, activists, and industry actors and an analysis of companies’ terms-of-service agreements.

11:45
Evaluating the EC Private Data Sharing Principles: Setting a Mantra for Artificial Intelligence Nirvana?

ABSTRACT. On April 25, 2018, the European Commission (EC) published a series of communications related to data trading and artificial intelligence. One of them, called “Towards a Common European Data Space”, came with a working document: “Guidance on Sharing Private Sector Data in the European Data Economy”. Both the Communication and the guidance introduce two different sets of general principles addressing data sharing and contractual best practices for business-to-business (B2B) and business-to-government (B2G) environments. On the same day, the EC also published a legislative proposal to review the Public Sector Information (PSI) Directive. These two simultaneous actions are part of a major package of measures aiming to facilitate the creation of a common data space in the EU and to foster European artificial intelligence development.

This article focuses on the first action, the “Guidance on Sharing Private Sector Data in the European Data Economy”. First, because it is one of a kind. Second, because, although these principles do not qualify as soft law (lacking binding force but having legal effects), the Commission’s communications set action plans for future legislation. Third, because the ultimate goal of these principles is to boost European artificial intelligence (AI) development. However, do these principles set a viable legal framework for data sharing, or is this public policy tool merely a naïve expectation? Moreover, would these principles set a successful path toward a thriving European AI advancement? In this contribution, I try to sketch some answers to these and related questions.

10:45-12:15 Session 3E: JUSTICE AND DATA MARKET TRACK: Human rights/colonialism
Location: DZ-7
10:45
Data justice and indigenous data sovereignty in the context of traditional knowledge and digital libraries: a law & humanities perspective
PRESENTER: Kelly Breemen

ABSTRACT. By dr. J.M. Breemen & dr. V.E. Breemen*

While, in the 1970s, the disclosure of confidential indigenous information via a book publication hit the courts (Foster v. Mountford & Rigby Limited, (1976) 29 FLR 233, 14 ALR 71), the means to disseminate such data have, since the 2000s, increasingly moved into the digital sphere. The question of agency over the data and of the sovereignty and self-determination of source communities, as “populations that were previously digitally invisible” (Taylor 2017, p. 1), thus becomes increasingly pressing. Building on ongoing work to understand the implications of, and to operationalize, a data justice approach against the background of the international data revolution (Taylor 2017), this paper argues that this discussion should feature not only dominant but also non-dominant voices. Therefore, the paper focuses on the specific issue of indigenous data sovereignty, an umbrella notion which signifies “the proper locus of authority over the management of data about indigenous peoples, their territories and ways of life” (Kukutai & Taylor 2016, p. 14). Notably, UN Special Rapporteur on the Rights of Indigenous Peoples Tauli-Corpuz has called it “ironic” that, “even with the emergence of the global ‘data revolution’”, the problems of “a lack of reliable data and information on indigenous peoples” and “misuse of their traditional knowledge and cultural heritage” persist (Tauli-Corpuz 2016, p. xxi).

Hence, since the way indigenous heritage is documented and disseminated is in transition due to digitization and platformisation, the question this paper sets out to critically assess more concretely is how an interdisciplinary law and humanities lens - i.e. relying on political-cultural, historical, critical, library and information sciences (LIS), and anthropological materials - can help interpret regulatory issues pertaining to the collection, digitization and disclosure of indigenous data in the context of digital libraries, and pave a way for addressing them. Relevant moral and legal issues and questions in this regard stem, firstly, from the perspective of libraries’ stewardship role, which focuses on the preservation of and access to cultural heritage based on normative values, and which has been shaped by history and by the (Western) knowledge management and legal systems of the societies in which libraries were established, currently the digital information society; and secondly, from the perspective of the positive and negative impacts of technologies for source communities, who have their own data systems and customary laws to govern the management of their heritage (Pool 2016, p. 57 and further; Kukutai & Taylor 2016, p. 14-15). Again quoting Tauli-Corpuz: “[if] indigenous peoples have control over what and how data and knowledge will be generated, analysed and documented, and over the dissemination and use of these, positive results can come about. [...] If, however, indigenous peoples lose their control because there are no existing laws and policies that recognise their rights and regulate the behaviour of institutions and individuals involved in gathering and disseminating data and knowledge, marginalisation, inequality and discrimination will persist”. A central doctrine to prevent this from happening is, in her view, the “free, prior and informed consent obtained before data are gathered and disseminated” (Tauli-Corpuz 2016, p. xxii-xxiii).

The issue is thus situated at the crossroads of dominant data protection and cultural heritage laws and non-dominant data systems and customary laws (Pool 2016, p. 57 and further; Kukutai & Taylor 2016, p. 14-15). When measuring library and indigenous perspectives on the production and management of digital data against each other, to which the position of third-party documenters can be added, recurring concepts are representation, sovereignty and self-empowerment, which may result in ‘counter public spheres’ (Dahal & Aram 2013, p. 11); identity, secrecy and cultural privacy (Frankel & Richardson 2009, p. 278; Antons 2009, p. 122), representing indigenous peoples’ protection interests and bearing an element of group privacy (Taylor, Floridi & Van der Sloot (eds.) 2017; Antons 2009, p. 122–125; Brown 2003, p. 27–42); and data decolonisation (Snipp 2016).

Initiatives that build on these concepts and address the issues set out above are being developed at various levels, which may converge. Firstly, for instance, states can ensure an inclusive framing of informational rights, also with regard to cultural heritage. A concrete opening in this regard appears in the Convention for the Safeguarding of the Intangible Cultural Heritage (ICH), which states in Article 13(d)(ii) on ‘other measures for safeguarding’ that state parties “shall endeavour to [...] adopt appropriate legal, technical, administrative and financial measures aimed at: [...] ensuring access to the intangible cultural heritage while respecting customary practices governing access to specific aspects of such heritage” (Strecker 2018). Secondly, the library sector increasingly pays attention to ‘indigenous librarianship’ (Burns, Doyle, Joseph & Krebs 2009; Nakata, Byrne, Nakata & Gardiner 2005), incorporating the indigenous perspective on culturally appropriate ways of data sharing and positioning libraries as allies of indigenous peoples. Thirdly, source communities themselves are involved in indigenous digitization and labeling efforts, using ICTs to self-empower the communities: for instance, the Mukurtu initiative is an archiving platform which, through its interface and infrastructure, provides different modalities of access to indigenous data. It operates on the basis of a complementary system of an extensive user profile and an upload and tagging system, using the metadata as a “social filter” to get the right information to the right user (Christen 2012). Not only does this evoke ‘privacy by design’ connotations; the labels are moreover endorsed by dominant players such as the US Library of Congress (see www.loc.gov), which shows the convergence between the perspectives.

The best practices that can be gathered from this range of initiatives will inform the law and humanities analysis in this paper, indicating that both legal and non-legal regulatory solutions can contribute to a data justice approach, from dominant and non-dominant perspectives.

----------------------- * The research for this paper is based on the work carried out for the Witteveen Memorial Fellowship in Law & Humanities that the authors were awarded by Tilburg University in spring/summer 2018.

Kelly Breemen defended her PhD thesis on the protection of traditional cultural expressions (TCEs), written at the Institute for Information Law (IViR) of the University of Amsterdam, in June 2018. For her thesis, she analyzed the three legal frameworks of copyright law, cultural heritage law and human rights law.

Vicky Breemen defended her PhD thesis on the library privilege in copyright law, written at IViR, in November 2018. Her interdisciplinary research interests include copyright, culture & law and freedom of expression in general, and more specifically the legal aspects of (digital) libraries.

A central line in their academic work is the interface between law and culture. In the spring/summer of 2018, they were awarded the Witteveen Memorial Fellowship in Law & Humanities by Tilburg University for a joint multidisciplinary project pertaining to new ways of sharing and accessing TCEs via digital libraries and the historical, ethical, cultural-political and legal issues involved.

-----------------------

References

Antons 2009 - C. Antons, ‘Foster v Mountford: cultural confidentiality in a changing Australia’, in: A. T. Kenyon, M. Richardson & S. Ricketson (eds), Landmarks in Australian Intellectual Property Law, Melbourne: Cambridge University Press 2009, p. 110-125.

Brown 2003 - M.F. Brown, Who Owns Native Culture?, Cambridge, Massachusetts: Harvard University Press 2003.

Burns, Doyle, Joseph & Krebs 2009 - ‘Indigenous Librarianship’, in: Encyclopedia of Library and Information Sciences, New York: Taylor and Francis, third edition, published online: 9 December 2009.

Christen 2012 - K. Christen, ‘Balancing act: the creation and circulation of indigenous knowledge and culture inside and outside the legal frame’, in: S.A. Pager & A. Candeub (eds.), Transnational Culture in the Internet Age, Cheltenham: Edward Elgar 2012, p. 316-345.

Dahal & Aram 2013 - S. Dahal and I.A. Aram, ‘Empowering Indigenous Community through Community Radio: A Case Study from Nepal’, The Qualitative Report 2013, Vol. 18(41), p. 1-26.

Frankel & Richardson 2009 - S. Frankel and M. Richardson, ‘Cultural Property and ‘the Public Domain’: Case Studies from New Zealand and Australia’, in: C. Antons (ed.), Traditional Knowledge, Traditional Cultural Expressions and Intellectual Property Law in the Asia-Pacific Region, Alphen aan den Rijn: Kluwer Law International 2009, p. 275-292.

Kukutai & Taylor 2016 - T. Kukutai & J. Taylor, ‘Data sovereignty for indigenous peoples: current practice and future needs’, in: T. Kukutai & J. Taylor (eds), Indigenous data sovereignty: toward an agenda, Acton: ANU Press 2016, p. 1-22.

Nakata, Byrne, Nakata & Gardiner 2005 - M. Nakata, A. Byrne, V. Nakata & G. Gardiner, ‘Indigenous Knowledge, the Library and Information Service Sector, and Protocols’, Australian Academic & Research Libraries 2005, Vol. 36(2), p. 7-21.

Pool 2016 - I. Pool, ‘Colonialism’s and postcolonialism’s fellow traveller: the collection, use and misuses of data on indigenous people’, in: T. Kukutai & J. Taylor (eds), Indigenous data sovereignty: toward an agenda, Acton: ANU Press 2016, p. 57-76.

Snipp 2016 - C.M. Snipp, ‘What does data sovereignty imply, what does it look like?’, in: T. Kukutai & J. Taylor (eds), Indigenous data sovereignty: toward an agenda, Acton: ANU Press 2016, p. 39-55.

Strecker 2018 - A. Strecker, ‘Article 13: Respecting Customary Practices’, in L. Lixinski and J. Blake (eds.), Commentary on the 2003 Convention on Safeguarding the Intangible Cultural Heritage, Oxford University Press 2018 (forthcoming).

Tauli-Corpuz 2016 - V. Tauli-Corpuz, ‘Preface’, in: T. Kukutai & J. Taylor (eds), Indigenous data sovereignty: toward an agenda, Acton: ANU Press 2016, p. xxi-xxiii.

Taylor 2017 - L. Taylor, ‘What is data justice? The case for connecting digital rights and freedoms globally’, Big Data & Society July-December 2017, p. 1-14.

Taylor, Floridi & Van der Sloot (eds.) 2017 - L. Taylor, L. Floridi & B. van der Sloot (eds.), Group Privacy. New Challenges of Data Technologies, Cham: Springer International Publishing 2017.

11:15
(Big) Data and the North-in-South: Australia’s Informational Imperialism and Digital Colonialism
PRESENTER: Angela Daly

ABSTRACT. Australia is a country firmly part of the Global North, yet geographically located in the Global South. This North-in-South divide plays out internally within Australia, given its status as a British settler-colonial society which continues to perpetrate imperial and colonial practices vis-à-vis Indigenous peoples and vis-à-vis Australia’s neighbouring countries in the Asia-Pacific region. This article draws on five seminal examples forming a case study of Australia to examine big data practices through the lens of Southern Theory from a criminological perspective. We argue that Australia’s use of big data cements its status as a North-in-South environment where colonial domination is continued via modern technologies to effect enduring informational imperialism and digital colonialism. We conclude by outlining some promising ways in which data practices can be decolonised through Indigenous Data Sovereignty, but acknowledge that these are not currently the norm, so Australia's digital colonialism/coloniality endures for the time being.

10:45-12:15 Session 3F: AI AND RESPONSIBILITY TRACK: AI in court
Location: DZ-8
10:45
RoboJudge says what? An exploration of capabilities and potential effects of predicting court decisions
PRESENTER: Ronald Leenes

ABSTRACT. Jurimetrics is witnessing a revival in the form of legal analytics. Data Science techniques are being employed to analyse and predict court decisions. Several US studies involving Supreme Court decisions have caused quite some press attention and upheaval about the potential for predicting court decisions. In Europe, ECtHR case analyses have similarly created optimism about legal analytics capabilities. In this contribution, we provide a critical analysis of these studies and their implications by examining more closely how these data-driven analyses of court decisions are actually carried out, and by placing them in the perspective of law and the legal adjudicative system. We first address the data used in the studies, the methodologies employed in the studies and predictive systems, and the scope and quality of the predictions. We then discuss the legal context and politics of the underlying decision processes, touching on the purported nature of law and legal adjudication and the potential effects of predictive systems on access to justice.

11:15
Regulating Unreality
PRESENTER: Lilian Edwards

ABSTRACT. ‘Deep fakes’—the use of AI to convincingly simulate content, voice, images or video for malicious purposes—has become a prominent concern primarily as a means to create realistic but fake pornography involving celebrities or particular victims. As such, there has already been some discussion regarding whether these ‘unreal’ products constitute criminal material such as revenge porn (better referred to as the non-consensual sharing of private or sexual images) or images of child sexual abuse (‘child pornography’). (1)

Its implications are, however, far greater. Techniques to generate deep fakes are evolving in response to a parallel arms race of detection techniques. This may eventually result in a world where the problems currently being experienced with ‘fake news’ expand to everything we see, hear and experience, not just the news we read. Obvious areas where this may have an impact include the law of evidence; the law of intellectual property, primarily copyright; and fraudulent misrepresentation and anti-consumer scams. These might only be the start of a deluge when our world of reality becomes, inscrutably, a postmodern constructed and manipulated text.

We first identify two main paradigms applicable when ‘regulating unreality’. The first paradigm, comprising largely data protection and intellectual property regimes, focuses on notions of control or ownership. This paradigm primarily presupposes the existence of an objective reality or a canonical thing. The second paradigm, including regulation concerning advertising standards, electoral law, trademarks, defamation, reputation and misrepresentation, primarily avoids establishing a canonical reality in favour of simply preventing harms from deception.

We then turn to examining underexplored tools found within existing bodies of law that might both supplement these paradigms and help move beyond them. These include

- the use of legal fictions, such as in Scottish law on revenge porn, where courts apply legal fictions to allow for uncertainty of what is real, and for the anticipation of what might not be real in the future;
- the use of best evidence approaches, where rules, such as corroboration and certification, determine what is or should be considered to be real;
- the use of rebuttable presumptions as acceptable defaults, for example deeming twelve to rebuttably be an age of maturity in e.g. giving consent.

We also consider other governance analogues and lessons drawn from that, including:

- the spam arms race, where technological detection has displaced the need for legal solutions as the best governance approach;
- the “Black Mirror” approach, where we look beyond law to non-mandatory mechanisms, such as ethical codes, which seek to suppress products, services or activities that might leave us queasy, such as synthesised voices of the dead for private or sentimental use.

We end by looking to major social, technical and legal challenges ahead around deep fakes and the regulation of unreality.

(1) Robert Chesney and Danielle Keats Citron (2019) Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 California Law Review __.

10:45-12:15 Session 3G: VICI PANEL 1: Space and its contents: protecting privacy through the ‘containers’ or the ‘contents’ of private life?

Privacy is usually protected in the law through proxies. Often, the law uses as proxies certain ‘containers’ of private life, for instance the protection of homes and communication channels, which protects these spaces regardless of whether their contents are privacy-sensitive in particular cases. Proxies can also be content-related, protecting the substance of private life and using proxies to capture this substance; for instance, data protection law protects all personal data qua personal data (even if the data are not privacy-relevant) and protects “sensitive” data using “special categories of data” as a proxy, while criminal law protects certain categories of secrets and of confidential communications. Both approaches are useful, but neither can capture privacy perfectly: the proxies are only approximations of privacy, often being both too broad (covering also privacy-irrelevant cases) and too narrow (failing to cover certain privacy-relevant cases). This panel – the first of two panels in the context of Bert-Jaap Koops’s VICI project on ‘Privacy in the 21st century’ – will discuss the different approaches in law to protecting privacy and the pros and cons of ‘container’ and ‘contents’ approaches, applied to contexts of contemporary privacy challenges.

Location: DZ-5
10:45
Three approaches to privacy protection: containers, contents, contacts

ABSTRACT. In this presentation, I present a general framework of how the law tries to capture privacy in terms that are sufficiently general to provide flexibility and sustainability, and that at the same time offer sufficient clarity and legal certainty to be meaningful in everyday practice. Three types of approaches – focusing on the containers, contents, or contacts of private life – are presented, with examples of traditional proxies for privacy protection (such as the home, sensitive data, and professional privilege) and examples of new proxies that may better capture privacy in the 21st century.

11:05
Thinking geologically about privacy protection in cyberspace

ABSTRACT. For promoters of ‘smart cities’, the vision implies, amongst other things, thick digital infrastructures enabling dynamic and efficient governance of urban ecosystems (Kitchin 2013). This kind of vision presupposes networks of sensors and actuators seamlessly gathering and communicating large amounts of data about the environment and urban dwellers. This paper argues for a new dimension of investigation and proposes a set of insights from geology and sedimentology to help us grasp the various phenomena in smart urban ecosystems and develop new approaches to privacy protection in cyberspace. Firstly, the paper makes the argument for a geological approach to digital infrastructures. Geology cannot be understood as a domain outside social, political and ethical influences, thus warranting the conceptual undertaking to bring these disciplines closer. Secondly, the paper places the approach in the STS tradition and shows its added value for the understanding of digital infrastructures. I argue that geology offers a rich vocabulary and principles for understanding phenomena with various degrees of dynamism and depth in digital infrastructures. I have shown elsewhere how the layers of software code in policing profiling algorithms can present phenomena of settling, debris, deposition, accumulation, sedimentation or volcanism with significant potential for privacy harms and the erosion of the presumption of innocence. Thirdly, the paper offers a step in this direction by developing, clarifying and adapting the notion of sediment traps in a descriptive, methodological and normative sense. Echoing the protective bubble or membrane, the notion of sediment traps taps into the conceptual reservoir of geology, sedimentology and civil engineering to offer a rich set of insights, knowledge, practices, and principles that can be translated to privacy protection.

11:25
Responsibility in Personal Data Stores. Imperfections and implications for users, platforms and third parties

ABSTRACT. Users generally lack control over their personal data when it comes to many popular internet services. Yet the risk of privacy-harming data breaches and the fact that personal data are currently often used to influence user behaviour have led to the development of tools to empower users to regain control over their personal information, in theory strengthening users’ data protection, privacy or monetisation opportunities. Personal Data Stores (“PDS”) are one variety of these tools, providing users with a physical or virtual device within which they themselves capture and aggregate personal data, and control third-party access to and transfers of those data. This paper explores the limits of and problems with the PDS approach. The paper considers how responsibility from a GDPR perspective is assigned in the context of PDSs, and whether the purported empowerment of users indeed offers solutions to the challenges posed by current, ‘centralised’ models of data processing. The responsibilities of two other key stakeholders in the PDS ecosystem – the PDS platform and third parties (as recipients of the distributed personal data) – are also examined. Since PDSs represent an emerging technology, clarification of the meaning of, and means for, ‘empowerment’ and ‘control’ will bear on any analysis. As PDS technology continues to develop and proliferate, potentially providing an alternative to centralised models, we identify urgent legal issues which require consideration.

12:15-13:00 Session 4: Keynote 1: Prof. Niva Elkin-Koren

Keynote: Intellectual Property and Innovation

Location: DZ-2
12:15
Contesting Algorithms

ABSTRACT. The growing pressure on online platforms to expeditiously remove illegitimate content is fostering the use of Artificial Intelligence (AI) to minimize their potential liability.

This is potentially game-changing for democracy. It facilitates the rise of unchecked power, which often escapes judicial oversight and constitutional restraints. The use of AI to filter unwarranted content cannot be sufficiently addressed by traditional legal rights and procedures, since these tools are ill-equipped to address the robust, non-transparent and dynamic nature of governance by AI. Consequently, in a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content while securing due process and ensuring freedom of speech.

I propose to address AI-based content moderation by introducing contesting algorithms. The rationale of Contesting Algorithms is that algorithmic content moderation often seeks to optimize a single goal (i.e., removing copyright-infringing materials as defined by rightholders), while other values in the public interest (fair use, free speech) are often neglected. Contesting Algorithms introduce an adversarial design, which reflects conflicting interests, and thereby offer a check on dominant removal systems. The presentation will introduce the strategy of Contesting Algorithms and demonstrate how regulatory measures could promote the development and implementation of this strategy in online content moderation.

13:00-14:00 Lunch Break
14:00-15:30 Session 5A: DP TRACK: Group Privacy
Location: DZ-3
14:00
Algorithmic discrimination and the protection of group privacy in European law
PRESENTER: Sandra Wachter

ABSTRACT. Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviours, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making, often based on sensitive attributes of individuals’ private lives. European data protection law affords greater protection to processing of sensitive data, or ‘special categories’, describing characteristics such as health, ethnicity, or political beliefs. While the special protections for sensitive personal data are clear in the GDPR, the conditions under which the classification applies are not. With regards to inferences, source data can be classified as sensitive in at least two senses. First, when inferred data directly discloses protected attributes, it must be treated as sensitive. Second, when personal data can be shown to allow for sensitive attributes to be inferred, or ‘indirectly revealed’, the source data from which sensitive inferences can be drawn can also be treated as sensitive data. Big Data and AI facilitate precisely this sort of fluid transformation, which raises two fundamental challenges for the treatment of sensitive data in the GDPR. First, if non-sensitive data can become sensitive based on the ability to draw inferences, under what conditions should non-sensitive personal data be reclassified as sensitive personal data? We critically examine two possible conditions that have been previously proposed in response to this question: (1) the intention of inferring sensitive attributes, and (2) the reliability of the data in question for inferring sensitive information. We suggest that the potential for knock-on effects of known proxies for sensitive attributes, as well as the irrelevance of the accuracy of an inference to its eventual impact on the data subject, renders both of these conditions unnecessary in relation to Big Data and AI. Second, insofar as Big Data and AI aim to identify unintuitive small patterns and meaningful connections between individuals and their data, does the ‘sensitive’ classification still sufficiently protect against the novel risks of inferential analytics? The analytics behind much automated decision-making and profiling is not concerned with singling out or identifying a unique individual, but rather with drawing inferences from large datasets, calculating probabilities, and learning about types or groups of people. These technologies thus expand the scope of potential victims of discrimination and other potential harms (e.g. privacy, financial, reputational) to include ephemeral groups of individuals perceived to be similar by a third party. European anti-discrimination laws, which are based on historical lessons, will fail to apply to ‘ad hoc’ groups which are not defined by a historically protected attribute (e.g. ethnicity, religion). Groups of individuals perceived to be similar to one another can be unfairly treated, without being singled out on the basis of sensitive attributes. To determine whether European law will protect the privacy and other interests of such groups, we examine current legislative proposals and relevant guidance on data protection law addressing the protection of collective interests and non-traditional privacy-invasive groupings. 
We conclude by arguing that the recently proposed ‘right to reasonable inferences’ could provide a remedy against new forms of discrimination and greater protection for group privacy interests.

14:30
Anti-discrimination law and group privacy — Establishing conceptual clarity in the face of Big Data challenges

ABSTRACT. This presentation identifies challenges in the research on anti-discrimination and privacy law in the face of Big Data Analytics. It develops a fundamental rights perspective and answers the question: if individuals are grouped and evaluated on the basis of categories which are neither directly nor indirectly linked to protected characteristics in anti-discrimination law, why should we care? Or, formulated differently: why do ad-hoc groups deserve protection?

New information and data processing technologies, often summarised under the term Big Data, have instigated extensive critical legal research. Among the wide-ranging inquiries, two main concerns have dominated the field: on the one hand, scholars have engaged in a reconceptualisation of privacy. On the other hand, scholars have striven to raise awareness of, and identify, new forms of discrimination resulting from biased algorithms. Both research fields, privacy and discrimination law, have already borne important fruits. However, in the attempt to overcome the limitations of the conventional understanding of privacy and discrimination, the lines between the right to privacy and the right not to be discriminated against have been blurred.

The new tie between the research fields does not come without a reason: at the heart of privacy and discrimination research on Big Data lies the analysis of profiling techniques. The research on ‘group privacy’ is triggered by the concern that conventional privacy safeguards become ineffective, as they are centred on the protection of the individual, while in fact it is the group that is singled out and targeted by profiling techniques. Therefore, research on ‘group privacy’ engages in determining what makes a group in big data processing and what it implies to be part of a group. Hence, the research on privacy enters a domain that has long been occupied by the notions, concepts and theories of anti-discrimination research.

The point of this article is not to argue that research on group privacy and discrimination should be kept separate, or that researchers should return to the conventional categories of their fields. On the contrary, I argue that the reconceptualisation of fundamental notions in both fields is necessary, and I acknowledge that, as a result of the broadening of the horizons of each field, the fields may overlap in the future. However, conceptual clarity is very much needed in order not to entangle two distinct concepts, but to create a conducive cohesion between the two fields of research. The right to privacy and the right not to be discriminated against are two distinct rights. While privacy breaches may come in the shape of discriminatory measures, and both rights may be infringed upon through one and the same practice, each right retains its own distinct quality. The rights differ in their foundations, their justification, their history, as well as their scope and purpose of protection.

This paper examines the relation between ‘group privacy’ and anti-discrimination law on three levels. First, the way a group is conceptualised in privacy research is contrasted with the understanding of a group in anti-discrimination law. Secondly, it examines what implications the identification of an individual as part of a group has in privacy law on the one hand and in anti-discrimination law on the other. Thirdly, the purposes of protection of the right to privacy and the right not to be discriminated against are juxtaposed. In its conclusion, the paper offers suggestions on how ‘group privacy’ and anti-discrimination research can be positioned in order to best contribute to solving the great challenges that lie ahead for both fields.

15:00
The non-personality-trap and the case of group profiling. A feasible solution?

ABSTRACT. The purpose of the first part of this article is to demonstrate that the narrowness of the definition of “personal data” (art. 4(1)) leads to a certain degree of inefficacy of the GDPR. There are, in fact, a number of cases (such as group profiling) where the Regulation is not able to adequately tackle the issues deriving from new data processing techniques. In particular, a number of data processing operations are able to impact the rights and freedoms of natural persons even if the data used in the procedure are not personal under art. 4(1), so that the Regulation does not apply. For instance, the inapplicability of the Regulation to processing activities in the context of group profiling is a serious danger to the effectiveness of the whole data protection system, considering that its risks are no less serious than those posed by individual profiling. The loophole in the Regulation, in fact, lies in a simple technicality: if the GDPR applies solely when the person concerned by the data processed is univocally identifiable (that means, in the words of the WP 29, “singular” and “distinguishable” from the generality of a group), then no form of protection is provided to non-singularly-distinguishable individuals, even if they are actually reached by the negative consequences of non-personal data processing. In light of this, a methodological premise is needed: in order to assess the effectiveness of a legal tool, we should start by evaluating the adequacy of its field of application. Accordingly, in order to evaluate whether the GDPR is the appropriate legal answer to the problems of the new data economy, we shall carry out a preliminary analysis of the adequacy of the notion of “personality” as the “selection criterion” for the situations falling within its material scope. While the effectiveness of its legal tools (its application) is discussed elsewhere, here we carry out a preliminary analysis of the adequacy of its material scope as such (its applicability). Clearly, from this starting question a second one arises, concerning the parameters to use in making such an assessment. In this regard, we believe that it is necessary to verify the mutual consistency between the ratio of the Regulation – i.e. its declared policy goal, the protection of natural persons – and its field of application, which is defined by the notion of the “personality of data”. In the second part of the paper, we analyse, as a case study, the situation of group profiling in order to demonstrate the consequences of such a narrow notion of personality. This narrowness is, in fact, a serious obstacle to the overarching goal of providing natural persons with an effective form of protection from the inherent risks of new data processing techniques, such as group profiling. We first describe the general features of this particular form of processing. Subsequently, we show that the moment of group-profile application can take place without the use of personal data (through the use of shared-identity proxies) and, accordingly, we claim that the ever-standing “relation of content, purpose or result” between the data used (the profile itself) and the group of undistinguished natural persons to whom the profile will be applied is sufficient to harm them even in the absence of their identifiability. Individuals, while not singularly identifiable, are still “reachable” by the harmful consequences of group profiling.
After this general overview, we expose the inherent risks of group profiling (discrimination, segmentation of society, so-called TOM inferences, de-individualisation, unenforceability of the principles of fairness and transparency, unenforceability of the safeguards of art. 22 in the case of fully automated decision-making processes, etc.) and we try to elaborate a feasible solution to some of these problems. Essentially, building on the acknowledgement that, in the very first phase of group profiling activity (data collection, i.e. before the aggregation, generalisation and anonymisation of data in the formalisation of a de-personalised group profile applicable through shared-identity proxies), the data processed are still personal, our proposal is to apply art. 35 to this moment. The DPIA, in fact, due to its goal-oriented nature (to achieve a fair level of protection for all natural persons “reached by” the negative consequences of data processing, and not only for data subjects), is able to overcome the so-called non-personality-trap and to force the data controller to set up appropriate measures to manage the risks deriving from the processing of data, regardless of the non-identifiability of the subjects impacted by such consequences. The worthiness of implementing a DPIA in this field is confirmed by a number of statements by official institutions (EDPS, European Commission) outlining the necessity to go beyond the limits of data protection law – i.e. beyond the notion of personality. In conclusion, we explain why the data protection framework should be reshaped under a more value-oriented lens, able to overcome its procedural limits in order to regulate directly the harmful material consequences of processing. These, in fact, as demonstrated by the case study of group profiling, are not limited to situations where the requirement of identifiability of data subjects is deemed to be present.

14:00-15:30 Session 5B: DP PANEL: Protecting against data-driven harms in a data-driven world
Chair:
Location: DZ-4
14:00
PANEL “Protecting against data-driven harms in a data-driven world”

ABSTRACT. Data protection law sets the rules for the processing of personal data in order to provide legal protection against possible negative consequences associated with such processing. In the increasingly computation-rich environment where every interaction is being mediated by data, facilitated by increasingly autonomous algorithmic processes, is data protection law still up to the task, and is ‘personal data’ still the right focus of regulation?

The goal of this panel is to explore new ‘centres of gravity’ around which legal protection against data-driven harms can be built, alternative or complementary to the contested notions of personal data, sensitive data and information. This will be done through an interdisciplinary panel with contributions from the areas of information law and regulation, information and communication studies, and economics. For instance, the panel will examine the ways in which law conceptualises or regulates information (and/or data) in other fields and link those to the data protection context. Similarly, can analyses of the concept of ‘information’ and related concepts in economics, the philosophy of information or other relevant disciplines shed light on what it is about (digital) information that ought to trigger legal protection?

This panel features the guest panellists Dara Hallinan (law); Nadine Bol (communication studies); Paul Belleflamme (economics), in addition to the researchers of the ERC INFO-LEG project (Evelyn Wan, Mara Paun, Raphaël Gellert, Sebastian Dengler).

14:00-15:30 Session 5C: DIGITAL CLEARINGHOUSE PANEL: Governance of data sharing at the interface of data protection, competition and innovation
Chair:
Location: DZ-1
14:00
DIGITAL CLEARINGHOUSE PANEL: Governance of data sharing at the interface of data protection, competition and innovation
PRESENTER: Aurélie Pols

ABSTRACT. Data sharing is subject to piecemeal regulatory approaches, with sector-specific regimes (PSD2, the Regulation on non-personal data) being set up while the application of horizontal regimes like data protection and competition law is still unclear. The panel brings together academics, stakeholders, and officials of the European Commission as well as the Dutch Economics Ministry and the UK Government Department for Digital, Culture, Media & Sport to discuss how different interests, like data protection, competition and innovation, can be reconciled and how to create coherent approaches that facilitate a flourishing data-driven economy.

14:00-15:30 Session 5D: IP TRACK: AI, Copyright and Press Publishers
Location: DZ-6
14:00
General Freedom of Action and the New Right for Press Publishers

ABSTRACT. The use of intangible assets in their natural state is based on the principle of the public domain. From the constitutional law perspective, the public domain is based on the general freedom of action. In several Central European countries (Germany, the Czech Republic, Slovakia), the general freedom of action (Handlungsfreiheit) is a human right that can be invoked against state authorities, including the legislator. The author will argue that the creation of exclusive publishers' rights in the Digital Single Market Directive is not sufficiently substantiated and can therefore be found to be an unconstitutional breach of the general freedom of action.

14:30
Art at the Intersection of AI and Copyright Law
PRESENTER: Teresa Scassa

ABSTRACT. The co-authors of this presentation are a law professor and an artist. They use the context of a lawsuit brought against the artist for copyright infringement related to his AI-enabled art project “All We Ever Need is One Another” to tease out some of the complex issues raised by artistic expression in the digital realm. The presentation offers a critical perspective on the intersection of art and copyright law.

15:00
Author MIA. Place of journalists in the post-press publishers’ right world.

ABSTRACT. In the battle over investment in news and platforms' (supposed) parasitism, journalists have been largely left out of the discussion on the press publishers' right. To address this gap, the contribution explores the possible effects of the press publishers' related right on journalists' copyright. It analyses the stakeholder discussion and, through a comparison with other creative sectors, explores whether the introduction of the press publishers' right could lead to the practical relocation of copyright to press publishers.

14:00-15:30 Session 5E: JUSTICE AND DATA MARKET TRACK: Economics of data brokerage
Location: DZ-7
14:00
The quest for a B2B sharing economy of data in the EU

ABSTRACT. The institutions of the European Union have interpreted digital technologies and data as levers to relaunch European economic development. Within the framework of the Digital Single Market Strategy, the Commission has taken steps to increase the generation, transfer and use of digital data. In recent years, several EU policies and actions have therefore focused on the need to establish a flourishing European data economy and, in particular, on the need to boost data mobility. While the spotlight is on the regulatory actions carried out by the European Commission to establish a data-driven economy, this paper focuses on data sharing between private parties, which falls outside the EU regulatory plans even though business-to-business (B2B) data sharing is counted among the drivers of the European data economy. Data sharing consists of three actions: the making available of data by one company; the access to said data by one or more other companies; and the re-use of data by a company different from the original data holder, usually following a non-rivalry approach (i.e. the company which accesses the data is not a direct market competitor). In this regard, if we consider a company that owns a certain set of data and another company that accesses and re-uses the same set, the value that the first company assigns to its data may be independent of the value acknowledged by the second company. In other words, the fact that a company makes some of its data available to another company does not exclude that it has already extracted value from the same data. Data sharing can have beneficial effects on both sides: a company which is granted access to data that it is unable or unwilling to collect on its own may then develop or improve processes, products or services that without those data would be impossible or of lower quality. Meanwhile, data producers would be rewarded for sharing data whose value they may already have exploited within their own processes, products or services. In light of this consideration, the debate on how to build a European data economy must place proper data-sharing incentives at the forefront, since the extended availability of data is widely recognised as crucial to maximising its value. In this sense, innovation within the data economy is closely tied to data sharing. This paper provides an analysis of B2B sharing of machine-generated non-personal data as one of the main weaknesses of the European data economy. First, the relevance of B2B data sharing and its role in the EU agenda are framed. Then, an assessment of the potential competitive and anti-competitive effects induced by data sharing on the market is carried out. Finally, the paper discusses the need to adopt further measures to incentivise B2B data sharing.

14:30
Algorithmic decision-making, price discrimination, and non-discrimination law

ABSTRACT. Algorithmic decision-making advances important goals, such as efficiency and economic growth. But algorithmic decision-making may also threaten fundamental rights, such as the right to non-discrimination. The overarching question for this (legal) paper is: can European non-discrimination law protect people against algorithmic discrimination?

The paper applies EU non-discrimination law to algorithmic decision-making, using online price differentiation, also called price discrimination, as an example of algorithmic decision-making that can have discriminatory effects. With online price differentiation, a company charges different people different prices for identical products or services, based on information about those people.

Suppose that an online book shop adapts the prices of its books to the consumer’s location (based on the consumer’s IP address). The shop differentiates its prices to maximise profits. It turns out that people pay, on average, 20% extra if they live in streets where a majority of the people have a Roma background. We assume that the shop does not intend to discriminate against Roma, and that the prices are independent of postage costs, taxes, etc.

In principle, non-discrimination law, in particular the prohibition of indirect discrimination, can protect people against algorithmic discrimination. Roughly speaking, indirect discrimination occurs when a practice is neutral at first glance but ends up discriminating against people with a certain ethnic background, or with another protected characteristic.

But the paper shows that non-discrimination law has severe weaknesses when applied to algorithmic decision-making. For instance, the prohibition of indirect discrimination is often difficult to apply in practice. If differentiation is ‘objectively justified’, the prohibition does not apply. Whether such a justification applies is context-dependent and requires a complicated and nuanced proportionality test. The scope of the prohibition is therefore often unclear.

Second, non-discrimination law is silent about algorithmic decisions based on incorrect predictions, while such decisions can be unfair. Third, algorithmic decision-making can reinforce social inequality. For example, in some cases, algorithmic pricing has led to higher prices for poor people. But EU non-discrimination law does not protect people against discrimination on the basis of poverty.

The paper concludes with recommendations on how to improve EU non-discrimination law, to better protect people against algorithmic discrimination.

15:00
Impacts of Data Brokerage and Behavioural Advertising on the Right to Privacy and The Rule of Law

ABSTRACT. This work-in-progress paper argues that the behavioural advertising market, as part of a landscape of actors who collect and broker personal data, has undermined the fundamental rights to privacy and data protection in the European Union, with consequences for the Rule of Law. This is a result of the current model of behavioural advertising, which permits digital actors (including data brokers) to gather information beyond that required for the purposes of advertising and to provide that information to an undefined list of third parties without meaningful use restrictions.

This model of data collection and brokerage is potentially in breach of Article 5 GDPR but, more fundamentally, offers a potential for state governments to circumvent constitutional controls on their collection of such data by acting through these non-state actors. The model permits large quantities of personal data to be bought by any actor, which may include political or government affiliates, for purposes which may include political surveillance and targeting.

This is fundamentally contrary to the right to privacy as it offers the potential to sidestep constitutional controls on state action through a privatized ‘back door’ which consumers are either unaware of, or unable to opt out of due to an absence of market alternatives.

This has implications for the Rule of Law because the right to privacy acts as an effective, individually exercised, limitation on state overreach. The cumulative exercise of privacy rights in turn acts as a broader societal control on state action. The paper argues that the right to privacy is thus an essential component of a substantive or ‘thick’ conception of the Rule of Law by providing an effective, if minimal, control on state interference with the individual.

The paper argues that the importance of the right to privacy to this conception of the Rule of Law is reinforced by the right's status as a derivative-rights guarantor. In this role, the right to privacy enables the exercise of ‘derivative rights’ which contribute to the preservation of a democratic polity. In particular, the right to privacy does so by securing spaces in which freedom of expression, particularly in the form of political opposition and dissent, can be exercised, and in which those expressing such views are protected from discrimination or unjust attack. In establishing the right to privacy's status as guarantor, the paper analyses the historical experience of the United States in its development of the Fourth Amendment and the political and historical context of the similar tripartite guarantees of privacy of the person, home and information which emerged in Europe following the Second World War.

The paper employs a multidisciplinary approach, considering the legal argument in light of socio-political and historical understandings of the right to privacy and its function as part of a democratic system. The paper concludes by considering how the rights to privacy and data protection, and the Rule of Law, can be supported in light of the current operation of the digital landscape. It considers the potential of consumer protection legislation, interpreted in light of the Charter of Fundamental Rights, to act as a limit on the damaging aspects of the current business models used by the behavioural advertising industry.

14:00-15:30 Session 5F: AI AND RESPONSIBILITY TRACK: Responsible AI by Design
Location: DZ-8
14:00
Towards transparency by design for AI: How to determine the desirable degree of transparency in AI environments?

ABSTRACT. Problem Description

The word transparency comes from the medieval Latin transparentem, usually translated as ‘shining through’, which in turn derives from the Latin transparere: trans- (through) and -parere (appear). The adjective usually refers to materials that allow light to pass through, so that objects behind them can be distinctly seen, or to things and concepts that are easy to perceive or detect (Oxford Dictionary, 2018). Interestingly, in the context of computing, the word transparent refers to processes or interfaces which function in the background, without the user being aware of their presence or existence (Oxford Dictionary, 2018). Indeed, “technology is at its best when it is invisible” (Taleb, 2010).

In order to overcome this dichotomy, the General Data Protection Regulation lays down several principles aimed at ensuring the data subject's awareness of the collection and processing of personal data. The problem is that, while the GDPR's strength lies in providing general legal requirements across technologies thanks to its technology-neutral nature, the lack of guidance on the application of such principles and requirements to specific technologies and their associated contexts risks neglecting factors that are in fact crucial for protecting users' data-related rights. These challenges also arise specifically with regard to the determination of transparency requirements. In this article, therefore, we look at the interplay between these aspects in order to determine what degree of transparency is desirable in AI environments.

Transparency can be used performatively and strategically rather than indicating genuine trustworthiness (Albu and Flyverbom 2016), and it can also, at times, have unintended negative consequences (Weller, 2017). Indeed, Ananny and Crawford (2018) highlight in their critical reflection on transparency in algorithmic accountability that problematic aspects of transparency include, among others, that it “can be harmful, [...] can create false binaries, [...] has technical limitations, [...] and prioritizes seeing over understanding.” Moreover, transparency expectations and norms depend on the historical, cultural or specific social context in which a technology is deployed, so that it might be shortsighted and culturally insensitive to impose unrealistic transparency ideals. In short, while transparency is undeniably ethically important, a strategy of maximising transparency as crucial for addressing accountability would be unduly simplistic and maybe counterproductive.

The latest Declaration on Ethics and Data Protection in Artificial Intelligence proclaimed the promotion of “transparency, intelligibility and reachability (…) taking into account the different levels of transparency and information required for each relevant audience,” while “making organizations’ practices more transparent (…) by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided” and “guaranteeing the right to informational self-determination”.

Our Contribution

As shown above, the complexity of transparency in AI environments has been explored in the literature, which shows a tension between transparency as a normative ideal, transparency as practically inert, and transparency as a potential disguise for strategies aiming at the realisation of different goals. However, practical guidance to help navigate these challenges has yet to be developed. Building upon the idea of “transparency by design” as an emerging concept across different contexts (Hildebrandt 2013; Janssen et al., 2017), we aim to bridge this gap between theory and practice by conducting an interdisciplinary analysis of the factors contributing to the value of transparency for users of AI technology. To do so, we propose a taxonomy of relevant factors that can be used to balance the benefits and challenges of transparency and thereby determine what degree of transparency is desirable for different AI technologies.

This article is divided into three parts. First, we disentangle the benefits of transparency, including increased levels of accountability and responsibility and advances in the explainability of systems. Second, we discuss the associated risks and challenges of the transparency requirement in AI environments, such as the use of transparency as strategically selective, as window-dressing, or as a means of manipulation. Third, we propose a taxonomy of factors for identifying and balancing the benefits and risks of transparency in AI environments. With this taxonomy, we aim to contribute to the development of a “transparency by design” approach.

References

40th International Conference of Data Protection and Privacy Commissioners (2018). Declaration on Ethics and Data Protection in Artificial Intelligence. Retrieved from https://www.huntonprivacyblog.com/wp-content/uploads/sites/28/2018/10/ICDPPC-40th_AI-Declaration_ADOPTED.pdf

Albu, O. B., & Flyverbom, M. (2016). Organizational transparency: Conceptualizations, conditions, and consequences. Business & Society, 0007650316659851.

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.

Hildebrandt, M. (2013). Profile transparency by design?: Re-enabling double contingency. Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology, 221-46.

Janssen, M., Matheus, R., Longo, J., & Weerakkody, V. (2017). Transparency-by-design as a foundation for open government. Transforming Government: People, Process and Policy, 11(1), 2-8.

Taleb, N. N. (2010) The bed of Procrustes. Penguin Books.

Weller, A. (2017). Challenges for transparency. arXiv preprint arXiv:1708.01870.

14:30
Ethical by Design? Responsible Research and Innovation using the Moral-IT Deck of Cards
PRESENTER: Lachlan Urquhart

ABSTRACT. In this paper, we present the Moral-IT deck, a responsible research and innovation toolkit built to support designers' reflection on ethical issues when creating new technologies. These physical cards build on our earlier research into card-based tools for supporting privacy by design and legal compliance. Here, we tackle the wider question of how to build digital ethics into new technologies by design. Awareness of technologists' responsibilities is growing: it is no longer sustainable to focus purely on building functioning systems; instead, technologists need to consider the wider social, ethical and legal implications of their work. While technologists' role in governance grows, the nature of that role remains unsettled, and there is a greater need for practical tools to support structured engagement with the wider, often complex issues. There are many approaches to this, as we see from the interest in privacy by design, where design patterns and privacy engineering methodologies are being developed.

Our toolkit uses physical ideation cards, a design tool popular in human computer interaction research. We support engagement with digital ethics concepts by posing questions about requirements from law, privacy, security and ethics frameworks in a more accessible, visually appealing card-based form.

Our cards are versatile and can be used in a wide variety of ways, but in this paper we focus on a user-friendly impact assessment we have developed. It poses questions about risks (including ranking their severity), considering the likelihood of occurrence, mapping out appropriate safeguards, and formulating strategies for implementing those safeguards. This method draws on a wide range of perspectives from Science and Technology Studies, human-computer interaction, computer ethics and law, such as value sensitive design, reflective design and human-data interaction. We were also inspired by a variety of impact assessments, such as ethical, surveillance, privacy and data protection impact assessments.

We empirically evaluated our toolkit through a series of workshops using focus groups, exercises and questionnaires. We did this with researchers working on projects from different sectors, ranging from health and wellbeing to transportation and entertainment/cultural heritage. We will discuss the advantages and disadvantages of a card-based approach, presenting findings from our data and lessons for building ethics into design, and reflect on initial ideas as to what the use of such cards could tell us about the cards as a tool, the technologies under discussion, and the nature of the ethics of emerging technology more widely.

15:00
Ethical and Responsible Internet of Things: The Edinburgh Initiative

ABSTRACT. Emerging tools and techniques to capture, process and analyse data from the environment, including sensors, communication infrastructure and software, are opening up possibilities for connecting almost any ‘thing’ to the internet and acting upon the new information available. This growing technological trend has been broadly termed the Internet of Things (IoT). The potential for large-scale deployment of devices that gather and transmit data in places and manners never previously seen, and to process and act on this information remotely, creates an opportunity for research and innovation but is also a source of anxieties regarding privacy, security, trust and other issues. In the context of research and education, ethical frameworks complement legal compliance in mitigating the possible risks and threats arising from the use of technology. Based on documents as well as reflections on the authors' participatory role, this paper gives a reflexive account of the first year of work of an action group (AG) on governance and ethics, established within the organizational structure of a newly created IoT initiative at the University of Edinburgh. The role of the AG is to develop a framework for high ethical standards and appropriate accountability procedures to ensure a responsible use of the IoT infrastructure that goes beyond mere legal compliance with current data protection regulation. We discuss the process of assessing the ethics surrounding the first two IoT use cases of the initiative, and the mitigating arrangements for ensuring the ethical and legal use of any personal data that is processed. The first case is an occupancy-monitoring service for users of the university library, in which personal data are variously collected via sensors and other means. The second case is CitySounds, a project detecting and analysing sounds in a neighbouring park, in which steps have had to be taken to ensure that no personal data are collected. In both cases, public engagement has been undertaken to varying degrees for the sake of transparency and accountability. We outline a processual framework that supports technology-based experimentation involving the monitoring of humans, animals and the environment, while providing space for debate over what should be done, involving a broader range of stakeholders than just ‘ethics experts’ and developers in a way that promotes dialogue around entrenched and emerging social values. The framework contemplates the development of overarching principles and ready-to-use procedures, as well as recommendations for handling the data collected in a way that minimizes the potential harm to participants and the environment, and the risk of institutional reputational damage.

14:00-15:30 Session 5G: VICI PANEL 2: Surveillance, criminal investigation and privacy: perspectives from different cultures

This panel – the second panel in the context of Bert-Jaap Koops's VICI project on ‘Privacy in the 21st century’ – will discuss challenges of safeguarding privacy in the context of surveillance and criminal investigation. A general challenge is how to distinguish between major and minor privacy intrusions, now that classic yardsticks (such as distinctions between private and public places, or between the inside and outside of the body) have become less meaningful for assessing the privacy impact of surveillance activities. More specific challenges are, for example, how to safeguard privacy in the contexts of computer searches, covert remote searches, surveillance in public space, and access to data stored with service providers. Every jurisdiction faces these challenges, but jurisdictions come up with different answers and approaches to address them. Comparative outlooks are fruitful for law-makers and researchers who consider these issues within their own jurisdiction, as examples from different countries and cultures may provide inspiration and encourage out-of-the-box thinking. This panel features three perspectives from different jurisdictions and cultures, illustrated with various technological developments in criminal investigation and surveillance.

Chair:
Location: DZ-5
14:00
Differentiating between Privacy Infractions

ABSTRACT. In 2017 the Supreme Court of India confirmed the right to privacy as a fundamental right protected under the Constitution of India. This was first applied to review a claim that the Aadhaar project, which provides unique identity numbers to all residents in India based on their biometric details (iris scans and fingerprints), was in violation of the right to privacy. The Court upheld the Aadhaar project. The Aadhaar judgement reflects the Court's failure to appreciate the current reality of how information is collected, stored and shared. In this context it is necessary to reconceptualize the right to privacy and to develop a more robust standard of review by courts in assessing privacy claims. I develop this argument in the following four steps. First, the right to privacy should be given the status of a non-derogable and inalienable fundamental right. Second, not all privacy claims are of similar value, and not all deserve equal constitutional protection. Privacy violations can be categorized into two different species: privacy takings and privacy intrusions. Third, privacy takings, both by the State and by non-state actors, should be prohibited as a general rule, and consideration should be given to the cumulative impact of practices rather than to a specific instance related to the privacy claim. Fourth, it is important to highlight the normative foundations of the right to privacy in terms of the physical, decisional and informational aspects that it seeks to protect.

14:30
Technology, Privacy and Criminal Law: Understanding the Change in the Czech Republic

ABSTRACT. Technology changes, and rapidly so, and with it changes the understanding of the concept of privacy. However, written law lags behind both technology and the concept of privacy. This submission aims to discuss the danger of a covert change in the scope of privacy caused by the interplay between a rapidly changing technology environment and a static legal framework. In this presentation I ask how the old legal rules in the Czech Republic apply to the new game formed by the wide availability of IT, encryption, big data analysis and artificial intelligence. I focus on five criminal procedure institutes that are currently being used to access electronic data: the obligation to comply with a request, seizure of property, interception of electronic communication, data retention, and surveillance of people and things. These provisions served widely different purposes when first formulated and later re-formulated; however, they now all play a significant role in obtaining electronic evidence for the purposes of criminal procedure. By mapping their development over time, I show the back-and-forth game of balancing the needs of criminal investigation against existing privacy requirements.

15:00
Stratumseind 2.0, Living Labs, and Preserving Privacy in Smart Cities

ABSTRACT. Smart cities and living labs are a common sight in cities around the world, including in the Netherlands. Stratumseind 2.0 and its living lab (LL) in the centre of Eindhoven are an illustrative example of such initiatives. The two main goals of the LL (organised in the form of a public-private partnership) are to improve safety on the Stratumseind street through digital technology and to turn this type of cooperation into a business model. However, with the widespread deployment of networked technologies in public space, achieving privacy in public space (PiP) is becoming increasingly difficult. By examining the surveillance techniques within the LL, I identify the main dimensions and types of PiP that are at risk (based on the typology of privacy by Koops et al. 2017). This analysis shows that, in the context of PiP, the ‘self-development’ (or ‘freedom to’) dimension of privacy is at the fore (particularly the associational and behavioural types of privacy), which has an important value not only for the individual but equally for society and democracy more broadly. However, the deployment of surveillance technologies in public space other than video cameras is hardly regulated in the Netherlands (at least, beyond data protection law). Based on this analysis, I try to identify which combination of regulatory means, exploring tools beyond the law, is best suited to address the identified privacy challenges.

15:30-16:00Coffee Break
16:00-17:30 Session 6A: DP TRACK: Data protection rights and the right to data protection (B)
Location: DZ-3
16:00
Digital Revolution and Constitutional Change: Data Protection as a Case Study

ABSTRACT. Today, it is impossible to ignore the disruptive impact that the advent and impetuous development of digital technology have generated in the last decades. After the scientific and industrial revolutions, the end of the XX century marked the beginning of the digital revolution. The transformations that contemporary society is experiencing are comparable to those which led to the previous major constitutional changes. The dawn of the new millennium is witnessing a transition from the homo faber/oeconomicus/ludens of the XX century to the homo sapiens informaticus. People in the XXI century do not only act in the physical world, but also live in a virtual ecosystem. Physical bodies do not have access to the digital environment, but human actions are translated into interactions of data. Collections of data, which ultimately are merely digits input and stored in information technology systems, represent our bodies, ideas, preferences and relationships in the virtual domain. We are no longer only flesh and blood, or body and soul – as some may think: we are also our digital selves. The effects of these transformations can be read from multiple perspectives. They have a three-hundred-and-sixty-degree impact on contemporary society, including on its constitutional equilibrium. Constitutional law strives to equalise dominant powers within a community and to balance competing rights, legitimate interests and obligations. However, the factual constitutional equilibrium is not a permanent condition. Societal developments constantly change the landscape in which constitutional law operates, and these transformations can have a variety of outcomes. On the one hand, constitutional law may be able to protect its original values in a context of mutated social conditions. On the other hand, constitutional law, although stretched, may be unable to deal with a societal scenario which is no longer that depicted at the time of the initial elaboration of its norms. However, a peculiarity of constitutional systems is that, in the latter scenario, they do not remain motionless. When the constitutional equilibrium is affected by societal change, the system starts to produce a series of counteractions in order to restore a condition of balance. This is exactly what is happening today: contemporary society is experiencing a new constitutional moment. The progressive emergence and definitive affirmation of data protection law from the 1970s can be viewed as a part of this constitutional process. Data protection law is the response to the challenges generated by the development of digital technology for a series of fundamental rights, such as the right to private life, non-discrimination and human dignity. Data protection law also speaks to the attempt to rebalance the power asymmetry between weak and dominant actors: traditionally, between citizens and states, but today, in the novel globalised dimension, also between any individual and the multinational corporations managing digital services. Last, but not least, data protection has ultimately been recognised as an autonomous right by the Charter of Fundamental Rights of the EU and in several national constitutions. Following this line of argumentation, can one therefore affirm that data protection principles are playing a constitutional role? A fortiori, does a right to data protection really deserve to be constitutionalised?
By adopting a constitutionalist lens, this paper aims to analyse, as a case study, what will be defined as the process of constitutionalisation of data protection. It will be submitted that, in a first historical phase, the production of regulatory norms on the collection of personal data was not inspired by a well-defined and autonomous right to data protection, but that, paradoxically, the latter progressively emerged when the relevant regulation was already established. It will then be considered why there is a persisting reluctance to recognise data protection as a fundamental right. To this purpose, this paper will compare two antithetical positions: on the one hand, the view that data protection should still be regarded as an instrumental value, ancillary to the guarantee of a series of fundamental rights and foundational principles of constitutionalism, and, on the other hand, the opinion that the time has come to recognise it as an autonomous right. Finally, embracing this last vision, it will be considered to what extent data protection deserves this constitutionalisation and, in particular, which specific principles of data protection law should aspire to acquire a constitutional status.

16:30
What is an abuse of data subject rights? Fear, facts and fiction about the use of data protection rights in a post-GDPR world

ABSTRACT. The entry into force of the General Data Protection Regulation (GDPR) in May 2018 strengthened the role of data subjects by facilitating the exercise of their rights – both ‘data subject rights’ as such (to access, rectify, erase, etc. data) and the right to introduce complaints. Since then, both data controllers and data protection authorities (DPAs) appear to be confronted with an increasing number of data subjects exercising their rights and/or bringing forward complaints. German DPAs received over 70,000 complaints in the period from May to October 2018, and the French, Austrian and UK DPAs saw similar increases. While comparable numbers are not available for controllers, anecdotal evidence suggests that they have also had to deal with more data subject requests than before the GDPR, in addition to dealing with complaints first raised to them by individuals. These developments could be a logical consequence of reinforced transparency obligations, and they take place while some explore the possibility of relying on automated solutions to make sure data subject rights can be exercised faster, more easily, and more often.

In light of this, both controllers and DPAs sometimes overtly or covertly refer to notions related to the ‘abuse of rights’ doctrine to limit their exposure to requests, for instance by attempting to delimit what would be a genuine, justified, non-excessive exercise of data subject rights. DPAs seem to apply the abuse-of-data-subject-rights argumentation when deciding upon the admissibility of complaints, declining to investigate abusive claims. Controllers use abuse of rights to justify not replying to a received request. None of them, however, appear to relate their argumentation to the idea that fundamental rights would be applied abusively in the sense of Article 54 of the Charter of Fundamental Rights of the European Union (CFR), or to the abuse of law doctrine developed around internal market legislation within the European Union (EU) by the Court of Justice (CJEU) in its decisions in Halifax, Emsland-Stärke and others.

The time is thus ripe for investigating the legal arguments behind a claim that data subject rights under the GDPR are being abused, or are not properly grounded or exercised. This paper therefore aims to answer the question of what constitutes an abuse of data subject rights under the GDPR, in light of the EU Charter. To this end, the paper will analyse both arguments relating to a breach of Article 54 CFR (abuse of fundamental rights) and the abuse of law doctrine of the CJEU, as due to the double-headed objective of the GDPR (protecting the fundamental right to data protection and ensuring the free flow of data) both could theoretically apply. In addition, the paper will study alternative solutions offered by the GDPR itself to counter the potential negative impact of (some uses of) data subject rights on data controllers and DPAs, focusing on the provisions in the GDPR that restrict the use of data subject rights and circumscribe the ensuing legal obligations, in order to assess whether those allow any conclusions about the need to have recourse to the notion of ‘abuse of data subject rights’. The analysis will be based on case law of the CJEU, the GDPR and EU fundamental rights law. Ultimately, by investigating what constitutes an abuse of data subject rights, the paper will shed light on the conceptualisation of (normal) data subject rights.

17:00
The right of access: A genealogy

ABSTRACT. The right of access is a foundational principle of data protection law, whose roots in a history of struggle for civil liberties, aimed at contributing to a democratic balance of power, are mostly overlooked. To fill this gap, this paper aims to uncover this history in order to contribute to a deeper understanding of the foundational nature of the right of access, to situate current debates and practices around this topic, and to question the prevailing individualistic understanding of this right. Recent legal analyses of the right of access (e.g. Ausloos & Dewitte, 2018; Mahieu, Asghari, & Eeten, 2018; Cormack, 2016; van der Sloot, 2014) do not pay much attention to the theoretical grounding that was provided for this right in the early years of development of the system of data protection legislation. Some of the founding figures of data protection regulation, Westin and Steinmüller, dedicated a substantial part of their work to the right of access and its foundational nature. It is worth going back into this past and following their analysis of the political-philosophical nature of the right of access, because its current development and future shaping are conditioned by how we relate to this analysis. This will help us understand (1) when the right of access is effective, (2) why it has become recognized as a fundamental right in the Charter of Fundamental Rights of the European Union, and (3) the context of recent uses of the right by civil society activists and media. This paper will discuss the social and theoretical history and underpinnings of the right of access, and trace the arguments that have been made in support of such a right. These arguments can partly be found in the legislative history, in recitals, parliamentary debates and reports written by and for governments that were introducing data protection regulations. In tracing these arguments we will discuss the multiple purposes that the right of access is intended to achieve, such as allowing data subjects to verify the accuracy of the data, exercise other data subject rights and check the lawfulness of the processing. We will focus on the works of Westin and Steinmüller, which stand out because of the way they relate the construction of a data protection framework to the underlying political-philosophical principles, and because of the influence their work had on the actual development of early data protection frameworks. Westin's works, which were written to inform US policy-making but had worldwide influence, relate the right of access to the principle of due process and the protection of civil liberties. In his most-cited 1967 work Privacy and Freedom, Westin connects the right of access to the fundamental legal principle of due process. A few years later, in his 1972 Databanks in a Free Society, he goes even further and proposes introducing the right of access as the first area of priority for public policy. Building on the civil rights movement, he is convinced that citizens should have a general right of access to records about themselves. This is not because of the digitalization of society but because “the scope of what American society regards as rights and not as privileges has been widened dramatically over the past decade”. Introducing the right of access is seen as a crucial step in shifting the balance of power in favour of the citizen and invigorating democratic processes. Steinmüller discusses the right of access as part of the theory of informational self-determination.
Seen from this tradition, the focus of the right is the relationship between individuals and their data, where the right of access helps defend people's inalienable autonomy. While most recent work (e.g. Kammourieh et al., 2017; van der Sloot, 2014) only refers to the 1983 German Census case, by which informational self-determination became part of positive law, the theory of informational self-determination was set out in much more detail in Steinmüller's 1972 Grundfragen des Datenschutzes, which includes a specific discussion of the place of the right of access within the overall system. A rooted historical analysis of the right of access is needed for at least three reasons. First, a string of recent work (e.g. Ausloos & Dewitte, 2018; Mahieu, Asghari, & Eeten, 2018; Parsons, Hilts, & Crete-Nishihata, 2018) investigates how effective the right of access is in practice. In reflecting back on our own study, we found that the frameworks by which these studies assess effectiveness are restricted by our limited understanding of the nature of the right. We anticipate that a better understanding of the purposes and foundation of the right of access will allow us to assess its effectiveness with more nuance. Secondly, the fact that data protection, and in particular the right of access, has now been recognized as a fundamental right under the Charter of Fundamental Rights of the European Union underlines the need for an investigation into its roots. While there has been quite some academic discussion about the introduction of Article 8, on data protection, in particular in relation to Article 7, on the right to respect for private and family life, the addition of the right of access has not gained much attention (e.g. Fuster, 2014). Lastly, tracing the histories of the right of access will ground the use of this right within a political-philosophical tradition which deals with attaining a balance of power, ultimately connecting the way in which the right is currently being utilized in civil society, media, activism and academia to a history of struggle for civil liberties and freedom.

Preliminary Bibliography

Ausloos, J., & Dewitte, P. (2018). Shattering One-Way Mirrors. Data Subject Access Rights in Practice (SSRN Scholarly Paper No. ID 3106632). Rochester, NY: Social Science Research Network. Retrieved from https://doi.org/10.1093/idpl/ipy001

Bennett, C. J. (1992). Regulating Privacy: Data Protection and Public Policy in Europe and the United States (1st ed.). Ithaca and London: Cornell University Press.

Cormack, A. (2016). Is the Subject Access Right Now Too Great a Threat to Privacy. European Data Protection Law Review (EDPL), 2, 15.

European Data Protection Supervisor (EDPS). (2014). Guidelines on the Rights of Individuals with regard to the Processing of Personal Data. Retrieved from https://edps.europa.eu/sites/edp/files/publication/14-02-25_gl_ds_rights_en.pdf

Fuster, G. G. (2014). The Emergence of Personal Data Protection as a Fundamental Right of the EU. Springer Science & Business. Retrieved from DOI 10.1007/978-3-319-05023-2_3

Kammourieh, L., Baar, T., Berens, J., & Letouzé, E. (2017). Group Privacy in the Age of Big Data. In L. Taylor, L. Floridi, & B. van der Sloot (Eds.), Group Privacy New Challenges of Data Technologies (Vol. 126). Springer International Publishing. Retrieved from DOI 10.1007/978-3-319-46608-8

Mahieu, R. L. P., Asghari, H., & Eeten, M. van. (2018). Collectively exercising the right of access: individual effort, societal effect. Internet Policy Review, 7(3), 22. https://doi.org/10.14763/2018.3.927

Mayer-Schönberger, V. (1997). Generational Development of Data Protection in Europe. In P. E. Agre & M. Rotenberg (Eds.), Technology and Privacy (pp. 219–241). Cambridge, MA, USA: MIT Press. Retrieved from http://dl.acm.org/citation.cfm?id=275283.275292

Parsons, C., Hilts, A., & Crete-Nishihata, M. (2018). Approaching Access: A comparative analysis of company responses to data access requests in Canada (Citizen lab research brief No. 106). Retrieved from https://citizenlab.ca/wp-content/uploads/2018/02/approaching_access.pdf

Rouvroy, A., & Poullet, Y. (2009). The Right to Informational Self-Determination and the Value of Self-Development: Reassessing the Importance of Privacy for Democracy. In S. Gutwirth, Y. Poullet, P. De Hert, C. De terwangne, & S. Nouwt (Eds.), Reinventing Data Protection? (pp. 45–76). https://doi.org/10.1007/978-1-4020-9498-9_2

Simitis, S. (1978). einleitung. In S. Simitis, U. Dammann, O. Mallmann, & H.-J. Reh, Kommentar zum Bundesdatenschutzgesetz (1st ed., pp. 47–74). Baden-Baden: Nomos Verlagsgesellschaft.

Steinmüller, W., Lutterbeck, B., Mallmann, C., Harbort, U., Kolb, G., & Schneider, J. (1972). Grundfragen des Datenschutzes. Gutachten im Auftrag des Bundesministeriums des Innern. Retrieved from https://dipbt.bundestag.de/doc/btd/06/038/0603826.pdf

Taylor, L., Floridi, L., & Sloot, B. van der. (2016). Group Privacy: New Challenges of Data Technologies. Springer.

van der Sloot, B. (2014). Do data protection rules protect the individual and should they? An assessment of the proposed General Data Protection Regulation. International Data Privacy Law; Oxford, 4(4), 307–325. http://dx.doi.org/10.1093/idpl/ipu014

Westin, A. F. (1972). Databanks in a Free Society. New York: Quadrangle Books.

Westin, A. F. (Ed.). (1971). Information Technology in a Democracy (First Edition). Cambridge, Mass: Harvard University Press.

Westin, A. F. (1967). Privacy and Freedom (Fourth Printing edition). New York: Atheneum.

16:00-17:30 Session 6B: DP TRACK: AI & data protection
Location: DZ-4
16:00
“Alexa, cover your ears!”: an analysis of the application of the GDPR to AI powered Home Assistants.

ABSTRACT. This paper analyses how the GDPR can be applied to home assistants such as Amazon Echo and Google Home, in light of their significant reliance on personal data processing and their positioning inside the homes of data subjects. Home assistants like Amazon Echo and Google Home have arrived in Europe. They are not only on sale on their respective websites, but can now be found in the main electronics franchises and even in certain supermarkets. They are depicted as family and personal assistants, as the devices that will lead our houses into the future and simplify and streamline our lives, or even as additional companions and family members. But how are home assistants going to do all these things? The key to home assistants' capabilities is Artificial Intelligence, which in turn means the processing of data collected by home assistants via their sensors, the devices they control, and other databases available online. Based on the features and functioning of home assistants, the paper identifies a number of provisions of the GDPR whose application is deemed to be challenging. Preliminarily, the paper unties the knot of the scope limitations of the GDPR in relation to home assistants: do they fall within the household exemption? Part I of the paper shows why the answer to this first question should be negative, thereby opening the way to the analysis of other provisions of the GDPR. As a logical consequence, Part II of the paper analyses the roles of Data Controller and Data Processor, and possible cases of co-controllership with regard to third-party application developers as well as users of the devices (for instance in the presence of guests inside the house). Subsequently, Part III of the paper analyses the compatibility of home assistants' data collection, processing and retention with the basic principles of data minimisation and purpose limitation, whose application to the ever-hungry Machine Learning powering the intelligence of home assistants might prove very limited in practice. Part III also provides an overview of the application of Article 12 of the GDPR, concerning the modalities (“concise, transparent, intelligible and easily accessible form”) with which information must be provided to Data Subjects, in particular with regard to the vocal interface of home assistants and the presence of both website accounts and smartphone apps to manage the devices. Subsequently, Part IV analyses the application of Articles 22 and 25, regarding respectively the regulation of automated decision-making and the data protection by design and by default requirements. In the first case, the paper shows how the presence of multiple, daily, smaller automated decisions instead of one unitary, bigger decision can potentially undermine the effectiveness of Article 22, especially due to the presence of cumulative effects on individuals (exacerbated by aggressive marketing policies such as those carried out by Amazon on their online shop). With regard to data protection by default and by design as provided by Article 25, the analysis focuses on both hardware and software specifications, in order to evaluate the overall compliance of the devices (especially in the context of household environments creating a network of interconnected IoT devices surrounding the home assistant).
Finally, the Conclusions close the paper by exploring the scope for additional guidelines and provisions to mitigate undesired effects deriving from the intense processing of personal data collected inside the very private sphere of individuals and carried out by home assistants and the companies behind them. The starting points for these reflections include data protection issues emerging at the group level (especially with regard to the inner circle of Data Subjects), the conversational and vocal interface and the related risks of nudging, as well as behavioural interferences arising from proximity to and interaction with home assistants, which the GDPR might not fully address in its current formulation.

16:30
Making Sense of Mobile App Privacy Policies: Unsupervised Machine Learning Topic Extraction Techniques Reveal Content and Structure Patterns
PRESENTER: Ayelet Sela

ABSTRACT. Privacy protection in mobile applications is a cause for concern for consumers, policy-makers, regulators and scholars alike. Mobile apps collect, handle, process, store, share and commercialize large amounts of user data, some of which is personally identifiable and sensitive. The specific terms of their authorization to do so are set in the privacy policy of each app, which users are required to accept in order to use the app. Despite the critical role of these legal documents as a privacy governance mechanism, regulators face challenges governing their content, and users tend to click-sign them without reviewing their terms. This provision of uninformed consent is attributed, at least in part, to the ubiquity of privacy policies and their tendency to be lengthy and complex. In this article, we use a novel unsupervised machine learning technique to shed light on the content and structure of privacy policies in mobile apps. We analyze a corpus of nearly 5,000 privacy policies listed on the Google Play Store. Our automated methodology has significant advantages compared to previous analyses, which used qualitative manual classification by experts or supervised machine learning techniques. First, it requires considerably less effort, making it a practical and scalable tool. Second, it identifies a more comprehensive list of topics that appear in privacy policies. We present and discuss the implications of our findings, among them the potential of applying our methodology to improve the effectiveness of regulatory efforts in this area, to support user decision-making regarding privacy policies, and to conduct longitudinal and comparative research in this area.
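The abstract does not name the specific unsupervised technique used; purely as a hedged, illustrative sketch of this general approach, the snippet below applies LDA topic modelling (via scikit-learn) to a tiny invented set of policy sentences. The mini-corpus, the number of topics and the choice of LDA are assumptions, not the authors' actual method or data.

# Illustrative sketch only: invented mini-corpus and assumed parameters,
# not the study's data or its (unspecified) technique.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

policies = [
    "We collect your location data and share it with advertising partners.",
    "Personal information is stored securely and never sold to third parties.",
    "Cookies are used to analyze app usage and improve our services.",
]  # in practice: thousands of policy texts gathered from app store listings

vectorizer = CountVectorizer(stop_words="english")                # bag-of-words features
doc_term = vectorizer.fit_transform(policies)

lda = LatentDirichletAllocation(n_components=2, random_state=0)   # assumed topic count
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, weights in enumerate(lda.components_):                   # top words per extracted topic
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {idx}: {', '.join(top)}")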

17:00
The next challenge for data protection law: AI revolution in automated scientific research

ABSTRACT. The application of Artificial Intelligence (AI) in various fields is transforming the world around us. A considerable amount of literature has been published about autonomous vehicles, robotics in healthcare, and the danger of losing jobs and control. So far, however, there has been little discussion about how AI might change scientific research itself. AI-assisted scientific research is already a significant boost for the process of discovery. Still, humans are responsible for creating the hypothesis and planning the research, which is aided by AI. Influential corporations, such as Google, have already started to invest in the automation of scientific research, and the next step will be fully automated research. This radical change in scientific research will have significant consequences. Firstly, if the research process becomes automated, it may be conducted by anyone, which puts citizen science in a new context. Just as developments in hardware (cheaper computers) and software (user-friendly operating systems) made personal computers feasible for individual use, automated research may have a similar effect on science in the future. Secondly, unlike researchers, AI and neural networks cannot yet explain their reasoning. Fully automated research makes the ‘black box’ even bigger, which renders oversight and ethical review problematic in systems opaque to outside scrutiny. Furthermore, one of the main reasons for funding and permitting research activities is the public interest, which will be hard to demonstrate in a black-box situation. Automated research raises many further questions about regulation, safety, funding and patentability. This paper will focus on the issues connected with privacy and data protection, from the General Data Protection Regulation's (GDPR) point of view. The GDPR aims to encourage innovation by permitting the collection of data without consent, repurposing, and the application of longer retention periods for scientific research. Moreover, the Regulation permits the EU Member States to provide derogations from data subjects' rights, such as the right to access and to object, in the case of scientific research. Still, the GDPR defines scientific research in a broad manner, ‘including for example technological development and demonstration, fundamental research, applied research and privately funded research’. Since the Regulation provides this broad exemption, it is crucial to clarify the limits and requirements of scientific research and public interest before the application of AI drastically transforms this field. The paper argues that the GDPR research exemption cannot be applied without limitations to automated research, and that citizen science falls outside the scope of the household exemption. Moreover, the level of public interest must be known at the beginning of the processing; thus public interest, as a legal basis, cannot be relied upon for fully automated research.

16:00-17:30 Session 6C: DIGITAL CLEARINGHOUSE PANEL: Challenges of regulating non-monetary price markets
Location: DZ-1
16:00
Digital Clearinghouse PANEL: Challenges of regulating non-monetary price markets
PRESENTER: Orla Lynskey

ABSTRACT. Absent monetary prices, digital services are typically offered in exchange for users’ attention or data. This leads to challenges in competition and consumer law, where many tools are built around the notion of price. Data’s recognition as an economic asset also creates tensions with the human rights-based approach of data protection. The panel brings together academics and representatives from the Netherlands Authority for Consumers & Markets, the UK Competition and Markets Authority as well as the European Data Protection Supervisor to discuss steps towards developing a methodology for monitoring the impact of non-monetary price offers.

16:00-17:30 Session 6D: IP TRACK: Copyright Exceptions and Design Protection
Location: DZ-6
16:00
Empirical study of the design protection in Europe

ABSTRACT. This article empirically examines the substantive decisions on all types of design rights from the courts of the Member States, from the entry into force of the Design Directive and Design Regulation until August 2017. The article tests several hypotheses. Firstly, it uses descriptive statistics to examine claimants’ relative use of each type of design right and the relationship between the type of design right and the dimension of the design litigated upon. Secondly, the article uses inferential statistics to analyse differences in the proportion of designs found valid and infringed as a function of the level of the courts, the type of design right, the dimension of the design and the level of specialisation of the judges. The article finds that, overall, the EU design system has been functioning well, and the analysis is used to highlight some further improvements.
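
The abstract does not specify which inferential tests are used; as a hedged illustration only, the sketch below shows how a difference in the proportion of designs found valid between two hypothetical groups of decisions (e.g. first-instance versus appellate courts) could be examined with a chi-square test of independence using scipy. The counts are invented for demonstration.

# Illustrative sketch only: the study's actual statistical tests are not disclosed.
# Test whether the share of designs found valid differs between two hypothetical
# groups of decisions using a chi-square test of independence.
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = court level, columns = [valid, invalid]
table = [
    [80, 40],   # first-instance decisions
    [55, 45],   # appellate decisions
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value would suggest the validity rate depends on the court level.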

16:30
The case for a single, mandatory and broad EU copyright exception for text and data mining

ABSTRACT. The European Union plans to introduce a copyright exception for text and data mining (TDM). The aim is to help the EU better compete in big data analysis and artificial intelligence. However, the EU looks set to adopt two TDM exceptions: a mandatory but relatively narrow exception for research institutions and a broader but optional and weaker exception for everyone else. This paper argues that the EU should discard this two-tier approach and adopt one very broad TDM exception.

17:00
Reconciling user-rights approach in copyright with the current EU copyright framework and CJEU case law

ABSTRACT. The user is one of the subjects whose rights and interests should be balanced against the rights and interests of rightholders. This “fair balance” is meant to be achieved by exceptions and limitations, which are regarded as mere defences or privileges, not actionable rights. Current doctrine offers a different, user-rights approach, i.e. that the user should have certain rights with regard to the protected subject matter. The paper explores whether this approach can actually be reconciled with the current EU copyright framework.

16:00-17:30 Session 6E: AI AND RESPONSIBILITY TRACK: Autonomous vehicles and responsibility
Location: DZ-8
16:00
A Model for Tort Liability in a World of Driverless Cars: Establishing a Framework for the Upcoming Technology

ABSTRACT. The development of driving support and cruise assist systems in the automotive industry has been astonishing, accelerating dramatically in the last ten years: since the first DARPA Urban Challenge, field tests have multiplied in the US – in California alone, there are currently 39 companies testing self-driving cars – and the once remote prospect of “driverless” vehicles becoming commercially available might not be so far from reality. A broad range of scientific studies suggests the implementation of fully automated driving systems may come soon. Highly Automated Vehicles (HAVs) are likely to profoundly transform our social habits and to revolutionize our way of interacting with the surrounding environment; in addition, legal scholars have already outlined how automated vehicles create a multi-level regulatory challenge capable of impacting different areas of the law. One of the areas where research is much needed is tort liability: in addressing the regulation of accidents caused by automated cars, jurists must assess whether tort liability rules – as they are currently shaped – are suited to govern the “car minus driver” complexity, while simultaneously holding on to their theoretical basis. If the current framework proves inadequate and irreparably “out of tune” with the new circulation dynamics, the only alternative will be to amend or replace it. In light of these considerations, our aim is to present a hypothetical system for liability arising from road accidents caused by driverless cars. This model should be interpreted as a theoretical guideline, to be adapted in accordance with the specific attributes of each legal system. Consistently with this premise, the article is structured as follows: in Part I we set out (and argue in favour of) the assumptions on which the analysis rests. The main postulates that we embrace are that: a) we will ultimately reach a degree of technology capable of entirely substituting the human driver on the road; b) a fully automated driving system will be able to manage the “behaviour” of the vehicle more safely than its “organic” counterpart; and c) the most promising strategy in addressing HAV regulation is to focus primarily on investigating the risks involved in the circulation of “totally” automated cars – where the human driver has no role – rather than addressing already existing (or forthcoming) intermediate support technologies. In Part II, we present the main options available to lawmakers in allocating liability for road accidents caused by HAVs. Four leading “players” have traditionally been considered – in the academic debate as well as in the regulatory proposals enacted by governmental and independent bodies – “potentially responsible” in the case of road accidents involving HAVs: the driver of the car; its owner; the government (or, broadly speaking, the general public); and the manufacturer of the vehicle. After analysing each potential figure, we conclude that the manufacturer is the most appropriate figure to be held liable in the case of road accidents involving driverless cars.
Part III of the article investigates, on the basis of the background established in Part II, the most widely preferred solutions proposed to regulate a hypothetical liability system for manufacturers: on the one hand, we consider the role that rules on product liability can play, devoting our attention both to EU and to US regulation; on the other hand, we evaluate the impact of different strict liability options. As for the latter, we then proceed to a specific investigation of the hypothetical system proposed by Kenneth Abraham and Robert Rabin in their article “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era” (2017). In Part IV, after laying the groundwork through the analysis of previous solutions for regulating tort liability in accidents caused by Highly Automated Vehicles, and after underlining how none of them seems entirely satisfactory, we present our proposal for allocating risks in the driverless car world. We illustrate, in particular, how a “two-step” system – operating through a negligence assessment and a reward fund – represents an optimal solution to mediate among the conflicting needs in the regulation of driverless vehicles. In Part V, finally, we draw some conclusions on the basis of the various aspects addressed in our analysis, and present some alternatives we considered (and excluded) in developing our system.

16:30
Autonomous Vehicles: Key Elements of a Theoretical Legal Framework

ABSTRACT. Today’s road traffic system functions thanks to a complex set of rules. The key element of the system is a natural person in the driver’s seat who makes decisions. These decisions may vary from driver to driver, yet all of them can still be compliant. When talking about autonomous vehicles, there is a notion that they need to be instructed on how to behave in all possible situations. However, autonomous vehicles can adopt the same variety of decisions as human drivers do. On top of that, they can communicate with other vehicles, infrastructure and the surrounding environment. As a result, autonomous vehicles can solve tasks together in a cooperative manner. This shift in the capabilities of vehicles is so significant that we should not be asking how to make autonomous vehicles conform to the rules, but instead how to structure the rules so that they conform to autonomous vehicles. This paper aims to answer the latter question. Firstly, it considers different technical solutions to the question of how autonomous vehicles should interact with each other. Secondly, it takes into account relevant legal research in the field of autonomous mobility. Finally, it provides a summary of key elements for a theoretical legal framework that can encompass autonomous vehicles.

17:00
Should autonomous cars be liable for the payment of taxes?
PRESENTER: Fanny Vanrykel

ABSTRACT. Developments in artificial intelligence have made it possible for cars to drive autonomously, without requiring the intervention of a human driver. Autonomous or self-driving cars, which have been tested by many car manufacturers, are said to be “just around the corner”. This contribution discusses several legal issues associated with the expected rise of such vehicles.

Firstly, taxation of autonomous vehicles has been considered from the perspective of infrastructure financing. Autonomous vehicles not only require roads to be maintained, but also investments in smart infrastructure, such as intelligent transportation system improvements, vehicle-to-vehicle communication technology, and vehicle-to-infrastructure communication. Taxing autonomous vehicles has been regarded as a new source of funding for these investments.

Secondly, although driverless vehicles could improve accessibility, road safety and energy consumption, they could also exacerbate congestion by increasing the number and distance of motorized trips. Road and congestion pricing strategies have been envisaged in the literature and by policy makers to prevent this risk. For instance, in the State of Massachusetts, two bills have been proposed to introduce a mileage tax on autonomous vehicles, with the aim of avoiding the phenomenon of so-called “zombie cars”, which drive around, for instance, to avoid parking fees.

Finally, to the extent that autonomous cars may replace services supplied by traditional taxi drivers and by professional or non-professional drivers in the context of ride-sourcing platforms (e.g. Uber and Lyft), there could be a case for taxing the autonomous car itself. The scale-up of autonomous cars is indeed associated with a risk of personal income tax and social security losses, due to the disappearance of certain revenues, such as salaries. Following several publications on the taxation of robots, such as the work of Prof. Xavier Oberson, taxing the “robot car” itself, whether or not accompanied by the idea of granting it legal personality, could compensate for these losses.

16:00-17:30 Session 6F: HEALTH & ENVIRONMENT TRACK: The role of citizens and authorities in the energy transition
Location: DZ-5
16:00
Supervision of Offshore Oil and Gas Exploration and Production and the Independence of the Regulatory Authorities

ABSTRACT. Offshore hydrocarbon exploration and production are an example of how emerging technologies are contributing to meeting the energy needs of modern society, accounting in 2016 for more than a quarter of global oil and gas (O&G) supply. These activities are endorsed by the International Law of the Sea, which confers on coastal states sovereign rights to explore and exploit O&G resources on the continental shelf. In the exercise of these sovereign rights, an increasing number of coastal states have introduced legal regimes and entitled companies to explore and exploit offshore hydrocarbon resources, some of them new players without experience in supervising these activities. However, the history of the O&G industry shows that these activities entail a tension between at least two interests of the state: economic development and the protection of health, safety and the environment. Major accidents illustrate the risks posed by offshore O&G exploration and production to health, safety and the environment, and the need for a proper regulatory approach to supervise these activities. The Piper Alpha (1988) and Macondo (2010) accidents prompted regulatory responses by governments with jurisdiction in the North Sea, the Gulf of Mexico and beyond, which reformed the authorities in charge of supervising offshore O&G exploration and production. These measures aimed to make the regulatory authorities more independent, separating the function of supervising the activities from the authority in charge of awarding authorisations to explore and produce hydrocarbons offshore. Almost in parallel with these reforms, scholars in the field of political science have conceived and advanced the theory of “independent regulators”. The Organisation for Economic Cooperation and Development (OECD) has also made significant contributions to this field.

The objective of this article is to identify the reforms to regulatory authorities implemented after major accidents in experienced countries such as the UK, Norway and the USA, and to assess whether these reforms contribute to the authorities’ independence in the supervision of offshore hydrocarbon activities. The research studies the day-to-day practices of O&G regulatory authorities in the UK, Norway and the USA in supervising offshore O&G activities, as well as their institutional objectives and their effects regarding the protection of health, safety and the marine environment. For this purpose, the research uses comparative legal research and qualitative research methods such as case studies. The expected outcome is to identify which criteria contribute to the independence of offshore O&G regulatory authorities and may be used by other states for the supervision of these activities. In this way, this paper aims to contribute to the current debate on how to regulate health, safety and the environment; manage the risks entailed by the spread of new technologies; and prevent accidents.

16:30
Towards Cross Border Local Energy Communities – a Case for EU Regulation?

ABSTRACT. Moving away from a system in which electricity is generated by large centralised installations operated by big utilities, decentral approaches to facilitating the energy transition are high on the agenda of EU policy makers. One of these approaches is manifested in what the EU Commission defines as “Local Energy Communities” (LECs) in its legislative proposal reforming the EU energy sector. LECs are envisaged not only to engage in decentral generation, but also to “perform activities of distribution system operators, supplier, or aggregator at local level including cross borders”. This paper analyses whether the envisaged LECs can form a valid case for EU regulation. More specifically, it aims to answer the research question: to what extent do decentral solutions for the energy transition, inter alia LECs, present a valid case for the adoption of regulation at the EU level?

Currently, interconnection across national borders is located exclusively at the transmission level of the electricity system. This can be explained by the technical setting of the electricity system, which predominantly entails large remote generation, high-voltage transmission systems as ‘backbones’, low-voltage distribution systems as ‘appendages’, and mainly inflexible consumption. Accordingly, the incumbent EU legal framework on interconnection applies to the high-voltage transmission system, as this is where the cross-border element is most prominent. Hence, from an internal market perspective and based on the current setting of the electricity system, it is at least tenuous for the EU to regulate the low-voltage distribution system level. This may change, however, due to emerging opportunities for increased cross-border interactions at the distribution level.

Technical developments at the distribution system level, such as small-scale generation connected to the distribution grid (decentral generation), aggregation of these sources, and efforts to improve energy efficiency, require new forms of distribution system operation, possibly also across borders. These developments require extending the internal market perspective by what is here termed a more “functional perspective” of the law, steering towards the uptake of RES and energy efficiency, which are established aims at EU level. A functional perspective thus adds to the mere internal market perspective and proposes a functional justification of regulation at EU level, which appears to be especially relevant for the energy sector in its transition towards decentral technologies facilitating a low-carbon sector. Based on insights from an EU Interreg project assessing the preconditions for interconnection at distribution system level, this paper argues that only if the functional element of decentral solutions for the energy transition is of high quality can EU regulation applicable to the distribution system be adopted in a valid manner.

17:00
The citizen as both prosumer of energy and its potential victim: insights from the ‘AnalyzeBasilicata’ case

ABSTRACT. This contribution examines the dual and possibly paradoxical nature of energy consumption: the citizen who stands in turn as prosumer of energy, but also as potential victim of the health risks posed by energy production. How citizens themselves and the regulatory framework respond to this contradiction needs to be investigated. The paradox is illustrated through the analysis of a case study, namely the AnalyzeBasilicata initiative. The project was launched in 2015 by the Italian association COVA Contro with the aim of monitoring and reporting the environmental and environmental health problems of the Italian region of Basilicata. The initiative quickly obtained a vast social uptake and, through crowdfunding, managed to buy the instruments necessary to collect samples in numerous areas of the region and run chemical tests. The results of the tests fuelled investigations that were subsequently published in local media and on the project’s blog, and used to file formal notifications to the competent environmental agency or to the public prosecutor’s office. Examples of the collective’s actions include an investigation aimed at showing the correlation between ENI’s extractive operations in the region and its seismic status, and the denounced discovery in local drinking water of traces of halogenated compounds with carcinogenic effects. The organization demanded legal change by pushing for long-term targets, such as more transparency in energy policies and risk governance, but also for concrete specific steps, such as the definition of clear maximum thresholds for the presence of carcinogenic compounds in drinking water. At the same time, the inhabitants of Basilicata need energy and benefit from the extractive activities in terms of labor supply and financial income for the region. A ban on extractive activities in the region would be opposed by a large section of the population. Drawing on this contrast, and inspired by in-depth qualitative research performed with the help of the founder of AnalyzeBasilicata, this contribution focuses on the role of the regulatory system in harmonizing the rights, desires and claims of the concerned citizens with the demands of the energy agenda. While citizen-run monitoring technology can tackle the problem of poor environmental monitoring or hidden environmental data, I stress the need for a regulatory system fostering transparency and public accountability, as well as civic access to environmental information and participation in energy decision-making.