
09:00-10:30 Session 12: Competition and Market Regulation: Recent competition law challenges
Fighting high drug prices: excessive pricing and price gouging in EU and US pharmaceutical markets

ABSTRACT. The problem of high pharmaceutical drug prices is a permanent object of political debate at the global level. Today it encompasses many different types of drugs, from innovative products and biologics to generics. Among developed countries, the issue is particularly acute in the US, where no price regulation exists and spending on prescription drugs has increased rapidly, undermining the affordability of medicines for patients. But also in the EU, where Member States generally operate price controls on prescription drugs at national level through different reimbursement schemes, healthcare expenditure borne by public budgets is a crucial concern and is closely linked to the rise of pharmaceutical prices over time. Antitrust enforcement has traditionally targeted anticompetitive behaviour in the pharmaceutical sector. In recent years, however, antitrust investigations in the EU have called attention to a different type of conduct, i.e. the imposition of excessive prices. Recent practice in the pharmaceutical sector has mainly concerned old off-patent drugs and post-acquisition strategies (Aspen, Pfizer/Flynn), but other situations, such as cases concerning orphan drugs, are currently under investigation (Leadiant, Biogen). Recent experience in the US has in a certain way renewed the discussion on excessive prices in pharmaceuticals as well (Daraprim, EpiPen). Whereas initiatives aimed at addressing the problem of high drug prices have so far included price transparency and price gouging bills, but no antitrust involvement, a debate has arisen on whether the appropriate remedy for such conduct lies within the powers of the Food and Drug Administration or instead requires the intervention of antitrust agencies, in the wake of the European experience, despite the different approach to unfair prices.
The paper aims to investigate the existing practice by comparing the EU and US frameworks, in order to evaluate the appropriate role for antitrust in this controversial area and to define the boundaries of its intersection with sectoral regulation. The topic is particularly relevant today, as the risk of price hikes during the Covid-19 pandemic has already drawn the attention of EU and US competition agencies.

SEPs licensing across the supply chain: an antitrust perspective

ABSTRACT. The rise of the Internet of Things (IoT) and the development of 5G are set to add a new layer of complexity to the current practice of standard essential patents (SEPs) licensing. While, until recently, the debate has centred on the nature of fair, reasonable and non-discriminatory (FRAND) commitments and the mechanisms to avoid hold-up and reverse hold-up problems between licensors and licensees, a new hotly-debated issue has now emerged. At its core is the question of whether SEP holders should be required to grant a FRAND licence to any implementer seeking a licence, including component makers (the so-called ‘licence-to-all’ approach), or whether they should be free to choose the level of the supply chain at which the licence is granted (the so-called ‘access-for-all’ approach). After providing an up-to-date overview of the current legal and economic debate, the paper focuses on the most recent antitrust case law dealing with the matter on both sides of the Atlantic and argues that no sound economic or legal basis favouring licence-to-all solutions can be identified.

‘Competition Policy and Value Chain Resilience – A Primer’

ABSTRACT. The Covid-19 pandemic and the ensuing public health, economic and societal crises have laid bare the vulnerability of our complex societies to high-impact/low-probability events. The outbreak of the Covid-19 pandemic and the public mitigation strategies adopted in response have partially unravelled the globally integrated value chains that form the neural system of our globalised market economies. This disruption of global value chains has led to greater awareness of the interdependence and fragility of tightly-knitted networks of integrated just-in-time value chains. It has also prompted calls for a general rethinking of how various economic policies and regulations could enhance the level of resilience of integrated value chains against exogenous shocks.

This paper follows this invitation. As a primer, it proposes some reflections on the conceptual and economic relationship between competition, competition law, and value chain resilience. The paper makes three contributions. First, the paper seeks to clarify the notion of value chain resilience and its relationship with competition. It thus addresses the basic question of whether competition is conducive or detrimental to value chain resilience. Second, the paper explores by means of three case studies of the liner shipping, meat and dairy processing, and online distribution sectors the complex relationship between competition, competition law and resilience. Third, the paper explores a number of channels to incorporate concerns about resilience into competition law analysis.

11:00-12:30 Session 13A: Competition and Market Regulation: Market Power and Market Definition
Market Definition for Digital Ecosystems

ABSTRACT. In the digital environment, certain market characteristics and competition dynamics have led to the emergence of a number of digital ecosystems, ie ‘groups of complementary goods and services that form a bundle that can be consumed by the final customer’ (Jacobides/Lianos 2021). What plays into the hands of these ecosystems is their reliance on user data for various services, an input that can therefore benefit an ecosystem in multiple ways (Bourreau/De Streel 2019). Digital ecosystems lead to user lock-in and reduce (the need for) multi-homing. These dynamics are reinforced by specific strategies, such as pre-installing certain apps or providing a single sign-on across services, to name but two. The question arises whether EU competition law as it stands is fit to deal with such digital ecosystems and the specific anti-competitive behaviour they display. As a first step to answer this question, we need to assess whether it is possible to delineate a relevant antitrust market for digital ecosystems that allows us to apply the antitrust provisions to them. The present contribution approaches this issue from three angles: (1) Aftermarkets. Under the lens of aftermarket theory, it inquires whether individual markets (eg, online search or email clients) can be regarded as a secondary market that is derived from a primary market – the comprehensive digital ecosystem. What could make the aftermarket analogy particularly fitting for digital ecosystems is that they provide market participants with an additional layer of power over the various markets by the simple fact of providing an integrated solution for services that could also be supplied separately. One and the same product – eg, online search – therefore needs to be seen as a stand-alone product competing with other search engines, in addition to being understood as part and parcel of a comprehensive digital ecosystem. (2) Cluster markets. 
In terms of cluster markets, one can assess whether a cluster of online services that are not interchangeable with each other may make up a cluster market that can only really compete with other cluster markets – eg, because nobody would want to use a search engine that does not also offer an email client. (3) Conglomerates. Finally, under the heading of conglomerates, we will turn to merger cases that have analysed conglomerates in the past, asking whether we can apply insights from market definition in these cases to digital ecosystems. When dealing with these market definition questions for digital ecosystems, of course, the question arises whether ecosystems necessarily need to be subsumed under traditional categories of competition law – or whether the specificity of these digital phenomena requires a new kind of competition law analysis that does proper justice to their particular characteristics and the market failures inherent in these digital markets, perhaps outside of the categories of traditional relevant markets (see already Crémer/de Montjoye/Schweitzer 2019). For this purpose, the functions of the relevant market are briefly revisited, arguing that an approximation to an antitrust relevant market will remain legally necessary in the foreseeable future. All the more reason to engage in this analysis.

Greater than the sum of its parts: Reevaluating the market power of “data ecosystems”

ABSTRACT. Companies such as Google and Facebook are not merely conglomerates of Internet-based services which just so happen to process personal data. Instead, they should be conceptualized as ‘data ecosystems’ and treated as such.

This contribution is intended to introduce the concept of ‘data ecosystems’ and to highlight the vast market power such companies have secured. The aim is to push the competition law and data protection narrative forward by emphasizing the complete and fundamental integration of personal data in the business model of data-driven companies, as well as its role in said market power.

Data ecosystems are companies which collect and monetize personal data through a network of widely diverging internet-based services, for the overarching purpose of targeted advertising. In contrast to traditional conglomerates, a data ecosystem is unique in that all of its different branches are interconnected through a shared resource: personal data.

The data ecosystem structure has granted such companies several strong sources of market power. Network effects of personal data lead to services being constantly updated and personalized with increasing accuracy, while simultaneously enhancing the monetization strategy of targeted advertising. Meanwhile, the reach of data ecosystems across the Internet is becoming so all-encompassing that consumers cannot realistically choose not to participate, nor to find suitable competitors for each service. More pressingly, data ecosystems’ services are becoming increasingly vital to consumers’ everyday lives, as pandemic lockdowns across the world have vividly illustrated.

Finally, data ecosystems are characterized by a strong incentive to expand into new additional markets. Such conglomerate mergers are an essential strategy to strengthen their sources of market power. More personal data is brought in, thus increasing the network effects. The reach of the ecosystem is extended, thus locking in even more consumers. These market strategies, always working in tandem and impossible to replicate by competitors, lead to continuously growing market power which concerns both online competition and personal data protection.

It is submitted that data ecosystems’ market power has been structurally underestimated in the past. A new approach that fully appreciates their unique structure and formidable sources of market power is therefore required.


ABSTRACT. Some digital information intermediaries, such as Google and Facebook, enjoy significant and durable market power. Concerns regarding the anti-competitive effects of such power have largely focused on conduct engaged in by the infomediaries themselves, and have led to several recent, well-publicized regulatory actions in the US and elsewhere. This article adds a new dimension to these concerns: the abuse of such power by other market players, which lack market power themselves, in a way which significantly harms the competitive process and undermines the integrity of the relevant information market. We call such abusers “market power parasites.” We provide three examples of parasitic conduct in online information markets: (1) black hat search engine optimization, (2) click fraud, and (3) fraudulent ratings and reviews. In each of these examples the manipulating parasite utilizes the infomediary’s market power to potentially turn an otherwise limited fraud into a manipulation of market dynamics, with significant anti-competitive effects. This separation between power and conduct in the case of market power parasites creates an unwarranted lacuna which is not addressed by existing laws aimed at preventing abuses of market power. Antitrust law does not capture such parasites because it only prohibits unilateral anti-competitive conduct if such conduct is engaged in by a monopolist. At the same time, fraud torts require proof of specific reliance and are therefore limited to a particular wrong, disregarding the broader competitive concerns resulting from parasitic conduct. To bridge this gap, we suggest a fraud-on-the-online-information-markets rule, akin to the fraud-on-the-market rule in securities law. We propose to eliminate the rigid fraud tort requirement to prove reliance, and replace it with a presumption of reliance that will apply once the plaintiff proves harm to the integrity of an online infomediary.
Our proposal strengthens competitors’ cause of action, releasing them from the arguably ill-fitting need to prove specific reliance, thereby increasing enforcement against the anti-competitive acts of market power parasites which harm the integrity of information in digital markets.

11:00-12:30 Session 13B: Data Governance: Governing Digital Identity
Blockchain technology for identity management in the migration context: is a right to personal identity for migrants and refugees possible?

ABSTRACT. The lack of ID documents among migrants and refugees can pose a problem in formulating an efficient humanitarian response. Their identification is an essential prerequisite for providing them with food, health care and other vital services. Furthermore, these individuals cannot fully integrate into modern society without any proof of identity. ID documents, health records, bank accounts and educational credentials are vital elements in modern society. Blockchain tools could play a fundamental role in managing the identity data of migrants and refugees. These technological systems consist of interconnected nodes which provide storage of, and access to, data. Users act as nodes of this chain, so that they can gain control of their identity information. In the migration context, blockchain technologies allow people without ID documents to prove their identity and to transact with other individuals. This is already happening in a refugee camp in Jordan. This paper investigates the possible legal and ethical consequences of using blockchain technologies in the migration context, taking into account several concerns worth mentioning. Distributed consensus among all users is a fundamental requirement for the functioning of the blockchain. Every individual should agree to share his/her data through the nodes, trusting the working protocol. The article identifies the relevant legal norms on consent, recalling the GDPR provisions. Furthermore, it points out the possible cultural and technological gap between migrants and immigration officers regarding the concept of privacy and the functioning of blockchain ledgers. The paper also analyzes the role of private actors in this field, such as blockchain providers. More specifically, it seeks to understand how their actions could impact the fundamental rights framework of migrants and refugees.
A purely technological response to identity issues in the migration context is not sufficient. Lawmakers should regulate the deployment of technologies like blockchain in this field. Furthermore, public actors, such as States and international organizations, should listen to the needs of displaced people in order to formulate an efficient humanitarian response to migration flows.

Bodies that Betray: EU’s Corporeal Borders

ABSTRACT. The European Union has gradually intensified its gathering of biometric data from immigrants, refugees, and asylum seekers, and increasingly makes the resulting databases available to several immigration-related and police institutions throughout Europe. Where legal, political, and humanitarian efforts fail, asylum seekers try to distort their bodies as the source of undesirable biometric data. With methods such as burning fingertips or claiming to be an unaccompanied minor, they attempt to escape the algorithm and defy the problematic Dublin Convention. Consequently, the EU uses technologies such as retinal scans or DNA tests to overcome such attempts. The body is marked with borders and carries the tension of identification: every gesture, breathing rhythm, stammering, and sweating could contribute to constructing the wrong “data double” (Haggerty & Ericson 2000). This paper scrutinises the intensification of border control through bodily practices and the dynamism of bodily resistance against such measures. The research addresses the historical interrelations between surveillance, identification, belonging, and citizenship (Lyon 2010) and highlights the data-based exclusion of unwelcome asylum seekers by forcing their bodies to reveal their 'deception'. The extreme datafication of bodies and the countersurveillance struggle both coerce the material body to disappear so that an agreeable data double can rise.

References: Haggerty, K., & Ericson, R. (2000). The surveillant assemblage. British Journal of Sociology, 51, 605–622. Lyon, D. (2010). Identification, surveillance and democracy. In K. D. Haggerty & M. Samatas (Eds.), Surveillance and democracy (pp. 34–50). Abingdon/New York: Routledge.

Data Protection and Rule of Law: A Challenging Perspective.

ABSTRACT. Every person has the right to a legal identity: the right to recognition as a person before the law, enabling that person to assert rights, enforce contracts, and bring or defend a case in court. This right is freestanding, thus not dependent on official identification, and it has been recognized and codified in various international human rights treaties (the UDHR in 1948, the ICCPR) and in modern Constitutions. However, in today’s globalized world, access to services such as health and education is increasingly linked to the possession of various forms of identification that, while collecting evidence of people’s life events, grant them a digital identity. To this end, the most advanced technological tools are used to address the challenges of traditionally weak identification systems and, relying on modern technology, several forms of identification have been proposed, studied and implemented (Allen, C. 2016). In most developed countries, this approach is subject to the scrutiny of democratic institutions, committees and boards, raising questions linked to de-anonymization problems and focused on privacy and data protection. Concerning less developed countries, some authors (Johnston, S.F. 2018) argue that technology seems to represent a “technological fix”, that is, a generic tool for circumventing problems commonly conceived as social, political or cultural. In these contexts, such systems certainly represent a valuable tool for granting civil rights, but they also represent a valuable source of statistics, used as a key tool for shaping public interventions and allowing policymaking based on forecasting, for monitoring new trends and for planning feasible policies (UN Data Revolution for Sustainable Development, 2014).
Often, in such contexts, the rule of law is weak and data collected through technological identification systems can be misused, leading not only to a greater concentration of power in the hands of non-governmental organizations, but also to complex relationships between asymmetric information and power (Khan & Roy, 2019). This raises the need to reflect on whether group privacy (Taylor, L., van der Sloot, B., and Floridi, L. 2017) remains the main problem, or whether new scenarios can emerge, depending primarily on the local context and local perceptions.

11:00-12:30 Session 13C: Open Track: Panel 'Escaping “the law of everything”. Should we separate ADM regulation from data protection?'
Panel proposal: Escaping “the law of everything”. Should we separate ADM regulation from data protection?
PRESENTER: Nadya Purtova

ABSTRACT. The central question the panel aims to answer is whether it makes sense in the EU to regulate automated decision-making outside of the GDPR, i.e. not as a part of data protection, and not anchored in or conditioned upon the notion of personal data.

The background to this is the following: in data protection law, the desire to include automated decision-making within the scope of data protection is stretching the concept of personal data (which determines the GDPR’s material scope), and consequently the scope of the GDPR, to the extent that everything becomes personal data and the GDPR the law of everything digital. This raises concerns as to the actual ability of the GDPR to resolve all digital problems. At the same time, the protections that the GDPR provides regarding automated decision-making only concern individual decision-making (when the ADM is based on personal data AND is directed at an individualized natural person, distinguished from a group). The GDPR does not provide any safeguards if a decision is directed at a group, i.e. more than just one person. In addition, one can argue that providing safeguards in automated decision-making is a distinct rationale of data protection law, different from traditional ‘privacy’ rationales such as creating anonymous space for expression and self-development, or preserving a private intimate sphere. Finally, one wonders if there are sufficient reasons to regulate individual and non-individual ADM separately, and why they do not warrant the same approach. So, to fix the problem of regulatory overstretch, should we perhaps have a separate legal regime that regulates individual and non-individual automated decision-making across public and private sectors, narrowing down the remaining scope of data protection to preserving anonymity and intimacy? The panel will be conducted in the form of a round table.
Experts from the fields of data protection (specifically the regulation of ADM), administrative, discrimination and consumer protection law will be asked to reflect on why the regulation of ADM was included within the scope of data protection in the first place, and on the possibility and implications of separating data protection from the regulation of ADM in terms of need, feasibility, gains and challenges.

Chair / moderator: Nadya Purtova

Confirmed speakers: Sofia Ranchordas (administrative law), Margot Kaminski (ADM, US perspective), Catalina Goanta (consumer law perspective), Sandra Wachter (discrimination law, ADM)

Invited (not yet confirmed): Mireille Hildebrandt

13:30-15:00 Session 14A: Competition and Market Regulation: Workshop 'Remedies for Digital Markets' (part 1)

Michal Gal, Nicolas Petit, Seth Benzel, Francesco Ducci, Alexandre Ruiz Feases, Inge Graef, John Kwoka, Filippo Lancieri, Mark Lemley, Georgios Petropoulos and Thibault Schrepel

Workshop 'Remedies for Digital Markets'


In the rich conversation on antitrust in the digital economy, remedies are often treated as an afterthought. Recent enforcement and regulatory initiatives do not clearly indicate whether the goals of antitrust remedies in the digital economy should be preventative or restorative. And if remedies should be restorative, it is unclear whether antitrust remedies should seek to reengineer digital competition through diversification, commodification, disintermediation, or other mechanisms. The remedy question is, however, a preliminary issue that any antitrust process should address, lest it under- or over-fix.


Organizers: Michal Gal (Haifa) and Nicolas Petit (EUI)



- Seth Benzel (MIT)

- Francesco Ducci (EUI)

- Alex Ruiz Feases (Tilburg)

- Michal Gal (Haifa)

- Inge Graef (Tilburg)

- John Kwoka (Northeastern)

- Filippo Lancieri (Chicago)

- Mark Lemley (Stanford)

- Nicolas Petit (EUI)

- Georgios Petropoulos (MIT)

- Thibault Schrepel (Utrecht, Stanford)

13:30-15:00 Session 14C: Data Governance: Governing Data in Fintech and Credit
Co-Governing Emerging Socio-Technical Systems: Investigating the Implications of Public-Private Partnerships in Smart Cities and Central Bank Digital Currencies

ABSTRACT. A great variety of social contexts are currently confronted with disruptive technologies (e.g., blockchain, IoT, AI) deployed by assorted stakeholders. Frequently, these innovations embody complex interdependencies between public institutions, private actors and civil society, thus generating manifold regulatory hurdles. Two topical examples of this dynamic can be witnessed in (i) smart cities and (ii) central bank digital currencies (CBDCs). The governance of these emerging socio-technical systems (STSs) must contend with the presence of different layers of actors, interests, expertise and needs. Under these circumstances, joint coordination efforts can be pursued by establishing public-private partnerships (PPPs).

While PPPs offer societal advantages and may foster co-regulatory regimes, their promoters face significant challenges. In smart cities, PPPs often enable the digitization of services and infrastructure, but may also replicate the power asymmetries embedded in the platform economy. Citizens may be exposed to ever-increasing visibility, as these arrangements frequently entail the convergence of databases held by the private and public sectors. In CBDCs, cooperation can either (a) occur between public institutions (e.g., central banks, governments, supervisory authorities, law enforcement) and private actors (e.g., FinTech companies, law firms) or (b) also involve private financial institutions. While this allows regulatory and societal needs to be taken into account, power and knowledge gaps arguably exert great influence.

Against this backdrop, this contribution explores how the PPP model may impact the governance of two diverse STSs: (i) smart cities and (ii) CBDCs. Firstly, we focus on PPPs delivering biometric identification systems in smart cities, which bring forth broader societal issues that cannot be addressed by legal compliance alone. Secondly, we consider the interplay between participants in CBDC projects, mindful of the twofold nature of these instruments as both institutional and cryptocurrency-related. Thirdly, for each case study we focus on the connections (or lack thereof) between PPP schemes and the emergence of co-regulation mechanisms. Finally, by comparing the specificities of these domains, we highlight the extent to which the logics underpinning each specific ecosystem influence its governance and regulation. In doing so, we contextualize co-regulatory efforts within STSs, from both a theoretical and a pragmatic perspective.

Future money - tracing the imaginaries behind EU policymaking in digital finance.

ABSTRACT. The financial sector is undergoing a fundamental digital transformation. New kinds of mobile financial applications sit between users and traditional banks; social payment platforms, mobile banking and digital payment services constitute the interface through which we interact with our finances. This process of ‘platformisation’ of financial services is likely to bring about issues typically associated with platform business models and information capitalism. Yet, while financial innovation is widely (and often positively) discussed from business perspectives, it is rarely scrutinized in terms of information control-related risks. European policymaking is mostly concerned with fostering the growth of a European digital payment ecosystem. The 2nd Payment Services Directive favors market liberalization and places the platform model at the core of future banking and payment infrastructures. Processes of datafication – portrayed as necessary and inevitable by the private sector – constitute a pillar of the EU Commission’s Digital Finance Strategy. Corporate interests and expertise – disguised in highly professional marketing jargon – are the primary source of knowledge for decision-making in this domain. However, considering their direct, concrete impacts on individual lives, the design of financial infrastructures should be sensitive to a broader spectrum of issues. The goal of this study is to identify the ‘sociotechnical imaginaries’ that guide EU policymaking in the field of financial innovation, in order to assess the underlying motivations, interests and possibly biased assumptions. The methodology consists of a systematic qualitative analysis of (a) policy documents and legislation issued by European institutions; and (b) fintech firms’ public statements outlining their business models and ‘values’.
The discourses in policymaking and corporate environments are then compared in order to identify their mutual influence and respective performative power, as they contribute to materializing specific forms of payment infrastructures and money. The results of this analysis highlight how corporate imaginaries of technological progress filter into policymaking, enabling or accelerating the transformation of institutional artefacts with or without the involvement of public opinion and counter-imaginaries. We conclude that policymaking in this domain should be more sensitive to critical voices highlighting the data protection and power concentration issues typical of the platform economy.

Can online credit solutions reinforce marginalization?

ABSTRACT. One of the effects of the pandemic has been an increase in online purchases, including in access to basic goods and services. In this regard, we review the availability of online credit solutions to vulnerable groups and examine whether the applicable regulation in the EU ensures equal access. Consumers ineligible for normal consumer loans, the granting of which is conditional upon proof of stable income and a positive credit score, usually belong to vulnerable groups and are left with alternative financing options, including loans available without submitting a credit score or credit solutions that are instantly available at the point of checkout.

Instant checkout credit solutions are data-driven and tend to infer credit risk from non-traditional/alternative data. Many instant checkout loan providers exploit easily accessible alternative data, which are taken as proxies for economic status, character and reputation. A consumer is often profiled as soon as she starts exploring financing options. Such profiling is usually automated. Machine learning algorithms can learn to (1) associate creditworthiness with behavioural patterns that are statistically observed more often among advantaged groups and discriminate against those who are not perceived as part of these groups, perpetuating historic patterns of discrimination; and (2) infer vulnerability, which can then be exploited by lenders. For instance, in the competition for consumers and the smoothest online experience, service providers may be tempted to employ dark patterns: UX designs that nudge the user into decisions they may not want to make.

Differentiated credit offers based on profiling may lead to discrimination harms, because consumers would not be aware of receiving a differentiated offer and of being discriminated against in comparison to other consumer groups. Therefore, even though discrimination based on sex, race and ethnicity in access to goods and services is prohibited under EU non-discrimination law, establishing an individual prima facie case of discrimination would be hard if not impossible. We therefore look into the gaps between consumer law, anti-discrimination law and data protection law in the European context, with their possible negative effects on marginalized, low-income and vulnerable populations, as well as the ways these areas of law may work together to overcome such effects.

The study expands upon authors’ article titled “Computer says Hausfrau – can automated credit scoring contribute to the gendered digital divide?” published in Encore - the Annual Magazine on Internet and Society Research 2020/2021. https://www.hiig.de/wp-content/uploads/2021/01/encore2020_magazine.pdf


Aggarwal, N. (2020). The Norms of Algorithmic Credit Scoring. https://doi.org/10.13140/RG.2.2.21817.72800

Wong, K. (2020, August 8). How Financial Apps Get You to Spend More and Question Less. Wired. https://www.wired.com/story/financial-apps-investing-dark-patterns

13:30-15:00 Session 14D: Open Track: Data protection and digitalization in crisis
Sociotechnical Change and Its Place within Law’s Internal Model of Reality: Exploring a New Analytical Lens for Law and Technology Theory

ABSTRACT. Oftentimes, law is considered to ‘fall behind’ technology because changes in law are slower than changes in technology. The aim of this paper is to use autopoiesis theory to render new theoretical insights into how law as a system relates to sociotechnical reality, and to draw new directions for law and technology research. Autopoiesis provides theoretical explanations, starting from the characteristics and interactions of different societal systems, of the formation and components of law’s internal model of reality. The theory could act as a coherent conceptual playing field in which to situate existing law and technology theories (the pacing problem, regulatory disconnection) and thereby develop a new lens for analyzing the challenges brought by the difference in pace between law and technology. Legal concepts, goals, and rules are built on assumptions about how the world works and about the effects regulatory intervention might have. Viewed through the lens of autopoiesis, these belong to the legal system’s internal model of reality. It follows that law does not directly regulate social behavior, but formulates rules in reference to this internal model of reality, in the hope of triggering the desired changes in the other societal systems. The unprecedented pace of technological change challenges the validity of (parts of) law’s internal model of reality, rendering existing rules and laws less suitable, or even obsolete, in the new context. The potential of autopoiesis as an analytical lens ranges from a new perspective for understanding the pacing problem to identifying and addressing regulatory disconnections not only in abstracto, but also in concreto. For instance, one may use it in a specific legal domain such as data protection, to assess the level of structural compatibility when regulating matters of a new technology, e.g. artificial intelligence.
While data protection law is the ‘go to’ legal domain for regulating data-processing related issues of AI, it seems it will not address the full extent of regulatory challenges in this rather new technological context. The analytical lens this paper proposes would help identify with more precision where the link between the law’s internal model of reality and sociotechnical reality appears to be broken, and enable a more targeted update of the regulatory environment where necessary.


ABSTRACT. Identification as a process, and the fact of being identified, is a boundary concept of data protection: it separates personal data from non-personal data. Still, the GDPR only provides guidance on the meaning of identifiability, i.e. the possibility of identification (Recital 26). The mainstream data protection literature is also focused on the meaning of identifiability. Borgesius [1] examines the meaning of identifiability as “singling out” and argues that it should include identification by name but also by non-name identifiers. Finck and Pallas [2] discuss what pseudonymization and anonymization mean and conclude that absolute anonymization under the GDPR is never technically possible, without touching on the meaning of identification. The debates among computer scientists tackle anonymization and reidentification techniques and their (in)effectiveness [3]. Any discussion of identifiability not grounded in an understanding of identification is inadequate. The issue tackled in this paper is the meaning of identification under the GDPR. The contribution of the paper is three-fold: 1) It offers an integrated typology of identification, outside of the legal context, as a process or result of distinguishing a person in a group. The typology builds on three socio-technical accounts of identification: four identifiability types by Leenes [4], seven types of identity knowledge by Marx [5], and anonymity as unreachability by Nissenbaum [6]. 2) In addition to the established types, it identifies personalization as a new identification type, i.e. a relatively unique characterization in which one is individualized by being mapped in relation to multiple dimensions within a multidimensional space, where a dimension can be a personal attribute or an attribute of the surrounding context. The argument builds on the literature on calculated publics, profiling in recommender systems, agile methods of software development, and price and content personalization.
3) It clarifies the meaning of identification under the GDPR. If identification means distinguishing a person from a group, it legally encompasses non-name identification. However, the ECJ’s Breyer decision seems to invalidate this approach [7]. I propose a contextual interpretation of Breyer, which negates the decision’s restrictive potential and brings all identification types within the GDPR. The paper will discuss the implications of this reading of identification for data protection.

[1] F. J. Zuiderveen Borgesius, ‘Singling Out People Without Knowing Their Names – Behavioural Targeting, Pseudonymous Data, and the New Data Protection Regulation’ (2016) 32 Computer Law & Security Review 256–271.
[2] M. Finck and F. Pallas, ‘They Who Must Not Be Identified – Distinguishing Personal from Non-Personal Data under the GDPR’ (2020) 10(1) International Data Privacy Law.
[3] See e.g. Narayanan and Shmatikov on reidentification, Sweeney on k-anonymity and the responses to it, and the work of Dwork and others on differential privacy, e.g. C. Dwork and A. Roth, ‘The Algorithmic Foundations of Differential Privacy’ (2014) 9(3–4) Foundations and Trends in Theoretical Computer Science 211–407, available at http://www.tau.ac.il/~saharon/BigData2015/privacybook.pdf, last accessed 24 July 2020.
[4] R. Leenes, ‘Do They Know Me? Deconstructing Identifiability’ (2008) 4(1&2) University of Ottawa Law & Technology Journal 135.
[5] G. T. Marx, ‘What’s in a Name? Some Reflections on the Sociology of Anonymity’ (1999) 15(2) The Information Society, p. 100.
[6] H. Nissenbaum, ‘The Meaning of Anonymity in an Information Age’ (1999) 15(2) The Information Society 141–144, DOI: 10.1080/019722499128592; and S. Barocas and H. Nissenbaum, ‘Big Data’s End Run around Anonymity and Consent’ in J. Lane, V. Stodden, S. Bender and H. Nissenbaum (eds), Privacy, Big Data, and the Public Good: Frameworks for Engagement (CUP 2019) 44–75.
[7] P. Davis, ‘Facial Detection and Smart Billboards: Analysing the “identified” Criterion of Personal Data in the GDPR’ (2020) University of Oslo Faculty of Law Legal Studies Research Paper Series No. 2020-01, available at https://ssrn.com/abstract=3523109, last accessed 27 July 2020.

Gaps in GDPR private enforcement: How are conflicts of laws triggered and what do they do to data subjects’ rights

ABSTRACT. The paper seeks to explore the evolving nature of data protection enforcement in cross-border cases in the European Union (EU), to identify the outstanding gaps in the legal framework, and to propose a new approach to dealing with them. Over two years into the applicability of the General Data Protection Regulation (GDPR), its enforcement is still in its infancy. Although the public enforcement path via regulatory authorities has been used, numerous uncertainties still surround private enforcement of the GDPR, namely the seeking of judicial redress for infringements before the courts by individuals or their representatives. The aim of this paper is to explore how the changing focus of data protection rights, moving from ensuring compliance to enforcement, showcases the legal gaps in the GDPR, taking private enforcement as a specific example of this evolution. The paper’s objective is to exemplify gaps in the GDPR caused by, on the one hand, the numerous opening clauses in its text that leave a large margin of manoeuvre to national legislatures and, on the other hand, references to the law of member states without rules for determining the applicable law. Further, the objective is to show how these gaps in the GDPR trigger the need to approach the GDPR from the perspective of another EU legal regime, namely EU private international law, as well as potentially from the perspective of the national laws of member states. This will be achieved by exemplifying the gaps in the GDPR where conflicting national legal rules are triggered and by exploring the legal acts that need to be used to remedy these conflicts. Further, the legal and regulatory risks will be shown from the perspective of the consistency of data subjects’ protection in the Union and the legal character of the GDPR as an EU regulation.
The expected outcome of this paper is to highlight the need for legal change in addressing intra-Union cross-border private enforcement cases. In the long term, this paper’s results will also add to the debate on the enforcement of individuals’ rights in the digital space.

13:30-15:00 Session 14E: Energy and Climate Crisis: National perspectives on (renewable) energy
Who pays the bill for the energy transition? Regulatory and justice remarks on the forthcoming Spanish National Fund for the Sustainability of the Electricity System

ABSTRACT. The energy transition - in its technological dimension as a process of decarbonisation - is an undeniable issue and a priority on international, European and national political agendas. The need to promote renewable energies in order to achieve the corresponding energy and climate policy objectives is also uncontroversial. However, the energy transition poses a number of regulatory problems, among which the allocation of the costs of a new, electrified and sustainable economic model should be highlighted. In times of crisis, these problems become crucial when certain factors combine: on the one hand, the fall in energy demand and, on the other, the depressing effect of renewable energies on electricity market prices; the result can be a mismatch between income and costs in the electricity system. This is the case in Spain, where it is feared that the cost of financing renewable energies (approximately 7 billion euros per year) will fall on electricity consumers. This paper aims to address the debate on who should bear the costs of financing the support schemes for renewable energies. To this end, it will analyse the draft Spanish law that will establish the National Fund for the Sustainability of the Electricity System. Through this reform, the Fund will be created to finance the costs associated with the renewable energy remuneration system. To date, these costs have been financed by electricity consumers and included in electricity bills. Under the new system, the Fund will be financed by all energy supply operators on the basis of sales of gas, oil or electricity. The aim of the law is threefold: to avoid increases in the price of electricity, to give clear signals for the electrification of the economy, and to provide legal certainty to the system in order to allow the mobilisation of the necessary investments.
However, the draft legislation has been heavily criticised by certain business sectors and appeals have already been announced against the future legislation, arguing (wrongly) that it would violate the European principle of technological neutrality, free competition and even the Constitution itself. In order to frame the issue correctly and assess the reform, a series of considerations will also be made from the perspective of the theory of regulation and energy and climate justice.

Enabling Digital Renewable Energy: A Case Study of Law and Technology in Scotland

ABSTRACT. Renewable energy deployment presents great opportunities for the reduction of carbon emissions and the realisation of the UN’s Sustainable Development Goals. Digitising renewable energy deployments - whereby hydro, wind and solar generators also generate data about their activities, which can be collected, stored, analysed and re-used - enables more efficient operations, such as optimising the systems, reducing wasted energy and creating ‘smart grids’ which may, for example, enable small-scale renewables producers, including households, to sell the energy they generate. However, the collection and use of digital data in renewable energy systems also creates risks and vulnerabilities, such as cybersecurity hazards like data breaches and hacks, which may be devastating for these systems. Furthermore, the digital information needs its own physical infrastructure in which to be stored and processed; equipment which will have its own energy needs and environmental cost. If the risks and costs outweigh the benefits of digitised renewable energy, then overall it cannot be a sustainable tool.

Using mixed methods, we scope these challenges for the safe and economical governance of renewable energy systems, with a renewable energy deployment in Scotland as a case study, to determine how the benefits and challenges of digitisation are identified and addressed in renewables’ regulation and governance. We pay particular regard to our current context: the UK having exited the EU and its legal system; an increasingly strong and autonomous devolved Scottish Government prioritising both digitalisation and decarbonisation; and Glasgow’s hosting of the UN COP26 Climate Change Conference later in 2021, which further raises the profile of, and imperative for, net zero interventions in our local area.

From the context of renewable energy deployments and consideration of the broader impacts of policy decisions, we demonstrate an integrated approach, from the nexus of law, digitalisation and climate considerations, to drive towards the essential benefits, alongside careful management of vulnerabilities.

Norway’s Quest for Renewables in the Context of Greenpeace v. Norway Case

ABSTRACT. On 22 December 2020, after seven days of hearings (4-12 November 2020), the Norwegian Supreme Court decided a crucial case at the intersection of climate change and energy policy. The case is based on Article 112 of the Norwegian Constitution, which grants the right to a clean and healthy environment. Historically, Norway created an original model of sustainable development by enacting a strict petroleum regulatory framework focused on controlled, slow and safe oil exploitation, but also on using oil money to invest in renewable energy. This model is contested in the case on the ground that oil exploration, especially in the Arctic, is wrongful because the oil will generate considerable emissions when burnt, irrespective of Norway’s investments in renewables. This is called the Norwegian paradox: the perception of Norway as a clean energy user and a sustainability-oriented nation contrasts with its oil production which, after being exported and then burnt, generates greenhouse gas emissions around the world totalling ten times more than all Norwegian emissions together. Before the case reached the Supreme Court, the Oslo District Court dismissed it, stating that “[e]missions of CO2 abroad from oil and gas exported from Norway are irrelevant when assessing whether the Decision entails a violation of Article 112”, but admitted that Article 112 of the Constitution is a rights-bearing provision. Later, the Borgarting Court of Appeal affirmed the District Court’s ruling with one notable exception: it declared that Article 112 is applicable even to the emissions generated by the combustion of exported oil and gas. The Supreme Court of Norway heard the case in a plenary session with the participation of all 19 judges, and “in full”, so all the legal arguments, facts and questions were discussed and made subject of the judgment.
The Supreme Court found that, as a general rule and with some severely limited exceptions, Article 112 is applicable only in Norway, and that the article does not grant an individual right to challenge petroleum activity in court. The Court analyzed the issue of Norwegian oil burnt abroad in the context of the high demand for oil and gas, the competition in the energy field and the trends in global markets. The Court also took into consideration that, if Norway were to relinquish its oil and gas activities, other oil-producing countries would provide the required fossil fuels. However, the court is mainly silent about Norway being a stable, democratic and environmentally responsible nation investing its oil money in renewables. The Norwegian state oil company (Equinor) has spearheaded large wind energy projects, designing and building the world’s first floating wind farm concept (Hywind). Moreover, the company has become a world leader in carbon capture and storage (CCS), dedicating 15-20% of its total capital expenditures to innovative energy solutions by 2030. In addition, the Parliament encouraged the Norwegian oil fund (Pension Fund Global) to invest in unlisted renewable energy infrastructure, increasing the cap for such investments from 60 billion kroner to 120 billion kroner. Norway is also using energy from renewables to exploit oil. This paper addresses the new legal paradigm created by the Supreme Court, how this interpretation fits into Norway’s long-term plans regarding renewable sources of energy, and how this decision will influence the future of investing in the technologies necessary to develop renewables. Moreover, it addresses why the Supreme Court, in its decision, devoted so little space to analyzing the role of renewables in the fulfilment of the duty of care enshrined in Article 112 of the Norwegian Constitution.

15:30-17:00 Session 15A: Energy and Climate Crisis: Economic and trade approaches to sustainability
Greenhouse Gas Emissions under WTO Jurisprudence

ABSTRACT. The COVID-19 pandemic of 2020 has shown at least two different aspects of the global climate change problem. On the one hand, while Wuhan, the 11-million-strong Hubei province city at the center of the coronavirus outbreak, was on lockdown from late January until March 2020, when the WHO declared the COVID-19 outbreak a pandemic, the atmosphere above China appeared virtually clear of nitrogen dioxide emissions in NASA satellite images. On the other hand, of the emerging infectious diseases that have moved into people from animals or other sources over the last several decades, the vast majority come from animals, and the majority of those from wild animals. These facts show how process and production methods (PPMs) negatively influence the planet’s climate.

The notion of PPMs was specified in a 1997 OECD report, which stated that the term refers to processes and production methods and is defined as the way in which products are manufactured or processed and natural resources extracted or harvested. Climate change related policies targeting GHG reductions often do not deal with products per se; they generally address process and production methods, while focusing on broader variables such as sectors, industries, firms or installations. The more carbon-constrained a process or production method is, the more costly it becomes. This results in losses of export income and a need to diversify the economy, which is in turn even more burdensome.

There is a risk that ‘economic activity will shift to less carbon-constrained jurisdictions and GHG emissions would not be reduced, but simply shifted to other national locations. Therefore the more carbon-constrained countries will feel aggrieved about additional competitive pressure from the same industries in less constrained countries’. The resulting relocation of energy-intensive industries to countries with less stringent environmental policies is generally referred to as ‘carbon leakage’. The purpose of including PPMs in the context of climate change policies is to incorporate the (social/environmental) cost of production into the price of products, so as to give both producers and consumers an incentive to limit the use of carbon-intensive or environmentally unfriendly products, and thereby to internalize the environmental costs connected with climate change mitigation.

The following GHG-related measures could be used to internalize environmental costs with respect to PPMs: taxes, technical regulations and standards, and subsidies. Moreover, PPMs are mentioned in the Agreement on Technical Barriers to Trade (TBT Agreement) and the Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement); PPMs therefore have a legal meaning under WTO law. This paper will thus focus on the legal regime of taxes, technical regulations, standards and subsidies under WTO law as possible means of internalizing costs, aimed at fulfilling the objectives of the Paris Agreement in connection with PPMs.

Section 1 analyses the issue of “likeness”, which is crucial for defining the legal status of PPMs under WTO law. Section 2 discusses carbon taxes under WTO law and jurisprudence as a traditional instrument for internalizing externalities, while Sections 3 and 4 focus on technical regulations, technical standards and subsidies. Section 5 reflects on the possibility of justifying climate change mitigation measures under the general exceptions enshrined in Article XX of the GATT. Section 6 contains general observations on the interconnection between WTO law and the Paris Agreement.

Carbon taxes, law and technology: a global conversation?

ABSTRACT. This is a legal analysis of carbon taxes as a regulatory strategy to curb greenhouse gas (GHG) emissions. Carbon taxes are often praised as a cost-effective and efficient solution to address climate change and foster technological innovation. Such discourses thus establish a relationship between climate change, taxes, technology and the law, one that puts them within an instrumental frame. With this paper, my aim is three-fold. First, I intend to demonstrate the topicality of carbon taxes within and outside environmental law scholarship, as well as the role legal scholars have played in the promotion of carbon taxes. Second, my purpose is to explain the main assumptions and shortcomings in legal scholars’ approach to the relationship between climate change, taxes and the law. In particular, I identify two open questions, namely the malleability of carbon taxes and the paradox they entail, that legal analysis could help answer if a different perspective were followed, but has not done so far. Third, I outline the methodological challenges this topic raises and propose ways to overcome them.

15:30-17:00 Session 15B: Competition and Market Regulation: Workshop 'Remedies for Digital Markets' (part 2)

Michal Gal, Nicolas Petit, Seth Benzel, Francesco Ducci, Alexandre Ruiz Feases, Inge Graef, John Kwoka, Filippo Lancieri, Mark Lemley, Georgios Petropoulos and Thibault Schrepel

15:30-17:00 Session 15C: Open Track: AI regulation and explainability
Explanation is a concept in an AI-induced crisis, they say. But badly explained pandemic politics illustrate how its core values have never been safe. Can we use the momentum?

ABSTRACT. Duties to explain decisions to individuals exist in laws and other types of regulation. Rules are stricter where dependencies of explainees are larger, where unequal powers add weight to information imbalance. Of late, decision makers’ decreasing knowledge-ability of decision support technology is said to hit critical levels. Machine conclusions seem unreason-able, and dignitarian concerns are raised: humane treatment is said to depend on the ability to explain, especially in sensitive contexts.

If this is true, why haven’t our most fundamental laws & conventions prevented this corrosion? Covert ‘algorithmic harms’ to groups and individuals were exposed in environments where explanation was regulated, and in sensitive contexts. Fundamental unsafety still slipped in, with highly disparate impact.

Insights from the research fields of epistemic (in)justice help to understand how this happens. When social dynamics of knowledge practices go unchecked, epistemic authority easily becomes a factor of other powers, and patterns of marginalization appear. ‘Other’ people’s knowledge, capacities, and participation are wrongly excluded, dismissed, and misused. Wrongful knowledge is made, and harms play out on individual and collective levels. Core values of explanation promote the ability to recognize when, what, and who to trust and distrust with regard to what is professed. In democratic societies, this capability is highly depended upon. It is true that current challenges to these values are not sufficiently met by regulation, but this problem does not follow from technological developments; it precedes them.

When the Corona crisis hit, national authorities based decisions with fundamental impact on people’s lives on real-time knowledge making. Many professed to build on expert advice, science, and technology, but still asked to be trusted on the basis of their political authority. Critical choices with regard to expertise and experts remained unexplained, concepts unreasoned. Whose jobs are crucial, who is vulnerable, what does prioritizing health and safety mean? Patterns of marginalization appeared, and policy measures have shown disparate impact.

In times of crisis, the tendency to lean on authority rather than honest explanation and diverse knowledge co-creation is a recurring pattern. This contribution argues for using this dual momentum to assess and reinforce our explanation regulation. If we truly want such regulation to express the fundamental importance of explanation, insights from the fields of epistemic (in)justice should lead the way. This contribution presents a working model of explanation as a type of interactive, testimonial practice to support such efforts.

AI Regulation. Challenges and Opportunities of a Voluntary Label?

ABSTRACT. AI regulation in the EU is emerging. Authorities, NGOs and academics have already issued a series of proposals aiming to accommodate the ‘development and uptake of AI’ with an ‘appropriate ethical and legal framework’ and to promote what the European Commission has called an ‘ecosystem of trust’ in its recent white paper on AI. We are currently awaiting further clarity on the ‘proposal for a legal act of the European Parliament and the Council laying down requirements for Artificial Intelligence’, on which an Inception Impact Assessment was launched by the European Commission in July 2020. Over a hundred contributions have been received discussing the three options explored by the impact assessment. One of these options envisages organizing a voluntary labelling scheme ‘to enable customers to identify AI applications that comply with certain requirements for trustworthy AI.’ In October 2020, fourteen countries, including the Netherlands and France, strongly advocated this approach in a position paper, as it provides, they argue, ‘incentives for companies to go beyond the letter of the law and drive trustworthy solutions, because they see the competitive advantage in being ahead of the curve.’ This paper discusses the pros and cons of voluntary labelling as a regulatory tool in the field of AI. In doing so, we take the responses to the Inception Impact Assessment into account but also adopt a comparative approach with similar instruments already deployed in the EU regulatory framework, including CE marking, data protection and IT security. We conclude with possible arrangements to explore and advise on the regulatory conditions that should be met to make this option workable and attractive for businesses.

Unlawful AI, “until proven otherwise”: A new model for AI justification

ABSTRACT. In recent years, legal scholars and computer scientists have widely discussed how to reach a good level of AI explainability and a good level of algorithmic accountability and fairness. The first attempts focused on the right to an explanation of algorithms, but such an approach has often proven unfeasible and fallacious, due to the lack of legal consensus on the existence of that right in different legislations and on the content of an explanation, and due to the technical limits of a satisfactorily understandable causal-based explanation for deep learning models. Several scholars have accordingly shifted their attention from the legibility of algorithms to the evaluation of the “impacts” of such autonomous systems on human beings, through “Algorithmic Impact Assessments” (AIA). This paper, building on AIA frameworks, proposes a multi-step test to “justify” (rather than merely explain) Automated Decision-Making (ADM). In practical terms, it proposes a system of “unlawfulness by default” for ADM, in which data controllers bear the burden of proof to justify (on the basis of the outcome of their Algorithmic Impact Assessment) that their autonomous system is not discriminatory, not manipulative, not unfair, not inaccurate, not illegitimate in its legal bases and purposes, not using an unnecessary amount of data, etc. To this end, the concept of “justification” is introduced and analysed. Indeed, the justificatory approach is already required by the GDPR principles in Article 5. A legal and societal justification of automated systems is not only more technically feasible but also more useful and desirable than an explanation of the algorithmic code. Justifying ADM means not merely explaining the logic and reasoning behind it, but also explaining why it is legally acceptable (correct, lawful and fair), i.e. why the decision complies with the core of the GDPR.
This proposal is in line with the most advanced proposals in terms of data ethics (see the German Data Ethics Commission’s opinion on “banning” very high-risk AI), but also with the EU Parliament’s proposal for a Regulation on AI, in which high-risk AI systems are obliged to comply with strict rules on risk mitigation.