TILTING2021: TILTING 2021 REGULATING IN TIMES OF CRISIS
PROGRAM FOR THURSDAY, MAY 20TH

09:00-10:30 Session 7: Competition and market regulation: Sustainability and competition
09:00
Coordinating sustainable business behaviour in EU competition law. Insights from planning theory

ABSTRACT. This contribution applies insights from planning theory to inform how EU competition law can effectively coordinate sustainable behavior amongst undertakings. Planning theory provides theoretical frameworks for the application of group planning based on insights from experiments and observations. Group planning aims to coordinate behavior when, if left to their own devices, people will not end up coordinating their behavior effectively. In business competition especially, businesses may not coordinate their behavior effectively: there are reasons to worry that businesses will not take public interests (such as competition, environmental protection, and human rights) duly into account. This may be due to a lack of expertise (due to the complexity of the public interest) or a lack of incentives (due to different preferences or values).

EU competition law is a legal planning tool governed by a combination of legislation, government guidance, and case law interpretations that together set boundaries to business behavior. It remains unclear, however, whether and to what extent EU competition law should and could set boundaries to sustainability agreements amongst undertakings. This paper uses insights from planning theory on rational, bounded, and value-based decision-making, as well as on the impact of power relations, to understand the pitfalls and potential of EU competition law in coordinating sustainable business behavior.

09:30
What is the role of EU merger control in ensuring sustainability? Innovation output, innovation diversity and the Commission’s innovation theory of harm in agrochemical mergers

ABSTRACT. This paper analyses the role of sustainability concerns (such as food security, food safety and biodiversity) in the European Commission’s assessment of the innovation effects of mergers in the agrochemical sector (Dow/Dupont; Bayer/Monsanto). The paper makes four contributions. First, it provides a clear framework for deciding when and how such sustainability concerns should be taken into account in the assessment of the competitive effects of mergers. Second, it shows that the Commission’s current assessment of innovation in merger cases fails to give full effect to legitimate sustainability concerns because of its exclusive focus on innovation outputs rather than innovation diversity. Third, the paper discusses how a greater role for innovation diversity would allow merger analysis to account for the adverse effects of mergers and market consolidation on sustainability parameters, such as biodiversity or food system resilience. Fourth, the paper explores several channels and filters to operationalise the concern about innovation diversity in merger analysis.

10:00
The Twin Transition to a Digital and Green Economy: Doctrinal Challenges for EU Competition Law

ABSTRACT. The European Union intends to transition to a digital and green economy, an ambitious endeavour which must be supported by all of its policies and actions. This includes competition law as a central pillar of the European economic constitution. While ‘digital competition law’ and ‘green competition law’ do not share many traits at first sight, upon closer inspection it becomes clear that their insistence on ‘non-economic’ goals defies some of the more established logic of EU competition law and requires a new outlook that reconciles their individual paths. Starting from the constitutional standing of the (quasi-)foundational values of data protection and environmental sustainability, the paper analyses how – on a more theoretical and on a more practical level – EU competition law could incorporate these values into its assessments.

11:00-12:30 Session 8A: Competition and Market Regulation: Data sharing, competition and data protection
11:00
Data protection and competition law: friends or foes regarding data sharing?

ABSTRACT. Data plays a prominent role in the digital economy, as being able to use data to develop or improve innovative products or services is a key competitive parameter. Consequently, an increasing call for more data sharing is being made in the EU. However, this might create tensions with the policy objectives of the GDPR, as data sharing will often cover both personal and non-personal data mixed in the same dataset. Unsurprisingly, large data holders have started to use data protection considerations to justify refusals to share data with third parties. This could create some serious competition issues, and some authors thus argue that the GDPR strengthens large data holders’ position by increasing concentration in data markets and by reinforcing barriers to data sharing. This finding seems to rely on the double premise that: i) the GDPR is more lenient towards personal data re-use within the ecosystem of these large data holders than it is towards the sharing of this personal data with third parties; and that ii) the way in which these large data holders re-use personal data within their ecosystem complies with data protection law. In this paper, we show that these two premises can be challenged from a theoretical point of view. Yet, even if these premises are challengeable, large data holders do apply “double standards” in practice. They adopt a very restrictive approach towards data sharing with third parties while massively circulating their users’ data internally. We outline that this is mainly the result of a lack of data protection enforcement, which led, in turn, to a lack of competition enforcement. Therefore, to solve this “double standards” issue, we argue that there is a crucial need for more enforcement of the existing rules by data protection authorities. Data protection, which might presently be somewhat of a foe to competition in light of the “double standards” situation, could become a friend if enforced with more pugnacity against large data holders, as it would not only reinforce the protection of data subjects but would also ensure the existence of a healthier competitive environment.

11:30
Regulating Agricultural Data and the Concept of ‘Data Ownership’: Approaching the Debate from the Competition Policy Perspective

ABSTRACT. Farming activities have become more efficient and less costly thanks to the effective usage of data-driven ‘Digital Agriculture’ services. However, this new form of ‘Smart Farming’ has also brought about data-related challenges and questions regarding rights over data, such as who owns the farm data, who has which rights to it, and ultimately whether there is a need for regulation. As a consequence, there has been much debate about agricultural data rights and regulation. The idea of giving data ownership to farmers has predominantly been defended in the literature and has even already been adopted in the voluntary data codes of conduct developed by stakeholders both in Europe and the US. This paper approaches the data regulation and data ownership debate in the agriculture sector from the competition policy perspective, centring on one of the most prominent problems of farmers: data lock-ins. The paper highlights the reasons for the data lock-in problem of farmers in the emerging Digital Agriculture sector, and discusses whether the existing data ownership approach in the literature and soft law is adequate to solve farmers’ problem. Although providing farmers with a farm data ownership right seems reasonable at first sight and dominates the discussions as the prominent idea, centring a transferable data ownership right as a legal concept when regulating data in the sector might be more problematic than beneficial for farmers: given farmers’ weak bargaining power vis-à-vis ag-tech companies, ownership rights could accumulate in the hands of a limited number of integrated agricultural giants. This might render farmers more dependent on a few players and raise entry barriers by reinforcing already strong de facto data lock-ins with legally valid rights. Therefore, despite its appealing disguise, ‘data ownership’ for farmers might bring about undesirable consequences if we consider the regulation debate from the competition policy perspective. In this context, this paper advocates the need for sui generis data access rights, including an inalienable data portability right for farm operators, alongside legitimate access opportunities for other stakeholders in the sector, while also discussing alternative infrastructural solutions in line with the needs of the sector.

12:00
The German Facebook Case: The Law and Economics of the Relationship between Competition and Data Protection Law

ABSTRACT. Can competition law also take into account effects on privacy, or should privacy concerns about data-collecting behaviour only be dealt with by data protection law? In this paper we analyse the German Facebook case, in which certain terms of service (that force consumers to give consent for merging personal data collected through Facebook services with those collected from tracking and third-party websites) were prohibited as an exploitative abuse of a dominant firm. We show from an economic perspective that, due to the simultaneous existence of two market failures (market dominance; information and behavioral problems) and complex interaction effects between both market failures and both policies in digital markets, the traditional approach of a strict separation of both policies is no longer possible, leading to the need for more collaboration and alignment between both policies. With respect to the substantive question of protecting a minimum level of choice options for consumers regarding personal data vis-à-vis dominant digital platform firms, the recent decision of the German Federal Court of Justice in the Facebook case and the proposed Digital Markets Act have opened new perspectives for dealing with privacy concerns in competition law and regulation.

11:00-12:30 Session 8B: Human Rights and AI: Challenges from data processing for security and law enforcement purposes
11:00
Re-purposing data for security: what is the role of purpose under Article 8 CFREU?

ABSTRACT. Modern practices of state surveillance often rely on personal data that are generated during commercial activities. Private companies are obliged to provide security actors with their clients’ data because they are considered a valuable asset in the fight against crime. By sharing this information, however, the data protection law principle of purpose limitation is interfered with. Pursuant to the latter, personal data must be processed for specified purposes and not processed further in a manner that is incompatible with those purposes, unless explicitly defined exceptional circumstances apply. In this case, reasons of public and national security are invoked as the exceptional circumstances that legitimise the restriction of the purpose limitation principle. At the level of primary EU law, the fundamental right to personal data protection enshrined in Article 8 of the Charter of Fundamental Rights of the EU (CFREU) ensures that ‘data must be processed fairly for specified purposes’. The question has been raised whether the fundamental right to personal data protection encompasses, through this excerpt, the purpose limitation principle in its entirety, that is both the purpose specification and compatible use principles, or only purpose specification. Data protection experts put forward conflicting views, while the discussion has yet to be picked up by the Court of Justice of the EU (CJEU). In fact, the CJEU seems to pay little attention to the role of purpose in cases regarding personal data and security. Most commonly, the CJEU is satisfied with the existence of an objective of general interest, i.e. safeguarding national and public security or counter-terrorism. In a way, the CJEU thus equates the element of purpose under Article 8 CFREU with the condition of a legitimate objective under Article 52(1) CFREU (e.g. Digital Rights Ireland, Tele2/Watson, Opinion 1/15). In light of the above, the aim of this presentation is twofold. First, it seeks to investigate the current scholarly and jurisprudential positions and their implications for the role of purpose under Article 8 CFREU. Second, the presentation will assess the manner in which purpose under Article 8 CFREU may be more actively engaged in the protection of individuals’ right to personal data protection and other fundamental rights and freedoms in the context of security.

11:30
The normative challenges of AI surveillance in the analysis of encrypted IoT-generated data for law enforcement purposes

ABSTRACT. This paper explores the normative challenges of digital security technologies, i.e., end-to-end (E2E) encryption and metadata analysis, in particular in the context of law enforcement activities. Internet of Things (IoT) devices embedded in smart environments (e.g., smart cities) increasingly rely on E2E encryption in order to safeguard the confidentiality of information and uphold individuals’ fundamental rights, such as privacy and data protection. In November 2020, the Council of the EU published a resolution titled “Encryption – Security through encryption and security despite encryption”. The resolution seeks to ensure the ability of security and criminal justice authorities to access data in a lawful and targeted manner. Nonetheless, in the context of pre-emptive surveillance and criminal investigations, E2E encryption renders the analysis of the content of communications extremely challenging or practically impossible, even when access to data could be lawful. Here, two different layers of complexity seem to emerge. They concern: (i) whether a balance between the values protected by E2E encryption and the aims of law enforcement can be attained; (ii) whether state-of-the-art AI models can preserve the advantages of E2E encryption while allowing for inferences of valuable information from communication traffic, with the aim of detecting possible threats or illicit content. Against this backdrop, we firstly examine whether AI algorithms, such as Machine Learning and Deep Learning, might be part of the solution, especially when it comes to data-driven and statistical methods for classifying encrypted communication traffic so as to infer sensitive information about individuals. Secondly, we consider the possible uses of AI tools in the analysis of IoT-generated data in smart city scenarios, focusing on metadata analysis. We explore whether AI-based classification of encrypted traffic can circumscribe the scope of law enforcement monitoring operations, in compliance with European surveillance case-law. Finally, as far as our research focus is concerned, we discuss how the use of AI has the potential to smooth traditional trade-offs between security and fundamental rights, allowing for encrypted traffic analysis without breaking encryption.
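To make the object of the first research question concrete, here is a minimal sketch of metadata-based classification of encrypted traffic. Everything in it is an illustrative assumption (synthetic per-flow features, invented classes and numbers), not the authors’ method or data:

```python
# Toy sketch: classifying encrypted flows from metadata alone (packet
# sizes, timing), without any access to plaintext. All features, labels,
# and numbers are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Per-flow metadata: [mean packet size (bytes), packet-size variance,
# mean inter-arrival time (s)]. Classes 0 and 1 stand in for two
# hypothetical traffic categories an analyst might want to distinguish.
X = np.vstack([
    rng.normal([600, 900, 0.20], [80, 150, 0.05], size=(500, 3)),
    rng.normal([1200, 300, 0.05], [100, 80, 0.01], size=(500, 3)),
])
y = np.repeat([0, 1], 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is that nothing in it ever touches plaintext: the classifier “sees” only traffic shape, which is exactly what makes such inference both possible under E2E encryption and normatively contentious.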

12:00
Financial information sharing and the problem of borderlines between the GDPR and LED

ABSTRACT. As of May 2018, the EU data protection framework consists of two main legal instruments, namely the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED). Despite the fact that both instruments were adopted on the same legal basis and were enacted as one data protection reform package, they constitute two separate legal regimes with different thresholds for the protection of personal data.

The differences in the range of obligations imposed on data controllers and rights available to data subjects under the GDPR and LED render it crucial to precisely delineate between the regimes and ensure that there is no doubt as to the borderlines between the scopes of their application. Yet, it appears that in some policy areas, such as anti-money laundering and counter-terrorist financing (AML/CTF), it can be far from clear which of the regimes should apply.

This contribution aims at analysing three situations of financial information sharing in the AML/CTF context and at seeking an answer to the question of whether, and if so where exactly, the boundaries between the GDPR and LED lose their sharpness.

To that end, firstly, the processing of personal data in the event of the involvement of private financial institutions in the realisation of AML/CTF policy goals within financial information sharing public-private partnerships (PPPs) will be studied. Secondly, the case of the employment of AI solutions in such PPPs will be scrutinised, particularly where the algorithms are trained on merged sets of data originating from both private financial institutions and law enforcement authorities. Finally, the problem of the applicability of the GDPR or LED to the processing of personal data by Financial Intelligence Units (FIUs) will be recalled. These specialised bodies, positioned between private financial institutions and states’ law enforcement authorities, play a crucial role in the information exchange between the private and public sectors, but the legal regime for the processing and sharing of data from one to the other remains disputed.

Ultimately, the findings of this contribution will provide the basis for further discussion on the legality and possible limits of the practices of financial information sharing for AML/CTF purposes.

11:00-12:30 Session 8C: Data Governance: Data Governance in Smart Cities
11:00
Soft Law in Smart City Governance: Municipal Tools for Regulating Technology

ABSTRACT. This paper examines the use of soft law instruments in the smart city to address gaps in governance of technological innovation at a local level.

Smart cities are urban environments where new technologies are used to capture, store, and analyse data about the city and its users in order to inform local policy- and decision-making. These technologies for instance include facial recognition cameras, responsive streetlights, urban dashboards, and connected networks of sensors measuring environmental data (e.g., air pollution) or sectoral data (e.g., electricity consumption).

The rise of smart city initiatives, observed across Europe and across the world, results from a market-creating strategy of multinational tech companies such as IBM and Cisco. Consequently, the development and deployment of smart city technologies involves multinational private companies in the design and realization of urban governance. This has led scholars from fields such as STS and geography to criticize the ‘corporate smart city’ and the privatization of urban governance.

From a legal perspective, smart city technologies raise a range of issues, including but not limited to privacy concerns. Since technology embeds values, smart city technologies are the result of value arbitration that happens largely within private companies, rather than through democratically legitimized processes. Traditional legal tools, such as procurement, are not adapted (or are not yet used) to regulate the potential dangers of this arbitration (e.g., loss of transparency, risk of discrimination, problems of accessibility).

In this regard, soft law offers tools for local governments to engage with the challenges of smart city technologies. It enables municipalities to formulate a framework for ethical and political considerations to be addressed in smart city projects and in the cooperation with non-state actors. Additionally, soft law allows local governments to regulate beyond their range of action at the national level (limits of the internal constitutional order) and at the transnational level (limits of state-centred international law). This paper studies three examples of soft law instruments used in the smart city context: technical standards (e.g., ISO/TS 37151), charters or principles (e.g., VNG’s Principles for the digital city), and transnational municipal networks (e.g., Cities Coalition for Digital Rights).

11:30
Post-soapbox data governance: from recognition to intersubjectivity

ABSTRACT. Data governance beyond data protection is the arena in which to develop better institutional responses to the internet age. Viljoen (2020) argues that an honest appraisal of datafication today requires understanding data governance as social ordering. This vision supersedes both the moralising of the mainly European dignitarian approaches and the commodification of the mainly American propertarian approaches. Instead, reform requires understanding the interests at stake and how these are balanced. The data governance conversation shifts to institutional design and its politics. In so doing, how do we recognize and codify the relationality of datafication processes? What exactly can we see with this new perspective? Current approaches are based on legibility. In other words, current thinking implicitly proposes the remedy of recognition: by clearly codifying social relationships and group memberships, or existing sociocultural context and circumstance, we might expand the set of interests considered welcome to sit at the decision-making table. You can manage what you can measure. Here, we come to the paradox of inclusion at the heart of data justice: what if certain interests cannot be recognized? To explore these questions, this presentation will test the relationalities of existing data governance practices in the case of Singapore. In that particular political context, meaning is often embedded in networked social relationships, and critique is expressed through inference, allusion and humour. As such, empirical evidence on public debate around data and AI governance regulations will show the possibilities and limits of recognition in moving towards a genuinely inclusive global conversation on data governance.

12:00
The Politics of Setting International Standards for “Smart City”

ABSTRACT. While it has been suggested that private regulation could play a potentially effective role in governing digital identity systems, biometrics, and related technologies, this paper argues that international standard-setting in this area is still highly political: a battle among national governments. This paper addresses how national governments are currently competing in setting international standards for smart cities: ISO’s 37100-series “Sustainable Cities and Communities”. The ISO technical committee (TC) 268 is working to develop such standards. While the Chairperson of TC 268 is Japanese, several working groups under the TC are convened by either Japan or China. China’s smart city planning includes digital identity systems and biometrics. Especially in the battle against COVID-19, the use of digital identity systems and biometrics in cities is expanding. Thus, the successful development of international standards will generate significant consequences in the growing area of smart cities.

In the literature on the politics of setting international standards, Büthe and Mattli (2011) identified “institutional complementarity” between domestic and international systems as a decisive factor for successful international standard-setting. While this theory appeared persuasive in the context of the standard-setting battle between US and EU firms, the political dynamics of international standard-setting have changed with China’s growing engagement in international standard-setting. This paper suggests how national power and strategies matter in the recent international standard-setting context.

11:00-12:30 Session 8D: Open Track: Panel 'Digital technologies during Covid-19: A multi-disciplinary problematization of privacy’s value hegemony'
11:00
Digital technologies during Covid-19: A multi-disciplinary problematization of privacy’s value hegemony

ABSTRACT. PANEL PROPOSAL

Privacy dominates the current public debate on the harms associated with digital technologies. This has also been the case in relation to attempts to address the Covid-19 pandemic with technological solutions. But the focus on privacy risks crowding out other values which are at stake in digitalization, just as it may redefine broader societal concerns as privacy risks. In this panel, we seek to problematize privacy’s value hegemony in the context of technological responses to the pandemic by drawing on different disciplinary perspectives that point to a need to “move beyond privacy”, including but not limited to, communication science, law, philosophy and computer science.

In this workshop we use the empirical data collected by both of our research consortia as our starting point. Our qualitative, in-depth interviews and longitudinal surveys among a representative sample of the Dutch population point to the phenomenon of privacy hegemony. We interpret this phenomenon from three different disciplinary angles.

From a legal perspective, there are several reasons why governments, civil society, and academics tend to focus on the privacy and data protection aspects of data-heavy digital technologies. But the GDPR cannot ensure on its own that digital technologies are fair, equally accessible, and democratically legitimized. The data protection perspective should not crowd out other, equally relevant perspectives. For example, how can we ensure that digital technologies are really voluntarily used by employees in labor relations that are characterized by unequal positions of power? How do we guarantee that those without access to the newest smartphones can still benefit from novel digital health services? And how far can consent go in legitimizing the implementation of digital tracking and monitoring solutions by governments?

Philosophically, there is a clear need to, first of all, conceptualize privacy beyond privacy-as-data-protection. Secondly, and more importantly, even a broader conception of privacy cannot do justice to the variety of moral concerns that are at stake for citizens (such as trust and vulnerability), nor to the range of societal harms at stake in digitalization, including a crowding out of sectoral expertise and norms (in this case public health expertise) and the expansion of Big Tech’s infrastructural power into spheres of social life, which may, unwittingly, be facilitated by a focus on privacy.

From a computer science perspective, there has been a lot of attention to the privacy/anonymity-preserving nature of contact tracing solutions (see, e.g., the literature on DP3T). There are, however, many more possible risks and concerns that can and should be addressed already at the design stage of contact tracing solutions. For example, decentralized contact tracing systems might be less effective at detecting exposure risks than more classically designed, centralized systems. General effectiveness might also be reduced when usability and simplicity are sacrificed in the design process, leading to more user errors. Another possible risk to consider is abuse of the system, for instance by creating false positives and forcing certain groups into quarantine.
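To ground the design-stage discussion, the following is a toy sketch of the decentralized matching pattern associated with DP3T-style systems. It is a simplification under stated assumptions (no key rotation schedule, no identifier truncation, no timing metadata), not the actual protocol; all names are illustrative:

```python
# Heavily simplified sketch of decentralized exposure matching.
import hmac
import hashlib

def ephemeral_ids(daily_key: bytes, n: int = 96) -> list:
    """Derive the rotating ephemeral IDs a phone broadcasts over Bluetooth."""
    return [hmac.new(daily_key, i.to_bytes(2, "big"), hashlib.sha256).digest()
            for i in range(n)]

key_a = b"phone-A-daily-key (illustrative)"
# Phone B locally stores the IDs it overheard while near phone A.
heard_by_b = set(ephemeral_ids(key_a)[:5])

# If A tests positive, only A's daily key is published. B re-derives the
# IDs on-device and checks for overlap; no central server learns contacts.
published_keys = [key_a]
exposed = any(eid in heard_by_b
              for key in published_keys
              for eid in ephemeral_ids(key))
print("exposure detected on phone B:", exposed)  # True
```

The abuse concern above maps directly onto this design: because matching is purely cryptographic, an adversary who re-broadcasts overheard identifiers can, in principle, manufacture false exposure events.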

Together, the contributions from these different disciplines suggest a need to move beyond privacy.

FORMAT

For this panel, we ask audience members to bring their own experiences and ideas concerning ‘corona tech beyond privacy’. The panelists (listed below) will only briefly introduce their own research on ‘corona tech beyond privacy’ from their respective disciplinary fields, before we open up the floor to the audience. (Team members with various academic backgrounds from both research consortia will also be present at the event to contribute to the discussion). Our aim is to open up a multi-disciplinary problematization of privacy’s value hegemony.

Moderator: Joanna Strycharz [introduction: 5 minutes].

Brief introductions by panelists:
- Empirical: Lotje Siffels [7 minutes]
- Law: Natali Helberger [7 minutes]
- Philosophy: Tamar Sharon [7 minutes]
- Computer Science: Joran van Apeldoorn [7 minutes]

Audience contributes ‘Beyond Privacy’ perspectives [45 minutes].

Panelists get a brief moment to react to contributions by the audience (and to each other): what are important questions moving forward, and did some (potential) synergies emerge? [12 minutes]

13:30-15:00 Session 9A: Energy and Climate Crisis: Workshop on Business and the Energy transition
13:30
Workshop: Business of the Energy Transition (part 1)

ABSTRACT. Workshop: Climate Justice and the Business of Energy Transition

20 May from 13:30-15:00 and 15:30-17:00 (online)

In conjunction with: Netherlands Network of Human Rights Research; Constitutionalizing the Anthropocene Project; Tilburg Institute for Law, Technology & Society

https://www.tilburguniversity.edu/sites/default/files/download/programme%20Tilting%20panel%20NNHRR.pdf 

13:30-15:00 Session 9B: Competition and Market Regulation: Platform Liability
13:30
Pornography tubesites: exploring the liability of dominant platforms for third-party content under EU competition law

ABSTRACT. The Covid-19 pandemic caused an increase in demand for digital content, including pornography. The latter is most widely available on free streaming ‘tubesites’ like Pornhub or Youporn. Aside from legitimate content, the mostly user-uploaded videos can also feature disgraceful, harmful, or even illegal material. This paper explores whether the proliferation of harmful or illegal third-party content by online platforms could raise suspicion (also) under EU competition law.

First, I describe the business model of tubesites and recent controversies over the content available thereon. I use the example of one of the world’s most popular adult entertainment streaming websites, Pornhub, owned by the adult industry behemoth Mindgeek. Recently, Pornhub faced serious allegations that it is rife with harmful or illegal content, including pirated material, rape videos, revenge porn, and child abuse, and that the platform often fails to respond in time to requests for its removal.

Then, I examine how online platforms are regulated with regard to third-party content, including the most recent EU proposals and initiatives concerning the modernization of platform regulation. Currently, online platforms are sheltered from liability for third-party content published thereon; liability for content lies with individual producers and distributors, indicating a gap in the (existing) regime.

Last, the scope for competition law to address this gap is analysed. In particular, I consider the (in)activity of Pornhub/Mindgeek regarding third-party content under Article 102 TFEU. Online platforms as “infrastructural agents” are reconstructing how online production and distribution of value take place; the focus is thus not on the harm to final consumers of pornography, but on the imposition of an unfair bargain (i.e. unfair trading conditions) upon the creators or subjects of harmful or illegal content as an exploitative abuse of dominance. This is because those featured in the above-mentioned content need to engage with the dominant platform even if they cannot or do not want to, while the platform profits from it at their expense.

In conclusion, I examine potential remedies and discuss broader implications regarding the responsibility of private platforms, in particular as regards ensuring fairness in digital markets through competition law enforcement. In light of the just-announced content deletion and policy changes at Pornhub, which pursue a more verified-content approach, I also touch upon the role of such actors in regulating the content that is ultimately shared online.

14:00
Restricting the access to legal content online: is it time for additional European regulation?

ABSTRACT. When access to opinions that differ from the majority view in society is restricted, progress in (controversial) debates quickly stagnates. A change in society often takes off because a small number of people disagree with the status quo and challenge it, such as pro-euthanasia activist Alain Cocq’s initiative to demonstrate his personal suffering due to a lack of access to active euthanasia in France. In September 2020, Facebook prohibited and blocked the live streaming of his euthanasia attempt. The social media platform did not refer to the fact that active euthanasia is illegal under French law, nor that President Macron had denied Cocq’s request for euthanasia. Rather, it based its decision solely on its own house rules. According to Facebook, allowing Cocq to stream his death would go against its Community Standards, which prohibit portrayals of suicide.

One could wonder how that decision would have turned out had identical content been live streamed by a Dutch, a Canadian or a Colombian national – all countries where active euthanasia is legal. Despite the fact that there is no consensus on the legality of euthanasia worldwide, Facebook chose to regulate the matter privately, and consequently applies a unified rule to its 2.7 billion users worldwide. This paper focuses on the freedom of (online) private companies to draft and enforce such house rules. On the one hand, private companies enjoy the freedom to conduct business, which should not be unnecessarily restricted by state intervention. On the other hand, online platforms have grown into major sources of information, with over 57 percent of US millennials, and over 50 percent of adults in seven European Member States, accessing their news through social media. With so many people actively using online platforms to impart information, restricting access to legal content poses a risk to pluralist debate, and consequently can negatively affect societal progress as a whole.

This paper sheds new light on the current debate on online content regulation. Firstly, it will show that European legislation predominantly focuses on the prevention of illegal and harmful content on online platforms. Unlike in the United States, where President Trump has recently addressed the issue of private regulation of legal content, a discussion on the regulation of legal content by (monopolistic) private companies has been largely omitted in the European Union. Secondly, it will evaluate whether the current European legal framework can curb this type of ‘selective censorship’ by private companies. To do so, the author focuses on the limitations to the freedom to conduct a business from three angles: contract law, consumer protection, and competition law. Lastly, the paper defines to what extent online platforms can limit online access to legal content in their house rules and concludes whether it is time to look closer into the desirability of additional European regulation.

14:30
Platform Regulation as Rule of Law Development Assistance

ABSTRACT. Analogies between platforms and states are common in contemporary platform governance discourse. Yet those analogies are typically fairly loose, and limited to the paired claims that (1) platforms have state-like power in view of the degree of influence they exercise over user behavior and/or social outcomes, and (2) as a consequence, platforms ought to adopt protections for users and/or society rooted in the constraints that apply to governments, like human rights law. However, such approaches assume that platform companies have the institutional capacity to adopt such protections—the capacity, for example, to exercise granular control over the implementation of their own policies and to effectively bind themselves to long-term plans that require the repeated sacrifices of short-term profit in favor of stability and credibility. Unfortunately, there is little evidence that existing companies have that capacity.

As a simple example, consider the skepticism surrounding Facebook’s Oversight Board, and the frequently expressed belief that the Board can fail at any time if Mark Zuckerberg just decides to ignore it. (Full disclosure: I was a consultant for Facebook and assisted, in a small way, with the design of the Oversight Board.) That belief could be rephrased as: Facebook lacks the capacity to actually commit to constraining its own behavior by Oversight Board decisions—and it may well be true. Or consider recent revelations that Amazon employees ignored company policies and rooted through third-party seller data to inform their decisions as to the company’s first-party products: evidence that the company has thus far been unable, because of deficiencies in internal governance (or a lack of will), to commit to a plan to pursue long-term profits (by encouraging third-party sellers to trust the integrity of their data) in the face of the incentives it creates for its employees to grab short-term moneymaking opportunities.

In the world of real states, we do not just shrug when we learn that a state is unable to effectively regulate itself and the users of its space in accordance with human rights standards. Rather, we offer assistance via the international rule of law development community, which has a collective mission to build state capacity so that governments can actually control their internal affairs and deliver the goods that we demand of them. And while the rule of law development community is (alas) notoriously ineffective, the broad idea of assisting a state to develop its capacity to regulate according to universal human rights standards is sound, and can be applied to platforms. Indeed, the concept of the rule of law is particularly apt with respect to the previously noted examples, for (as I have argued in other work) the rule of law is a tool that states can adopt for the purpose of improving their own governance institutions in order to be able to commit to long term plans, and it is just that capacity that seems to be lacking.

Accordingly, this paper will develop three ideas: (1) the notion of a capacity-building program, similar to those deployed in the international development community—but for platforms; (2) the specific case of the rule of law for platforms—what it might mean, and why we might think that existing platforms lack it; and (3) what kind of policy interventions governments might make to assist platform companies in developing capacities associated with the rule of law.

This paper is a combined extract from two (or perhaps three) in-progress chapters of a book in progress, which is under contract with Cambridge University Press and supported by a grant from the Knight Foundation. The book is tentatively called The Networked Leviathan, and is an effort to introduce research in institutional political science and constitutional design more broadly to the problem of platform governance.

13:30-15:00 Session 9C: Open Track: Public spaces, private concerns?
13:30
AI governance in practice: safeguarding public interests and democratic values in urban crowd management.

ABSTRACT. During the recent corona crisis, AI-based applications and services have been pushed once again as technological quick fixes for difficult sociotechnical collective problems in cities, such as contact tracing and crowd control. Although AI certainly has the potential to help address some of these problems, it also introduces a new set of challenges, including how to safeguard public interests and democratic values in agile innovation processes. An example is the governance of projects to monitor and track the public in the interests of security or public health, as in the case of crowd monitoring systems or COVID contact-tracing apps, which take shape within extended public-private partnerships. Such data-driven, AI-based technologies disrupt existing governance structures as they introduce new complexities and dependencies on a technical level, but also on an organizational and institutional level.

This paper takes a closer look at how governance practices take shape and are negotiated in smart city innovation projects that center on data-driven AI-based technologies. In particular, it will look at how concerns about public interests and democratic values are translated (and sometimes overlooked) in the design of sociotechnical systems focused on crowd management in cities. The analysis draws on a series of workshops conducted in one European city with experts involved in particular ongoing projects. Based on this empirical research, the paper will examine how and when public interests are addressed in these projects and how responsibilities in decision-making processes are renegotiated as part of the changing governance structures around these projects. The paper will then examine what the insights of this analysis can contribute on a theoretical level to the development of the concept of AI governance.

14:00
Data protection law beyond identifiability? Atmospheric profiles, nudging and the Stratumseind Living Lab

ABSTRACT. The deployment of pervasive information and communication technologies (ICTs) within smart city initiatives transforms cities into extraordinary apparatuses of data capture. ICTs such as smart cameras and sound sensors try to infer and affect persons’ interests, preferences, emotional states, and behaviour. It should be no surprise, then, that contemporary legal and policy debates on privacy in smart cities are dominated by a debate focused on data and, therefore, on data protection law. However, several notable hurdles might prevent data protection law from successfully regulating such initiatives. In this contribution, we examine one such hurdle: whether the data processed in the context of smart cities actually qualify as personal data, thus falling within the scope of data protection law. This question is explored not only through a theoretical discussion but also through an example of a security-focused smart city initiative – the Stratumseind Living Lab (SLL) in the Netherlands. Our analysis shows that the requirement of ‘identifiability’ might be difficult to satisfy in the SLL and similar initiatives. This is so for two main reasons. First, a large amount of the data at stake does not qualify as personal data, at least at first blush. Most of it relates to the environment, such as data about the weather, noise and crowding levels, rather than to identified or even likely identifiable individuals. This is connected to the second reason: the aim of many smart city initiatives (including the SLL) is not to identify and target specific individuals but to manage or nudge them as a multiplicity – a combination of the environment, persons and all of their interactions. This is done by trying to affect the ‘atmosphere’ on the street. We thus argue that a novel type of profiling operation is at stake. Rather than relying on individual or group profiling, the SLL and similar initiatives rely upon what we call ‘atmospheric profiling’. We conclude that it remains highly uncertain whether smart city initiatives like the SLL actually process personal data. Yet, they still pose risks for a wide variety of rights and freedoms which data protection law is meant to protect, and a need for regulation remains.

14:30
Privacy and pandemics: Augmented crises in times of reality

ABSTRACT. Crises promote the development and adoption of new methods, practices, and technologies. Among the anticipated legacies of Covid-19 is the reorientation of public space and of the forms and factors of the technologies that mediate its experience. During the crisis, restrictions and lockdowns have closed businesses and limited gatherings, leaving public space to become a “tabula rasa” into which new purposes and meanings can flow. In these sites of negotiated use, years of shared meaning and incremental policy making have been swept away. In some instances, commercial areas and car-based transportation corridors are re-designated for walking and bicycling. In others, shuttered brick and mortar shops are being supplanted by online behemoths. Public space can also become a site of unrest as agitators exploit atomized online media to fan the flames of accumulated loss and frustration. Here, we see that public space is not only physical and guided by local community governance; it is linked to the digital world by personal and public technologies and their complex, often extralocal politics.

Under these conditions, wearables, smartphone apps, sensing hardware, and other technologies loosely categorized as augmented reality are poised to make further strides toward acceptance in pursuit of “normalcy.” Personal devices can monitor a user’s location, reach out for data about nearby others, and issue alerts of potential risks (e.g. track and trace). Biometric sensors will see expanded use in airports, restaurants, and other venues to answer the urgent need of detecting disease. Yet, as before the current crisis, these surveillant technologies import private agendas and export accountability. Expanded adoption is driven by consumer choices and commercial incursions, but also by governments pressured to address crises quickly and to do so with technology provided by platforms and large firms.

Leveraging prior work by Katell et al. (2019), we argue that public technologies ought to be met with a renewed commitment to accountable governance, and that decisions about their use should be subject to participatory decision-making. Rather than acquiesce to a seemingly inevitable reality augmented by successive crises, we propose principles of regulation based in human dignity, democracy, solidarity, and the fair distribution of technological power.

13:30-15:00 Session 9D: Human Rights and AI: Panel 'Discrimination and algorithmic decision-making in insurance'
13:30
Insurance, algorithmic decision-making, and discrimination

ABSTRACT. Insurance, algorithmic decision-making, and discrimination

Insurance companies could use algorithmic systems to set premiums for individual consumers, or deny them insurance. More and more data become available to insurers for risk differentiation. For example, some insurers monitor people’s driving behaviour to estimate risks. To some extent, risk differentiation is necessary for insurance. And it could be considered fair when, e.g., high-risk drivers pay more. But there are drawbacks. Algorithmic decision-making could lead, unintentionally, to discrimination on the basis of, for instance, ethnicity or gender. Too much personalised risk differentiation could also make insurance unaffordable for some people. Furthermore, risk differentiation might result in the poor paying more, thereby worsening economic inequality.

Intended audience: scholars from fields such as computer science, law, human-computer interaction, data justice, ethics, and economics. The panel is also relevant for other stakeholders, such as national and international policymakers and NGOs.

The panel addresses unjust effects of digital information gathering, such as discrimination, fairness and transparency, from different perspectives. A series of experts from different fields and sectors (law, computer science, insurance), from academia and from industry, will enter a lively discussion with the audience. This is a discussion panel. Hence, the panel will not include long presentations or slides.

The panel discussion is guided by questions such as:
• How should discrimination on the basis of ethnicity and other grounds be avoided?
• Can non-discrimination norms be built into the computer systems of insurers, and if so, which norms?
• How can discrimination by algorithmic systems be identified by those affected?
• Are current laws sufficient to protect fairness and the right to non-discrimination in the insurance area?
• Should poor people be protected against paying extra?
• Is it always reasonable when high-risk insurance consumers pay extra?
• Should health insurance be regulated and approached separately?
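As background to the first two questions above, here is a toy sketch of how unintentional discrimination can arise even when the protected attribute is withheld from the model: a correlated proxy (a synthetic ‘postcode’ variable here) carries the group information in. All data, variable names, and coefficients are illustrative assumptions, not drawn from any insurer:

```python
# Self-contained toy model of proxy discrimination in premium setting.
# The protected attribute is never given to the model, yet predictions
# differ by group via a correlated proxy. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)              # protected attribute (withheld)
postcode = group + rng.normal(0, 0.3, n)   # proxy correlated with group
mileage = rng.normal(10, 2, n)             # legitimate risk factor
# Synthetic claims outcome that depends partly on group membership.
claim = (0.8 * group + 0.1 * mileage + rng.normal(0, 1, n)) > 1.5

features = np.column_stack([postcode, mileage])  # no 'group' column
model = LogisticRegression().fit(features, claim)
risk = model.predict_proba(features)[:, 1]
for g in (0, 1):
    print(f"mean predicted risk, group {g}: {risk[group == g].mean():.2f}")
```

The sketch also hints at why detection by those affected is hard: the deployed model is formally blind to group membership, so the disparity only shows up in aggregate outcomes.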

13:30-15:00 Session 9E: IP Law: AI: Artist and/or Enforcer?
13:30
Analysing the Creative Potential of Artificial Intelligence through its “Moral Control” of the Market

ABSTRACT. The rise of Artificial Intelligence (AI) within the so-called Fourth Industrial Revolution (FIR) and its creative potential have been discussed in different academic sources and forums throughout the world. For example, institutions such as the World Intellectual Property Organization (WIPO) have organized different consultations and forums to discuss the legal nature of the works developed by AIs that go beyond simple expert programmes. Most of these efforts tend to rely on accurate analyses that emerge from questions like “who controls the economic rights?” However, we rarely stop to consider who has the right to make modifications to a work created by an AI, or who decides that the latter is ready to be displayed and/or offered within the market, as in the case of Banksy’s “Girl with Balloon.” Of course, to answer these questions, we have to go beyond economic rights and analyse those moral rights that could be involved in the display and the offer of the work, beyond the simple paternity right. With these elements in mind, the present paper will offer an analysis that complements existing works highlighting the relevance of moral rights in the control over the work within the market, to determine: 1) who can be labelled as the “Kantian mastermind” behind the work, and 2) what we need in order to consider an AI as a potential author.

14:00
Automated copyright enforcement: Democracy in crisis

ABSTRACT. Automated copyright enforcement spells disaster for the very touchstone of the European Union (EU), democracy, as it could diminish the role of the judiciary in resolving copyright conflicts. While this technology has existed for over a decade, Art. 17 of the Digital Single Market Directive compounds the problems as it shifts the liability of content-sharing service providers. Now intermediaries must make best efforts to ensure that works communicated by copyright holders are not uploaded or reuploaded following takedown.

Despite the Directive clarifying that general monitoring is not required, the standard of best efforts is ambiguous and presents an unresolvable conflict. Realistically, the only way that intermediaries could comply with art. 17 is by using artificial intelligence (AI). This would mean that software would detect and decide whether a digital use of a copyright work amounts to infringement. However, in a pandemic world, where society is increasingly dependent on digital technology to access works, the notion that AI could determine copyright disputes without the parties entering a court is chilling.

This is particularly the case regarding the enforcement, or lack thereof, of user rights and the public domain under automated enforcement. These are already complex legal issues that are not completely realized by the legislator and require interpretation by the judiciary. Thus, the very nature of AI complicates the resolution of a balanced approach to digital access to copyright works, as it relies on the law being distilled into lines of code. Further, it is unlikely that users faced with the constant removal of content will seek costly judicial redress.
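To illustrate what “the law distilled into lines of code” can look like at its crudest, here is a deliberately naive upload-filter sketch. The hashing scheme, database, and names are illustrative assumptions (real systems use robust perceptual fingerprinting), not any platform’s actual method:

```python
# Naive upload filter matching fingerprints "communicated" by
# rightsholders. SHA-256 stands in for robust perceptual hashing;
# everything here is illustrative.
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Hypothetical reference material provided by a rightsholder.
reference_db = {fingerprint(b"<protected work>"): "Work X (Rightsholder Y)"}

def filter_upload(upload: bytes) -> str:
    """Publish or block. Note what is absent: no branch can assess
    quotation, parody, or pastiche (the user rights in Art. 17(7))."""
    match = reference_db.get(fingerprint(upload))
    return "publish" if match is None else f"block: matches {match}"

print(filter_upload(b"<protected work>"))      # block: matches Work X ...
print(filter_upload(b"an original creation"))  # publish
```

Whatever the matching technology, the structural point stands: the publish/block decision is taken in a single branch with no input from a court, and no branch exists for the exceptions the law requires.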

Despite the obvious curtailment of freedom of expression, this is an opportunity to reconceptualise the EU’s copyright framework in the digital age. This presentation seeks to reframe the dialogue by outlining how bolstering user rights and the public domain generally could support the application of automated copyright enforcement as a useful mediator between copyright holders, intermediaries, and users.

13:30-15:00 Session 9F: Data Governance: Data Rights and Platform Resistance
13:30
What's Work Got To Do With It? Data Rights and Platform Resistance

ABSTRACT. Speakers:

  • Gloria Gonzalez Fuster, (introduction to data rights), Research Professor, Vrije Universiteit Brussels  
  • Luca Stevenson, (platform resistance and online sex workers), sex worker and coordinator of the International Committee on the Rights of Sex Workers in Europe (ICRSE)  
  • Niels van Doorn, (platform resistance and gig drivers), Assistant Professor, University of Amsterdam  
  • Astha Kapoor, (broader reflections on the data economy and platform resistance) Co-Founder, Aapti Institute 

Platform companies wield enormous power over digital infrastructures, unilaterally reconfiguring group dynamics and communities. With the objective of solidifying their power over data production, extraction, and exploitation, these platform companies are strategically safeguarding their rule-setting position. The COVID-19 pandemic has catalysed the further entrenchment and solidification of platform power across the board, increasing vulnerabilities and dependencies for some more than others, but it has also boosted public awareness of the complex risks and challenges raised. While much of this power dynamic is enabled by the law, we believe the law can also offer individuals and communities the opportunity to countervail these existing power structures. Data rights in particular appear to be a promising avenue for challenging platform power, in order to safeguard individual and collective interests in a wide variety of contexts. Yet, important questions remain as to the suitability, scope, and effectiveness of doing so. With that in mind, we propose to have a panel discussion to explore the following open questions:

• What are the collective and individual dynamics of data rights? (e.g. Is there a normative basis for collectivising? What is the most appropriate and efficient governance model for the collective exercise of data rights?)

• How do context-dependencies factor into harnessing data rights? (e.g. Can we generalise across the board? What general rules or recommendations apply across the board, and where do we need topic- or community-specific rules? What are the shortcomings of the existing framework?)

• Can we see the emergence of a new category of ‘data rights’ – in the GDPR, but also increasingly in digitally-oriented regulatory proposals such as the Digital Services Act and the Data Governance Act – aimed at challenging data-driven power asymmetries? What are some potential updates of existing legal and regulatory tools that would offer better protection pertaining to data rights?

• In addition to the GDPR, what other legal (or non-legal) means can, or should, work alongside it?

This multi-disciplinary panel will discuss these questions through pre-identified use cases, serving as examples that showcase distinct communities with different dynamics while still falling under the category of gig work: (a) individual content creators challenging financial censorship decisions by fintech intermediaries embedded in online sex platforms, and (b) platform economy workers demanding better labour conditions through access to their data and insight into algorithmic management by ride-hailing and delivery companies.

We believe this panel fits the selected track perfectly, as it specifically aims to (a) explore where GDPR rights fall short, by contextualising them through specific use cases, and, more importantly, (b) position ‘data rights’ as a wider legal and policy tool in the platform-resistance debate. This will also inform the wider discussion on how data rights can evolve constructively and systemically as a tool to address social and economic injustices.

15:30-17:00 Session 10A: Competition and Market Regulation: Competition in the health sector
15:30
Health Data Sharing for Cooperative Research Beyond the Pandemic: What Art. 101 TFEU Can Learn From the Emerging Data Regulation

ABSTRACT. The Covid-19 pandemic has advanced the race for health data among both private and public stakeholders in scientific research projects. Data sharing and data pooling are being encouraged by European regulators across both the vertical and horizontal rails of the European internal market: while the European Union is preparing a legislative proposal specifically focusing on the European Health Data Space, players active in that space are currently struggling with two major hurdles: i) the lack of legal certainty regarding the boundaries between lawfulness and unlawfulness of data sharing practices under European competition law; ii) the lack of coherence between the European competition framework and other regulatory branches of European Union law. This is suggested by the recent consultation by the Commission on the evaluation of the two block exemption regulations and the guidelines on horizontal cooperation agreements. This contribution aims at addressing the identified concerns, questioning under which conditions health data sharing for research purposes is legitimate and thus promoted under the present competition framework. For these purposes it enquires into the relevance of health data sharing agreements as research and development collaborations under Art. 101 TFEU. It proceeds from a consideration of the two frameworks regarding horizontal cooperation agreements and of the measures taken during the Covid-19 crisis as a basis for the development of a long-term competition policy that positively enables pro-competitive health research occurring through collaborative data sharing, without undermining smaller businesses’ and research entities’ ability to compete in innovation markets. In this respect, the study identifies some criteria relevant for assessing the lawfulness of health data sharing agreements under Art. 101(1). These criteria are drawn from a combined reading of the mentioned framework and the principles grounding the recent Data Governance Act and Digital Services Package. They encompass subjective (type of undertakings involved), objective (type of health data shared), structural (degree of openness) and teleological features (public-interest or commercially oriented research) of health data sharing arrangements. The study conversely demonstrates the persisting difficulties in positively evaluating not strictly economic efficiencies, such as health-related ones, in anticompetitive health data sharing arrangements under Art. 101(3) TFEU.

16:00
Mergers That Harm Our Health

ABSTRACT. We are currently facing a new wave of healthcare mergers in the United States. More and more health insurers, such as Aetna, have started merging with powerful drug suppliers, such as CVS. What do these companies hope to achieve by merging? They want to increase their access to our health data. They want to know our individual biology, our health history, our level of well-being; they want to know where we go, what we buy, how much we sleep; if we can resist sugar, junk food or nicotine; if we exercise and how often we exercise. In other words, they aim to shape our digital health ID. Why? On one hand, health insurers aim to reduce their risks, and therefore their costs, by improving our level of well-being. On the other, health insurers aim to reduce their costs by discriminating against us. Indeed, by allowing health insurers to gain access to consumers’ health habits and data, these data-driven mergers can create substantial barriers to entry for high-risk consumers who want to enter the health insurance services market. Can the U.S. antitrust enforcers take into consideration the harm to access that such mergers may create for high-risk, vulnerable consumers? And, if so, how? This study examines three potential ways in which the U.S. antitrust enforcers may consider the harm such mergers may actually create for high-risk consumers. First, the antitrust enforcers may conclude that vulnerable, high-risk consumers constitute a separate relevant market. Second, the antitrust enforcers may take the stance that although the net effect of the proposed merger on all segments of consumers should be assessed, the merger’s negative impact on high-risk consumers should weigh more than its positive impact on low-risk ones. Third, they may hold that such mergers help a health insurer to violate the Affordable Care Act and should, therefore, be prohibited. This study supports the claim that the U.S. antitrust enforcers should not disregard the risk these mergers may create for high-risk consumers. If they do, they risk applying antitrust law in a way that further exacerbates the existing health inequalities in the United States.

16:30
The Public Health Emergency Consideration in UK Merger Control: A New Wall Around British Business?

ABSTRACT. The UK Enterprise Act 2002 empowers UK government ministers to prohibit or modify mergers on a number of public interest grounds, including national security, media plurality and financial stability. More recently, the UK government has reformed the list of public interest considerations, allowing ministers to intervene in acquisitions with a view to maintaining the United Kingdom’s ‘capability to combat and mitigate the effects of public health emergencies.’ This comes amid concern in policy circles as to the efficacy of extant rules in protecting businesses on the frontline of the COVID-19 pandemic from hostile or predatory takeovers. This paper argues that the latest reforms amount to a defensive wall around a great number of British businesses, allowing ministers to intervene where the business concerned controls capabilities critical to the UK’s response to present and future public health emergencies. The paper further argues that the scope of the reforms allows for protectionist interventions which might imperil the UK’s reputation as a haven for foreign investment.

15:30-17:00 Session 10B: Human Rights and AI: Facial recognition, surveillance, within and beyond borders
15:30
Development or Dystopia: Does the GDPR regulate the interconnected Law and Technology challenges raised by Facial Recognition Technology?

ABSTRACT. This paper is set within the overall thesis project of the same title. The research will unveil whether the General Data Protection Regulation (GDPR) (or assimilated European regulatory instruments such as the Law Enforcement Directive) regulates the law and technology challenges posed by Facial Recognition Technology (FRT) in its current state of the art. FRT is a disruptive technology with huge potential and impact. The addition of Artificial Intelligence (AI) to FRT leaves an ‘open door’ to applications based on its categorisation functions (e.g., sexual orientation detection, sentiment analysis, predictive policing), increasing the level of risk that it poses. Moreover, FRT is fed by facial images, which are biometric data, entailing distinct challenges from both the privacy and the data protection (DP) points of view (e.g., data minimisation, purpose limitation, fairness, accountability). Empowered by AI, FRT continually processes biometric data and is contactless: unlike other biometric technologies such as fingerprint, iris or palm scanners, it does not require any active action from the data subject. The face template can be extracted without the individual noticing, and thus without consent. This nature makes the technology particularly worthy of attention compared with many other AI-empowered surveillance tools. Its ground-breaking character and potentially harmful uses pose a significant threat to the right to privacy and threaten a potential revolution to it, in a manner similar to the advent of instant photography. Simultaneously, picture recognition, computer vision and cryptography will, for the first time, offer technology-based options for addressing some of the visual privacy and DP issues.

Since the GDPR was not conceived to respond specifically to FRT, some of the technology’s intrinsic characteristics might not be sufficiently addressed by the norm, resulting in privacy and DP infringements. As the GDPR allows Member States (MS) to provide additional specific rules regarding the processing of biometric data, we must resort to national law, leading to a fragmented and unpredictable picture. Further, there is a recurrent debate at the EU level about banning FRT completely. Unlike in other places, such as certain cities and states within the U.S., it is possible to use FRT within the EU. However, the ‘meantime’ situation, with frequent negative decisions by diverse Data Protection Authorities (DPAs) regarding its deployment, enhances insecurity in this respect. These reasons reinforce a state of uncertainty that threatens the interests of all, from citizens to governments, as well as the development of the sector itself. In this line, the FRT industry has also spotted some incongruences in attempting to apply the legal text to the actual situation.

This study is especially relevant at the current moment, when the European Union (EU) is involved in a long-term project that plans to ensure biometric recordings of all citizens within the EU by using fingerprints and facial images. EU citizens are already part of these provisions, since EU Regulation No. 2252/2004 includes obligatory biometric attributes (such as facial images) in passports and travel documents issued by MS. Moreover, most governments in Europe are currently implementing biometric eID cards. The initiative will initially concentrate on non-citizens by storing biometrics in databases such as the Visa Information System (photos and fingerprints of short-term visa applicants) and Eurodac. Additionally, MS are invited to ensure that biometric data from people accused of criminal offences is obtained for storage in the European Criminal Records Information System for Third-Country Nationals (ECRIS-TCN). The future legislative proposal on AI by the EU Commission, which will potentially focus on high-risk AI applications such as FRT, further underlines the timeliness of this analysis.

This chapter will analyse the specific provisions of the GDPR (or assimilated instruments such as the Law Enforcement Directive) that intersect with FRT, to frame the current and actual threats posed by the technology. It will also tackle the possible solutions that have been proposed up to this moment (based on GDPR obligations). The GDPR provisions are the core of this chapter. They will be complemented by their interpretation in the literature, adapted to the specific case of FRT. The work will also consider other documents such as DPA decisions, studies by the European Parliament and reports by law and technology research institutes. Finally, the work will incorporate real-life FRT deployments to exemplify the actual (lack of) ‘privacy by design and default’ in current technology placements.

The chapter proceeds in seven parts. The first part analyses the suitability of the GDPR for exploring the privacy and DP issues raised by FRT. The second part will unveil the scenarios in which biometric data processing, and therefore FRT, is permitted according to the GDPR. The third part will dig deeper into the first of these scenarios, consent, being the one that leaves power, and therefore responsibility, in the hands of the data subject. The fourth part will analyse the viability of creating data-protection-by-design-and-default FRT that incorporates the privacy and DP criteria previously considered. The fifth part will explore data protection impact assessments as mechanisms to enforce data protection by design and default in FRT. The sixth part will study security threats as a particularly sensitive aspect of data protection impact assessments, given the novel character of state-of-the-art FRT and the immense amount of biometric, and thus sensitive, data it manages. Finally, the conclusions of the chapter will be drawn.

16:00
Data subject rights as human rights safeguards against surveillance measures from outside the European Union

ABSTRACT. 2020 was a year of crisis. In the area of personal data protection within the European Union (EU), this crisis was not only caused by Covid-19, but also by a judgment of the Court of Justice of the EU (CJEU) (Schrems II) that once again invalidated the existing transfer mechanism with the United States based on surveillance concerns. Ever since the CJEU’s original Schrems I judgment, the transfer of personal data outside of the EU has been closely intertwined with the protection of fundamental rights threatened by surveillance by public authorities abroad.

Within the EU, protection against third country surveillance measures should be achieved via the data transfer rules established in the General Data Protection Regulation (GDPR), and, in a law enforcement context, the Law Enforcement Directive (LED). To safeguard the individual, both of these acts grant a number of rights to data subjects that can be relied upon against private and public actors conducting surveillance both inside and outside of the EU.

However, in reality it is not clear how these rights offer effective protection against surveillance measures abroad. On the one hand, there is a need to clarify the standard of protection offered by the rights granted in the GDPR and the LED. On the other hand, many questions remain about how the EU data protection regime as set out in the GDPR and the LED extends across the EU’s external borders. With Schrems II, the CJEU found (again) that granting (some) rights to individuals is mandatory for (nearly) any export of personal data from the EU, but which rights and how they should be granted continue to be open questions.

In this presentation of my ongoing PhD research, I will outline my (preliminary) findings on these two aspects by offering first answers to the following question: What are the rights given to data subjects in the GDPR and the LED and when and how do they have to be guaranteed beyond the EU’s external borders? Answering this question will be crucial for finding a way to transfer personal data outside of the EU while safeguarding fundamental rights.

16:30
Consent Mechanisms for the Use of Facial Recognition Systems among Vulnerable Groups: A rights based approach to biometric intervention during times of crisis

ABSTRACT. Facial recognition software holds the potential to alleviate social, political and economic inefficiencies, for instance by removing redundancies in identification and registry and providing greater security. The mere introduction of facial recognition software cannot, however, alleviate human struggle if its potential to cause harm is unaccounted for. Facial recognition software has added to complex and layered systems known to threaten the wellbeing of vulnerable groups, particularly in humanitarian crisis responses. Vulnerable groups are at greatest risk of lacking access to technologies, access here being characterised by availability, affordability, awareness, digital literacies, and self-efficacy. Focusing on self-efficacy, this paper posits that indigenous communities in Africa, such as the Tigray, are rendered vulnerable by histories of exclusion and marginalisation and face a greater risk of their consent being violated by invasive facial recognition technologies in times of crisis. As a tool that is easy to deploy, facial recognition poses greater risks to establishing consent than other forms of biometric identification. The paper will propose a framework for establishing consent among these communities with respect to facial recognition technologies.

15:30-17:00 Session 10C: IP Law: COVID-19 and Intellectual Property
15:30
Access to and ownership of data to tackle Covid-19: some lessons IP laws should learn for good

ABSTRACT. The fight against the coronavirus needs (big) data to orient decision-making and healthcare policies. For various reasons, researchers, private organisations, and non-research public bodies process large amounts of non-personal data which can help tackle the pandemic in various ways. Two data processing settings can be singled out. The first encompasses the processing of scientific data (e.g. data inputs vital to finding a vaccine), but also statistical data and other research inputs. The actors harvesting them are, in the first place, researchers in the fields of natural and social sciences (‘scientific & research data’, ‘SRD’). On the other hand, private enterprises and non-research public bodies (e.g. national authorities) hoard data as a result or as a by-product of their activities. These datasets prove likewise crucial for addressing issues of public interest such as the Covid-19 pandemic (‘privately & publicly collected data’, ‘PPCD’). The success of data-driven policies, however, mostly depends on how data is managed. During the Covid-19 crisis, two main tendencies have emerged in this respect. On the one hand, there exists a data management system resting on (access) barriers which restrict data access. Conversely, an alternative data management system is founded on open data access approaches valuing data availability among a wide range of actors. This contribution aims to describe and assess the two data management systems concerning SRD and PPCD, presenting the main tendencies which have arisen during the Covid-19 pandemic. Accordingly, it concludes with some policy arguments on the role of IP and other areas of law in fostering data access for public interest purposes.

16:00
Analyzing Patent-Literature for Mapping and Evaluating Covid-19 Innovation

ABSTRACT. The Covid-19 pandemic has prompted several patent offices worldwide to provide companies and investigators with a large number of new web-based services to cope with logistical and financial problems related to patent proceedings and to foster research and access to information relevant for developing new, more appropriate products and technologies. As of spring 2021, it is still too early to have a complete view of whether and how innovation stemming from this “new normal” situation has been captured and claimed in patent applications. Given the potential financial and strategic importance for some applicants of getting a patent quickly granted, the examination and publication of patent applications claiming products and technologies related to Covid-19 may have been accelerated shortly after their filing, at least in some countries. The paper presents data extracted from patent databases, using a methodology described in a recent publication (Falciola L and Barbieri M, “Searching and Analyzing Patent-Relevant Information for Evaluating COVID-19 Innovation”; posted on Jan. 26th 2021; available at SSRN: https://ssrn.com/abstract=3771756), about the earliest patent literature explicitly mentioning the relevance of the claimed subject matter for the Covid-19 pandemic that has been published by major patent offices worldwide. This analysis has been performed along three main dimensions: the claimed technologies (in medical and other domains), the type of patent proceedings (regular patent applications or utility models, already granted or not), and countries (each having its own medical, IP, and economic policies). The patent publication trends have also been studied by distinguishing between two periods (March 2020 - August 2020 and September 2020 - February 2021) to identify how the temporal evolution of the Covid-19 pandemic may have affected patenting strategies for protecting innovation in an emergency situation, as some evidence would suggest.

16:30
Open Covid Pledge

15:30-17:00 Session 10D: Energy and Climate Crisis: Workshop on Business and the Energy transition (part 2)

Workshop: Climate Justice and the Business of Energy Transition

20 May from 13:30-15:00 and 15:30-17:00 (online)

In conjunction with: Netherlands Network of Human Rights Research; Constitutionalizing the Anthropocene Project; Tilburg Institute for Law, Technology & Society

15:30-17:00 Session 10E: Open Track: Technological responses to crises
15:30
A Proportionality-Based Framework for Government Regulation of Digital Tracing Apps in Times of Emergency

ABSTRACT. U.S. law lacks an overarching legal approach to assessing government powers during emergencies. In the absence of a comprehensive doctrinal approach, as governments look into Digital Tracing App (DTA)-based strategies to fight the pandemic, what is being assessed is individual policy measures (the applications) rather than the policy as a whole. Public, private, and government institutions have focused primarily on privacy as the main value worthy of protection and have found DTAs to be a desirable measure as long as their design is privacy-preserving. However, an analysis based merely on DTAs’ features and privacy considerations misses some tradeoffs. First, the more privacy-preserving a DTA’s design, the less effective it is, and vice versa; thus, a limited benefit is unlikely to advance the policy’s goal, and a greater benefit is unlikely to be worth the infringement of rights. Second, there is the social price paid when, due to emergencies, mass-surveillance mechanisms are institutionalized and accepted.

This Article looks beyond privacy law as the make-or-break standard for DTA evaluation, borrowing a European law tool: the methodological framework of the doctrine of proportionality. Regularly embraced in areas where government actions directly affect citizens, the doctrine of proportionality offers a procedural method to evaluate the suitability and necessity of different measures, and requires governments to balance public interests against individual rights. Using proportionality’s procedural method during an emergency allows ethics, politics and economics to be encompassed more flexibly than through strict law, thus reaffirming a commitment to fairness, embodying governmental accountability, and encouraging public trust and compliance, which are essential to overcoming national emergencies. The analysis concludes that since traditional methods involve lesser violations of rights, affecting smaller parts of the population, within a more limited scope and for a more limited time, while achieving similar and arguably better results than those suggested by DTAs, a DTA-based policy is riskier and less proportionate than traditional public health methods for addressing COVID-19.

16:00
The Intention: Requirements for software as a medical device in EU law

ABSTRACT. The role of software in society has changed drastically since the start of the twenty-first century. Software can now partially or fully facilitate anything from the diagnosis to the treatment of a disease, whether psychological or pathological, with the natural consequence that software is comparable to any other type of medical equipment.

We see this in the medical device legislation not having been explicitly created for software, as well as in unpredictable developments in the field such as contact tracing applications.

Any uncertainties concerning the interpretation of these legal tools must therefore be resolved.

In this paper, we show how and when software is considered a medical device in EU law, more specifically under the Medical Device Regulation. To do this, we first create a framework for the intention of the manufacturer, since this would otherwise pose a barrier preventing software from being seen as a medical device. We show how and where software is specifically mentioned in the regulation, and how it is understood. We also include a comparison between US law and the EU regime, and take a quick look at contact tracing applications as such, as well as the practice of two European regulators.

We finally create, from existing requirements, a decision diagram to illustrate our findings and combine this with the framework from earlier.

We find that it is possible to determine whether software is a medical device, but the determination remains very uncertain. This is because of the ambiguity of the criteria as well as the way the Medical Device Regulation is designed. A literal interpretation of the regulation yields immediate issues as to whether particular software should be included. Furthermore, if software is considered a medical device, it is subject to strict organisational and cybersecurity obligations, and its makers can face severe penalties such as withdrawal of, or bans on, the software. At the same time, enforcement is left to local regulators in each member state, which increases the risk of a lack of oversight or even induces local interpretations, as well as a lack of clear legal practice. We see that there exists an actual risk of circumvention of the MDR, and show that the Principle of Circumvention can be applied.

To put this into perspective, we also find that US law, via FDA guidelines, has measures that can serve as inspiration, including an exclusion list and overarching federal penalties, in the form of withdrawal, for failing to report whether and how one's software is a medical device.

16:30
Big Tech Platforms as ‘societal problem solvers’: How to organise democratic oversight and control

ABSTRACT. Never waste a good crisis to adopt new digital technology fixes: whether it is combating misinformation, disciplining the abuse of political freedoms or managing a global health crisis, digital technologies have become an important element in governments’ responses to societal problems. With a growing reliance on technological solutions, governments increasingly entrust commercial technology providers, such as Google and Facebook, with important governance functions. Doing so raises many fundamental questions, but one is particularly pertinent and insufficiently debated: if, and under which conditions, should global commercial players be allowed to play a role in determining a nation’s public health policy? It is this question that our contribution tackles, using the example of the introduction of contact tracing technology in managing the COVID-19 crisis as a springboard to address the entanglements of public-private relations.

Our paper analyses the debates around the adoption of contact tracing apps in Germany, Italy, the UK and the Netherlands. The paper will illustrate how governments across Western Europe were torn between accepting the services of technology platforms, invoking voluntariness and consent as sufficient legal grounds for citizens' adoption of the apps, and the need to find new ways to democratically legitimise the outsourcing of public tasks. We will identify the different routes these governments have taken to resolve that conflict, including a discussion of the (few) instances in which governments went ahead and adopted laws that laid down the conditions for democratic accountability and the transfer of public tasks. Building on this analysis, the paper will contribute a more systematic framework for discussing if, and how, to organise democratic control and oversight over the technology platforms that governments increasingly rely on to solve societal problems. The COVID-19 contact tracing debate has made it abundantly clear that developing such a framework is crucial and much needed, especially against the background of pending regulatory proposals from the European Commission such as the Digital Services Act, the Digital Markets Act and the introduction of new ‘due diligence’ obligations to safeguard fundamental rights and the public interest.

15:30-17:00 Session 10F: Data Governance: Sector Transitions/Transgressions Panel
15:30
Sector Transitions and Transgressions during the Pandemic

ABSTRACT. The Covid-19 pandemic has spurred intense ‘sector creep’, with firms such as Google, Facebook, Amazon and Palantir seeking new markets and opportunities in global public health. These ‘sphere transgressions’ embed new possibilities for the monitoring and control of public and private life which will not disappear with the waning of the pandemic. We will bring together researchers and CSOs to feed back on our research findings and debate the effects of this phenomenon. We have three aims for this session: 1) to surface this new and rapidly developing phenomenon as a specific issue for the rights community globally, consulting with participants about its manifestations in different places; 2) to connect it to a range of civil and political rights issues, including but going beyond privacy; and 3) to debate possible responses on the part of civil society and rights groups. Debating this with the field is essential for our framing of the problem, and in turn for understanding what leverage should be brought to address it: data protection and privacy claims, regulatory measures, civil society awareness-raising and resistance, pressure on governments for transparency and democratisation of decision-making, or norm-building in international fora. The session will include, but go beyond, privacy and data protection to examine the risks sphere transgressions pose to civil society. We will present research conducted by the Global Data Justice project on this issue, as part of a group looking at the effect of the pandemic on tech in the EU. The session will be interactive: we hope to use it as an opportunity to learn about new cases of sector creep around the world, and different views on responses. Our aim with this panel, as with the broader project, is to build a community around this issue, both by surfacing related issues from different countries and regions and by involving participants actively in the search for responses. We want to connect the privacy and data protection community, who have been working on related issues for a long time, with those coming from other rights perspectives who may have new insights and responses from their own fields.


Panelists

Tamar Sharon

Stephanie Hankey

Astha Kapoor

Matthias Spielkamp

Bruno Bioni

17:00-18:30 Session 11A: Data Governance: Vulnerabilities in Data Governance
17:00
Regulating Facial Recognition Technology: What’s in a face and what to regulate?

ABSTRACT. The quick roll-out of facial recognition technology (FRT) in the private and law enforcement sectors has attracted vehement criticism and strict regulation in major jurisdictions due to negative impacts on significant societal values including privacy, transparency, fairness, accountability, and equality. However, both advocates and opponents have largely neglected an essential dimension of the current FRT regulations (which protect human faces) and these related values: what is it, essentially, in a human face that needs special (legal) protection against FRT in the digital age? There is insufficient discussion of the changed and still-changing functionalities of human faces in modern communities against the backdrop of our increasingly digitalized, connected daily life.

This short paper seeks to reflect on the current, mainstream FRT regulatory approaches from a more functional perspective. It will first analyze what is in a human face that deserves legal protection by providing a short review of the changing functionalities of human faces in social life: from identification, non-verbal communication, and authentication, to the further associated reputation and dignity, and most recently to data carriers (facial data, as points of identity and authentication). Then it will discuss the underlying rationales for protecting human faces (i.e. facial images, facial characteristics or profiles) and the related laws and regulatory approaches - including bans on masks and beards in public spaces or special societal settings (e.g. in the military) - especially when some technologies can change human faces (i.e. cosmetic surgery) and thus challenge the protected values. Last, the paper speculates on whether, compared to other technologies, FRT really has such different impacts on society and individuals that it qualifies for a full ban or very strong restriction, as already witnessed in some major jurisdictions (i.e., the EU and US). In concluding, it tentatively argues that, with an increasing variety of multiple sensors and perception AI all around, the current FRT regulations are not workable (to the desired ends), not fair (to FRT) and not sustainable (especially in view of the cons and an inevitable IoT world).

17:30
The AI-Assisted Surveillance Industry: “Hello, How May I Spy for You Today?”

ABSTRACT. This paper examines the under-regulated Artificial Intelligence (AI)-assisted surveillance marketplace that caters to non-western governments. Current discussions of AI-assisted surveillance (e.g. facial recognition and biometrics) tend to focus on the largest tech corporations with the most expansive data infrastructures, and are preoccupied with controversies that occur in the west. What has fallen out of the purview are the smaller companies based in the west that export surveillance to governments elsewhere.

Presenting a dataset of such corporations, several case studies, and a physical mapping of these flows, this paper argues that it is vital to examine the supply chain of today’s surveillance industry, as companies that shape the supply of government surveillance hold immense agenda-setting power. Furthermore, despite the long history of some of these companies, many appear to lack rigorous human rights audits and close monitoring of the contextual harms incurred when exporting their subscription-based services. This oversight has led to the contentious deployment of surveillance systems against protesters in places such as Hong Kong and India. Likewise, serious concerns around data practices have emerged. As Aurora (2019) contends, certain communities’ privacy is seen as more expendable than others’. The opacity of such operations is further entrenched by the prioritization of client confidentiality, proprietary prerogatives to closed-source algorithms, and claims of infrastructural neutrality. Furthermore, citizens of the affected countries have only a limited remit to hold these corporations responsible, as the companies are situated in distant jurisdictions over which these populations have little influence.

Multiple pressing implications arise. First are the asymmetries that exist between regulation of the domestic use of AI-assisted surveillance and of exported systems, particularly in western democracies. Here, corporate responsibilities are held to a different standard depending on the locality in question. Second are the human rights violations that occur when non-western countries become testing grounds for the latest technologies. The backlash from such countries is drowned out on the larger international platform. Third are the dependencies and inequities that are reified between western corporations and non-western clients through the lending of infrastructures, opaque systems, and data pipelines that ultimately are designed, calibrated and controlled by the west.

18:00
Privacy as a privilege? Privacy expectations of vulnerable data subjects in smart cities

ABSTRACT. While it can be challenging for non-experts to understand how to take self-determined decisions regarding complex technical systems and legal provisions, for some people it is near impossible. How can individuals exercise their individual rights if they already struggle to, for example, read and write, speak the language in which a privacy notice is composed, or set up an email account? The hypothesis underlying this study is that there is a mismatch between data protection law and the expectations of citizens and/or the civil society organisations representing them regarding the processing of their personal data. The GDPR implies knowledgeable data subjects who have the capability to exercise their access rights and/or knowledgeable civil society organisations representing vulnerable data subjects (Christofi et al., 2021).

In order to test our hypothesis, we address the position of marginalized and vulnerable social groups in increasingly digitalised (or smart) cities and their expectations vis-a-vis the protection of their personal data: people with learning deficiencies, older people, platform workers … The ‘ideal’ subject of the smart city is seen as “tech-savvy, independent, and uber-modern, able to produce digital data and analyze it to hold city government accountable” (Burns and Andrucki, 2020). Data processing in public space (smart city, surveillance) and public services (e-government) seems a particularly interesting context in this regard, as usage is often not voluntary and both the benefits and the high risks can be discussed clearly (see e.g. van Zoonen, 2016). To find out more about these social groups and their data protection expectations, this work in progress takes a stepwise approach. Based on a theoretical discussion, it first turns to surveys that have previously been conducted in the field. Then, a short survey was distributed to 22 civil society organizations representing different marginalized and vulnerable groups in Flanders, Belgium. In addition, interviews were conducted with some of the respondents and, finally, a focus group is planned with one group of vulnerable data subjects.

17:00-18:30 Session 11B: Competition and Market Regulation: Regulating data flows
17:00
Anticompetitive-by-Design: Preventing Dark Patterns in Data Sharing

ABSTRACT. Data sharing practices like interoperability and portability have long been touted by policymakers for their potential to improve competition and innovation. In the US, multiple sectors have made regulatory and self-regulatory efforts to enact such policy, but in each case, data senders have resisted making data available to consumer-permitted third-party data users. Data users have thus resorted to more costly and less security/privacy-friendly data collection practices such as web scraping, vulnerability exploitation, and reverse engineering.

In this paper, I explore the anticompetitive dark patterns data holders use to subvert data sharing regulatory efforts and retain exclusive access to collected data. I use examples in the US from banking, healthcare, energy, and agriculture to highlight three categories of dark patterns data holders use to mislead: bad faith privacy (e.g. scare screens, data hoarding), bad faith cybersecurity (e.g. excessive encryption, selective security enforcement), and bad faith user experience (e.g. not adopting standards, slowing data transmission, withholding documentation).

Data holders employ anticompetitive dark patterns because of misaligned incentives around data sharing. Building and maintaining a data sharing system incurs costs that a data holder may not be able to recoup. Anticompetitive dark patterns let data holders maintain a facade of pro-competitive fair play while reserving potential sources of future innovation for themselves. Most importantly, data holders are concerned that they will be held liable for the cybersecurity and privacy practices of their data users.

I conclude with two suggestions for regulators. First, anticompetitive dark patterns should be rooted out at the individual sector level. Broad data portability mandates like GDPR’s Article 20 have failed to stop these practices. Privacy, security, and user experience concerns are unique to different sectors and only uncaptured standards bodies with deep industry expertise can sort out the good faith arguments from the bad. Second, data sharing policy should not ban individual dark patterns but rather realign data holder incentives. Data sharing regulation alongside privacy and cybersecurity rewards and punishments could force data holders to engage in more honest data practices by increasing scrutiny and protections for data that consumers cannot easily take elsewhere.

17:30
Biosupremacy: Data, Competition, and Monopolistic Power Over Human Behavior

ABSTRACT. For decades, technology companies have avoided antitrust enforcement and grown so powerful that their influence equals that of many governments. Their power stems from data collected by devices that people welcome into their homes, workplaces, schools, and public spaces. When paired with artificial intelligence, this vast surveillance network profiles people to sort them into increasingly specific categories. However, this "sensing net" was not implemented solely to observe and analyze human behavior; it also enables control. The sensing net is paired with a network of influence, the "control net," that leverages intelligence from the sensing net to manipulate people's behavior, nudging them to modify their behavior through personalized newsfeeds, targeted advertising, and dark patterns. Dual networks of sensing and control form a global digital panopticon, a modern analog of Bentham's eighteenth-century building designed for total surveillance. It monitors billions of students, employees, patients, and members of the public. Moreover, it enables a pernicious type of influence that Foucault defined as biopower: the ability to measure and manipulate populations to shift social norms.

A handful of companies are vying for a dominant share of biopower to achieve biosupremacy, monopolistic power over human behavior. The COVID-19 pandemic, and society's increasing reliance on platforms for activities of daily living, has tightened their grip. This Article analyzes how firms concentrate biopower through conglomerate mergers that add software and devices to their sensing and control networks. Acquiring sensors in new markets enables cross-market data flows that send information back to the acquiring firm across sectoral boundaries. Conglomerate mergers also expand the control net, establishing beachheads from which platforms exert biopower to assault social norms.

Competition agencies should adopt biopower as a lens for examining the behavior of economic actors. Regulators should account for the costs imposed by panoptic surveillance and the impact of coercive choice architecture on product quality. They should scrutinize conglomerate mergers, halt acquisitions that concentrate biopower, prohibit dark patterns, and mandate data siloes to block cross-market data flows. To prevent platforms from locking consumers into panoptic walled gardens, which concentrate biopower, regulators should force tech companies to implement data portability and platform interoperability.

18:00
Structural Blind Spots: Harms to Digital Consumers in EU Antitrust, Consumer and Data Protection Law

ABSTRACT. The relationship between online platforms such as Google/Alphabet or Facebook and individuals is governed by a complex mesh of intersecting legal doctrines and enforcement tools. Among other aspects of this generative relationship, the most intractable is perhaps the governance of personal data, which has been growing in salience in Europe at least since the coming into force of the GDPR. This paper examines the interplay of three important branches of EU law – data protection, antitrust and consumer protection – in light of recent reforms including the Digital Services Act package and argues that while each has the potential to play an important role in the governance of personal data and of the platform economy, there is an unaddressed gap in the protection they can jointly afford.

The interpretation and application of competition, data and consumer protection reveals a number of family ties, common ideological roots and doctrinal overlaps but also gives rise to two sets of interpretive disagreements. The first is between “purists” who highlight the differences between each body of doctrine and seek to preserve their separate internal coherence, and “pluralists” who emphasize commonalities, seeking to make sense of these areas as a dynamic family of laws together capable of grounding the EU’s evolving digital social market economy. The second disagreement is between “formalists” who defend faithfulness to legal doctrine or principle, and “realists” who are concerned with the underlying power structures and blind spots in law.

This paper examines both interpretive disagreements in the EU digital context, defending pluralism and realism, respectively. Data, consumer and competition law must be interpreted pluralistically as part of a joint family of laws. Yet their joint interpretation must also be scrutinized with realism. Indeed, these areas leave unaddressed three structural blind spots: (1) a gap in protection against national political threats in online spaces, (2) a relative neglect of collective governance mechanisms and a lack of protection for third parties, and (3) a timid critical stance toward surveillance and informational capitalism.

In the absence of institutional reinvention, EU digital law remains ill-equipped to address digital market harms. The DSA reform is a welcome step forward but remains insufficient.