
09:00-10:30 Session 1A: Human Rights and AI: Algorithmic decision-making and state surveillance
Why algorithmic decision-making in the criminal justice system needs to be evaluated from a constitutional perspective

ABSTRACT. Algorithmic decision-making (ADM) tools that use machine learning techniques (ML) are increasingly being deployed in criminal justice contexts for several purposes, including biometric identification, geospatial prediction, and ‘recidivism’ risk assessment to inform individual custody decisions. Advocates claim these tools enable ‘smarter’, more efficient, consistent and objective decisions, while avoiding the bias, subjectivity and error associated with human decision-making. A growing cadre of scholars has demonstrated the tenuousness of these claims, subjecting ML-based ADM tools to critical examination. Yet these critiques have overwhelmingly focused on concerns regarding bias, discrimination, and a lack of transparency. In this paper we argue for the urgent need to subject these tools to careful constitutional evaluation, focusing on ML-based recidivism risk assessment tools to support our claims. We argue that contemporary critiques have failed to attend properly to the ways in which these tools are developed and implemented that threaten to violate normative principles and constraints upon which the basic constitutional architecture of liberal democratic states rests – including the rule of law, due process, and respect for human rights. Taken together, these principles are rooted in a foundational commitment to constitutionalism which, ultimately, insists upon the need for effective institutional constraints to safeguard against despotism and the abuse of governmental power.

Adherence to the constitutional principles of liberal democratic government demands (among other things) that the exercise of governmental power must be open to public scrutiny, that those who wield governmental authority bear legal and democratic accountability for its exercise, and ultimately, that the exercise of such powers must be legally justified and always in accordance with law. Yet there is clear evidence that these constitutional requirements are not being consistently observed in the design and implementation of ML-based ADM tools. In turn, these tools threaten to produce novel and significant forms of arbitrariness that could create widespread injustice, and expose individuals to the exercise of arbitrary governmental power in ways that liberal democratic communities should not be prepared to tolerate. Unless such deficiencies are properly remedied, the use of ML-based ADM tools for criminal justice purposes should be regarded as constitutionally illegitimate.

Digital Platforms and the Digitisation of Government Surveillance

ABSTRACT. In Europe today, digital platforms, such as Facebook, Twitter and YouTube, provide essential means for millions of people to express themselves, engage in public debate, and organise politically (Poell & van Dijck, 2018). Indeed, the European Court of Human Rights has emphasised how platforms provide an ‘unprecedented’ means for exercising free expression, ‘undoubtedly’ enhance the public’s access to news, and facilitate widespread dissemination of information (Cengiz v. Turkey, 2017, para. 52) (Dobber, Ó Fathaigh & Zuiderveen Borgesius, 2019). However, these platforms are built upon vast systems of data collection and monetisation (Cohen, 2017; Van Hoboken & Ó Fathaigh, 2019), which raises major concerns in terms of privacy and data protection; while platform use of algorithmic and AI systems is shaping information dissemination, which impacts on users’ free expression (Eskens, Helberger & Möller, 2017).

Crucially, governments are leveraging the power of platforms to impose new forms of restrictions on free expression, and engage in surveillance of individuals and online activism. This has profound implications for the rights to freedom of expression, privacy, and data protection. Further, platforms that once refused to cooperate with governments in identifying users responsible for disseminating allegedly illegal or harmful content are now expanding cooperation with authorities, including sharing data about users flagged by law enforcement and other authorities. As civil society organisations warn, this trend is contributing to ‘invasive and unlawful digital surveillance’ (Amnesty International, 2019, p. 24).

This paper examines how European governments are leveraging the power of digital platforms to engage in government surveillance online, and assesses the compatibility of these measures with European human rights law. The paper applies a unique interdisciplinary perspective, bringing together law, political communication and surveillance studies. First, the paper examines how platforms’ algorithmic systems shape (and limit) information dissemination. Second, it critically analyses government-platform initiatives to surveil citizens and gather information, including new measures under the EU’s proposed Digital Services Act. Third, it assesses whether these measures comply with freedom of expression and the right to privacy, and concludes with recommendations for remedying problematic elements of the role platforms play in the digitisation of government surveillance.

Fraud detection, the digitisation of the welfare state, and the Dutch SyRI judgment

ABSTRACT. Abstract for TILTing Perspectives 2021.

In 2020, a Dutch court ruled in a case about a digital welfare fraud detection system called Systeem Risico Indicatie (in short: SyRI). The court decided that the SyRI legislation is unlawful because it does not comply with the right to privacy under the European Convention on Human Rights. This is among the first times a court has invalidated a welfare fraud detection system for breaching the right to privacy. A thorough discussion and broader analysis of the SyRI judgment does not yet exist in the English scientific literature. In this paper, we discuss the facts of the SyRI judgment in depth: How exactly was the SyRI system defined in the Dutch legal system (this is unclear from the judgment)? How did the SyRI system violate the right to privacy? How and why does the court discuss both the right to privacy (under the European Convention on Human Rights) and the General Data Protection Regulation? What are the legal implications of the SyRI judgment? Finally, we discuss whether the SyRI judgment brings any changes to the Digital Welfare State, and whether this judgment has implications for countries other than the Netherlands. We discuss the judgment in such a way that people who are not specialised in Dutch or European law can follow the discussion.

Preliminary table of contents

1 Introduction
2 The digitisation of the welfare state can erode human rights
3 The SyRI system
3.1 What is SyRI?
3.2 The data flows of SyRI projects
3.3 Technical aspects of SyRI
3.4 Lacuna in knowledge about SyRI
4 The SyRI judgment: the main points
4.1 Article 8 European Convention on Human Rights
4.2 Necessary in a democratic society: proportionality and subsidiarity
4.3 GDPR principles
5 The SyRI judgment: analysis
5.1 Testing the legislation of SyRI against human rights
5.2 The interplay between the GDPR and the right to private life
5.3 Criticism
5.4 After the judgment
6 Conclusion

* * *

09:00-10:30 Session 1B: Competition and Market Regulation: Regulation and institutions beyond competition law
Gatekeeper Power: Definition and Assessment

ABSTRACT. The Digital Markets Act Proposal, tabled by the European Commission in December 2020, bases the imposition of a series of obligations and prohibitions on the control of gatekeeper power. Such power will be determined by the Commission on the basis of a cumulative “three criteria test”, namely: (i) the gatekeeper’s large size and impact on the EU internal market; (ii) its control of an important gateway for business users to reach end-users; and (iii) whether the control in question is entrenched and durable. To facilitate and speed up the designation process, the proposed DMA establishes a presumption that the three criteria test is met when several financial and customer-size thresholds are passed. However, as size thresholds are not necessarily correlated with gatekeeper power, the presumption may be rebutted by relying on an open list of quantitative and qualitative indicators, such as financial size, the number of customers and their lock-in, entry barriers, or scale and scope effects.

The paper aims to analyze in detail what gatekeeper power - this important but new concept in EU law - is and how it should be determined.

To do that, the first section of the paper will review the definition of gatekeeper in the economic (esp. industrial organization) literature. For instance, Caffarra and Scott Morton define a gatekeeper as “an intermediary who essentially controls access to critical constituencies on either side of a platform that cannot be reached otherwise, and as a result can engage in conduct and impose rules that counterparties cannot avoid.” This section will also review the definition of related concepts such as bottleneck and competitive bottleneck.

Then, the second section of the paper will review the use of the gatekeeper concept in EU competition law as well as the use of related concepts, such as dependency, in national competition laws (e.g., in Germany, France or, more recently, Belgium). The section will also review the use of gatekeeper or related concepts in EU economic regulation.

On that basis, the third section will confront the insights of the economic and legal literature with the DMA proposal to assess the relevance of the suggested three criteria test for identifying gatekeeper power. This section will also make recommendations on how the suggested indicators should be used by the Commission in assessing gatekeeper power.

The Digital Services Act and the Digital Markets Act: Towards a New Competition Paradigm

ABSTRACT. If the year 2019 was the year of reports, the year 2020 is definitely the year of actions in the area of the digital economy. Landmark investigations are launched, landmark cases are adjudicated, landmark legislation is introduced in various jurisdictions across the globe. The situation on the markets is changing with unprecedented dynamics, and the regulatory responses follow suit. One of the most ambitious systemic legislative initiatives aiming to break the ex-ante/ex-post regulatory deadlock, and thereby reshape the very normative structure of competition rules, is the introduction of the Digital Services Act (DSA) and the Digital Markets Act (DMA).

The main systemic idea of the Commission’s legislative initiative is to combine the interpretative flexibility and the very regulatory philosophy of the ex-post competition rules with the expediency of ex-ante regulation, thereby capturing the benefits and abandoning the shortcomings of the two regulatory wings of competition policy. This objective is approached from two directions converging on a joint centre.

Methodologically, one of the challenges for the DSA & DMA is their aim of achieving the major synergetic goal of revising the whole spectrum of digital-markets-related rules by covering the whole spectrum of diverse digital services, each of which has its own specificity. For this reason, it is difficult to define a single centre of gravity of the Regulations in the wording of the proposals themselves, as different articles address very different problems, the problems originating from very different factors, having very different rationales and requiring very different remedies. This holistic legislative methodology nudges one to search for the overarching motivation for the proposals beyond the wording of the DSA & DMA, or to give different semantic weight to their different provisions.

The purpose of this article is to outline the central elements of both proposals, focusing more on the DMA; to highlight and comment on the most common concerns about this new remarkable regulatory phase of the EU digital policy; to embed the discussion into the broader macroeconomic context predetermining the DSA/DMA initiatives and to articulate some ideas strengthening the advantages and mitigating side-effects of the proposals.

Regulating radical change: competition and data protection enforcement in times of COVID and digitalisation

ABSTRACT. Over the last few decades, the digital economy has been evolving rapidly: the internet and new technologies have changed the way we work, live and interact considerably. Despite the potential, digitalisation gives rise to difficult issues, ranging from the impact of technology on the economy and society, to market failures.

These issues affect various legal areas, e.g. competition and data protection law. These fields are traditionally applied in isolation, but the digitalised world challenges the boundaries between existing legal branches: due to the monetisation of data, the different fields of law may collide. The question of to what extent competition law should leave room for data protection considerations, for example, is fiercely debated. At the same time, questions relating to institutional structure and design arise, because the simultaneous application of various fields of law results in overlapping competences of different enforcers. In addition, physical borders become less relevant in the digital world: online transactions often entail a cross-border element. This triggers the competence of several enforcers at different levels.

A similar trend can be observed with regard to COVID-19. The virus is not limited by borders, and its immense impact is – to a greater or lesser extent – visible in each EU Member State. Additionally, measures and means to curb the spread of the virus affect a myriad of legal fields. Big Tech companies play, for example, an important role in the development of corona tracking apps in several countries. This not only results in questions regarding data protection law, but may also strengthen the competitiveness of those undertakings vis-à-vis their competitors, which can give rise to competition law concerns.

Consequently, both trends - despite their fundamental differences - result in an increase of overlapping competences of several public enforcers. At the moment, various cooperation and coordination mechanisms that potentially streamline the relationship between authorities are in place. For example, competition law enforcers cooperate in the context of the ECN, the GDPR contains a consistency mechanism, and the voluntary initiative ‘The Digital Clearing House’ aims to realise inter-branch cooperation. It is however questionable whether those mechanisms are, in light of recent transformations, apt to guarantee coherent and effective enforcement of competition and data protection law.

My working paper aims to assess to what extent the current framework for cooperation and coordination between public enforcers of competition and data protection law is capable of ensuring coherent and effective enforcement in these fields of law, particularly in light of COVID-19 and digitalisation. Firstly, I briefly take stock of the effects of those developments. The impact of digitalisation on competition and data protection law is stressed, with a particular focus on how this affects the boundaries between the two legal branches. Apart from those substantive issues, an overview of the responses from the relevant enforcers to these trends will be provided. Then, the existing mechanisms are set out in order to clarify how cooperation and coordination between authorities is currently realised. Third, the existing framework is analysed in light of the principle of coherence, assessing to what extent the status quo is in line with this principle. The working paper adopts a normative premise: greater policy coherence is presumed to improve the functioning of the legal system. Lastly, insofar as needed, the need for change will be set out and recommendations to improve the coherence of the current framework will be provided.

Institutional structure and design have a significant influence on enforcement, which in itself is a prerequisite to make rules effective. Consequently, in the analysis of the impact of modern transformations of the economy and society, the institutional framework cannot be overlooked. In this respect, the working paper fits well in the track ‘Competition and Market Regulation in Times of Transformation’. I aim to contribute to the discussion on the role of competition law and market regulation by highlighting the institutional side of this debate.

09:00-10:30 Session 1C: Data Governance: Social Protection and Biometrics Roundtable (part 1)
COVID-19 and biometric ID: implications for social protection

ABSTRACT. Panel Proposal – TILTing Perspectives 2021 Track: Regulating Emerging Technologies: Governance Beyond Data Protection

The question of the data justice implications of COVID-19 responses has been addressed by recent works (cf. Taylor, Sharma, Martin & Jameson, 2020) in the aftermath of the WHO declaration of COVID-19 as a pandemic. Taylor et al. (2020) investigate such responses in the light of the notions of function creep, i.e. the repurposing of existing digital systems to track, predict and influence, and market-making for large private software developers in the new architectures of epidemiological surveillance. Among the data justice problems opened by COVID-19, an important stream of questions concerns the implications of the pandemic for digital systems of social protection.

Biometric social protection has been shown to generate a trade-off where greater effectiveness, mostly in the form of more accurate targeting, comes at the cost of greater exclusion of entitled beneficiaries (Muralidharan et al., 2020). As COVID-19 hits vulnerable populations worldwide, this trade-off does not support social protection efforts, as systems are called upon to face larger vulnerable populations demanding immediate assistance. Rather than narrower targeting, social protection systems need adaptations to avoid the remaking and perpetuation of injustice, as cases from Colombia (López, 2020), India (Drèze, 2020) and Peru (Cerna Aragon, 2020), among others, have powerfully illustrated. As a result, COVID-19 opens a whole new set of questions on how biometric social protection is set to change in response to the pandemic.

Against this backdrop, this panel proposal invites contributions to focus on themes that include, but are not limited to:

- Conceptual frameworks on biometric social protection during COVID-19;
- Empirical studies of digital social protection in the pandemic;
- Studies of data justice implications of biometric social protection schemes in the pandemic;
- Analyses of extant schemes (e.g. India’s Aadhaar-enabled social protection, or Brazil’s digital access to Bolsa Familia) in the light of changes that occurred, or did not occur, during COVID-19;
- Analyses of new social protection measures, e.g. Colombia’s ingreso solidario, in the light of implications for recipients’ data collection, use and treatment;
- Normative proposals of ideas for enabling effective and data-just social protection in the light of the COVID-19 crisis.


Cerna Aragon, D. (2020). On not being visible to the state: The case of Peru. COVID-19 from the Margins, https://data-activism.net/2020/06/bigdatasur-covid-on-not-being-visible-to-the-state-the-case-of-peru/.

Drèze, J. (2020). The perils of an all-out lockdown. The Hindu, 23 March 2020, https://www.thehindu.com/opinion/lead/the-perils-of-an-all-out-lockdown/article31136890.ece.

López, J. (2020). The case of the Solidarity Income in Colombia: The experimentation with data on social policy during the pandemic. COVID-19 from the Margins, https://data-activism.net/2020/05/bigdatasur-covid-the-case-of-the-solidarity-income-in-colombia-the-experimentation-with-data-on-social-policy-during-the-pandemic/.

Muralidharan, K., Niehaus, P., & Sukhtankar, S. (2020). Balancing corruption and exclusion: Incorporating Aadhaar into PDS. Ideas for India, https://www.ideasforindia.in/topics/poverty-inequality/balancing-corruption-and-exclusion-incorporating-aadhaar-into-pds.html.

Taylor, L., Sharma, G., Martin, A., and Jameson, S. (Eds.) (2020). What does the COVID-19 response mean for data justice? In Data Justice and COVID-19: Global Perspectives, London: Meatspace Press, pp. 8-18.

Biometric ID systems in India and Kenya

ABSTRACT. This contribution will discuss the implications of biometric ID for social protection during COVID-19 in India and Kenya. To do so, it builds on findings from 2020 research on the two countries’ biometric ID systems. That research analysed two cases brought before the courts, focusing in particular on issues of exclusion and harms to privacy. This contribution first discusses how biometric ID systems tend to impact the poor most, as they are more likely to need to use the systems to access vital services, while at the same time the economic consequences of the COVID-19 pandemic have also disproportionately affected this group. It then shows how, in India, the use of Aadhaar during the pandemic for welfare delivery is only likely to accelerate and intensify previously observed trends, such as exclusion and harms to privacy. In Kenya, the government has framed the fact that the implementation of NIIMS has been on hold since the court case as a hindrance to the effective delivery of welfare during the pandemic. This contribution questions the renewed enthusiasm for biometric ID as a result of the pandemic, and shows how the use of such systems in Kenya is, as in India, likely to adversely impact the poor.

Governing the AI-enabled Health Code System in China: Using Facial and Health Data during and beyond COVID-19

ABSTRACT. The Health Code System (HCS) was rolled out in China in February 2020 after the outbreak of COVID-19. The HCS has benefitted from years of multi-agency endeavour in implementing the national plans of Big Data and AI and Healthy China 2030. These national plans collectively deploy a technology-enabled social infrastructure to distribute health resources throughout Chinese society. In response to the crisis, the HCS has improved social protection in three respects: 1) a national strategy of detecting and isolating patients in public space, protecting the majority of healthy individuals and prioritizing hospital facilities; 2) a common recognition standard that enables individuals to move freely across borders so that the economy can reopen; and 3) a scaled-up solution to expedite coordination across public and private institutions to distribute health resources. However, beyond the crisis, old and new forms of social discrimination may be reinforced or embedded by algorithmic learning models arising as a result of linking sensitive facial and health data. The public-private partnership of sharing and processing data also raises questions as to how to allocate liability to ensure fair handling of sensitive biometric data post-COVID. Finally, the conversation will reflect on the implications of similar plans, such as the “vaccination passports” currently under debate in the EU.

How Facial Recognition technology is transforming the Welfare State: affecting social equality and data protection for the benefit of efficiency?

ABSTRACT. In recent years, facial recognition techniques (FRTs) have been increasingly employed by public authorities. Although one of the most established uses of FRT is crime prevention and detection, biometric identification systems exploiting FRT are nowadays more and more acknowledged as valuable tools to detect fraud and identity theft and to increase efficiency and a correct allocation of resources. In France, for example, the ALICEM app – using FRT – has been implemented by the French Government: this AI tool permits the creation of a ‘digital identity’ necessary to access the app and, consequently, apply for some relevant public services. In Ireland, the Department of Social Protection adopted the controversial Public Services Card (PSC) program, which even drew the attention and concerns of the UN Special Rapporteur on Extreme Poverty and Human Rights: this card is a domestic ‘digital identity’ scheme, required for a wide range of welfare services. Thanks to FRT, the Department verifies citizens’ identities by comparing different facial images, in order to assess cases of identity fraud and double claiming of benefits. Notwithstanding these potentialities, such sophisticated systems raise profound concerns about the impact on fundamental rights such as data protection, privacy, human dignity and non-discrimination. In the specific sphere of the Welfare State, the mandatory nature of a ‘digital identity’ and the use of FRT oblige citizens to give up their personal and sensitive data to obtain access to services they are already entitled to; moreover, it risks exacerbating existing social inequalities, to the detriment of disadvantaged and less ‘digitally educated’ segments of the population, increasing the digital divide and social exclusion.
The presentation aims at analyzing these complex issues through two selected case studies: the recent decision of the French Conseil d’État on the legitimacy of the ALICEM app and the debate over the PSC in Ireland. These cases compel serious reflection on possible balancing points able to guarantee privacy and equality without renouncing the potential of new technologies. Establishing proper rules and safeguards represents the only instrument capable of avoiding a dystopian society where public authorities have unprecedented and unlimited power to surveil and profile.

Designing Privacy and Security in Community Networks with Hyperlocal Design: A case study in Rural India

ABSTRACT. Privacy as a concept is in a perpetual evolutionary phase. The concept directly concerns the user as the data owner. Given the cultural dynamics and societal structure of interdependence in India, it is challenging to assure uniqueness across a population of 1.3 billion people. Aadhaar, India’s ID program and the world’s largest biometric digital ID program, uses biometric information to allocate a unique ID number to every Indian resident, thus allowing every individual to access a multitude of public and private services, including financial services, through their ID. Aadhaar has achieved the first stage of national-level inclusion: since its launch in 2009, the program has generated IDs for about 95% of India’s population, and 89% of Indian bank accounts are linked to Aadhaar. However, in a country as large and diverse as India, there are inevitably gaps in coverage. The State of Aadhaar 2019 report highlights regional and demographic gaps and indicates that Aadhaar might not effectively reach groups such as rural women. The report further mentions how a general lack of household-level data makes it difficult to effectively address these disparities. The second stage of achieving national inclusion is assessing coverage among excluded groups, determining barriers to coverage, and ensuring every individual can capitalize upon the services provided by Aadhaar. In 2013, the Aadhaar-enabled payment system (AePS) was introduced to enable government social welfare schemes to reach the masses. One of the challenges of these biometric-enabled technological interventions is that they did not account for user-centric and community-based approaches to ensuring security and privacy. Reliance on internet connectivity has also been one of the major drawbacks for the use of AePS in rural communities that are still not connected to the internet.

This paper will discuss the field implementation of a recently completed project in which a remote rural village in Maharashtra, India was provided with internet connectivity by seeding the growth of a community network in the village. We will also elaborate on how this connectivity is being used by a woman banking correspondent in the village, with the help of AePS, to facilitate banking services for the community. This has been extremely helpful to the community during COVID times, when the village and the community were totally cut off due to the national lockdown. However, the community, being an indigenous tribal population, needs privacy and security for its local traditional knowledge related to art, craft, folk tales, folk music, language and biodiversity. Since this traditional knowledge is created and owned by the community, the question arises of how they can ensure its protection. The paper will also critically evaluate AePS and the shortcomings of this application in different domains, such as local user perspectives on privacy preferences, perceptions and cultural considerations. The paper will conclude by discussing the need for a hyperlocal design of privacy that can take into consideration local, cultural and gender preferences. Through this paper we will be able to analyse and understand two facets of privacy: perception versus reality. The paper is part of a continuous effort by us to understand privacy from a user-centric approach. Currently we are working with communities to design their own applications of privacy, which can bring a sense of community ownership of the connectivity.

13:30-15:00 Session 3A: Energy and Climate Crisis: Data security and sharing in the energy sector
Sharing energy data: the intersection between data protection and energy legislation

ABSTRACT. Data sharing plays a crucial role in the proper functioning of the electricity market in the European Union (EU). This is the case because electricity markets in the EU comprise multiple actors involved at different parts of the value chain, who need information from each other to perform their tasks and to exercise their rights. Data of final customers stand out as one of the most relevant types of data to which the different actors in the energy market need access. Acknowledging this, Directive (EU) 2019/944 (known as the Electricity Directive) includes provisions (e.g. Art. 23) that require Member States to specify rules for access to data of final customers by eligible parties. The data in question include metering and consumption data, as well as data required for customer switching, demand response and other services. The Electricity Directive (Art. 23 par. 3) also highlights that when personal data are processed pursuant to its provisions, this should be done in compliance with the General Data Protection Regulation (GDPR). This will be the case in respect of electricity data of household customers, to the extent that they relate to identified or identifiable natural persons, following the definition in Art. 4 (1) of the GDPR. Against that background, the sharing of electricity data of household customers will be governed by two simultaneously applicable regimes: on the one hand, the GDPR, and on the other hand, the sector-specific rules laid down by Member States when transposing the Electricity Directive into national law. These two regulatory frameworks have different policy objectives, scope and levels of implementation, and yet intersect when it comes to the sharing of energy data that qualifies as personal.
This paper investigates to what extent this intersection between the horizontal (GDPR) and sector-specific (electricity) legal frameworks is as smooth or frictionless as assumed in the Electricity Directive, and identifies possible synergies and tensions between the two frameworks. To study how this intersection plays out in practice, this paper analyses the content, context and consequences of a ruling issued by the Dutch Trade and Industry Appeals Tribunal in early 2020 (ECLI:NL:CBB:2020:3). The ruling put forward an interpretation concerning one of the lawful bases for data processing (necessity to comply with a legal obligation) that indirectly led Distribution System Operators to stop sharing with suppliers the data used to prepare a so-called ‘customized offer’ for final customers. With this exploration, this paper aims to provide insights into the complexities of regulating data sharing in a context in which multiple interests, policy objectives and rights increasingly intersect.

European data protection law and policies against climate change: conforming or diverging goals?

ABSTRACT. Making the European Union (EU) a world leader in the circular economy, as well as a trend-setter in the deployment of critical technologies (e.g. blockchain, 5G, quantum computing) compliant with the highest privacy and data protection standards, are two cornerstones of the von der Leyen Commission's priorities for 2019-2024. However, are the objectives pursued by EU data protection law and the policies against climate change conforming or diverging?

Up until now, the interrelationships between EU data protection law and policies against climate change have largely remained unexplored by (legal) scholars, most probably due to the (apparent) independence of the two sectors.

Nevertheless, the two fields merge, inter alia, in the context of smart cities. From a circular economy perspective, smart city projects and initiatives are often showcased as a sort of panacea for many environmental concerns. Indeed, they may tackle environmental issues by e.g. improving mobility and transportation, lowering emissions, and making waste management more efficient. At the same time, smart cities bring with them a plethora of data protection (and environment-related) challenges depending, among other things, on their typical technical infrastructure, based on the large-scale deployment of Internet of Things (IoT) devices, extensive use of big data analytics and cloud computing.

Considering that the lack of coordination between EU data protection law and policies against climate change may undermine both data protection and sustainability goals, to the detriment of the fundamental rights of European citizens, the longer-term ambition of this work is to stimulate a discussion among academics, practitioners, policymakers and civil society at the intersection of data protection law and environmental policies.

Using as a case study the data protection and environmental challenges raised by smart cities, this paper will provide an overview of the (possible) inconsistencies between EU data protection law and policies against climate change and suggest solutions to reconcile them.


Technology for the Energy Crisis: in search of a coordinated response to cybersecurity concerns of energy consumers

ABSTRACT. Digitalisation, a global phenomenon affecting several fields of law, has been at the heart of contemporary legal debates given the challenges it raises from a regulatory perspective. Transformation through the introduction of digital technologies, such as the Internet of Things, Artificial Intelligence, Big Data, and Blockchain, has been intensely felt in the field of energy, where such technologies are considered key to achieving the energy transition and the European Union (EU)’s decarbonization objectives. Digitalisation, however, comes at the cost of significant risks for consumer protection: cyberattacks and cybersecurity incidents can have a significant impact both on the security of energy supply and on the privacy of consumer data. Building upon general EU legislation on cybersecurity, the recently adopted Clean Energy Package (2019) insists that Member States implement smart metering systems “having due regard of the best available techniques for ensuring the highest level of cybersecurity protection while bearing in mind the costs and the principle of proportionality”. It also envisages the adoption of a new network code on cybersecurity by the year 2021. The aim of my abstract proposal is to explore the manner in which the above-mentioned EU legislation regulates the use of digital technologies in the energy sector in order to prevent and/or effectively manage cybersecurity threats to the energy consumer’s security through ICT devices. In this context, I intend to undertake a critical analysis of the legislation that has already been adopted, building upon an initial set of interviews conducted with representatives of the energy sector. The principal argument of the proposed paper is that, at the current stage, multiple uncoordinated certification initiatives across the EU result in fragmentation that risks compromising resilience.
The adoption of common certification standards and a better definition of the role of actors in the energy sector should hence be a priority in the negotiations leading to the future adoption of the cybersecurity network code.

13:30-15:00 Session 3B: Human Rights and AI: Algorithmic surveillance in electronic communications
Artificial Intelligence and Democratic Values

ABSTRACT. Artificial Intelligence and Democratic Values: The AI Social Contract Index is the first global survey to assess progress toward trustworthy AI. The AI Index 2020 has these objectives: (1) to document the AI policies and practices of influential countries, based on publicly available sources, (2) to establish a methodology for the evaluation of AI policies and practices, based on global norms, (3) to assess AI policies and practices based on this methodology and to provide a basis for comparative evaluation, (4) to provide the basis for future evaluations, and (5) to ultimately encourage all countries to make real the promise of AI that is trustworthy, human-centric, and provides broad social benefit to all.

Artificial Intelligence and Democratic Values focuses on human rights, rule of law, and democratic governance metrics. Endorsement and implementation of the OECD/G20 AI Principles is among the primary metrics. Opportunities for the public to participate in the formation of national AI policy, as well as the creation of independent agencies to address AI challenges, are also among the metrics. Patents, publications, investment, and employment impacts are important metrics for the AI economy, but they are not considered here.

The first edition of Artificial Intelligence and Democratic Values examined AI policies and practices in the Top 25 countries by GDP and other high impact countries. These countries are Australia, Belgium, Brazil, Canada, China, France, Germany, India, Indonesia, Italy, Japan, Korea, Mexico, Netherlands, Poland, Russia, Saudi Arabia, Spain, Switzerland, Taiwan, Thailand, Sweden, Turkey, United Kingdom, United States. High impact countries include Estonia, Israel, Kazakhstan, Rwanda, and Singapore. The report is available here: https://www.caidp.org/aisci-2020/

The Human Rights Challenges of using Emotional Artificial Intelligence for Surveillance

ABSTRACT. This paper assesses the impact of emotional artificial intelligence (EAI) on the right to freedom of thought. The paper finds that EAI poses prima facie challenges to three key elements of freedom of thought: the right not to reveal one’s thoughts, the right not to have one’s thoughts manipulated, and the right not to be punished for one’s thoughts. However, these elements are at a latent stage in IHRL, with very little case law or soft law relating to freedom of thought specifically. The paper also contributes an analysis of EAI from a surveillance studies perspective, discussing the motivations for, mechanisms of, and consequences of emotional surveillance. EAI uses machine learning, computer vision, and other techniques to read, interpret, and predict people’s emotional state, using a variety of data sources as inputs and producing outputs such as emotional classifications and confidence scores. Despite significant criticisms of the underlying assumptions of EAI, it is being used for a broad range of surveillance purposes. The paper briefly explains EAI, before exploring some illustrative examples of EAI’s actual and potential uses for surveillance purposes. While EAI offers similar affordances to other forms of surveillance, its novel affordances are discussed to illustrate the more general point that AI-enabled surveillance presents new challenges for human rights. The main body of the paper is then split into two sections. The first section analyses the nature of emotional surveillance: what causes it, how does it operate, and what are its effects? It also discusses the body-objectifying and predictive nature of emotional surveillance. The second section adopts an approach suggested by McGregor, Murray, and Ng,* who argue that IHRL offers an effective, ready-made framework for defining, assessing, and addressing the harms caused by AI.
Applying IHRL as an analytical framework, this paper assesses the impact of emotional surveillance on the right to freedom of thought, in the process highlighting some of the strengths and weaknesses of IHRL. The paper closes by summarising some of the open questions this analysis of EAI and freedom of thought raises.

* Lorna McGregor, Daragh Murray and Vivian Ng, ‘International Human Rights Law as a Framework for Algorithmic Accountability’ (2019) 68 (2) ICLQ

EU competence creep in the field of national security: Implications for mass surveillance of online communication

ABSTRACT. Mass surveillance of online communication is the dominant paradigm to protect national security. In the EU, countries such as the Netherlands, Germany, and France have sophisticated schemes in place to collect internet and telephone metadata from potentially every citizen. These mass surveillance schemes interfere with the rights to privacy and data protection and may chill the exercise of freedom of expression or association. Intelligence services engage in cross-border sharing of data with partner services. Such sharing is done on a voluntary basis, and often, domestic legislation leaves gaps in the protection of individual rights in the context of cross-border data sharing. In addition, oversight of multilateral data sharing by intelligence services suffers from an accountability gap, since there are several limitations in the domestic legislation that organizes such oversight. An open question is whether the EU could play a role in ensuring that the fundamental rights of citizens are protected when intelligence services share data across borders. In principle, article 4(2) of the Treaty on European Union (TEU) provides that national security is the sole responsibility of each Member State. The EU thus formally has no competence to regulate national security issues. At the same time, the Court of Justice of the EU (CJEU) has brought several mass surveillance and data retention schemes that were set up with the aim of protecting national security into the scope of EU law. In this paper, I describe the legal reasoning with which the CJEU has brought national security issues within the reach of EU law, focusing on, among others, the recent case of Privacy International (C-623/17). I contextualize these developments by looking into other fields of law where the CJEU has found ways to review policy areas that are excluded from the EU’s competences in light of EU law (e.g. sports, health, education).
Such competence creep relies, among other things, on the increasing importance of the Charter of Fundamental Rights in the EU legal order. I then consider the implications of such competence creep for mass surveillance of online communication by EU Member States.

13:30-15:00 Session 3C: IP Law: Tensions at the Intersection: Competition, SEPs and other IP
The crisis of property and user empowerment in the Internet of Things

ABSTRACT. IP plays a crucial role in allowing uses of the Internet of Things (IoT) that are detrimental to society and in preventing beneficial uses. ‘Your’ phone belongs to the holders of the copyright on the code running on it, the manufacturers owning its design and the patents on how it works, as well as trademarks not only on logos, but also on things such as the way you swipe. What happens when it is no longer just computers and phones that are embedded with software and other IP-protected digital content? These proprietary smart devices (‘Things’) are everywhere: in your bedroom, in your bathroom, in your body. Our Things’ Terms of Service, Privacy Policies, etc. heavily restrict our behaviour. We have become digital tenants, not owning or controlling any object around us or data about us. To the point that, one can argue, we no longer own: we are owned. This paper will present the main IP issues in the IoT and will focus on the death of ownership and the related digital serfdom, using Joshua Fairfield’s Owned as an analytical framework. It will then move on to the pars construens and critically assess whether this issue can be resolved: (i) by relying on IP exceptions or defences, which is linked to the problem of IP overlaps; (ii) by means of antitrust interventions in the field of standard-essential patents (SEPs) that are licensed on fair, reasonable, and non-discriminatory (FRAND) terms; (iii) by embracing the knowledge commons as a form of collective resistance.

Should SEPs be licensed to component makers or producers of final products? A competition law perspective

ABSTRACT. Questions concerning the licensing of standard essential patents (SEPs) have attracted the attention of scholars, policy makers and competition authorities in Europe, Asia and the US. One of the issues discussed is whether SEP holders may choose freely whether to license makers of final products or makers of components who actually implement their SEPs.

Courts and competition authorities, particularly in Asia, have been critical of licensing policies that envisage refusing SEP licenses to makers of components, finding them anticompetitive as both exclusionary (leading to the elimination of competitors) and exploitative (allowing for the charging of excessively high royalties). In the US, the dispute between the Federal Trade Commission (FTC) and Qualcomm ended with the 9th Circuit Court of Appeals (2020) finding no antitrust violation in Qualcomm’s practice of licensing makers of final products only.

Disputes over who is entitled to a SEP license have also reached courts in the EU and led to disputes between car makers and SEP holders. Here, the SEP holders have preferred licensing car manufacturers rather than makers of components, although it is generally the component makers who implement the SEPs allowing for network connectivity. The question of component-level versus final-product licensing of SEPs is crucial for producers in numerous industries, as they must ensure that their products enable network connectivity through the use of such SEP-protected technologies as 3G, LTE and, soon, 5G.

The question whether a refusal to license component makers on the part of SEP holders constitutes an abuse of a dominant position within the meaning of art. 102 TFEU has recently been referred to the CJEU in Nokia v. Daimler by the Duesseldorf court. The answer to the questions posed to the CJEU is not an easy one. Competition law requires harm to competition. The decision to license at a particular level of the supply chain might be a matter of convenience for the licensor – targeting final product makers might be more efficient. However, it might also be seen as a means of obtaining (excessively) high royalties or of excluding competitors, especially when the SEP holder is vertically integrated.

Enforcing Copyright Through Antitrust? A Transatlantic View of the Strange Case of News Publishers Against Digital Platforms

ABSTRACT. The emergence of the multi-sided platform business model has had a profound impact on the news publishing industry. By acting as gatekeepers to news traffic, large online platforms have become unavoidable trading partners for news businesses, and exert substantial bargaining power in their dealings. Concerns have been raised that the bargaining power imbalance between online platforms and content producers may threaten the viability of publishers’ businesses. Notably, digital infomediaries are accused of capturing a disproportionate share of advertising revenue relative to the investments made in producing news content. Moreover, by affecting the monetization of news, the dominance of some online platforms is deemed to have contributed to the decline of trustworthy sources of news. Against this background, governments have been urged to intervene in order to ensure the sustainability of the publishing industry. The EU has decided to address publishers’ concerns by introducing an additional layer of copyright as a means to encourage cooperation between publishers and online content distributors. And the French Competition Authority has recently accused Google of adopting a display policy aimed at frustrating the objective of the domestic law implementing the EU legislation, hence requiring Google to conduct negotiations in good faith with publishers and news agencies on the remuneration for the reuse of their protected content. The Australian Competition and Consumer Commission has instead embraced a regulatory approach, developing a mandatory bargaining code. This paper analyzes different solutions advanced to remedy these problems in order to assess their economic and legal justifications as well as their effectiveness.

13:30-15:00 Session 3D: Open Track: Law and the pandemic
Soft law as EU crisis response in public health emergencies

ABSTRACT. As illustrated by the COVID-19 crisis, public health emergencies are catalysts for the adoption of soft law measures. This paper will examine the adoption of soft law in the European Union as a response to public health emergencies, focusing on the regulation of pharmaceuticals and medical devices. In fact, throughout the history of EU health law, crises like the Thalidomide tragedy, as well as the Mediator and the PIP Breast Implant scandals, have led to the increasing harmonization of the regulatory frameworks of the Member States, not only through the adoption of legislative acts, but increasingly also through the adoption of soft law measures by EU administrative actors.

In the COVID-19 crisis, soft law measures adopted by the Commission and the European Medicines Agency played a central role in the EU health law response to the virus, by ensuring the availability of medical equipment and devices, including personal protective equipment, and the quality and capacity of COVID-19 testing, and by addressing the effects the pandemic had on clinical trials. Soft law also played a central role in creating accelerated procedures for COVID-19 treatments and vaccines and in developing a vaccination strategy. Especially in the context of the pharmaceutical responses to the COVID-19 crisis, the adoption of EU soft law went hand-in-hand with global regulatory cooperation with other pharmaceutical regulatory authorities.

In analyzing the regulation of pharmaceuticals and medical devices and the role soft law plays within these regulatory frameworks, especially in times of crisis, it will be shown that soft law has allowed for the rapid response urgently needed in public health emergencies and for regulatory flexibility when adopting complex technical and scientific risk regulation measures. Moreover, it allowed for EU action in a field where the extent of regulatory power is contested between the Union and its Member States. In this regard, the COVID-19 crisis has ignited a debate surrounding the question whether the EU should be awarded more competences in health policy.

This paper will also analyze problems with regard to the resort to soft law in EU crisis response, especially in terms of the democratic and judicial accountability for the adoption of such measures. It will be shown how the flexibility and speed in the adoption of soft law measures are paid for with reduced democratic oversight (e.g. by the European Parliament), and how they also raise questions concerning the procedural protection of individual rights.


ABSTRACT. This contribution investigates how national courts in the EU adapted to the Covid-19 crisis. It aims at discovering and analysing common and differing approaches to the crisis across the various national judicial systems. How did courts around the EU operate during the crisis? What can we learn from the common or similar approaches, and what from the differences? Furthermore, how different will the post-Covid-19 justice system in the EU look?

While looking at the various decisions from Member States (MS) relating to the functioning of their courts during the pandemic, we assume that they all share a common goal: to guarantee access to justice. Access to justice remains as vital during crises as in ‘ordinary’ times. This is true for all the MS. Where they differ is in their ICT capacity, levels of digitization and overall readiness for the online transition.

The paper will first provide an analysis of publicly available information for each MS, particularly relying on self-reporting to national and EU bodies and to the Council of Europe, as well as on official announcements available on various courts’ and ministries’ websites. It will then assess the overall national performance of the examined judicial systems comparatively and against two key benchmarks: first, meaningful access to justice either on- or offline and, second, each MS’s progress with long-term e-justice targets as set by the EU prior to the Covid-19 crisis.

In sum, showcasing examples from each MS, this work will aggregate data on both the de facto and de jure digital transition of courts around the EU during the pandemic, as well as data showing gaps and deficiencies. Subsequently, the paper will compare the various developments within different MS and evaluate their progress towards the e-justice targets that were set centrally by the EU. At a final and normative stage, the paper will combine literature on law, policy, and institutional crisis management to suggest novel methods for measuring sustainable and meaningful access to justice during crises.

Disinformation and Content Moderation in Times of Pandemics: The Lesson of European Digital Constitutionalism

ABSTRACT. The spread of disinformation online has not stopped in times of pandemic. Conspiracy theories around 5G and false information about Covid-19 treatments are only two examples of the health disinformation flowing through social media spaces. Despite the cooperative efforts of platforms to fight disinformation during a global pandemic, the use of artificial intelligence to moderate content has amplified the dissemination of false content at a time when reliance on good health information has been critical. The decision of Google and Facebook to limit the process of human moderation has highlighted the fallacies of artificial intelligence technologies. Reliance on automated decision-making alone has led not only to the suspension and removal of accounts but also to the spread of false content.

Although these actors are usually neither accountable nor responsible for hosting third-party content, their decisions can not only affect fundamental rights but also lead to the dissemination of false content. One need only consider the role of social media in the spread of disinformation escalating violent conflicts in countries like Myanmar or Sri Lanka, where genocide and mass atrocities occurred. This troubling situation is the result of the discretion social media enjoy in deciding how to moderate content by interpreting users’ right to freedom of expression according to their legal, economic and ethical framework. Social media companies can select which information to maintain or delete according to standards based on the interest in avoiding any monetary penalty or reputational damage. To some extent, the result of this private-driven activity mirrors the exercise of judicial balancing and public enforcement carried out by state actors.

Despite their crucial role in the digital environment, social media companies do not ensure transparency and explanation of their decision-making processes. As private actors, they are not obliged to respect fundamental rights, since, in the absence of any regulation of content moderation, such rights can be enforced only vis-à-vis states. While, in the US, there has not been a reaction against these new challenges, apart from the limited scope of the executive order on social media, the Union and some Member States (e.g. France and Germany) have started to pave the way towards regulating content moderation (e.g. the Copyright Directive) and encouraging platforms to adopt procedural safeguards (e.g. the Code of Practice on Online Disinformation). Unlike on the western side of the Atlantic, the new steps towards the Digital Services Act are just another example of the rise of a new phase of European constitutionalism (i.e. digital constitutionalism) mitigating the influence of private powers on fundamental rights and democratic values. Within this framework, this work underlines how the spread of disinformation during the global pandemic exemplifies the constitutional challenges of content moderation. This situation requires constitutional democracies to deal not only with the troubling legal uncertainty relating to artificial intelligence technologies but also with the limits of unaccountable private determinations affecting fundamental rights and democratic values. The first part of this work describes the role of social media content moderation in the spread of false content online during the global pandemic. The second part underlines the Union’s efforts to limit platforms’ discretion, while the third part underlines the role of European digital constitutionalism in providing a normative perspective addressing the challenges of content moderation in order to mitigate the spread of health disinformation.

13:30-15:00 Session 3E: Data Governance: Social Protection and Biometrics Roundtable (part 2)
The automated deservedness: the social protection in Colombia after COVID-19

ABSTRACT. The Covid-19 pandemic has become a catalyst for new forms of social protection. In Colombia, the once slow assessment of individual social conditions is being transformed into an automated, data-based system. The objective of this presentation is to analyze this change and its main consequences in terms of social justice. The instrument used to assess an individual’s eligibility for social benefits is the System of Possible Beneficiaries of Social Benefits (Sisbén in Spanish). It uses data collected in interviews and an algorithm to generate a poverty score. In 2016, the system changed to collect more personal data and to automatically verify the information against 34 public and private databases. When the pandemic started, the government created a cash transfer program that selected beneficiaries automatically, without an application or clear criteria, simply using multiple databases to search for "suitable" profiles. After this experience, the government created the Social Household Registry, which will assess the social conditions of each household by automatically verifying Sisbén’s data against any available database. Thus, the process of assessing someone’s social condition will no longer depend on massive interview campaigns that took four years, but on a complex data-intensive system using data from multiple and obscure sources.

The Hourglass Architecture of Vaccine Distribution: From Aadhaar to DIVOC

ABSTRACT. My contribution focuses on India’s ongoing investments in a Digital Infrastructure for Verifiable Open Credentialing (DIVOC), oriented towards creating a modular, plug-and-play digital platform that can be used by any country to organize its vaccine rollout. The designers of Aadhaar, India’s biometrics-based national ID, are developing this platform. They have consistently argued that vaccination, like the collection of fingerprints, literally requires the physical presence of the Indian population of 1.3 billion at designated places/facilities/camps. Vaccine rollouts thus face a similar challenge of scale to organizing Aadhaar enrollment. I will show how, extending this argument, Aadhaar designers are building DIVOC using a similar “hourglass architecture” (Singh 2019) to the one they previously used to organize Aadhaar enrollment:

• At the waist of this hourglass is the vaccine certificate and the QR code attached to it.
• Below the waist are vaccine distribution facilities.
• Above the waist is any organization using vaccine certificates to organize its services.

This architecture comes with its own challenges in negotiating modularity and power between the waist and the organizational ecosystems above and below it. I will argue that contests and consensus over these negotiations are the emergent locus of future struggles over access, inclusion, and justice in the distribution of vaccines in India and elsewhere.

References: Singh, Ranjit. 2019. “Give Me a Database, and I Will Raise the Nation State.” South Asia: Journal of South Asian Studies 42 (3): 501–18. https://doi.org/10.1080/00856401.2019.1602810.

Transaction Failure Rates in the Aadhaar Enabled Payment System: Urgent Issues for Consideration and Proposed Solutions

ABSTRACT. The Aadhaar-enabled Payment System (AePS) witnessed a surge in transactions during India’s COVID-19-induced lockdown. Many providers pivoted to using this system as bank branches experienced service disruptions in the early weeks of the lockdown, limiting the cash-out points in India. This coincided with a huge demand for cash withdrawals, including in response to the announcement of cash transfer schemes by Central and State governments. Worryingly, the rise in AePS transactions was accompanied by reports of high transaction failure rates, with serious consequences for individuals trying to access cash to stay afloat in the crisis. Unfortunately, little published evidence and analysis of the nature of these transaction failures exists.

This policy brief (written in May 2020) identified the most serious categories of AePS transaction failures based on data and conversations with four financial institutions with a combined presence across the country. To better understand these failures, it (1) describes the AePS process flow, (2) identifies the main reasons for AePS transaction failures (especially in April 2020) emerging from the research, and (3) assesses the impact on consumers. Some solutions for urgent discussion were also proposed, given the enormous costs these failures externalised onto the most vulnerable users of India’s financial services infrastructure.

Has the digital welfare state led to greater exclusion of poor citizens in accessing social cash during the COVID-19 pandemic? Insights from Pakistan through the social justice lens

ABSTRACT. The Covid-19 pandemic has affected vulnerable populations worldwide, especially in many emerging economies, and Pakistan is no exception. Although the government has been applauded for swiftly rolling out the Ehsaas Emergency Cash Programme, building on the foundation of the existing social protection scheme, Kafalaat (previously the Benazir Income Support Programme), there has been criticism through the social justice lens. While the application of digital technologies such as AI, analytics and mobile applications drives the evolution of the digital welfare state, there is increasing discourse around how biometric targeting of poor citizens may have penalised certain marginalised populations, leading to their social and financial exclusion.

The Ehsaas Emergency Cash Programme in Pakistan relies on the National Database and Registration Authority (NADRA) for datafication through biometric targeting and authentication of poor citizens for cash delivery. The programme was launched rapidly at the beginning of the pandemic in March 2020, targeting the next tier of poor households (an additional 6 million): mainly informal workers and daily wage earners whose incomes were severely affected by the lockdown across the country. This enrolment was an additional layer over the 6 million families (women beneficiaries) who were previously receiving monthly grants through the Kafalaat programme. Altogether, an estimated total of 12 million families, representing 67 million people whose livelihoods had been severely impacted by the COVID-19 epidemic or its aftermath, received grants. Each eligible family received one-time financial assistance of 12,000 Pakistani rupees (PKR), equivalent to $75. Cash was distributed to the vulnerable households, after biometric authentication against the NADRA database (which houses the digital identities of 122 million citizens out of 210 million), at 17,000 distribution centres set up nationwide.

While the social cash programme represented the largest and most extensive scheme in the history of Pakistan, we argue that the datafication of the programme for targeting citizens imposed a data-driven approach that made certain groups, particularly women, disproportionately ‘invisible’ to the State. The programme relied on NADRA’s digital platform to identify eligible beneficiaries during the pandemic. Citizens were instructed to text their digital ID number to a designated number via their mobile phones. An algorithmic data analytics system was applied by NADRA for initial screening of citizens’ data against other databases linked to their digital ID number. These included the Kafalaat database, the NSER (National Socio-Economic Registry) database, and other databases covering immigration (travel patterns), civil and public service payrolls, utility bills, telecommunications subscriptions, vehicle registration and asset ownership/taxation to ascertain poverty status. The data analytics system for targeting was meant to be transparent and foolproof in distributing cash to poor, deserving families. Those meeting the pre-determined eligibility criteria received a confirmatory SMS message with details of the nearest distribution centre from which to collect their cash grants.

Through the social justice lens, there is criticism that the targeting process excluded many women from receiving social protection, as many in this social stratum do not own a mobile phone and lack the financial and digital capabilities needed even to submit their request online. Besides exacerbating gender inequalities, there are concerns that this programme was perhaps a regression from the previous scheme, Kafalaat, which made electronic disbursements through biometric ATM cards. Moreover, opportunities to scale up the country’s extensive branchless banking infrastructure for digital disbursements were missed, especially given the large volume of transfers that would have boosted the country’s financial inclusion indicators. With lockdowns in effect and physical distancing measures mandatory, there were major concerns about the spread of COVID-19, given that people had to queue to receive cash grants and use biometric identification (fingerprints on machines). From a data justice perspective, we also argue that citizens’ biometric data were unethically shared with other public/private institutions in the absence of any digital data regulation laws in Pakistan.

16:30-18:00 Session 5A: Human Rights and AI: Human rights and surveillance technologies
Inter-Legality and Surveillance Technologies: Looking at the Demands of Justice beyond Borders

ABSTRACT. On 19 May 2020, the German Constitutional Court ruled that telecommunication surveillance of non-German individuals outside German territory violates the German Basic Law (BVerfG [2020] 2835/17). In the judgment, the Court conducted a constitutional review of certain provisions of the German Act on the Federal Intelligence Service (BND-Gesetz), which allow German authorities to collect and process communication data between non-German nationals outside German borders. The complainants, a group of journalists and NGOs, claimed that the provisions of the Act violate their right to privacy and the freedom of the press. The Government objected that it is not bound by the German Basic Law when conducting surveillance activities on foreign individuals on foreign soil. However, the Court found violations of Articles 5 and 10 of the German Basic Law and stated that the Legislature must revise the existing provisions in accordance with the Basic Law by 31 December 2021.

The reasoning of the Court raises a number of crucial questions from both the international and the European human rights law perspective. The most important is whether the German Federal Government is bound by the provisions of the German Constitution when it interferes with the rights of non-German individuals in non-German territory. Relying on international human rights law, the Court answered affirmatively, raising three main arguments in favor of the accountability of the German Federal Government on foreign soil, mainly by interpreting paragraphs 2 and 3 of Article 1 of the Basic Law. Indeed, pursuant to Article 1(3) of the Basic Law, the Court stated that German authorities are comprehensively bound by the fundamental rights of the Basic Law, without restriction to the German territory (Staatsgebiet) or the German people (Staatsvolk) (i).

In this context, by referring to the history of the Basic Law and applying a teleological interpretation, the Court clearly emphasized that the Basic Law aims at a comprehensive reading of the fundamental rights rooted in human dignity. Secondly, in the light of the second paragraph of Article 1 and the Preamble, the Court found that the Basic Law recognizes inviolable and inalienable human rights as the basis of every community, of peace and of justice in the world. Thus, the fundamental rights of the Basic Law are placed in the context of international human rights guarantees. This requires that the fundamental rights of the Basic Law be interpreted in the light of Germany’s international-law obligations (ii). Finally, according to the Court, new technological developments and their uses require a comprehensive reading of paragraph 3 of Article 1 that takes into account the threats to fundamental rights and the resulting shifts in power. It follows that German authorities are subject to international human rights obligations regardless of territory in the context of new technologies that offer cross-border services (iii). In other words, such a comprehensive reading of the German Constitution is particularly salient for new technological developments that allow state powers to reach into third countries.

I believe that this Judgment demonstrates a successful example of inter-legality. Therefore, this study aims at analyzing the judgment from such a perspective (Palombella & Klabbers, 2019), through a three-step analysis: Taking the vantage point of the affair – the case at hand – under scrutiny (i), understanding the relevant normativities actually controlling the case (ii), looking at the demands of justice stemming from the case (iii).

In the light of the first step, the judgment has clearly analyzed the virtuality of the “vantage point” of the case by considering the features of advanced surveillance technologies and their fundamental differences from traditional technologies. The Court has highlighted that, in line with new technological developments, limiting the application of fundamental rights to national borders would leave individuals vulnerable and cause the scope of fundamental rights protection to lag behind internationalization. According to the Court, this international dimension, which allows communication within states and beyond state borders, blurs the distinction between domestic and foreign.

Secondly, according to the Court, the relevant normativities controlling the case are both international norms and the German Constitution. This result stems from the interpretation of paragraphs 2 and 3 of Article 1, which focuses on their “multifaceted nature”. In other words, the Court has read the “Normtext” of these provisions and recognized that they are composed of more than one system-sourced positive law. This implicitly follows the perspective of inter-legality, which changes the “usual, traditional perspective, a perspective that is limited by the political, legal and cognitive borders of a single self-contained system” (Palombella & Klabbers, 2019). In this context, the judgment has primarily focused on the case-law of the European Court of Human Rights, making clear references to the cases of Al-Skeini and others v. United Kingdom (7 July 2011, No. 55721/07, § 132), Big Brother Watch and others v. United Kingdom (13 September 2018, No. 58170/13 et al., § 271) and Centrum för Rättvisa v. Sweden (19 June 2018, No. 35252/08). In the light of these cases, the Court has inferred that the case-law of the ECtHR is largely based on the doctrine of ‘effective control over territory’ and is still not clear on the protection against surveillance measures taken by the Convention States. However, the Court has highlighted that Article 53 of the European Convention on Human Rights allows it to provide further protection for fundamental rights.

With regard to the third step, guided by the first and the second, the Court has implicitly applied the doctrine of ‘effective control over rights’. Unlike the ECtHR’s doctrines of effective control over territory and over persons, ‘effective control over rights’ asks whether states have effective control over the enjoyment of the rights in question. It has been adopted by the Human Rights Committee (General Comment No. 36), the Office of the United Nations High Commissioner for Human Rights (OHCHR, Report on the Right to Privacy in the Digital Age, 2014), and the Inter-American Court of Human Rights (Advisory Opinion on the Environment and Human Rights, 2017). The doctrine has also been proposed by several human rights experts in view of the realities of surveillance technologies (Margulies P. [2014] “The NSA in Global Perspective: Surveillance, Human Rights, and International Counterterrorism” 82 Fordham Law Review 2137–2167 at 2148–52; Land M., Aronson J. [2018] New Technologies for Human Rights and Practice, Cambridge University Press, at 236–39).

Although the Court does not explicitly refer to the doctrine of effective control over rights, its reliance on international human rights law, through general references to Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights, both of which protect the right to privacy, has enabled the fundamental rights protected under the Basic Law to reach beyond borders. This approach has enabled the Court to fulfill the demands of justice stemming from the case, namely the third step of our analysis.

I conclude that, in general, the perspective of inter-legality has enabled the Court to determine the virtual nature of the debate, to embrace a composite perspective accounting for relevant normativities and finally to adopt the ‘just’ solution.

AI in Smart Cities and Tackling Private and Public Surveillance Through Privacy by Design Principles

ABSTRACT. Along with the advantages smart cities and AI technologies bring, there are also possible pitfalls of implementing Big Data analytics and the Internet of Things as part of ordinary daily life in cities. Do these technologies respect our autonomy? Do governments protect the interests of their citizens, or do they use these technologies to gather data on them? Many scholars define online platforms as digital public spheres, and these public spheres curate the public debate. Smart cities, as physical and analytic public spheres, are akin to online platforms in that sense. Therefore, smart cities raise privacy, governmental surveillance, freedom of expression and, most importantly, autonomy issues. I address each of these issues and present a solution focused on privacy by design principles to balance the interests of stakeholders while preserving individuals’ cognitive liberty. The paper covers the issue in six parts: (1) looking at the Panopticon theory of Michel Foucault and Jeremy Bentham to identify the societal context, the passive influence of surveillance, and the chilling effect on citizens exercising their right to self-determination, accessing information, and expressing minority opinions; (2) including empirical evidence on the chilling effect of surveillance on online platforms such as Wikipedia after the Snowden revelations in 2013, and of physical and online surveillance such as the New York Police Department’s surveillance program conducted on Muslim-Americans after the 9/11 attacks; (3) identifying the impractical application of freedom of thought as an absolute fundamental right in the age of behavioral advertising, political propaganda and manipulation, big data analytics, and robots on both online platforms and physical public spheres; (4) identifying which activities could be considered legitimate government interference and which could not; (5) assessing the chilling effect discussed above and possible governmental interferences with privacy and mental autonomy, and the degree to which they comply with Article 10 of the ECHR, Article 18 of the UDHR, and Article 19 of the ICCPR; (6) drafting a solution based on the seven principles of privacy by design, also reflected in Article 25 of the GDPR, and showing how these principles could reduce the chilling effect and passive influence of surveillance, increase the democratic participation of citizens and minorities instead of creating an empire of fear, and encourage the practical application of freedom of thought.

The Surveillant University: Remote Proctoring, AI, and Human Rights

ABSTRACT. Although distance education is not new, the COVID-19 crisis has dramatically increased the number of people around the world who are carrying out their university studies from home. Even after the pandemic ends, universities may continue to experiment with and further develop distance learning opportunities alongside existing programs. In this context, universities have turned to technology as a way to ensure that those studying and writing exams from home are not engaged in cheating. Remote proctoring technologies such as Respondus Monitor, Proctor Exam or ProctorU combine audio and video surveillance with artificial intelligence technologies to identify movements or behaviours that could be associated with cheating. Students may have no choice, or may be given limited or onerous options to avoid the use of these technologies. The technologies raise immediate privacy, human rights and ethical issues. In addition, the data collected via these technologies may also be used or shared by the companies for other purposes, creating ongoing and often non-transparent risks of harm. The paper will be based upon data gathered from a close examination of the technologies themselves, their adoption and implementation and their terms of use. This research examines selected AI-enabled remote monitoring software in order to understand and explain how such technologies function and what data they collect. It considers the adoption of these technologies in specific universities in Canada, the U.S. and the EU in order to assess the processes and considerations that led to adoption and implementation. It also examines the multiple terms of use in play: those that govern the relationship between the university and the technology company; those between the technology company and the student; and those between the student and the university. 
Based on this data, the paper will analyze the adoption and use of AI-enabled remote proctoring technologies by universities, and will identify the privacy, human rights, and ethical issues that arise from their use.

16:30-18:00 Session 5B: Data Governance: Reconceiving Data Governance
Beyond Data Ownership

ABSTRACT. Data ownership may currently be the proposal receiving the most significant attention in the mainstream conversation about privacy. This article shows that property alone is self-defeating as a privacy measure. Our digital environments are marked by substantial power imbalances between data processors and data subjects. Data ownership proposals will not alter these realities. I show that data ownership not only may be aiming to achieve the wrong thing but, moreover, fails to achieve its goal: control over personal information.

The article begins with a much-needed legal clarification about what actually is meant by data ownership. I show that proponents of data ownership do not argue for the bundle of ownership rights that exists over property. Instead, their vision is grounded in the transfer of privacy rights from the user to data collector/processor—this is a property rule (a trade rule). But property rules (trade rules) alone unequivocally fail at protecting data subjects because they cannot safeguard against the risks of future uses or abuses of personal information. This latter problem can only be remedied through ex-post accountability mechanisms, such as liability rules that allow for compensation based on harm. The article demonstrates that the fusion of trade rules and accountability is a pre-condition to an effective privacy regime.

The normative implications of the article’s finding go beyond debunking the popular data ownership initiative. The article operationalizes how to remedy the problems of property rules through two current policy discussions in privacy regulation: purpose limitation and private rights of action. It suggests how legislators and courts could, in concrete terms, reform these two areas to optimize privacy protection.


ABSTRACT. In light of the evolving data-driven environment, where data is central to ever more processes and operations in both the private and the public domain, to people’s private lives and to society at large, two ideal models of data regulation have generally been proposed: first, giving individuals control over their personal data, and second, relying on legal standards and governmental enforcement of those standards. Both models, however, have inherent flaws and weaknesses. Data, for example, are nearly impossible to control: to exert property rights it is necessary to know who has your data, yet with covert surveillance, cookies, and ubiquitous computing this is practically impossible to know. Moreover, with technology evolving rapidly, specific norms soon become outdated and supervisory bodies have difficulty keeping up. Considering these challenges, it comes as no surprise that new models have emerged as alternatives to these two models of data regulation. What these new models have in common is that they all, to a certain extent, build on trust between citizens and the organizations processing their data, and on the ethical principles and obligations that come with these types of relations. Trust-based governance models focus on the broader, ethical obligations that come with dominion over large sets of data that are not the property of these organizations. In our paper we start with a conceptual analysis of trust. We analyse how central ideas such as vulnerability, developing positive expectations, and trustworthiness play out in the data-driven domain. We distinguish between three forms of trust-based governance models: (a) data stewards, custodians and curators; (b) information fiduciaries; and (c) data trusts. For every model we identify and compare the underlying trust relations and reflect on their inherent challenges.
We argue that these trust-based governance models can play an important role in making data processing actors more reliable and accountable, but that they also have their own weaknesses. We find that for some models it is difficult to genuinely encapsulate the stakes of citizens due to conflicting interests and that in order for these trust-based models to work, they need proper support from control-based regulatory strategies.

The (gendered) vulnerable data subject

ABSTRACT. Vulnerability is an emerging topic in many different fields, but in data protection and privacy the discussion has rarely engaged with gender studies. This paper investigates the notion of the ‘vulnerable data subject’ from a gender perspective, to question whether gender should be regarded as a factor of vulnerability at all and, if so, how. In addition, what do these reflections tell us about the (gendered or un-gendered) notion of the ‘standard data subject’? Even though the GDPR mentions the term ‘vulnerable data subject’ only incidentally, and refers explicitly only to children, several Data Protection Authorities (e.g., in Spain and Poland) have considered “being female” a potential source of data subject vulnerability (e.g., in the case of consumers who are victims of sex-related crimes). The US privacy tort, as originally conceived, was built on gendered notions of female modesty, suggesting women were vulnerable (Skinner-Thompson), and thus connecting women’s privacy claims to the ‘wrong kind of privacy’ (Allen). Looking at the history and foundations of privacy and data protection law surfaces questions such as whether the ‘average data subject’ in privacy and data protection legislation is, by default, a man, and whether women might have to be regarded as vulnerable data subjects just because they are women.
This paper will then look into law and economics analyses of consumer behaviour (where “gender” and “sex” are relevant variables in consumer vulnerability), but also political philosophy and, in particular, gender studies, where we can observe a real intellectual polarisation: on the one hand, the universalist approach to vulnerability (Butler, Fineman, Dodds, Mackenzie), according to which every human is vulnerable, and any additional "label" of vulnerability can lead only to stigmatisation and "pathogenic vulnerability"; on the other hand, the particularistic approach (Cole, Goodin), according to which some subjects are more vulnerable than others (in particular, women are more vulnerable, i.e., subject to adverse effects, than men in many contexts: the workplace, education, etc.). A third way might be Luna's "layered" theory, based on the contextual, relational (even temporary) nature of vulnerability. This solution is compatible with the layered, risk-based approach of the GDPR, but also with the intersectional approach in gender studies.

16:30-18:00 Session 5C: Open Track: Interactive Translation Workshop
INTERACTIVE TRANSLATION WORKSHOP: Exploring Divergent Meanings of Key Concepts in ‘Responsible AI’ with Scientometrics

ABSTRACT. Background. Over the past few years, governments and corporations have proliferated numerous principles and guidelines invoking normative concepts around the use and development of AI (Jobin et al. 2019). While there is agreement that responsible use and development of AI requires the integration of a wide variety of perspectives, it is also increasingly important to be aware of the different understandings of key concepts as they are situated in those different perspectives, of the potential for misunderstanding between perspectives, and of the consequences of both the obvious and the much more subtle ambiguities of the concepts (Haraway 1988, Ganesh et al. 2020). Such ambiguities have consequences in debates about AI and automation, but more generally we feel that misunderstandings over key terms present a problem for attempts to develop regulation in response to crises.

Activity description. In a 90-minute activity for conference participants, who we assume come from diverse disciplinary and personal backgrounds, we will let participants discover and explore ambiguities of key concepts from the debates on `Responsible AI’. A further goal of the workshop is to reflect on how awareness of the divergence of meanings can lead to better regulatory processes.

We will use a novel (and somewhat experimental) discussion technique based on scientometric visualizations. Scientometrics is the quantitative study of scientific work, mainly through publications. In particular, we will use variations on a technique called co-word analysis (van Eck & Waltman 2009) to focus on a handful of key terms and how their use varies across papers. A meta-goal of the activity is to explore how productive scientometric visualisations can be as a conversational element.
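At its core, co-word analysis counts how often pairs of key terms co-occur in the same documents; the resulting pair counts are what scientometric tools render as term maps. The following is a minimal, hypothetical sketch of that counting step (not the actual van Eck & Waltman implementation, and the example documents and term list are invented for illustration):

```python
from itertools import combinations
from collections import Counter

def coword_matrix(documents, terms):
    """Count how often pairs of key terms co-occur within a document.

    Returns a Counter mapping alphabetically ordered term pairs to the
    number of documents in which both terms appear together.
    """
    pairs = Counter()
    for doc in documents:
        text = doc.lower()
        # Which of the tracked terms appear in this document?
        present = {t for t in terms if t in text}
        # Every unordered pair of co-present terms counts once per document.
        for a, b in combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs

# Invented mini-corpus standing in for conference paper abstracts.
docs = [
    "We audit bias and fairness in machine learning models.",
    "Accountability and transparency demand explainability of machine learning.",
    "Statistical bias differs from societal bias; fairness metrics vary.",
]
terms = ["bias", "fairness", "accountability", "transparency",
         "explainability", "machine learning"]
m = coword_matrix(docs, terms)
```

On this toy corpus, ("bias", "fairness") co-occurs in two documents while ("accountability", "transparency") co-occurs in one; a visualization layer would then place frequently co-occurring terms close together on a map.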

More specifically, in this activity we will use visualizations of the corpus of papers presented at the 2021 FAccT conference, one of the most prominent multi-disciplinary platforms in the field, to explore and discuss key concepts such as `bias’, `accountability’, `explainability’, `machine learning’, `science’, `fairness’, `justice’, etc. The FAccT conference series brings together a diverse array of scholars interested in Fairness, Accountability and Transparency in AI and Machine Learning. As such, the platform has established itself as a (mostly academic) interdisciplinary community, now organisationally situated within the Association for Computing Machinery (ACM). But how do the different disciplines and authors invoke the central terms of fairness, accountability and transparency? Are there key differences between ‘bias’ as a socio-cultural phenomenon used by social scientists and ‘bias’ as a statistical term used by computer scientists? Does ‘accountability’ have the same connotations across fields and domains? Do stakeholders understand ‘explainability’ in the same way as designers, and how does that relate to their notions of ‘transparency’? What about ‘machine learning’ and ‘science’? What are the consequences of such possible (and mostly unnoticed) misalignments in the ways these terms are used in shared debates?

Proposed timeline of the session (total 90 minutes):
5 mins: Welcome and introduction
10 mins: Presentation (by Waltman and Moats) of scientometric maps that highlight different usages of terms across papers in the conference
5 mins: Dividing participants into small mixed-discipline groups and assigning key terms
50 mins: Small-group discussion in breakout rooms using visualizations
20 mins: Large-group discussion and feedback on the workshop

Target group size: ca 30 participants.

16:30-18:00 Session 5D: IP Law: Book presentation - Injunctions in Patent Law: A Trans-Atlantic Dialogue on Flexibility and Tailoring

Jorge Contreras & Martin Husovec, eds., "Injunctions in Patent Law: A Trans-Atlantic Dialogue on Flexibility and Tailoring" (Cambridge University Press, forthcoming).

16:30-18:00 Session 5E: Competition and Market Regulation: Extraterritorial reach of platform regulation and design of effective institutions
Panel 'Regulating digital platforms: convergence or divergence'

ABSTRACT. This panel builds on the keynote speech of Anu Bradford. It aims to discuss optimal techniques to regulate digital platforms and compare regulatory approaches across jurisdictions: what can we learn from different institutional frameworks or enforcement methods used across the world and are there common global values that deserve protection?

Chair: Filippo Lancieri (University of Chicago Law School)

Panelists: Anu Bradford (Columbia Law School) Nataliia Bielova (Inria Sophia Antipolis) Catherine Batchelor (UK Competition & Markets Authority) Inge Graef (Tilburg University)

16:30-18:00 Session 5F: Energy and Climate Crisis: Justice and consumer protection
Regulating digital technology and ‘green growth’: ‘Sustainable’ extractivism, e-waste, planned obsolescence and the right to repair

ABSTRACT. In this paper we argue that numerous factors contribute to, and drive, extractivism, production, consumption and waste, including wider capitalist and regulatory incentives, regulatory capture and regulatory failure. In particular, we take aim at the concept of ‘green growth’ and at ‘greenwashed’, ‘sustainable’ technologies that are considered ‘environmentally friendly’ but which contribute to environmental and social harm as they drive ongoing extraction, production and waste in pursuit of economic growth. We contend that the claim that economic growth under the current capitalist order can be ‘green’ needs to be called into question, and we argue that there is a need to move beyond capitalist realism and its vested arguments that we can consume our way out of the current unsustainable trajectory. We present three examples to illustrate our argument that, if we are serious about ecological justice, we need to work towards decoupling our economies from extractivism: (i) the extraction and mining of metals and minerals on land and in the deep sea for ‘sustainable’ or ‘green’ technologies; (ii) the disposal of e-waste; and (iii) the planned obsolescence of digital devices and limits on the right to repair. Aligned with the conference theme of ‘regulating in times of crisis’, we focus on the institutional, regulatory and governance practices (and their limitations) related to each of these examples, including the United Nations International Seabed Authority, the European Union Directive on Waste Electrical and Electronic Equipment (WEEE), and the ‘Right to Repair’ Directive that will enter into force in 2021.

The emerging collaborative economy in the energy sector. Consumer and prosumer protection in peer-to-peer electricity platforms.

ABSTRACT. The rise of prosumers and the possibility of conducting peer-to-peer transactions without intermediaries trigger a so-called process of “democratization” of the energy sector and challenge the current legal framework. This paper aims to situate peer-to-peer electricity trading within the current European legal framework from a consumer law perspective, devoting particular attention to the recast Renewable Energy Directive 2018/2001 and Electricity Directive 2019/944. The main research question is how to guarantee the effectiveness of consumer protection mechanisms, elaborated to protect consumers in a context of asymmetrical relationships with their electricity suppliers (the business-to-consumer model of regulation), in peer-to-peer marketplaces, where prosumers interact on a level playing field and disintermediation technologies affect the intermediary role of suppliers. According to the new Directives, the digitalized self-production of energy is considered a disruptive innovation, able to change the structure of the electricity market. However, this innovation does not disrupt the law itself, because prosumers and consumers must retain their rights as energy consumers vis-à-vis energy suppliers. This leads us to investigate the structure of legal interactions between market actors in peer-to-peer environments, to work out how prosumers and consumers can benefit from legal protection mechanisms still designed according to the traditional business-to-consumer model of regulation. This issue is part of the legal debate on the collaborative economy that arose from the findings of the European Court of Justice in the Uber and Airbnb cases.
For this reason, defining which legislation is applicable, and establishing whether and under which rules the platforms themselves can be held accountable for ensuring the legal protection of electricity consumers, is of crucial importance to the trustworthy spread of the collaborative economy in the energy sector. The proposed solution is to qualify peer-to-peer electricity platforms both as providers of information society services and as electricity suppliers. In this way, consumers and prosumers could benefit from the transparency requirements and rules on online contracts contained in the E-Commerce Directive and, at the same time, would not be deprived of the protection mechanisms laid down in the sector regulation, tailored to their specific needs in the offline electricity market.

Emerging modes of engagement with crisis information in troublesome times

ABSTRACT. The last months have taught us that distinctive modes of engagement with ongoing socio-ecological disasters such as the coronavirus pandemic and climate change are emerging. We focus especially on the ‘turn to sensing’ as a human attempt to elicit, receive, and process impressions and information, both in the mode of intuitions or feelings and in terms of data, quoting Fleur Johns. Our reflection starts by elaborating on the potential of sensing as a way to cope with unfolding events that qualify as ‘hyperobjects’. We see the turn to sensing as a productive means to engage differently with events that are massively distributed in time and space, and whose manifestations call for a different configuration of existence. In doing so, we attempt to show how sensing can participate in Donna Haraway’s invitation to build more livable futures by 'staying with the trouble' that dwelling on a damaged Earth implies for human and more-than-human life forms. We then delve into situated examples of ‘citizen sensing’ initiatives and conclude by questioning how the insights drawn from such ‘sensing practices’ can help us cope with and act upon the risks associated with pandemics and climate change. This analysis will ultimately show the inner desire and essential need that humans have to access (good) information in times of crisis. It will also uncover the alternative, emerging trend of civic actors producing information when they perceive that they cannot access, or trust, official or mainstream information, both in the context of ongoing health crises and in that of environmental crises. We conclude by reflecting on the implications this may have for legal entitlements, and even for the regulatory system, when faced with these alternative modes of engagement.

18:00-19:30 Session 6: IP Law: Strategizing IP: Seeking the Solution Within
Patents and Green Innovation: From linearity to systems analysis of 'wicked problems'

ABSTRACT. This article considers how to make the patent system fit for purpose for societal challenges like climate change. Meeting the Paris Agreement emissions-reduction pledges requires rapid innovation in climate-friendly technologies, yet it is doubtful that patents induce enough green innovation. This article's central claim is that a shift from linear to systems thinking may equip the patent system to drive innovation for societal challenges. 'Linearity' refers to an outlook on innovation in patent theory reliant on neoclassical economics, an outlook incompatible with the prevailing systems account in the economic literature. This work considers IP as an instrument for promoting public welfare by focusing on green innovation. It analyses what the green innovation case can teach the incentive theory debates at the heart of patent justification, and examines how a theory problem influences policymaking. Despite its stated climate ambition, the Paris Agreement is silent on intellectual property (IP). The rationale is that weakening IP so that developing countries can access green technology would erode incentives to invent: a misconception derived from linearity. With short timescales to meet the Paris goals, the need to accelerate green innovation warrants rethinking known defects in innovation policy. A shift to systems thinking is a crucial first step.

Scope of technology transfer under restrictive IPR terms of foreign technology collaboration: A case study of foreign affiliates in India

ABSTRACT. Foreign technology collaboration is widely perceived as an effective means to address the technology gap in any developing economy, and the transfer of advanced technology through the licensing route, via FDI or pure technical collaboration contracts, is highly desired in such economies. However, any technology purchase is governed by the specific terms of the technical collaboration contract imposed by the technology licensor on the licensee, terms prone to heavy bias in favour of the technology supplier owing to its inherent ownership and control over the technology. The licensor may exercise continued control over the technical asset by incorporating a number of restrictive and prohibitive intellectual property conditions designed particularly for this purpose. These prohibitive clauses collectively ensure that the purchased technology or know-how remains an exclusive asset of the licensor, leaving very limited scope for its eventual ‘real acquisition’ or ‘absorption’ by the licensee, either during the term of the agreement or after its expiry or termination. The specific terms of a technical collaboration contract that may significantly limit the scope of technology transfer to the affiliated or unrelated licensee include direct clauses on the non-transferability and indivisibility of the licence, strict confidentiality of intellectual property, restrictions on the field of use of the technology, strict contract duration, stringent termination and post-expiration requirements, restrictions on research and development, grant-back provisions, and the like. Owing to the lack of publicly available information on technical collaboration contracts in India, these underlying aspects of the technology transfer process, whether through intra-firm or open-market technology purchases, remain largely understudied.
The present study identifies a range of restrictive IPR conditions in the technical collaboration contracts of foreign-affiliated manufacturing firms (about 100) in India and attempts to assess the scope of technology transfer in the presence of these limiting conditions. The relevant information on contract terms is collected from case documents on tax disputes over technology payments, which usually refer to the initial collaboration terms. Disclosures on technology absorption status in the annual financial statements of select firms are consulted for insights into the process and extent of technology transfer. The paper emphasizes the need for international norms on the transfer of technology under mutually agreed and reasonable terms of technical collaboration and IPR conditions, so as to ensure real transfer of technology to disadvantaged developing jurisdictions.

‘National Domain Dispute Resolution Policy’ as a way forward in settlement of domain name disputes in Sri Lanka

ABSTRACT. A domain name is a relatively user-friendly form of an Internet Protocol (IP) address through which Internet users can access website servers. Domain names are assets, crucially important to the very DNA of a brand. Generally, a domain name performs the same function online that a trademark serves in offline business dealings and transactions. Domain names can be protected as trademarks or service marks at the national and international levels under intellectual property law, provided all conditions for protection are satisfied. Where a domain name is not registered, the goodwill attached to the name may be protected through the action of passing off. Owing to the pervasive penetration of Internet access, legal disputes over domain names have steadily increased in the past few decades. While international, regional, and national legal frameworks address this emerging concern, in Sri Lanka domain name disputes are primarily referred to traditional litigation as IP rights violations. Alternative dispute resolution mechanisms at the national level remain futile as a remedy for such disputes because Sri Lanka lacks a national domain dispute resolution policy. This paper attempts to evaluate the effectiveness of the present Sri Lankan framework on domain name disputes and to identify the significant drawbacks in resolving them. As a comparative study, the research investigates the arrangements of the United Kingdom and India on these issues. International standards such as the Uniform Domain-Name Dispute-Resolution Policy (UDRP) implemented by ICANN are also considered. The study is primarily based on qualitative data: legislation, policies, laws, and judicial decisions are the primary sources for the research, while statistical reports, journals, scholarly articles, and publications serve as secondary sources.
The study urges refinement of the present framework on domain name disputes and suggests the formulation of a National Domain Dispute Resolution Policy in Sri Lanka as a significant step towards an effective framework for resolving such disputes.