
09:00-10:00 Session 7: Panel Session BILETA EB: Self-generated Imagery of Children: Reconciling the Law, Norms & Safety
Not in my house

ABSTRACT. The paper explores the legal issues of data collection, aggregation, and processing through the use of smart meters. Smart meters raise public concerns and may lead authorities to put their roll-out on hold or to allow customers to oppose their installation at their discretion. This sentiment could endanger the technology’s success, particularly at a time when people’s awareness of privacy and data protection is higher than ever. Consequently, a balanced framework is required, one that avoids a simple yes-or-no approach. The aim is to identify the privacy risks faced by customers who have a smart meter. The appliance could substantially contribute to building a surveillance society in which unauthorized actors could track occupant behavior in buildings, with the possibility of commercializing such information or using it for other purposes. This topic is of high interest from the point of view of the human rights at stake. On the one hand, there is the need for sustainable development (Article 37 EU Charter) through renewable energy and efficient energy management, made possible by digitalized grids. On the other hand, there is the preservation of a free and democratic society in which the privacy of citizens is protected (Articles 7 and 8 EU Charter). A just legal framework would aim to balance these two interests harmoniously and rationally. The paper takes as its reference the GDPR, recognized globally as a model for data protection law, combined with the EU laws on energy efficiency and electricity. A clear view of how these laws relate to each other and function together is of fundamental importance for the successful implementation of a smart metering system. The paper clarifies under what circumstances electricity data fall under the definition of personal data, which triggers the application of the GDPR, taking into consideration the relevant distinctions among different types of electricity data.
As the GDPR is considered the reference framework, the smart metering system has to undergo a legal compliance assessment: classifying the actors according to the roles identified in the Regulation, identifying the legal basis for data handling, applying the purpose limitation and data minimization principles, applying Data Protection by Design and by Default, conducting an impact assessment, and fulfilling the fundamental principles and rights expressed by the Regulation. The acceptance and trust that result from legal compliance are an important condition for the smart metering system to succeed on a wide scale. The present work adds a more systematic view of the EU legal framework and its requirements in the context of data protection. The existing literature is fragmented and insufficient to keep pace with the political and legislative momentum of the energy transition (see the EU Green Deal); the latter needs more critical attention.

The GSR v the GDPR: A tangled web

ABSTRACT. On 18 March 2018, a self-driving test vehicle from taxi service Uber collided with a pedestrian, who died from her injuries. Camera footage from inside and outside the Uber made headlines around the world, thereby revealing the identity of the ‘safety driver’. This camera footage, together with other data on the vehicle’s driving behaviour, proved of great importance for the reconstruction of the accident. Although at the time this vast amount of information on a crash might have seemed unusual, it could soon become the norm.

From 2022 onwards, vehicles will need to be equipped with an Event Data Recorder or EDR (art. 6 General Safety Regulation (EU) 2019/2144; hereafter: GSR). Such an EDR has to collect data on, for instance, the vehicle’s speed and braking from a period just before, during and after a collision. These data could be of interest to the public prosecutor, parties in civil law cases and national authorities (e.g. road authorities). However, the collection and storage of such data related to a collision gives rise to significant data protection concerns.

At the moment, the requirements for the kinds and amounts of data which EDRs need to collect and store are vague. Although a list of data types is given, this list is non-exhaustive. However, as a result of the ongoing integration of digital systems into vehicles, the list of available data is much longer. We propose that the legislator should set a clear limit on the types of data which should be collected. Furthermore, the period for which data should be stored should be made explicit in the law.

More striking, however, is that the GSR follows an internally inconsistent approach to data regulation: on the one hand it strives for complete anonymity, while on the other it encourages the collection of all available data. The requirement of anonymization laid down in art. 6(4)(c)(ii) GSR is thus a practical impossibility. More fundamentally, this requirement seems to contradict other provisions of the GSR. If the data is anonymized ex art. 6(4)(c)(ii) GSR, it makes no sense to require GDPR compliance in, for instance, art. 6(4)(d). After all, anonymized data by definition falls outside the scope of the GDPR.

With regard to the ‘driver drowsiness and attention warning system’ and ‘advanced driver distraction warning system’ the internal inconsistency of the GSR becomes even more prominent. On the one hand, art. 6(3) GSR demands that the data collected by these systems should not be accessible or made available to third parties at any time and should be immediately deleted after processing. On the other hand, art. 6(4)(a) GSR demands storage of all the relevant input parameters of the on-board active safety and accident avoidance systems.

In conclusion, the GSR is a tangled web that poses challenges to itself, to the EU data protection framework as a whole, and to the legal community.

11:00-11:15 Coffee Break
11:15-12:45 Session 9A: Blockchain and Smart Contracts
A Predictive Analysis of Wearables at Work: The Lawfulness Principle and Employers’ Implementation of Fitbits

ABSTRACT. In February 2019, the CEO of Fitbit told CNBC that 6.8 million individuals wear Fitbit devices as a result of corporate wellness programs. Across Europe, such wellness programmes and other monitoring regimes increasingly integrate fitness or activity trackers, with employees sharing the devices’ data with their employers. In the Netherlands, the Dutch Data Protection Authority recently shut down a company’s attempt to incorporate wearable fitness trackers into its workplace on the grounds that the scheme failed to comply with data protection standards. This case demonstrates the urgency of considering the impact of legal regulation on the implementation of Fitbits in the workplace.

The aim of this paper is to fill a gap in the existing literature by offering an analysis of the GDPR compliance of such schemes, with particular focus on the lawfulness principle contained in Article 6 GDPR and by reference to the implementing measures in the UK and the Netherlands where necessary. The paper will situate the discussion within the literature on privacy, monitoring and data analytics in the employment context. Two models of implementation based on current and emerging business practice are then established: one optional and wellbeing-focused, the other mandatory, in pursuit of performance management or health and safety aims.

The argument will then be made that the data received by employers from a Fitbit account is “health data” under GDPR. There are numerous ways in which employers could draw conclusions about an employee’s health status or risks, whether through analysis of the data over time or through the combination of Fitbit data with other sources of data. This conclusion leads to the application of an additional layer of restrictions upon the use of the data in accordance with Art.9 GDPR.

The final section of the paper examines potential legitimate bases that employers may seek to rely on to comply with the lawfulness principle. Whilst consent is commonly put forward as a possibility, we draw on existing literature in employment law to argue that reliance on consent is inappropriate due to the imbalance of power present in the employment relationship. Other legitimate bases contained in Articles 6 and 9 revolve around necessity, which introduces questions regarding whether less invasive measures are possible given the sensitivity of the data collected and the privacy concerns that these programmes involve. Each basis is also targeted towards a specific objective, many of which appear at odds with the true purpose for which employers are seeking to implement Fitbits in their workplace.

To conclude, the paper suggests a less privacy-invasive alternative that employers should consider adopting and warns that programmes currently used or in development in European jurisdictions may not be compliant with the principles of lawfulness.

Towards a co-regulatory EU framework for online content sharing service providers’ liability for copyright infringements: from theory to practice.

ABSTRACT. ‘Online content sharing service providers’ is the new term for the online intermediaries that host content online. The main legislative tool addressing their liability for copyright infringements within their networks is Article 17 of the newly introduced Directive on Copyright in the Digital Single Market. This provision endorses a new liability regime, primary liability rules, and an array of other new developments. One of the most important is the creation of an impartial body to deal with disputes between users and online content sharing service providers. In particular, Article 17(9) para. 2 states: “Member States shall ensure that out-of-court redress mechanisms are available for the settlement of disputes. Such mechanisms shall enable disputes to be settled impartially and shall not deprive the users of the legal protection afforded by national law, without prejudice to the rights of users to have recourse to efficient judicial remedies.” In line with this, there is a growing body of academic scholarship in favour of establishing an authority that would deal with disputes between internet users and online content sharing service providers. However, without adequate safeguards and a specification of its principles and functions, such an authority would lack legitimacy and the validity of its decisions could easily be challenged. In this light, building upon Article 17(9) para. 2 of the Directive and the criticism surrounding the creation of such an authority, this paper argues that the creation of a supervisory authority for online intermediaries would have merit. In doing so, I draw out the normative and theoretical underpinnings of a supervisory authority for online content sharing service providers in order to justify its creation.
Further, in order to specify the principles under which this authority would operate, I examine authorities operating in other fields of law, such as the data protection and competition law authorities across EU Member States. With regard to the functions of this supervisory authority, I draw parallels with existing supervisory authorities in the Greek and Italian legal systems.

‘Interoperability’ and beyond: Mapping the needs from a regulatory perspective

ABSTRACT. ‘Interoperability’ means the ability of two different and independent information and communication technology (ICT) systems to exchange information and to use the information that has been exchanged. Interoperability is crucial for running ICT networks and services, serving as a central thread for meeting society’s ICT needs. Remarkably, all the relevant disciplines of EU law, i.e. intellectual property legislation, competition law and the electronic communications regulatory framework (ECRF), have embraced interoperability-based measures within their respective domains, driven by various concerns such as protecting competition or consumers.

Although interoperability has so far figured as one of the important policy items on the EU agenda, it has not been translated into ICT regulation to an equivalent degree. While the ICT-based transformation, sometimes echoed in the fourth industrial revolution or Web 4.0 paradigm, has given rise to new challenges, e.g. Artificial Intelligence (AI), cloud computing and the Internet of Things (IoT), these have yet to be addressed from a broader interoperability-based perspective. From this point of view, whether and to what extent interoperability needs to be regulated from a holistic perspective poses a compelling question for policy makers.

Pursuing a holistic perspective means considering ICT networks and services not only from the technical or basic interoperability perspective but also from a future-proof regulatory viewpoint. To that end, the current EU legal framework first needs to be evaluated from a multidisciplinary legal viewpoint incorporating intellectual property legislation, competition law and the ECRF. Secondly, interoperability-related concerns have to be revisited in light of current and emerging ICTs. Against this need, this study examines the industrial settings of cloud computing and the IoT, which widely represent the emerging interoperability needs in the field of ICTs. Doctrinal analysis of EU law, i.e. intellectual property legislation, competition law and the ECRF, reveals a distinctly disaggregated body of rules on ‘interoperability’, along with partial solutions and shortcomings with respect to the relevant concerns, e.g. lock-in, switching problems and competition constraints. On the other hand, examination of the cloud and IoT settings clearly shows that ICT interoperability cuts through interdependent layers which form the building blocks of the relevant architectures, i.e. cloud and IoT networks. Not only the internal elements, i.e. the ‘infrastructure’, ‘platform’ and ‘application’ layers for cloud computing and the ‘perception’, ‘access’ and ‘application’ layers for the IoT, but also the surrounding external elements, i.e. the content layer, entail interlinks between the ICT players that represent these layers. This interrelatedness encompasses both ‘competition’ and ‘cooperation’ across many industrial settings, particularly for cloud computing, often resulting in ecosystem structures and echoing ‘coopetition’.
This finding is, however, not reflective of many IoT settings, where many service or platform providers promote their own proprietary protocols, formats, interfaces, standards and semantics, creating closed ecosystems or, more accurately, ‘walled gardens’.

Against these findings, this study delves into the appropriate regulatory treatment of these emerging technologies and concludes that a layered design, called the ‘layered regulatory model’, is able to respond to ICT interdependencies and layers, embracing regulatory concerns related, but not limited, to interoperability, e.g. gatekeeping activities. From a holistic viewpoint, the concept of ‘gatekeeping’ is revitalised and embedded into the proposed ‘layered regulatory model’ so as to encompass discriminatory, selective, non-transparent and unethical online activities, often represented by algorithmic, AI-driven consumer manipulation.

Crucially, this ‘layered regulatory model’ is considered fit for purpose in dealing with interoperability-related concerns from a broad vision that incorporates cross-layer problems so as to reflect the activities surrounding ‘network gatekeeping’. To ensure that gatekeeping activities, e.g. AI-driven online activities that potentially affect consumer choices and lead to discriminatory, selective and/or arbitrary results, are deterred, a bottom-up perspective is followed, ending up with a normative framework that replaces the thrusts of the ECRF and incorporates a set of principles and remedies, i.e. transparency, non-discrimination and accountability.

11:15-12:45 Session 9B: Finance, Tax and AI
Trust, Human Rights and Ethics: New Perspectives on Bulk Surveillance

ABSTRACT. It has been argued by some that the moral principles governing intelligence gathering in domestic law enforcement differ to some degree from those governing foreign intelligence gathering, which connects to a wider belief that different security contexts should carry different moral weight (Miller & Walsh, 2016). Following this logic, one can conceivably support the use of bulk powers in foreign intelligence gathering, especially in cases dealing with a demonstrated terrorist threat, but not in domestic intelligence collection carried out by law enforcement. Arguments such as the above are often found in works dealing with intelligence ethical codes, often seen by practitioners as the embodiment of an internal culture of compliance. Meanwhile, the civil society backlash following the publication of David Anderson’s report on the investigatory powers review, aptly called “A Question of Trust”, has shown that, more often than not, internal reviews, even when carried out by independent experts, are not considered “truthful enough” by civil society actors, who argue for as much transparency as possible when it comes to sensitive issues such as bulk surveillance. Starting from this apparent trust tension, this study seeks to understand the interaction between intelligence ethical codes and the wider framework of human rights and the rule of law. An analysis of the current surveillance debates indicates that we have, on the one hand, the rule of law framework, which supports a universal approach to human rights, whereby the law acts both as the enabler of intelligence operations and as a safeguard for society, and, on the other, the intelligence ethics discourse, where the ethical codes of the security organizations themselves are thought to act as the main safeguard against abuse. The latter approach does not, however, meet the CJEU’s standards for sufficient safeguards, as it arguably gives too much discretion to the agencies themselves.
A comparative analysis of the conceptual differences between the two frameworks and of the real-world implications of using either approach is useful for better understanding the complexity of the bulk surveillance debate and the different meanings of what one party or another may consider “truthful”. It is also a good starting point for outlining an integrated model that would overlay these different interpretations onto a comprehensive intelligence oversight model potentially meeting most of the policy requirements.

EU Privacy Regulation on the Content and the Carrier Layers: Does the Split Still Make Sense?

ABSTRACT. This paper explores a fundamental feature of EU privacy laws: the fact that they are split between two regulatory layers - the content (e-commerce) and the carrier (telecoms) - and then looks into how this split affects modern privacy regulation. EU privacy laws consist of the general data protection instrument - the GDPR - and the ePrivacy Directive as a lex specialis. While the former is firmly rooted in the content regulatory circle, the latter has since 1997 been one of the key instruments on the carrier (telecoms) layer.

The key argument is that the new ePrivacy proposal, which has been stranded since 2017 (as Member States repeatedly fail to reach a compromise), complicates privacy regulation by ignoring the reality of the convergence between the content and carrier layers. Convergence, in simple terms, means that different services are all provided over a single network - the Internet. While services are converging, the laws are not, as the proposal maintains the split introduced in the 1990s. This model of privacy regulation, we argue, is ill-equipped to deal with the new types of services.

The paper first briefly explores the origin of the split in privacy regulation, tracing its beginning to the original telecoms framework in the 90s. It then looks more specifically at how this split affects the design and operation of the proposed ePrivacy Regulation, cataloguing the points of political contention in the present proposal to illustrate both the origins of the dissatisfaction and the regulatory options. Finally, the paper analyses the attempts to revise the privacy laws in the light of the overall IT policy in the EU and suggests alternative methods.

Commercialisation of play in the digital era: a children’s rights perspective

ABSTRACT. The online world provides great opportunities for children to play, communicate with friends and seek information, but it is also increasingly commercialised, with advergames, in-game ads and in-app purchases popping up to grab children’s attention (Verdoodt, Clifford, & Lievens, 2016). Behind the fun and playful activities available to children online lie different revenue models, creating value for companies by, inter alia, feeding children’s data into algorithms and self-learning models or by nudging children to buy or try to win in-app items to advance in the games they play. Aside from the business interest, in today’s social media world certain children have become an important source of income for their parents, for instance by becoming influencers (van der Hof, Verdoodt & Leiser, 2019). Some authors consider such practices a new form of child labour (Heffernan, 2016), resembling ‘playbour’ (Kücklich, 2005). The commercial aspects of the playful activities children engage in online are largely concealed from them (and often from their parents as well), which raises important questions from a children’s rights perspective.

Two rights that are particularly impacted by this trend but have not received much scholarly attention in the digital context so far are the right to play (Article 31 UNCRC) and the right to protection from economic exploitation (Article 32 UNCRC). The 2018 Recommendation by the Council of Europe’s Committee on ‘Guidelines to respect, protect and fulfil the rights of the child in the digital environment’ already acknowledges that States should take measures to ensure that children are protected from commercial exploitation online. This contribution will analyse children’s rights to play and to protection from economic exploitation in light of the ongoing commercialisation of play online, in order to provide children’s rights inspired recommendations for policy makers.

11:15-12:45 Session 9C: Children’s privacy and data protection
Innovation and precaution: Will innovators be innovating, regardless of whether regulators are regulating?

ABSTRACT. The innovation principle is not part of primary or secondary EU law. It was developed and proposed to the EU legislator by an industry association, outside of ordinary legislative processes or the realm of the judiciary. This paper examines the origins, potential scope of application and possible role of the innovation principle in EU law- and policy-making. In a first section, the paper delves into the history of the principle, taking into account the fact that it was developed by an industry association. Section two assesses the potential scope of application of the principle and ascertains whether it can or should be included in a revised version of the Treaties. The third section examines the role the principle could play in law- and policy-making, notably in relationship to existing principles, such as the precautionary principle.

‘The reports of my death are greatly exaggerated’: Digitisation, competition law and capitalism

ABSTRACT. ICTs, particularly the Internet, and now big data, algorithms and artificial intelligence, have been viewed by some as posing existential threats to antitrust/competition law as we know it, owing to features such as zero-pricing, behavioural discrimination and the amassing of super-monopoly positions that render current prohibitions against anticompetitive conduct illusory or inapplicable. This contribution takes a more theoretical lens to the relationship between competition and digitisation by adding a political economy perspective, arising from an exploration of the role of neoliberal capitalism in shaping competition theories over the last few decades in major Western jurisdictions (and increasingly internationally). It will be argued that neoliberalism has played an important role in creating this perception of the impotence of competition law in these matters, as well as in the seeming regulatory vacuum surrounding the new information technologies. Current policy developments to address this, including the fledgling US-based ‘hipster antitrust’ movement, which has found its main embodiment in new Democratic Party competition policy, will be examined to assess whether ‘hipster antitrust’ is really just a cry for a more ordoliberal competition law, or whether a truly anti-capitalist theoretical alternative is being suggested in a post-neoliberal world to deal both with the threats posed by technological developments and with the inadequacies of the current competition paradigm.

This submission is for the panel on "The Philosophical Foundations of IT Law". It is based on a contribution to the book of the same name.

“Contested uses of citizen-generated data for policy: exploring legitimization strategies”

ABSTRACT. In the literature it has been affirmed that citizen-generated data, including, for example, biodiversity citizen science, citizen sensing of environmental pollution and diverse applications of citizen observatories, can advance ‘environmental justice action’ (Haklay and Francis 2018). The mobilization of (concerned/amateur) citizens to track their surrounding environment has also been framed as a form of ‘rights in action’, expressing people’s claims to live in a healthy environment and to access environmental information (Berti Suman 2020). Potentially, the act of producing data by lay citizens over a ‘matter of concern’ could also embody a truly new right, i.e. the right to contribute to environmental information (Balestrini 2018; Berti Suman 2020), which can be derived from a broad interpretation of the Aarhus Convention (1998) and the Kyiv Protocol (2009). The rights discourse undoubtedly contributes to legitimizing the use of alternative data sources, such as citizen-gathered data, in policy and decision-making processes (‘policy uptake’). Furthermore, citizens’ monitoring initiatives are increasingly obtaining legitimization through institutional recognition, such as the endorsement of scientific accreditation bodies and public authorities, for example environmental protection agencies (Wyeth et al. 2019; Berti Suman 2019). Yet the use of citizen-generated data in policy is often questioned. Citizens and interested institutional actors still have to ‘justify’ the role of lay people in producing data on environmental issues. They adopt a variety of arguments to persuade public authorities to recognise citizen-generated data as a legitimate resource for policy purposes. To date, however, little attention has been devoted to the different legitimization strategies adopted to push for institutional use of citizen-gathered data and to the conceivable effects of this ongoing legitimization process.
This contribution takes a theoretical approach to this question, while acknowledging the need to complement it with empirical insights in further research. It starts from a necessary effort of categorization: first, typologies of citizen-generated data and their uses in policy are outlined; second, possible legitimization strategies per type of citizen-generated data and intended policy use are explored and compared. Based on a literature review and exemplary cases, existing legitimization strategies are outlined and their effects on targeted policy use are described. In the conclusion, possible additional legitimization strategies are proposed and a future research agenda is outlined.

11:15-12:45 Session 9D: IPR at national and international level
Enhancing Legal Compliance in Smart Contracts

ABSTRACT. The Internet is one of the most influential human technological developments in history, delivering important social and cultural benefits. Its composition has, however, made it an attractive medium for the commission of illegal activities, forcing the law to adapt to this new environment. The traditional legal approach – punishment after a criminal act – has proven incapable of deterring illegal activities, mostly due to its inability to adapt to the operational particularities of cyberspace. The restrictions that this created, along with the commercial drive to deliver a security scheme for digital works and the mixed (at best) results of early methods, have led to the development of new strategies. Indeed, the lack of positive results proved useful for future approaches, allowing the adoption of relevant concepts such as ‘law as computer code’. This has been the case for contract law on digital platforms in particular. Before the expansion of this technology, approaches relied on the capacity to enforce contracted obligations through litigation. Given the dynamics of cyberspace, this approach is no longer suitable, as it has proven incapable of delivering a deterrent effect. To ensure legal compliance, it is therefore proposed that digital contracts be enhanced with the capacity to operate automatically on the basis of legal reasoning, increasing the level of legal certainty for the parties involved. This represents a significant advancement in the functioning of traditional legal devices. Legal compliance is delivered through a two-fold approach: first, a preventive scheme corrects low-impact situations that do not compromise the original object of the contract, without halting its execution; this is supported by a second, restrictive scheme, capable of stopping the execution of the contract if the detected action is considered to be contrary to a particular clause.

Among the novel aspects of this presentation, the combination of computational legal reasoning and the theory of legal cognition stands out. Traditional implementations aimed to fully replicate the cognitive processes performed by a legal operator, usually a judge. As a consequence, this created systems that were legally accurate but required an unnecessary amount of time and resources to develop a reasoned solution. Instead, the adoption of the behavioural processes of ordinary citizens is proposed. In this sense, it is proposed that legal responses be adapted to the characteristics of the parties’ behaviour, delivering an approach suitable for dynamic environments such as cyberspace. This will provide an accurate and technically efficient legal response based on the use of legal ontologies. Finally, this presentation concludes that, by equipping intelligent technology with the capacity to replicate legal cognitive processes, the legal accuracy of smart contracts can be enhanced.
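The two-fold compliance mechanism the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration only: the abstract supplies no implementation, and every name, rule and threshold below (the `Severity` tiers, the `assess` rule, the 30-day cut-off) is invented for the example, standing in for the ontology-based legal reasoning the paper actually proposes.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Severity(Enum):
    COMPLIANT = auto()
    LOW_IMPACT = auto()     # preventive tier: correct and continue
    CLAUSE_BREACH = auto()  # restrictive tier: halt execution

@dataclass
class Assessment:
    severity: Severity
    note: str = ""

def assess(action: dict) -> Assessment:
    """Hypothetical rule set standing in for legal-ontology reasoning."""
    late = action.get("late_days", 0)
    if late > 30:  # invented threshold: breach of a delivery clause
        return Assessment(Severity.CLAUSE_BREACH, "delivery deadline clause violated")
    if late > 0:
        return Assessment(Severity.LOW_IMPACT, "minor delay, corrective penalty applied")
    return Assessment(Severity.COMPLIANT)

@dataclass
class SmartContract:
    halted: bool = False
    log: list = field(default_factory=list)

    def execute(self, action: dict) -> str:
        if self.halted:
            return "halted"
        a = assess(action)
        if a.severity is Severity.CLAUSE_BREACH:
            # restrictive tier: stop the contract entirely
            self.halted = True
            self.log.append(("halt", a.note))
            return "halted"
        if a.severity is Severity.LOW_IMPACT:
            # preventive tier: correct without stopping execution
            self.log.append(("corrected", a.note))
            return "corrected"
        self.log.append(("ok", ""))
        return "executed"
```

In this sketch a minor delay triggers only the preventive tier (a corrective note is logged and execution continues), while exceeding the invented clause threshold triggers the restrictive tier and halts all further execution of the contract.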

The Story Of Intermediary Obligations In IP Infringement Cases: Is India Following The European Way?

ABSTRACT. Cases of intermediaries fending off allegations of contributing to IP infringements on their platforms have been discussed and debated in different jurisdictions. These debates and discussions, especially in the case of India, have largely been placed within the framework of the Information Technology Act, the Guidelines for Intermediaries under the Information Technology Act, the Copyright Act and the Trademark Act. Certain recent judgements of the High Court in Delhi have created a sense of expectation surrounding intermediary obligations, and one can sense considerable reliance on the existing European framework in these judgements. The framework in Europe has itself undergone substantial changes, especially with the passage of Article 17 of the DSM Directive. In this context, the paper focuses on two issues: 1. First, against the background of the relevant European Union Directives, the paper will consider the CJEU judgements in which intermediary obligations in matters relating to IP infringement have been discussed. The similarities and changes over time will be mapped across the identified judgements to locate the trend regarding future expectations of intermediaries. 2. Second, the paper will map similarly placed cases in India, compare the preliminary observations with the changes emanating from the EU, and suggest the future course for intermediary obligations in India.

A five-dimensional framework for assessing GDPR adequacy – UK case study

ABSTRACT. The UK formally left the European Union on 31st January 2020, having been a member for forty-seven years. The two economies are currently very heavily integrated - three-quarters of the UK’s cross-border data flows are with EU countries - and forecast to remain so for decades to come. Unwinding existing legal and trading relationships and agreeing a future relationship is a complex process, so the EU and the UK government have agreed that transitional measures will apply until December 2020, by which time the UK hopes to have negotiated and concluded a future trading agreement with the EU. During the transition period EU data protection law - the General Data Protection Regulation - will continue to apply fully in the UK. The Political Declaration setting out the framework for the future relationship between the European Union and the United Kingdom indicates acceptance by the UK government of the need to seek an adequacy decision from the European Commission to facilitate transfers of personal data from European Union (and European Economic Area) countries to the UK once it becomes, at the end of the transition period, a third country for data protection purposes. The Political Declaration also confirms a commitment by the Commission to adopt an adequacy decision by the end of 2020, “if the applicable conditions are met.” This will involve the Commission assessing whether the UK provides a level of protection ‘essentially equivalent’ to that of the EU by reason of its domestic law and international legal commitments; if so, it will issue an adequacy decision permitting personal data transfers from EEA countries without the need for further safeguards (e.g. contractual solutions) or for data controllers in the UK individually to demonstrate compliance with the GDPR, which would be less onerous for UK-based data controllers and processors.
One might assume that the UK will readily, and as a matter of course, obtain an adequacy decision from the Commission, given that the UK will have complied with the General Data Protection Regulation until that date and that the Regulation will be retained in UK law thereafter. This article explains why that is a mistaken assumption. It does so by reference to a five-dimensional framework of the concept of adequacy. These dimensions are used to highlight issues that could impede the adoption of an adequacy decision or, at a minimum, cause the Commission to seek clarification and perhaps amendment of UK laws and/or procedures. The article has two goals: first, to assist the UK government as it prepares for an adequacy assessment, since identifying and taking remedial action in respect of deficiencies would encourage the Commission to take a positive view of the UK’s application; and second, to provide, for instructional and comparative purposes, an example of the application of the five-dimensional adequacy assessment framework to assist other third countries preparing for an adequacy assessment.

11:15-12:45 Session 9E: Surveillance, Border control, and fundamental rights
State Surveillance, Manipulation, and the Right to Integrity of the Person

ABSTRACT. States have a positive obligation to keep their citizens safe. To this end, security services make use of surveillance. Although there is a wealth of academic scholarship on the regulation of the interception and retention of data, this article focuses on the use of surveillance to manipulate behaviour for the purposes of maintaining state security. Psychological operations or “psy-ops” refer to a category of programmes used by state security services that focus on modifying and manipulating citizens. Due to the numerous concerns these programmes raise, state surveillance for the purpose of manipulation needs urgent examination. Regulating surveillance practices is often justified in the literature on abstract theoretical grounds such as the ‘chilling effect on free speech’ and ‘control’, yet psy-ops programmes are deemed successful when actual modification of behaviour is achieved. To this end, a wide variety of persuasive communication techniques are used by state security and intelligence agencies that aim to shape behaviours and condition individuals. For example, through sentiment analysis, combined with a wide array of predictive analytics and experimentation, emotions are not only recorded but also modulated as variables. When negative emotions are detected on a particular issue, intelligence analysts use software to manage and exploit these results.

Academic scholarship suggests that manipulation, as a negative interference with our mental spheres, is neglected and consequently underdeveloped in legal thinking. Historically, the mind has not been treated as an entity vulnerable to external intrusions and in need of legal protection. Only a few legal systems ensure protection of mental integrity; one example is the right to mental integrity under Article 3 of the Charter of Fundamental Rights. This right guarantees that everyone has the right to respect for his or her physical and mental integrity, and the Court of Justice of the European Union (CJEU) has stated that the right to human integrity is part of Union law. However, Article 3 has largely been interpreted in a way that constrains the State from commoditizing and interfering with the physical integrity of human beings.

This paper aims to map insights from surveillance studies scholarship onto the European framework for the protection of fundamental rights. Constraining abuses of power by the State is at the heart of the Court’s response to interception and retention frameworks for the purposes of State surveillance. However, no legal action has been brought, and no scholarship has addressed, the lack of lawful authority, necessity, and proportionality needed to lawfully manipulate individuals, groups, and even society using data that is legally collected and retained. As the state’s technological capabilities intensify, surveillance expands further into the lives of citizens and groups. Yet there has been little attempt by scholars to bring surveillance-to-manipulate within the same rights-based framework that has constrained surveillance-to-observe-and-predict. We argue that these forms of state manipulation infringe fundamental rights and that action is needed to effectively protect individuals from surveillance-to-manipulate.

Regulating banking capital adequacy transitions in an intangibles/IP debt finance sustainability context

ABSTRACT. Intellectual property rights such as patents are important instruments to advance technological innovation in international trade and development. This interdisciplinary paper considers regulating transitions in IP finance in a sustainable finance context. Sustainability is broadly understood to comprise development that meets the needs of the present while safeguarding Earth’s life-support system as set out in the United Nations 2030 Sustainable Development Agenda.

The regulation of banking capital ratio barriers to IP debt finance in green finance is critically examined. This paper argues that IP debt finance and access to IP-backed loans should be an important aspect of the sustainable finance market and of the global targets relating to UN 2030 Sustainable Development Goal 9: Industry, Innovation and Infrastructure. A functioning IP debt finance market would arguably strengthen a country’s ability to unlock innovations and inventions and to commercialise them to tackle global challenges more efficiently and sustainably.

The regulatory activity of institutions such as the Basel Committee on Banking Supervision, housed at the Bank for International Settlements in Switzerland, and the Bank of England is examined and reforms are proposed. The Basel Committee publishes the Basel Accords, which deal with the capital adequacy ratios that banks apply when lending against intangibles, a subset of assets that includes IP rights. Member banks voluntarily adhere to the Basel III Accord, which sets out a framework for how banks and depository institutions must calculate their capital ratios when lending against all types of collateral/security.

I first explored the topic of banking capital adequacy ratios in my PhD thesis (2015) and my monograph, Intellectual Property, Finance and Corporate Governance (Routledge, 2018). My chapter contribution to an edited collection entitled Intellectual Property and Sustainability (Edward Elgar, 2020) disseminates new research on the current state of play with respect to regulating transitions in intangibles/IP and regulatory support for green finance. This new volume is due to be published in 2020 and is edited by Professor Ole-Andreas Rognstad and Associate Professor Inger Berg Ørstavik of the University of Oslo, Norway.


Dr Janice Denoncourt, BA (McGill), LLB (Western Australia), PhD (Nottingham); Senior Fellow, Higher Education Academy; European IP Teachers’ Network Committee; Barrister and Solicitor, Western Australia (non-practising); Solicitor, England and Wales (non-practising); Course Leader, LLM Legal Practice DL; Department of Academic Legal Studies, Nottingham Trent University, Nottingham NG1 4BU; Tel: +44 (0)115 848 2130.

Related Academic Research Publications: J. Denoncourt, Intellectual Property, Finance and Corporate Governance, Routledge Research in Intellectual Property Monographs Series, published 12 April 2018.

J. Denoncourt, ‘Corporate Intellectual Property Assets and UN 2030 Sustainability Goal 9 Innovation’ (2019) 19(2) Journal of Corporate Law Studies. J. Denoncourt, ‘IP debt finance and SMEs: trends and initiatives from around the world’, Chapter 1 in T. Kono (ed.), Security Interests in Intellectual Property in a Global Context (Springer, Japan, September 2017). This volume published new research in the intersecting fields of innovation, finance and law, from Japanese, Asian, European and American perspectives.

Janice lectures on Nottingham Law School’s LLM Intellectual Property and the LLM International Financial Transactions programs.

“Regulatory transitions in energy and law – the place of IP”

ABSTRACT. The impact of technology can depend on regulatory approaches taken to the technology, the values which underpin the regulation, the transitions which are sought to come about, and the extent to which one form of regulation can prevail over another. This paper will explore the intersection between intellectual property (IP) rights and changes in approach to oil regulation in the UK and in Canada, which regime has prevailed so far, and the consequences this brings about for IP and its place in encouraging innovation and for the delivery of wider (or other?) societal goals.

IP rights confer the power to prevent activity of others and legislation in the UK and Canada confers positive permission to drill for oil. To enable this drilling to take place most efficiently, legislation was passed in both countries - in the UK the Energy Act 2016, Petroleum Act 1998 (as amended), 2018 secondary legislation and in Canada the Petroleum Resources Act 1985. Through this, key seismic information and details of the subsurface are to be shared with the regulators (in the UK as part of an ongoing programme of sharing and collaboration termed Maximising Economic Recovery) and then more widely across the industry, including with competitors.

It remains to be seen if the UK legislation could lead to a human rights based challenge. In Canada, Geophysical Services v Encana (2017) considered the extent to which use by others of the obtained information was infringement of copyright. The appeal court found, based on principles of statutory interpretation, that the oil legislation prevailed over the rights of the copyright owner. The decision is a rare example of a court having firstly the structural opportunity to consider another area of law as against IP law and secondly reaching a substantive decision which finds against IP law. A useful comparator lies in the Magill and IMS health competition and copyright cases in the EU from the 1990s and 2000s: is there once again an underpinning silent justification that the material may be considered not to warrant the existence of the right or of its enforcement? The decision of the Canadian Supreme Court in Keatley Surveying v Teranet (2019) and its approach to Crown Use provides a valuable further lens. Yet the power of IP can continue, with an investor state dispute settlement claim having been raised under NAFTA by Geophysical Services against Canada.

From these bases, this paper will make proposals regarding the implications for transition in one sector when it is faced with the values and pathways of other areas of law.

11:15-12:45 Session 9F: Everything is personal data: now what? In search for new boundaries for protection against information-induced harms
Jurisdiction in online cross-border infringement of personality rights

ABSTRACT. Information that is put online can be accessed in any country. Defamation and other infringements of personality rights that occur on the Internet can have a worldwide reach and can cause damage of greater geographical extent and repercussion in the legal sphere of the victim, especially through the geographical dispersion of Internet users. This is an example of how technology can potentiate violations of human rights with great repercussions for the victim. In the European Union, Article 7(2) of Regulation No 1215/2012 (Brussels Ia) determines which court has jurisdiction over cases of cross-border infringement of personality rights. However, online cross-border infringement of personality rights forced the European Union Court of Justice (ECJ) to make an interpretative effort regarding the rule, taking into account the specific characteristics of the Internet as a means of fast dissemination of information with global access. These characteristics led the ECJ to embrace a delict-oriented approach to determining which are the courts of the place where the harmful event occurred or may occur. In this interpretation the ECJ held that the place of damage in online infringement of rights varies according to the infringement at issue, taking into account the nature of the right, its geographical scope and the extent of the damage. The starting point of the delict-oriented approach is that the occurrence of damage in a particular place depends on the condition that the right in question is protected within the territory of that State. The delict-oriented approach therefore considers the geographical area of protection of the right, with a view to identifying the court best placed to assess the infringement of that right.
However, regarding online cross-border infringement of personality rights, the ECJ added some precisions in view of the nature of the rights at stake, as human rights of particular importance. The objective of this presentation is to analyse the latest developments in the ECJ’s case law on the application of Article 7(2) to online cross-border infringement of personality rights, namely the interpretation of the notion of the place where the harmful event occurred or may occur.

Should We Ban Political, Data-Driven and Targeted Online Campaigning? A Revisit to the Strasbourg Jurisprudence in the Era of Political Microtargeting

ABSTRACT. CONTEXT The use of personal data by political parties to target (vulnerable) voters has had staggering consequences for electoral results and modern democracy. This phenomenon, sometimes known as ‘political micro-targeting’, came to light in the wake of the EU Referendum and the US General Election in 2016, and keeps shaping the electoral events that follow. The undue disruption to modern democracy, as well as the implications for data protection and privacy, raises the question whether we should ban this form of online campaigning, just as some European countries have banned political advertising in the broadcast media sector.

The use of personal data for campaigning purposes has now become a main focus of data protection authorities in the EU. The UK’s Information Commissioner’s Office, for instance, has issued several ‘research reports’ but has been reluctant to take any further action. In practice, advocacy of a blanket ban is gaining momentum. Non-profit organisations such as the Open Rights Group (ORG) are mounting pressure on political parties to stop manipulative political advertising. Before any legal or social action has taken shape, tech companies such as Twitter, Google and Facebook have substantially restricted, or even totally banned, political advertising or campaigning on their platforms and networks. These responses to social or legal pressure raise concerns about the effectiveness of such bans on the one hand, and their unintended or undesirable ramifications on the other.

QUESTION This paper aims to grapple with political, data-driven, targeted online campaigning. In view of the moderate progress made by current regulatory measures, it looks back on the rationales for banning political advertising on television, and inquires whether they can be extended in the digital age.

WORKING CONCLUSIONS It is provisionally argued that most of the rationales were developed by the ECtHR in a pre-digital context, and should hence be revisited to duly reflect the new dynamics of online political campaigning. This would inform, and even create space for, more stringent restrictions on the use of personal data for online campaigning purposes. This paper sets out to explore the most cost-effective ways to regulate political micro-targeting and provides critical reflections on the longstanding divide between broadcasting and non-broadcasting regulation. While a certain ban might be justifiable and desirable in some democratic traditions, a note of caution should be sounded about the disruption to robust political activities, as well as the chilling effect on political speech.

Shining Legal Light on ‘Dark Patterns’

ABSTRACT. ‘Dark patterns’ are defined as “interface design choices that benefit an online service by coercing, steering, or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they might not make”. ‘Dark patterns’ is a term commonly used by the web collective to describe a user interface that has been designed to manipulate users into doing something that they would not normally do. It is a coercive and manipulative design technique typically found across a variety of digital environments. Many e-commerce and social media platforms are purposely designed to capture user attention while obfuscating information that would help users make informed decisions. As these practices become more widespread, they raise questions about their compatibility with legal frameworks for protecting internet users and consumers. Currently, the European data protection framework, as well as consumer protection legislation, provides safeguards for internet users against the growing powers of online providers that employ dark patterns to lure users into making decisions that benefit those providers, especially through the release of more personal information than is needed for the provision of a service. For example, ‘dark patterns’ can compromise legal requirements such as consent and data protection by design and by default, and principles such as fairness and transparency. Furthermore, they can amount to a type of unfair commercial practice.

This paper studies the most common dark pattern practices and examines their compatibility and legality within European frameworks. Since dark patterns are commonly used when some form of indication is needed from a user - typically to begin processing personal data or to enter into a contract - the paper starts by introducing the concept of ‘dark patterns’ to the wider legal community. The paper then addresses ‘dark patterns’ from a regulatory perspective. Two regimes are discussed: the European Union’s regimes for data protection and consumer protection. We close with recommendations for the regulation of dark patterns and conclude that a pluralistic approach is needed.

12:45-13:45 Lunch Break
13:45-15:15 Session 10A: Regulating online platforms
Born to be authors: children, creativity and copyright

ABSTRACT. Research on the role of the child as an author of creative works is limited. Often the relationship between children and the legal framework of intellectual property revolves around intellectual property literacy, on the proposition that children, born into the digital age and surrounded by digital content, are likely to benefit from learning about lawful ways to access online creative content. It has been argued that a “basic understanding of IP and a respect for others’ IP rights is … a key life skill” (Intellectual Property Office, ‘Building respect for IP: UK educational awareness raising initiatives’ November 2015). Digital citizenship is also a significant area of research that focuses on the safe and responsible use of digital content by children.

The core of these two approaches is the child as a recipient of creative products devised by others. However, children engage with creative media through active processes that very often lead to the production of original works that are the expression of their own intellectual creation through the making of creative choices (Case C-5/08 and Case C-145/10, inter alia), as embedded in artwork, writings, and dramatic works such as mime or choreography, to name a few.

The wealth of copyright protected works created by children is immense, with parents, guardians, teachers and other individuals in such positions of responsibility called to act as stewards of a child’s copyright works and exercise the relevant rights on their behalf. This relates to both economic and moral rights of authors as exercised through transformative technological tools, e.g. when a drawing such as a self-portrait is reproduced on the digital edition of the school yearbook, or when someone posts the text of a poem written by a pupil via social media.

This paper presents a critical analysis of early findings in the context of a project on children’s creativity from a copyright perspective. It identifies building blocks towards a systematic scholarship of children’s copyright by addressing how the law interprets children’s creativity as subject matter of protection, and how the relevant rights are exercised. In particular, firstly it develops the inquiry on the exercise of rights in relation to issues that emerge from the digitisation and archiving of children’s copyright works. Secondly, it considers how moral rights can be exercised in a meaningful way, taking into account the way children create digital and non-digital copyright works at different stages of their childhood. Thirdly, it questions the legal implications of commercial practices such as End User Licence agreements for video games as examples of how children become active participants of a creative process. Through this analysis, the paper examines both formal and informal awareness of the copyright framework in the way it affects a child’s approach to creativity, and a parent’s, teacher’s or guardian’s approach in their role as stewards of a child’s copyright works.

Maria Mercedes Frabboni 20 November 2019

Open banking and online financial activity - the promise of competition and the AML risks

ABSTRACT. Open Banking is designed to give customers more control over their financial data, allowing them to ask their bank to securely share product, pricing and transaction data with other banks or authorised Third Party Providers (‘TPPs’). The rapid development, increased functionality, and growing use of Big Tech companies like Google, Facebook, and Amazon has the potential, by leveraging big data and open architecture, to increase competition in the banking industry. However, this creates a risk of monopolising the origination and distribution of financial services, and challenges for countries and private sector institutions in ensuring that Big Tech companies’ online activities are not misused for money laundering purposes. Hence, while the entry of Big Tech companies will lead to more innovation in the financial industry, there might also be an adverse effect on the public interest. This paper examines the fundamental trend of migration from paper to online payments. The basic need for criminals to raise, move and use funds has remained constant; their methods of raising and managing funds, however, have evolved over time. This paper focuses on the anti-money laundering (‘AML’) risks of TPPs in an open banking environment. The article fills gaps in the existing literature by examining the innovative environment of Big Tech TPPs, the potential enhancement of competition and the new AML risks. This approach differs from other literature in the field, which deals solely with the Open Banking regime in general, or with the promise of competition in a discussion disconnected from AML risks.

Digital Debt Collection. Opportunities, Abuses and Concerns

ABSTRACT. The paper analyses the impact of the digitalization of non-judicial debt collection on consumer-debtor protections, using as examples the federal Fair Debt Collection Practices Act (FDCPA) in the US and the Consumer Credit Handbook (CONC) in the UK. It starts by introducing the reader to the ‘classic’ concept of debt collection and the protections granted by legislators to consumer-debtors against abusive debt collection practices, focusing on three aspects: the protection of the debtor’s privacy, the ban on misleading and abusive practices, and the right of the consumer-debtor to dispute the debt. It then introduces the concept of digital debt collection and elaborates on how technological innovation is used by the debt collection industry to circumvent the statutory protections of the FDCPA and CONC. Using specific examples from practice and patent descriptions of debt collection software, the paper shows that digital debt collection reverses the communication pattern assumed by the FDCPA and CONC, overturns the consumer’s privacy rights, turns the validation of the debt into a public exercise and misleads consumers into sharing more data than they should. The paper concludes that current protections are too static and the existing regulations are ill-equipped to deal with the challenges posed by the digital era, thus requiring amendments and improvements.

13:45-15:15 Session 10B: Regulating Controversial Technologies
Can the Newly Reformed Australian Competition Law Stop Algorithms from Colluding?

ABSTRACT. Algorithms and Big Data are important driving factors behind the rapid growth and success of digital platforms. Pricing algorithms in particular are often used by online retailers and e-commerce platforms like Amazon. On the one hand, the data-driven innovations that stem from pricing algorithms, including price monitoring and efficient price discrimination, can enhance consumer welfare. On the other hand, pricing algorithms can be deployed as a tool to facilitate price-fixing cartels among sellers and therefore cause consumer harm. Where this happens, the conduct may not necessarily involve a ‘contract’, ‘arrangement’ or ‘understanding’, which had to be proven to establish an infringement of section 45 of the Australian Competition and Consumer Act 2010 (Cth) (CCA). Under that provision, establishing an infringement was sometimes cumbersome even when only human acts were involved. With artificial intelligence and machine learning having entered the picture, this task appears even more difficult. In November 2017, the CCA was amended to prohibit concerted practices that have the purpose, effect or likely effect of substantially lessening competition. Days after the amendment entered into force, the Chairman of the Australian Competition and Consumer Commission (ACCC) stated that he was confident that Australia’s competition law, particularly with the addition of the new concerted practices prohibition, can deal with algorithms that result in a substantial lessening of competition. The Australian legislator adopted the concept of ‘concerted practices’ from EU competition law, which has long included such a prohibition. However, unlike EU competition law, the CCA lacks a definition of the term ‘concerted practices’. This paper analyses the extent to which the new prohibition of concerted practices may capture algorithmic collusion and compares the scope of the Australian prohibition with the prohibition under EU competition law.
The paper argues that since the new amendment lacks a definition of the term ‘concerted practices’, the judicial interpretation of that term will be decisive for the scope of the prohibition and the types of algorithmic collusion it may capture.

Law Enforcement Access to Encrypted Data Across Borders

ABSTRACT. The application of new encryption technologies has created problems for law enforcement access to digital evidence. Sophisticated device encryption and default end-to-end encryption for communications prevent access to data both by criminal actors and by the law enforcement authorities who seek to protect society from those actors. While law enforcement authorities may have the legal authority to access communications data, they lack the technical ability to do so. To address this “going dark” problem, some states have proposed allowing law enforcement authorities to have “exceptional access” to encrypted data, but critics argue that any decryption capabilities will be exploited by criminal actors as well. This article does not seek to settle this debate, but rather considers how these decryption orders may work in practice in the global environment.

This article explores the extent to which decryption orders, specifically technical capability notices authorized under UK and Australian law, have extraterritorial effect in light of the US Cloud Act. The Cloud Act came into force in April 2018 to facilitate foreign law enforcement access to data held by US service providers through the use of bilateral agreements premised on mutual trust. The first such agreement – the UK-US Agreement on Access to Electronic Data for the Purpose of Countering Serious Crime – allows UK law enforcement to send production orders, authorized under UK law, directly to the US service providers for communications and content data.

This recent UK-US agreement, and the announcement of negotiations for a similar agreement between Australia and the US, have led to confusion surrounding whether decryption orders may be served on US service providers pursuant to these bilateral agreements, and whether these investigatory powers might be ‘exported’ to US law enforcement as well. This article will explore how the ‘encryption-neutral’ provision of the Cloud Act allows for the UK, and possibly Australia, to serve technical capability notices on US service providers extraterritorially. To reach this conclusion, the article will analyze the relevant provisions of UK and Australian law, how those provisions interact with bilateral agreements pursuant to the Cloud Act, and the likelihood of the enforcement of UK and Australian technical capability notices on US service providers. To the extent that these notices are enforceable extraterritorially, the article will conclude with an analysis of whether these UK and Australian investigatory powers are thus exported to US law enforcement.

From Safe Harbour to Privacy Shield, certifications and consent: putting the EU data transfers regime to the test.

ABSTRACT. The EU legal framework on the protection of personal data is considered one of the most advanced regimes, offering individuals substantive and procedural rights unimaginable in other jurisdictions. Moreover, the General Data Protection Regulation is said to offer a high standard, a benchmark that influences, if not directly shapes, the mentality, approach, and legislation adopted by regulators in countries outside the European Union. Nevertheless, there is still much distance to be covered by non-EU countries before one can speak of regimes that offer a comparable level of protection. Surveillance and access to data by governments, and data commodification and commercialisation, are heightened by the interconnectivity and analytical tools used by both governments and industry. So what happens to the personal data of EU citizens once transferred to non-EU countries? The growing case law of the Court of Justice raises questions as to how effectively the individual is protected once his or her personal data are outside the jurisdiction of an EU Member State. These concerns are heightened by certain realities. A first reality concerns the tenuous grounds on which the regime of adequacy decisions stands, as shown in the CJEU Safe Harbour case and the ongoing case on its successor, the Privacy Shield. A second reality is the introduction of new instruments for data transfers in Art. 46 GDPR, such as codes of conduct and certifications, despite the legal uncertainty surrounding such tools. A third reality is the overuse of the derogation grounds, especially the consent of the data subject, as a means of transferring personal data to countries without an adequate data protection regime.
The aim of the article is, first, to reflect on how accurate it is to speak of an adequate level of protection and appropriate safeguards given the current situation in relation to data transfers, and second, to instigate a discussion on whether a radical change in the EU approach to data transfers is necessary, taking the perspective of the individual whose rights are at stake. At a secondary level, the article also offers insights into how different law in the books is from law on the ground, by highlighting the discrepancy between the envisaged regulated data transfers regime and the reality of the exposed (and largely unprotected) individual.

13:45-15:15 Session 10C: Data Protection by Design and by Default
The Principle of Legal Certainty in the Light of Automated Determination and Application of Law

ABSTRACT. This paper aims to illuminate the interplay between the principle of legal certainty and systems for the automated determination and application of law. The principle of legal certainty is a widely recognised fundamental principle of legal systems, binding the legislator to ensure a sufficient level of determinacy within its laws. I show that this principle is stressed by the deployment of agents tasked with the generation, determination and application of law, i.e. legal automata, due to their theoretically higher performance.

In a first step, this contribution explores the concept of legal certainty and argues that there is a gradient of ambiguity within legal frameworks. Indeed, vagueness can be caused not only by ambiguous vocabulary or syntax. Of equal importance is terminology that is not exhaustively defined per se but whose meaning can in principle be determined with reference to an external normative framework (e.g. a standard of professional duty), or by bestowing deliberate discretionary authority on certain subjects (e.g. equity or fairness). This difference in precision follows a bi-structured model: precision varies not only between rules serving organizational, procedural or material functions but also between different substantive areas of law.

In a second step, a potential agent, the legal automaton, is introduced and conceptualized as an automated process which is functionally similar to the human application of law. This allows for a deeper analysis of whether a legal system built on natural language is, or can be, sufficiently predictable and self-contained, and whether it can be completely accessed through logic. Legal automata can be classified by which function in the legal reasoning process they aim to fulfil (the "interview", "subsumption", "report" or "analysis" stage). This has implications for how and to what extent they are affected by the principle of legal certainty.

I show that by looking at legal sources from the perspective of a legal automaton, conclusions can be drawn about the solvability of legal problems and, as a consequence, the predictability of standards of conduct for individuals. It is therefore suggested to separate the issue of complexity from the issue of opaqueness when analyzing the constraints on legal automata. When these results are contrasted with a formal analysis of the legal activity of a human judge, it becomes clear that the limits of such technologies do not (only) arise from complexity, but primarily from the intentional or unintentional "fuzziness" of the sources of law. The prevalent concern about potentially unpredictable decisions of legal automata (and their consequences) can thus be countered by pointing to the equally opaque "black box" of human legal practitioners.

Finally, this paper outlines possible reflex effects of such legal information processing on the legislative process and investigates whether the principle of legal certainty is inherently static or whether it is dynamic and adaptive to the emergence of legal technology. Ultimately, I suggest that increasing the performance of legal automata while upholding the principle of legal certainty necessitates bypassing natural language as an authoritative medium for legal frameworks.

Choosing the right approach to privacy and data protection ‘by design’ solutions: quick fixes at the seams or weaving long-lasting tapestries?

ABSTRACT. Protecting individuals’ personal data and private lives is increasingly difficult given the growing complexity and convergence of technological applications. Solutions echo such complexity. The preservation of privacy and data protection, akin to other public policy issues, involves the pooling of knowledge and resources by experts from a variety of backgrounds. Ongoing regulatory interventions in the area of data protection, privacy, or both, all feature a layered toolbox reflecting the need for interdisciplinary and intersectoral approaches. Prime examples of the law imbibing complexity as a solution are ‘by design’ approaches. The success of ‘by design’ formulae can perhaps be explained by their ability to voice different desires at once. Some look at ‘by design’ approaches as a way to automate the protection of personal data; others welcome them for their ability to cast light on how ‘code’ silently regulates; others still support the emphasis that by design approaches place on protecting the entire life-cycle of data. Scholarly work reflects such positions in a lively debate whose outcome carries important policy implications. This paper contributes to the debate by suggesting the need to analyse the interplay between code, privacy and data protection vertically and not just horizontally. Different communities may in fact be looking at three different modes of interaction between code, privacy and data protection, because privacy and data protection exist as design strategies, as regulation (secondary law) and as fundamental rights (primary law). The hypothesis is to look at all three levels to understand the reach, and perhaps the limits, of ‘by design’ approaches. To do so, I suggest creating a matrix where code is expressed by ‘protection goals’, design is embodied in ‘design strategies’, regulation is looked at as ‘principles’ and ‘duties’, and rights are expressed by ‘essence’ and ‘attributes’. 
The latter decision is in keeping with the understanding, in EU law, that rights have an essence. The definition of the rights represents a minimum threshold to be safeguarded, while the applicable law implements the right and lays down corresponding duties. This paper builds on and further develops previous work in which I link the technical notion of privacy protection goals and threats with the attributes and related ‘essence’ of the rights to private life and to the protection of personal data.1 The objective of this paper is to help frame the reach of regulatory ‘by design’ approaches. To use a metaphor, our best hope to protect privacy and data protection does not lie in a quick fix to contain the bursting at the seams, but rather in the mastery of arranging the different threads into a coherent and ultimately long-lasting tapestry.

1 M.G. Porcedda, ‘‘Privacy by Design’ in EU Law. Matching Privacy Protection Goals with the Essence of the Rights to Privacy and Data Protection’, Annual Privacy Forum 2018, Lecture Notes in Computer Science; M.G. Porcedda, ‘On boundaries - Finding the essence of the right to the protection of personal data’, in Leenes, Van Brakel, De Hert and Gutwirth (eds), Data Protection and Privacy: The Internet of Bodies, Hart (2018).

Reconciling Substantive Data Protection and Freedom of Expression under the General Data Protection Regulation

ABSTRACT. Both the technological transition to a ubiquitous and powerful online ecosystem and the legal transition to the General Data Protection Regulation are not only fuelling the importance of the data protection-freedom of expression (and information) interface but should also make it more systematically variegated going forward. Part of that systematic variegation will result from the need to specify the particular responsibilities of actors who play a very different role in this ecosystem (e.g. individuals, networking services, infrastructure). However, alongside this it is also necessary to think more about the different substantive norms which are applicable to different expressive activities. Historically, the focus here has been on journalistic and other forms of special expression such as art, literature and (belatedly) academic work. The commonality tying these together is that they produce information, opinions or ideas for a collective public. This is seen as particularly critical to open and democratic societies and so it is not surprising that Article 85(1) of the GDPR continues to expect States to balance rights through wide and deep derogations here and grants them considerable discretion in this regard. However, online technology has also catalysed the dissemination of information to an indeterminate number for predominantly more privatised purposes including public conversational self-expression on social networking sites, rating websites and facilitative services like internet search engines. Through Article 85(1) the GDPR recognises a need for a reconciliation between rights here also, although few Member States have legislated for this within their local law. In any case, in failing to provide for specific vires, Article 85(1) appears to expect that the essential shape of the general derogatory scheme set out in Articles 9(1)(g), 10 and 23 will be continued here also. 
In other words, whilst many exemptions from the rules of data protection should be provided for, processing will still need to be grounded in a specified legal basis (GDPR, art. 6), and the data protection principles (GDPR, art. 5) must be construed with regard for freedom of expression but cannot be disapplied per se. All this must also be subject to both private and public (i.e. Data Protection Authority) supervision. It will be shown that the recent Grand Chamber case of GC et al. v CNIL (2019) on search engine indexing, sensitive data and the data protection principles shows the Court of Justice striving to achieve this kind of outcome even in the absence of specifying legislation. Whilst the courts certainly have a key role to play here, it will be suggested that there are also vital functions which both legislatures and Data Protection Authorities need to discharge.

13:45-15:15 Session 10D: IPR, regulation, and new industries
Use of AI versus the principle of legal certainty in VAT: impossible regulatory challenge or a groundbreaking opportunity?

ABSTRACT. In the latest paper of the VAT Expert Group of the European Commission, it has been acknowledged that in recent years a simple change to the letter of the law in indirect tax matters is not enough. It is crucial to look beyond the already known and find different ways to improve VAT collection. The Expert Group has finally recognised an opportunity to improve the VAT system by introducing new technologies, such as Artificial Intelligence (AI).

At first sight, it seems like a rather unusual combination: one might wonder what new technologies have to do with taxation. The answer might be surprising, but in the light of the fourth industrial revolution, AI is a measure that will change the current state of play in the tax world. VAT is a transactional tax, and for its coherent collection a vast amount of data needs to be collected from taxpayers. On the one hand, it is commonly claimed that AI is a groundbreaking opportunity both for tax authorities to analyze tax data scrupulously and for taxpayers to reduce administrative burdens. Both the private and public sectors are working hard to develop technological measures that might make the tax system more efficient. Efficiency is considered to be the key argument in favour of using and implementing AI in taxation. The possible advantages are very promising, and therefore the change is happening now. AI is taking over the tax world.

However, despite this great enthusiasm for the significant advantages of using AI, certain issues have already been spotted by academia. On the other hand, AI can pose threats to society as such: the so-called “black box” that stands behind data-driven automated decision-making processes may undermine the privacy and confidentiality of taxpayers. Moreover, it leads to the identification of other dangers, such as discrimination and bias. To continue these considerations, another important aspect must be taken into account: the notion of legal certainty.

The most important principle on which the VAT system is founded is the principle of legal certainty. Legal certainty requires mechanisms that give legal systems a stable and predictable character and ensure that the decisions made by public authorities in relation to citizens are consistent with the applicable legal order. This article will examine whether the use of Artificial Intelligence in VAT could disrupt and jeopardize the concept of legal certainty in VAT or whether, on the contrary, the implementation of AI in taxation can foster citizens' trust in the VAT system.

At the end of the article, the main question raised will be answered: does the use of AI in VAT constitute an impossible regulatory challenge, as it might stand in contradiction to the principle of legal certainty in VAT, or is it a groundbreaking and feasible opportunity that will change the landscape of indirect taxation?

Taxation of the Digital Economy: A New Paradigm for International Tax Relations

ABSTRACT. Digitalization is transforming the global economy and reshaping global value chains. In an attempt to address the challenges that digitalization poses to international tax norms, the OECD, through Action 1 of its Base Erosion and Profit Shifting (BEPS) Project, and the European Union provide policy suggestions that seek to align the place of taxation with that of ‘value creation’. The OECD has also very recently introduced the concept of a ‘new taxing right’ that attempts to allocate more taxing rights over a multinational enterprise’s profits to the ‘market’ jurisdictions, defined as the countries where customers or users are located.

These policy discussions have revealed a well-known problem: the role of the current international tax principles in the perpetuation of the imbalances in the international allocation of taxing rights among states. In the digital era, these imbalances are departing from the traditional division of states into residence and source jurisdictions which has been determined by capital flows and investments requiring physical presence either through a permanent establishment or a dependent agent.

Indeed, digitalization is blurring the lines between winners and losers of the existing international tax norms. It has generated conflicts between allies, the EU and the US, exacerbated by the recent French digital services tax and fiscal state aid investigations by the European Commission against several US digital firms, and has given rise to new coalitions between the US and China, both keen on protecting their digital firms’ financial interests, or France and India, both having big markets that would potentially benefit from new taxes based on where users and sales, rather than corporate headquarters, are located.

Additionally, developing countries are significantly impacted by these new investment flows. In the context of international tax relations, they have traditionally negotiated their double tax treaties, depending on their size, revenue base and reliance on corporate taxes. Following the recent tax policy proposals, it seems that the market size and capital flows with respect to digital investments are added to the list of considerations on how developing countries should devise their international tax policy.

This erosion of traditional coalitions, followed by the emergence of new ones, is unveiling the impact of the tax policy debate on the way the tax base is allocated among states, which further complicates the attainment of a global consensus on the taxation of the digital economy. More importantly, it is emphasizing the need to establish reasonable rules for the accomplishment of inter-nation equity and the redistribution of tax revenues among states to facilitate the implementation of the UN 2030 Agenda for Sustainable Development, including the eradication of income and public-resource inequality among states. The present paper attempts to critically analyse the new imbalances provoked by the digital economy, with a focus on developing countries and the need to introduce the concept of inter-nation equity into the current tax policy discourse.

How purpose limitation and legal bases falter in the face of the GDPR’s Preamble 50

ABSTRACT. Preambles to EU legislation reword the content of the articles, and might declare the lofty intentions of the legislator. They are in themselves not supposed to contain concrete rules or fundamentally alter the meaning of the law. Yet preamble 50 of the General Data Protection Regulation does exactly that.

Preamble 50 discusses the processing of personal data for purposes other than those for which they were initially collected. In general, the reuse of personal data is restricted by the principle of purpose limitation and the necessity of a legal basis. In the main text of the GDPR, the rules on purpose limitation are given in articles 5(1)(b) and 6(4). Likewise, the available legal bases are enshrined in article 6(1). Together, these articles give the impression that strict limits apply to the opportunities data controllers have to repurpose the personal data they have collected. In reality, however, preamble 50 undercuts these clauses to a significant extent.

In general, data may not be further processed for purposes which are incompatible with the original purpose of the data collection itself. However, preamble 50 and article 5(1)(b) also contain a substantial exception in the case of further processing for archiving in the public interest, scientific or historical research, and statistical purposes (hereafter: research and statistical purposes). When data is being repurposed for research and statistical purposes this will always be ‘compatible’, regardless of how far removed such research actually is from the original purpose. Barred only by the proportionality test of article 89(1), these provisions thus already allow for broad repurposing of data. Remarkably, preamble 50 goes one step further still with regard to the requirement of a legal basis.

As a rule, each processing operation needs a separate legal basis. Thus, one would be inclined to believe that when personal data is being repurposed a new, separate legal basis is required. After all, this would be a new, separate processing activity. However, preamble 50 states that as long as the purpose of the new processing activity is compatible with the purpose of the collection “no legal basis separate from that which allowed the collection of the personal data is required.” This is striking because, as we have seen, processing for research and statistical purposes is always compatible.

Taken together, this would mean that preamble 50 allows the reprocessing of personal data for research and statistical purposes in all cases. The controller is no longer bound by the principle of purpose limitation nor is a legal basis required. We believe that this fundamentally undermines the intentions and the efficacy of the GDPR and weakens the position of data subjects vis-à-vis data controllers.

13:45-15:15 Session 10E: Privacy and other legal disciplines: integration and complementarity
Iron Age to Information Age: A privacy perspective on the evolution of identification by governments

ABSTRACT. One of the first government identification programmes in recorded history dates to the fourth century BCE in India, with detailed instructions on its implementation provided in Chanakya’s Arthashastra. From restrictions on travel without passports to the regular surveillance of suspicious persons for weeding out spies, Chanakya gave solutions to security concerns that are still prevalent now. However, as one might expect, data protection and privacy principles were not applied to these solutions. While the world is quite different now, primarily because of the rise of the internet and cheap digital storage, debates around the role of governments in identifying citizens and residents are just as current now as they were back then. Thousands of years later, identification processes utilised by governments around the world are ripe for comparison against such historical practices. The first half of the last century was not kind in its use of identification by governments. Ranging from the persecution of anyone who did not fit the description of "Aryan" by the Third Reich to Kafkaesque mass surveillance and systematic executions of individuals in the USSR, identification programmes have a deservedly bad reputation. A sea change came in the second half of the century, with Europe leading the way on the protection of privacy and personal data, conceptually through the evolution of data protection principles, and through international instruments such as Convention 108; both of which played their parts in the recent adoption of the General Data Protection Regulation (GDPR). In parallel, Europe has also delved into co-operative government identification through schemes such as that envisioned by the regulation on electronic identification and trust services (eIDAS). 
Elsewhere in the world, and coincidentally in the same country that gave rise to Chanakya, India has also become home to one of the largest identity schemes on the planet with its Aadhaar card. Given the 1.25 billion people enrolled in the programme, the Aadhaar card’s scale outstrips anything previously seen in history. However, the data protection regime surrounding this programme is murky at best, though the situation is improving with greater public awareness. A future with even broader government identification programmes is not hard to imagine. Tracing a finger through history shows that while there have been ups and downs, this broadening trend has been consistent. At the same time, while it is a matter of fierce debate whether safeguards and limits on these practices have held up adequately, a similar trend can also be traced through history when it comes to the rise of the right to privacy and data protection. Do historical practices match up to our modern expectations of data protection, and have we truly been making a better, more privacy-friendly world given the sheer scale of technology when compared to ancient examples? This paper seeks to answer that question by examining select historical identification practices through the lens of modern data protection principles enshrined in the GDPR.

Designing Pro-Competitive Data Pools: Which EU Competition Remedies for Data Silos?

ABSTRACT. Some European regulatory initiatives within the Digital Single Market Strategy, such as the recently enacted Regulation on the free flow of non-personal data and the Open Data Directive, promote the flow of data both among private businesses and between private businesses and public institutions. The present study moves from the assumption that the sharing of data can, under specific circumstances, give rise to anticompetitive aggregations of research-valuable data in the form of closed data silos. The enclosed nature of such data silos prevents fruitful information interactions among other market players, thus blocking technological progress in digital markets. The study addresses the question whether and how competition remedies available under EU law can be used for the design of pro-competitive data pools in digital markets. Interesting suggestions for these purposes are given by the recent enforcement policies enacted by the European Commission in high-technology innovation markets as a reaction to harm to competition in innovation. Here, obligations to disclose essential information have been employed as a remedy to market abuses under Art. 102 TFEU, and obligations to divest research poles have been established in the context of merger procedures. This is confirmed by relevant case law under the essential facilities doctrine and by some merger decisions in the chemical and pharmaceutical sectors. The second section of the paper thus questions the suitability of such information-based remedies in the context of digital markets. Although aimed at remedying very different anticompetitive conducts, these remedies nonetheless appear to share the common function of opening up established innovation alliances for the transfer of research-valuable information assets to external competing parties. However, even if justifiable at a theoretical level, the practical feasibility of data sharing remedies in digital markets raises some difficult questions. 
The assessment of competitive harm can prove challenging in light of some specificities of digital markets, such as network effects and economies of scale. Likewise, it is difficult to define the datasets to be shared by the infringing parties, the addressees, and the timeframe of the disclosure. The third section of the paper will thus provide some first solutions to the identified problems. In light of data protection authorities’ invasive auditing powers regarding the features and structuring of collaborative data-driven research endeavours, collaboration between competition and data protection authorities can be useful in assessing the existence of harm to competition in digital innovation. Such collaboration can also be fruitful in order to better define the terms of the sharing remedy. In this respect, data protection principles, such as the principles of fairness, transparency and accountability, can work as a structural basis for the design by competition authorities of competition remedies regarding the sharing of data. Ultimately, it is argued that the interaction between the data protection framework under the GDPR and competition policy in the context of data sharing remedies can provide some first suggestions as to when and how competition law can intervene by setting remedies over formed data pools.

From cybersecurity to cyber resilience: quest for a legal basis in the EU Treaties

ABSTRACT. This paper does not aim at providing an exhaustive analysis of all the EU Treaty provisions relevant to the enactment of cybersecurity-related initiatives. Instead, it seeks to focus on the lack of an express legal basis for regulating cybersecurity and enabling the EU response to threats to the security of its network and information systems. The first section suggests that the concepts of cyber security and cyber resilience are not well defined and thus encompass several viewpoints, leading to risks of uncertainty and diverging interpretations. Subsequent sections address cybersecurity as a cross-cutting policy area relying on a patchwork of legal frameworks initiated on different legal bases, due to the lack of an explicit mention of cybersecurity in the core Treaty provisions. The clauses on the internal market, judicial cooperation in criminal matters and industry/R&D may be singled out as the most relevant provisions for cybersecurity regulation purposes. The article concludes with a call to address cybersecurity uncertainties as a strategic priority for the building of EU cyber capacities. The conclusion is that there are reasons to be positive about the EU initiatives (NIS Directive, Cybersecurity Act, Cyber Diplomacy Toolbox), but this should not obscure the need for urgent and timely interventions on a range of practical issues and better coined definitions.
Outline:
1. CYBERSECURITY AS RESILIENCE IN CYBERSPACE
2. FRAGMENTED LEGAL BASIS FOR CYBER RESILIENCE INITIATIVES
2.1. ARTICLE 114 TFEU – INTERNAL MARKET
2.2. ARTICLE 173 TFEU – INDUSTRY AND ARTICLE 187 TFEU – RESEARCH AND TECHNOLOGICAL DEVELOPMENT AND SPACE
2.3. ARTICLES 82(2) AND 83(1) TFEU – JUDICIAL COOPERATION IN CRIMINAL MATTERS
2.4. ARTICLE 222 TFEU – ‘SOLIDARITY CLAUSE’
2.5. ARTICLE 42(7) TEU – COLLECTIVE DEFENSE
2.6. CFSP AND CYBER DIPLOMACY
3. CONCLUSIONS

13:45-15:15 Session 10F: Regulating technologies around and on the individual
Data ecosystems: towards complete integration of privacy in competition law

ABSTRACT. Companies such as Google and Facebook are not merely conglomerates of Internet-based services which just so happen to process personal data. Instead they should be conceptualized as ‘data ecosystems’ and treated as such.

My aim is to introduce the concept of data ecosystems and to showcase the practical and legal problems raised by this phenomenon. This is done in order to advance the competition law and data protection narrative by emphasizing the complete and fundamental integration of personal data in the business model of data-driven companies.

Data ecosystems are companies which collect and monetize personal data through a network of widely diverging internet-based services which are all accessible through a single account. Search engines, video sites, mapping, file sharing, direct messaging, email clients, and a cornucopia of other services are all offered by a single company. In contrast to traditional conglomerates, data ecosystems are unique in that all of their separate branches are interconnected, even if the services provided are vastly different. Each service can be used to collect specific types of data, which can ultimately be combined into a single broad, yet detailed, personal profile. Personal data collection is thus not restricted to each individual branch, but rather serves the entire ecosystem.

Data ecosystems are also characterized by a strong incentive to expand into additional new markets. A clear example of this phenomenon is Google’s recent $2.1 billion acquisition of Fitbit. By buying into the market for (sports) wearables, on which Google was not previously active, it opens the door for obtaining new types of health and geolocation data. Each new acquisition by the company serves not only to increase the amount of services offered under the same banner or to enlarge the userbase, but rather to expand the data ecosystem as a whole and to increase its coverage of data subjects. It extends the reach of the data ecosystem to encompass new markets, new technologies, and (most importantly) new types of data, while making it more difficult for consumers to fully escape its grasp. As an added benefit it prevents promising start-ups from establishing rival data ecosystems of their own.

Since data ecosystems fully integrate personal data into their business model, data protection law and competition law should respond in kind by fully integrating personal data considerations into competition law. If they fail to do so, serious threats to data protection as well as fair competition can slip through the cracks in between the two legal regimes. Only by joining competition and data protection oversight, such as by recognizing privacy-based theories of harm or by taking data ecosystem expansion into account for merger decisions, can the two fields be effectively protected from their new and complex challenges.

Accountability and transparency of information collected by facial recognition technology systems

ABSTRACT. The high level of accuracy reached by facial recognition technology (FRT), especially thanks to the addition of Artificial Intelligence (AI) to these systems, the increasing amount of available data, and growing computing power have widened its scope. The ubiquity of FRT, along with the state of the art of the technology, makes the present time the 'Era of Facial Recognition'. The currently low cost of the technology is an additional key factor in this respect. This technology could potentially undermine the fundamental rights to privacy and data protection, given that it relies on the large-scale processing of sensitive data. As the current European Data Protection Supervisor has recently stated, 'the deployment of this technology so far has been marked by obscurity.’ There is, therefore, a lack of transparency and accountability regarding the collection, processing, use of, and access to the data collected by FRT. Moreover, regulatory efforts on FRT have arisen only after the deployment of the technology. Its pervasive character, already present everywhere through CCTV, and the current level of public exposure through social media (with the massive sharing of individuals’ pictures) have made it almost impossible to trace the origin of the input data. Apart from that, there is a 'Pandora's box' effect that makes it extremely difficult, once the technology is implemented, to withdraw it. There is also a 'space race' effect consisting of increasing investment in research into potential applications of FRT. Due to wide concern about the incompatibility of FRT with fundamental rights and, in particular, with the privacy and data protection principles of transparency and accountability, Europe is currently considering imposing a five-year ban on the use of facial recognition in public spaces, and the Data Protection Authorities have expressed their intention to take a proactive role in the discussion about facial recognition. 
This concern has been expressed by multiple actors at different levels: technology companies, scholars, European authorities, etc. Beyond that, one of the biggest perils of FRT is that all citizens are affected by it. Not only asylum seekers, third-country nationals or criminals are potential targets, but every average person, including the vulnerable (e.g. children). However, both the possible problems and the solutions require further in-depth analysis. This research is framed within the literature on algorithmic accountability and transparency. Since this technology is powered by algorithms, and the relationship between algorithms and these principles, as analysed in that literature, is one of its weakest and most debated points, this seems the most appropriate choice. Besides, I intend to use the criteria present in the General Data Protection Regulation (GDPR), the Policy and Investment Recommendations for Trustworthy Artificial Intelligence and, especially, the Ethics Guidelines for Trustworthy AI by the Independent High-Level Expert Group on AI, as well as various Opinions of the Article 29 Working Party, to assess the (lack of) transparency and accountability of the data collected by FRT.

Fishing in the data lake for tax purposes

ABSTRACT. The growing technological ability to collect, process and extract new and predictive knowledge from big data is changing our society (Council of Europe, 2017). Every day, individuals share and disclose data on their location, payment transactions and communications. These data are valuable not only to commercial companies, but also to governments and to the ways in which they pursue various goals of public interest.

For instance, the present era of big data offers new opportunities for the collection of information by tax authorities. Tax authorities are offered information by the taxpayer and by others, such as employers. Recently, however, they have also started to explore the advantages of large data sets gathered by third parties (e.g. energy suppliers, payment services) (De Raedt, 2016; Goudsmit, 2015; Wisman, 2014). The availability and use of such data could make the global fight against tax fraud more efficient, since it enables tax authorities to identify and cluster taxpayers based on their risk of non-compliance with tax law.

At the same time, the use of big data means that private information will be acquired on a large scale, often without the taxpayer's awareness. Moreover, tax authorities may share this information with other public authorities, who can use the data for their own (unrelated) purposes.

The use of big data by tax authorities therefore gives rise to significant legal issues, one being the possible interference with the right to privacy. In early 2020, a Dutch court in The Hague decided that the automated (tax and social) fraud detection system in the Netherlands (SyRI) breaches the proportionality principle of Article 8 ECHR (Court The Hague, 2020). It furthermore raises questions regarding the right to data protection, since the information gathered by tax authorities may include anonymised, pseudonymised and non-pseudonymised personal data (De Raedt, 2017). Finally, there are questions regarding the principle of the prohibition of fishing expeditions in tax matters (Van Der Sloot, Broeders and Schrijvers, 2016). According to this principle, tax authorities are not allowed to search ("fish") for information the existence of which is uncertain. As a result, the use of these large data sets for tax purposes seems incompatible with this prohibition.

A closer look at this prohibition reveals that it has no generally accepted definition, although authors, legislators and judges often refer to it. For that reason, this paper aims to identify the main characteristics of the prohibition using a selection of relevant policy documents (international, regional and national) and case law of the ECtHR and the CJEU in which the principle is referenced. In each case, a critical analysis of the meaning attributed to the prohibition will be undertaken. In a second step, based on the identified characteristics, the compatibility of big data tax audits with this principle will be evaluated.

15:15-16:45 Session 11A: Regulating Industries and Interoperability
Exploring the Use of Smart Devices by Vulnerable People from a Data Protection Law Perspective

ABSTRACT. This study identifies the most relevant data protection law provisions in the context of smart devices used by vulnerable people and discusses how they should be interpreted. The objective of this paper is to critically analyse the current legal data protection landscape and discuss the obligations of organisations in terms of GDPR compliance in relation to vulnerable people’s personal data. Firstly, this work explains why protecting vulnerable people’s personal data is important in the smart home context. Smart devices will be used more and more often in vulnerable people’s homes, regardless of whether they are designed specifically for them or for the general population (for example, smart door locks, smart alarms or voice assistants). Cybercriminals continue to invent new threats to IoT products and are often successful in overcoming security barriers. Vulnerable people are less able to defend themselves against such data security risks. GDPR compliance is essential to protect vulnerable persons, and it is also crucial for organisations if they want to avoid potential fines and build a reputation as reliable businesses. Secondly, this paper defines the notion of vulnerability in the context of data protection legislation and discusses whether giving a concrete definition of a vulnerable person is possible. While the GDPR mentions children several times and explains how to identify them, it does not provide information on who should be considered a vulnerable adult. Thirdly, this work discusses the various legal bases that can be used to process vulnerable people’s personal data when they use smart devices. For example, an organisation would need to balance its own legitimate interests in processing vulnerable persons’ personal data against the rights, freedoms and interests of the latter. A smart device, such as a motion sensor, can potentially save people’s lives. Could this influence the balancing exercise? 
What should be taken into consideration in this particular context? Fourthly, the study critically analyses the most relevant data protection principles (mainly data minimisation, security, transparency and fairness). In the case of transparency, the Article 29 Working Party (WP29) reminds us that data controllers need to take vulnerable people’s needs into consideration when complying with transparency requirements. What does this entail, more specifically, in the smart home field? For example, concerning screenless smart devices, the WP29 advises providing information orally, through their audio capabilities (where such capabilities exist). It also adds that the data subject should be able to listen again to pre-recorded messages, especially when oral information is delivered to visually impaired persons or vulnerable people who may have problems understanding or accessing written information. To conclude, the special situation of vulnerable people has to be taken into consideration by organisations developing and deploying smart devices. The GDPR is not always clear on how to achieve this. This work tries to clarify the current provisions and to propose certain solutions.

New technologies in legal education: a necessity or merely a trend?

ABSTRACT. Digital technologies have become omnipresent in everyday life and their evolution and development are proceeding at an unprecedented pace. The new skill requirements have forced education policies and systems to adapt to these unprecedented technological changes and the resulting societal needs. However, efforts to implement these adjustments face a number of practical and systemic obstacles.

The modern debate on whether and how digital technologies transform the educational process and our future in general forces us to re-evaluate the form and substance of legal studies. Accordingly, this paper is an attempt to contribute to the scientific discussion on how emerging technologies should shape legal studies and whether the legal education curriculum should adapt to a technology-driven future. The general aim is to formulate a theoretical justification for a legal duty to change legal studies through the lens of the right to education. Acceptability and adaptability, two principles of the right to education, require quality standards and a systematic flexibility in education that cater to the needs of transforming societies. Thus, study programs are expected to provide students with both knowledge and (transferable) skills, which will influence their personal development and which students will apply in their future careers. The necessity of creating an educational landscape that is flexible and that provides learners at all stages with the specific and general skills to face the challenges of a modern workplace is also reflected in a variety of EU policies.

Although it is globally recognized that all levels of education need to readjust and exploit the full potential of new digital technologies so that learners can develop the skills the markets require, the relatively slow pace at which higher education institutions change and adapt in the light of fast-paced technological developments raises serious concerns. Educational institutions and the policies they reflect are still perceived as very conservative, resistant to change, and reluctant to innovate. In the light of the classical notion of the right to education, the unwillingness to tackle the opportunities and challenges brought about by technological developments could be seen as resistance to over-specialized and overly technology-oriented education.

This gives rise to the question whether the emergence and widespread use of new technologies in legal studies today could be understood as a (new) element in the tradition of the liberal arts, and whether it could support a multidisciplinary and flexible approach that successfully prepares the learner for the future labor market and, most importantly, future life. Conversely, the inclusion of technological skills and knowledge in legal education can also be seen as a further step in the direction of an over-specialized and narrow subject education, which will eventually lead to a loss of traditionally humanistic educational values. Most likely, the answer depends on the scale of application of innovative technologies in and for legal education, and on whether enough room is left for diversity and a broader set of competences.

The Invisible Hand/Mind within the Black Box

ABSTRACT. Each financial crisis has been human-made, not the outcome of a metaphysical invisible hand, and this fact will not disappear even if one relies on artificial interpretations of Homo oeconomicus. Just as current developments in Artificial Intelligence (AI) and their interoperability with other technologies have left behind the parameters set by the famous Moore’s Law, we are witnessing an increasing number of proposals that build upon this potential and aim to predict financial crises, improve central banking and develop algorithmic trading, among others. Certainly, this technological potential could influence the speed of financial innovation and, accordingly, within these proposals one can find rather interesting ideas that will be materialized in the near future. After all, from the invention of the abacus to the development of Machine Learning (ML), the evolution of computing has been closely related to the evolution of finance. However, one has to be cautious. We have to consider that most of these proposals contemplate scenarios in the absence of ex post financial innovation and of the stress that characterizes a crisis, looking at past scenarios based on the preferences of the individual authors/developers who are in charge of creating the black boxes that currently lie at the heart of these developments. Personally, I am worried about the systemic implications of these black boxes. As Frank Pasquale notes, “critical decisions are made not on the basis of the data per se, but on the basis of data analyzed algorithmically.” In other words, in calculations coded in software. Most of the time we do not stop to think about the risks of AI playing chess, selecting genetic algorithms or turning on the lights in domestic environments; however, if AI is employed to help make decisions in portfolio structuring or monetary policy, then we need to understand how it reaches those decisions and to consider different views of Calabresi’s cathedral. 
Furthermore, we are assuming that the parameter for our financial black boxes is human financial intelligence, but can we be certain about that? The answer to this question is rather relevant, given that the bias behind financial algorithms, and consequently behind the Intellectual Property Rights (IPRs) underlying financial infrastructures, could set the cornerstone for the next great financial crisis. Risks and their contagion will not depend on Eugene Fama’s rationality, but on the interaction among infrastructures, which are based on “irrationally” coded algorithms, which, in turn, will not react in a coordinated way. Building upon this fact, through this research my aim is to highlight the risks behind the financial black boxes that could give form to the next generation of financial infrastructures and their systemic relevance, and to propose a set of standards to make these structures uniform, coordinate actions and avoid contagion in Machine-to-Machine (M2M) interactions.

15:15-16:45 Session 11B: Interaction and conflict of regulatory interventions
Distributing authority – state sovereignty in the age of blockchain

ABSTRACT. According to the classical understanding of state sovereignty, a state, as an independent political entity, is sovereign when it has full and sole control over its territory. The term can also be defined using the concept of authority. A sovereign state should therefore have sole authority within a particular territory (internally), as well as demonstrate actual independence from outside authority (externally). These aspects of control and authority manifest themselves in, among other things, law-making and law enforcement.

Blockchain technology challenges the concept of state sovereignty, thus understood, in various dimensions. The most salient feature of this technology is its ability to distribute control and authority over data, transactions and networks. Having the potential to improve transactional processes between citizens by eliminating middlemen, to render some traditional legal institutions redundant or even, in the future, to abolish governments completely or establish competing ones, blockchain carries with it both great promises and chilling risks.

In the new reality of a dynamically developing blockchain technology that keeps gaining new use cases, sovereign states respond differently to those promises and risks. While some of them adopt a passive or “wait and see” approach, others try to apply existing legal frameworks or issue guidelines and warnings. Other instruments being used include regulatory sandboxes, attempts at international cooperation and, finally, the introduction of new technology-specific laws. Furthermore, it is noticeable that some governments around the world position themselves as outsiders in relation to the technology (regulating and/or funding its development), whilst others try to embrace it as early adopters.

All of the abovementioned approaches to blockchain are relevant to, and can have an impact on, the sovereign status of countries interacting with the technology. The aim of this paper is to present current and foreseeable problems related to state sovereignty arising from the emergence, development and ever-wider use of blockchain technology. Through a critical analysis of available sources, in particular the literature and selected governmental reports, statements, advice and strategic policies, the paper will map the current situation and debates in this regard. The result of the analysis will be a summary and description of the challenges that blockchain technology poses to the sovereignty of states.

The regulation of artificial intelligence by ethical principles: Are soft law principles a relevant framework?

ABSTRACT. Since the beginning of the last decade, artificial intelligence (AI), the technology which has enabled machines to perform tasks normally attributed to humans, has experienced a real boom due to big data, progress in calculation methods and the development of advanced algorithms. It is the branch of machine learning, and especially deep learning (using neural networks), that has made possible the development of methods of detection, analysis and prediction. These methods have led to opportunities such as facial and object recognition, automatic translation and dialogue, decision support, automatic classification, etc. Because AI raises ethical and legal concerns, different actors have enacted soft law principles. This paper will analyze the content of these principles and ask whether soft law is a relevant framework for regulating AI.

1. Same principles, different actors

AI systems are not infallible and, under the influence of science fiction, people feel concern about them. Discriminatory biases, invasion of privacy, increasing machine autonomy and diluted accountability are among the ethical and legal concerns that individuals may have. In response to these concerns, and in the absence of any existing binding law, many actors have produced ethical principles to provide a framework for AI and make it trustworthy. These principles have been drawn up nationally, regionally and internationally by States, regional organizations, big tech companies and non-governmental organizations. The principles concern transparency, safety and security, privacy, non-discrimination, human control and accountability. This part of the paper will consider the divergent interests among these actors and how this may impact the content and the scope of the principles.

2. Soft law principles, a step toward binding regulation

Ethical principles, as soft-law rules, have characteristics that are particularly useful for the legal regulation of AI. Soft law is an adaptable law, which seems necessary for a complex and evolving technology such as AI. Such rules guide AI development while avoiding a potential chilling effect. AI is an international phenomenon, and soft law provides a flexible process for enacting international rules, with fewer negotiation and jurisdiction issues. Finally, AI actors participate in the development of these principles; because the rules are not imposed on them, they are more acceptable.

However, attention should also be paid to the limits of soft law. The main limitation is that it is non-binding. These rules may therefore not be effective, since there is no liability for non-compliance with them. Moreover, the content of the principles could contradict national or international laws (e.g. a demand for algorithmic transparency could conflict with business secrecy). It should also be noted that private actors have a particular influence on the elaboration of these principles. This could lead to a privatization of legislative power, challenging this prerogative of States and handing power to the strongest actor.

Although ethical principles are an essential tool for the regulation of AI, they are not exempt from certain difficulties. They should therefore be seen as a step towards binding law rather than a self-sufficient mode of regulation.

'Regulating online harmful content'

ABSTRACT. While the internet has brought global freedom to communicate and exchange ideas, its growth has introduced difficulties too; for instance, the expression of personal opinion on social media sites brings with it the risk of ‘fake news’, misinformation, defamation, harassment and invasion of privacy. Major social media and telecommunication companies increasingly compete for business at all costs. Innovation in online services has delivered major benefits to individuals and society, but there is an intensifying global debate over how to address the various problems that people, especially the young, experience online. Issues related to harmful content, including illegal and age-inappropriate content, misleading political advertising, ‘fake news’ and bullying, are particular areas of focus in this paper. The UK Government has said it will legislate to improve internet safety. The paper focuses on several government reports, recommendations and some legislation for broadcasters and social media providers in an attempt to regulate online, traditional terrestrial, on-demand and paid-for services. Several countries, including Germany and the UK, have enacted new legislation in relation to ‘fake news’ and harmful online content. The General Data Protection Regulation (GDPR) has set common rules within the European Economic Area (EEA) on how organizations and individuals can collect, process and store personal data. Global policymakers are considering the competitive dynamics of online markets, amid wider concerns from academics and other experts. In the UK, the Cairncross Review (2019) looked at commercial relationships between online platforms and news publishers. Another government report, on ‘disinformation and fake news’, published by the parliamentary Digital, Culture, Media and Sport Committee (DCMS), focuses on the regulation of social media providers and broadcasting standards for traditional terrestrial and on-demand services. 
Existing broadcast legislation and regulation in the UK tends to be more restrictive than the guidelines under the European Convention on Transfrontier Television. Not all EU member states have ratified the Convention and, unlike an EU Directive, the way the Convention has been incorporated into some domestic legislation means that anyone seeking to rely on it will need to seek local advice in each relevant jurisdiction. The UK Information Commissioner published proposals (April 2019) regarding minimum standards that online services, such as Facebook and Instagram, should meet to protect the privacy of children (under-18s). Germany introduced new legislation which allows for the deletion, from TV, on-demand and paid-for services and social media platforms, of defamatory content, hate speech and extreme sexual content that can harm children, teenagers and vulnerable adults. The Network Enforcement Act 2018 (NetzDG) covers mainly social media platforms but also applies to all for-profit telemedia service providers and pay-TV services. The paper concludes with the question: does the regulation of broadcasting and online services breach freedom of expression? Freedom of expression under Art. 10(1) ECHR includes the communication of opinions and arguments advanced on social media platforms and in traditional broadcasts (see: R (Animal Defenders International) v Secretary of State for Culture, Media and Sport (2008)), and it extends not only to the inoffensive but also to the irritating, the contentious, the eccentric, the heretical, the unwelcome and the provocative, provided it does not tend to provoke violence (see: Redmond-Bate v DPP (1999)). In Otto-Preminger-Institut v Austria (1994), however, the ECtHR concluded, in relation to the obligations expressed in Article 10(2) ECHR, that expressions which are gratuitously offensive to others must be avoided wherever possible. 
The proportionality of an interference may depend on whether there exists a sufficient factual basis for the impugned statement, since even a value judgement without any factual basis to support it may be excessive (see: Dichand v Austria (2002); also: Monnat v Switzerland (2006)).

15:15-16:45 Session 11C: Data Protection and Privacy in the post-BREXIT process
From trust in the contracting party to trust in the code in contract performance: a critical perspective on the applicability of existing laws to smart legal contracts.

ABSTRACT. Smart contracts are deterministic, self-executing computer programs. They can live on a distributed ledger like a blockchain, but they can also exist in traditional database architectures. The difference is that blockchain-based smart contracts can take advantage of blockchain properties. In contrast with traditional databases, the nodes of a distributed ledger update on a peer-to-peer basis, thanks to predetermined and shared rules that coordinate the nodes. There is a ‘single version of the truth’ between the nodes of the network: instead of multiple isolated databases, there are several copies of a single ledger. Parties do not have to trust that all the databases remain identical. Turning to smart contracts, while normally it is necessary to trust that the same sets of code running on different network infrastructures do not differ, blockchain allows code to be embedded in a single ledger, without duplication. Therefore, blockchain-based smart contracts can be executed in the same way by all nodes. In addition, the added value of blockchain compared to other distributed ledgers is that it can guarantee reliable coordination among nodes, thanks to its tamper-resistance. Links between the blocks through hashes make the system resistant to malicious nodes. Hence, once added to the blockchain, smart contract code cannot be altered or modified unilaterally. As a consequence, smart contracts cannot avoid execution or execute incorrectly. This property allows not only identical execution, but reliable identical execution. When smart contracts are used as smart legal contracts (i.e. when they are applied to the contractual domain), it is asserted that no single party is in absolute control of the blockchain or can interrupt or modify the execution of the smart contract code. Authors therefore state that smart legal contracts are self-enforcing and trustless because they remove the need to trust that the other party will perform the contract. 
There is a shift from trust in the counterparty to trust in the code. For this reason, some studies claim that existing legal remedies for non-performance, and the specific European laws on e-commerce and consumer contracts that serve to compensate for the lack of trust in the supplier, are not tailored to smart contracts. The present study aims to verify the validity of this assumption by adopting a different approach. Indeed, legal experts usually restrict their investigations to the characteristics of the technology, without paying attention to how the technology is used. They do not go beyond the permissionless/permissioned dichotomy, which is based on technical features. Instead, the work will first clarify what ‘decentralisation’ means when it refers to technology, and how it differs from decentralised governance. It will then focus the analysis on four concrete scenarios, rather than on the technology itself. The scenarios have been elaborated by observing how the market is currently developing smart legal contracts. For each scenario, it will be reconsidered whether existing laws are difficult to apply meaningfully to smart legal contracts.
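The tamper-resistance the abstract relies on, blocks linked through hashes so that retroactive changes become detectable, can be illustrated with a minimal sketch. This is an illustrative toy (the function names and block layout are the author's own assumptions, not any real blockchain implementation), but it shows why a unilaterally altered record breaks every subsequent link:

```python
import hashlib
import json

def block_hash(index, prev_hash, payload):
    """Hash a block's contents together with the previous block's hash."""
    record = json.dumps({"index": index, "prev": prev_hash, "payload": payload},
                        sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def build_chain(payloads):
    """Build a list of blocks, each storing the hash of its predecessor."""
    chain, prev = [], "0" * 64  # conventional all-zero genesis predecessor
    for i, p in enumerate(payloads):
        h = block_hash(i, prev, p)
        chain.append({"index": i, "prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Recomputing every hash exposes any retroactive modification."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or block_hash(b["index"], prev, b["payload"]) != b["hash"]:
            return False
        prev = b["hash"]
    return True

chain = build_chain(["contract code v1", "transfer 10", "transfer 5"])
assert is_valid(chain)
chain[1]["payload"] = "transfer 1000"  # a malicious node rewrites history...
assert not is_valid(chain)             # ...and the hash links no longer check out
```

In a real network this verification is performed independently by every node, which is what makes unilateral alteration of deployed smart contract code impractical rather than merely detectable.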

Technology-mediated trust

ABSTRACT. This paper considers the impact of digital technologies on the interpersonal and institutional logics of trust production. It introduces the new theoretical concept of technology-mediated trust to analyze the role of complex techno-social assemblages in trust production and distrust management. The first part of the paper argues that globalization and digitalization have unleashed a trust crisis, as the traditional institutional and interpersonal logics are not attuned to dealing with the risks introduced by the prevalence of digital technologies, and they are struggling to produce trust and manage distrust at the scale and speed of the global networks of communication, media, finance, etc. In the second part, the paper describes how digital intermediation changes the traditional logics of interpersonal trust formation and creates new trust-mediating services. It also considers how old institutional forms of trust production are affected by the fact that these institutions employ digital technologies in their processes, and describes new, technological forms of institutional trust production. Finally, the paper asks: why should we trust these technological trust mediators? The analysis concludes that, at best, it is impossible to establish the trustworthiness of trust mediators and that, at worst, we have no reason to trust them.

Reimagining Electronic Communications Regulatory Environment with AI: Self-Regulation Embedded in “Techno-Regulation”

ABSTRACT. In 2019, Roger Brownsword published a book in which he described the transformation of regulatory environments via technology and the reimagining of legal values and legal rules that must accompany this transformation. He described “techno-regulation” as a far more effective tool for managing risks than traditional regulation. In “techno-regulation”, the regulator expects perfect control and the elimination of non-compliance by employing a particular technology, whereas regulatees may have only a limited capacity to damage, disrupt or circumvent the technology put in place. This article demonstrates how Brownsword’s description of “techno-regulation” matches current trends in radio spectrum management and argues that radio spectrum management is the precursor of all technologically managed environments. Radio spectrum is a national natural resource. Technologically, it can be used only in certain ways, and its width can satisfy only a limited demand. The upcoming decades will bring a whole range of new uses of radio spectrum, for example in the areas of autonomous transport, smart cities and homes, or the Internet of Things, which will place increased demands on the “quantity” and “quality” of spectrum usage. Sharing radio spectrum and accessing it at low barriers is the future (and also already a partially implemented solution). Within shared spectrum bands, technologies and users must learn to coexist and manage potential interferences. For example, in the 5.8 GHz band, RLAN / WiFi applications (i.e. common applications used to access the Internet) must coexist with space stations (satellites), the non-civil sector (military) and electronic toll collection systems. To solve the coexistence and upcoming challenges in radio spectrum management, the regulator espouses an experimental form of “techno-regulation” that corresponds to the second generation of regulation per Brownsword. 
It turns out that centralized regulation in electronic communications has only limited effectiveness with regard to the number of regulatees, their geographical location and the future development of the electronic communications market. AI is therefore used as a technology to enable users to fill in regulatory gaps and enhance market opportunities. With AI, the enforcement of legal rules in relation to the use of a national resource is shifting significantly to a technological level with self-regulatory elements. The national regulator plays an important role in creating the environment for “techno-regulation” using AI technologies, for example by ensuring that information about radio spectrum usage is publicly available while mitigating the risks posed to cybersecurity and critical infrastructure by the release of such information, or by creating appropriate coordination tools. The article identifies the normative and non-normative dimensions of technologically regulated radio spectrum and comments on the specificities of radio spectrum as a technologically managed environment. The article assesses the driving forces behind the current trends in radio spectrum management, as well as the regulatory fitness of the “techno-regulatory” approaches to it. Finally, the article describes how regulatees attempt to disrupt, hack or otherwise compromise the technology deployed by the regulator, and how radio spectrum management achieves control over compliance in the transformed environment.

15:15-16:45 Session 11D: IPR looking at the future
Should consent for data processing be privileged in health research? A comparative legal analysis

ABSTRACT. Several national data protection laws appear to afford a privileged position to scientific research, including health research. Provisions that might otherwise apply to data subjects and data controllers, including rights exercisable by data subjects against controllers, are lifted or lessened. However, when it comes to considering whether consent should serve as the lawful basis for processing data in the health research context, a fair degree of policy and regulatory divergence emerges. This divergence seems to stem from a normative link that some draw between consent as a research ethics principle and consent as a lawful basis in data protection law.

We look at the EU General Data Protection Regulation (GDPR) and three national laws, either implementing the GDPR or inspired by it, to provide points of comparison: South Africa’s Protection of Personal Information Act, 2013, the UK’s Data Protection Act 2018 and Ireland’s Health Research Regulations 2018. We supplement this analysis by considering other relevant laws and regulations governing health research in these jurisdictions.

Under each of the POPIA, the DPA 2018, and the Irish Health Research Regulations 2018, scientific research may qualify as an exemption, parallel to data subjects’ consent, from the general prohibition on processing of personal data. However, the differences in the conditions and restrictions imposed on the research exemption show a nuanced divergence in the policy choice regarding the extent to which consent should be treated as a favoured option. While the GDPR’s default position shows little regulatory preference (or “bias”) for consent as the appropriate protective measure in the case of research, a more moderate approach is adopted by South Africa’s POPIA and the UK’s DPA 2018, under which the processing of sensitive data for research purposes faces a lighter regulatory touch, depending on whether a public interest can be identified. At the other extreme of the spectrum, the Irish Health Research Regulations 2018 represent a starkly different approach that strongly favours consent as the primary choice of safeguard.

Such divergence in how consent is privileged across jurisdictions shows the difficulties in balancing individual and public interests. We argue that there is merit in distinguishing research ethics consent from data processing consent, to avoid what we call “consent misconception”, and advocate a middle-ground approach in data protection law, i.e. one that does not mandate consent as the lawful basis for processing personal data in health research projects but does encourage it. This approach achieves the best balance between protecting data subject/research participant rights and interests and promoting socially valuable health research.

The Threshold is the Place to Pause

ABSTRACT. In considering the implications of artificial intelligence on the workplace, one is struck by the confused intermingling of outcomes and ambitions. This presentation critically assesses the aims of technological innovation (in particular artificial intelligence) as applied to the workplace, focusing on the idea that artificial intelligence is an ultimate goal for such innovations. Artificial intelligence is understood here as one part of the larger digitalization movement: a collective term for artificial intelligence, robotization and new technologies.

Ambitions and realities
The disparity between perceptions and current realities shows how discussions of the digitalized workplace have been largely prospective. Communications technologies (most pervasively, social media) are at present the most widely used form of new technology. They facilitate information sharing and personal connections. The smart phone has been an entry point for digitalization because it is a more accessible form of technology. The widespread current use of social media indicates not only the stage of digitalization but also the present phase of ‘industrial revolution’, to adapt Klaus Schwab’s language of a fourth industrial revolution. Artificial intelligence (data collection, assessment and intelligent oversight technologies) is more commonly identified as the positive future: an innovation that will improve the efficiency of and output from the workplace.

Stages of digitalization of the workplace
Viewed in the early 21st century, digitalization of the workplace is a multi-stage process. The first stage (innovations in communications) has seen a change in the means by which communications are conducted: there are now multiple, rapid and reliable platforms for connecting with others around the world. Overlapping with the first, the second stage recognizes the data created by these communications technologies, which leads to creating means of canvassing and packaging this data for useable secondary applications. Algorithms are developed for these purposes. A third stage (performance and predictive analytics) considers whether this raw data may be aggregated and classified in ways that may lead to predictive outcomes. Algorithms increase in their sophistication by including workplace performance incentives. A fourth stage (human oversight and human-displacing technologies) extends the applications of performance and predictive analytics. Attention turns to adapting the third-stage data to more complex algorithms that can modify themselves and create new algorithms in response to data. Precision performance mechanisms (nudging of workers) are honed and expanded for use in professional types of workplaces. Efforts turn to human-replacing technologies on a wider scale, premised on the unreliability of humans.

The Present as a Point of Expectation
Using these categories, we are firmly in the first stage and have ventured some way into the second (use of algorithms) and third (performance nudging), with ambitions for human oversight or human-displacing technologies. One noteworthy aspect of this movement is the seemingly unwavering confidence that we are headed in the optimal direction. The law remains somewhat behind regarding the intersection of information technology and the workplace. The General Data Protection Regulation (GDPR) is of questionable adequacy as it concerns the workplace. Instead, the GDPR facilitates the ongoing trend in employment law of offering wider protection for commercial purposes than for sustainable workplace rights. If these decisions are made by way of artificial intelligence, persistent monitoring must accompany such forms of decision-making so that the factors considered are adequately balanced between capital and social interests.

Drawing up a national ethical framework for the development and deployment of trustworthy and responsible AI: Not a case of one-size-fits-all

ABSTRACT. A governance framework for the development and deployment of Artificial Intelligence (AI) is essential. This framework serves as a precursor to the emerging area of AI Law. Whilst AI improves our lives by creating unprecedented opportunities, it also raises new ethical dilemmas for our society. AI and its capacity for algorithmic regulation, whether reactive or predictive, raise misgivings and manifold concerns. A framework of this kind aims to realise the use of AI in society for altruistic purposes whilst minimising the negative effects and risks. The adoption of an ethical framework is a vital step in taking a position on core principles of an algorithm by design. A failure to assess the development and deployment of an AI algorithm against this framework may result in legal liability and accountability. The paper aims to make a case for the need to adopt a framework for AI governance at a national level and to ensure that the national policy is aligned with emerging international and regional standards. The research objectives include, firstly, an appreciation of the historical and contemporary discourse on the role of ethics in regulating algorithms, including an assessment of the ethical and legal dilemmas presented by algorithmic regulation; secondly, a review of leading frameworks including, inter alia, those of the OECD, the European Commission, the UK Parliament’s Select Committee on AI and the Asilomar AI Principles, to identify core principles and draw out a list of commonalities and distinctions within these frameworks; and finally, the challenges in drawing up a national framework, with special consideration of whether the Malaysian environment requires any additional principle(s) unique to its national values and aspirations, or alternatively, a deviation from already prescribed frameworks.

15:15-16:45 Session 11E: Individual vs. Society or Individual with Society?
Moving Beyond the Absence of “Evil”- Social Media and Privacy By Design

ABSTRACT. This paper is an exploration of what it would mean to develop a social media platform that ab initio was intended to guarantee privacy by design. The paper is intended as a precursor to the development (at least in prototype form) of such a platform and as such is aimed at inviting discussion and contribution from the audience as well as presenting the views of the authors. The topics to be addressed are as follows: 1) What is privacy by design, and how has social media engaged with the concept thus far? 2) What economic, legal and social phenomena underpin the present model of social media design? (We shall be considering the lack of default systems limitation as a check on abuse, the market incentive of surveillance capitalism, and their impact on key social media players.) 3) Finally, the paper will touch on the non-financial costs of privacy in social media: for example, is “fake news”/hate speech an inevitable problem for this kind of platform?

The central hypotheses of the work are quite straightforward. First, despite its use of the phrase “privacy by design”, the GDPR actually undermines/disincentivises its application because of the very broad-ranging exceptions permitted under Article 25(1), not least because Article 25 focuses on “the state of the art” rather than steering the art, a common criticism of the regulatory frameworks facilitated by privacy by design. Secondly, it has been broadly recognised that for many social media platforms the users and their data have de facto become the product/marketized element of the business; however, this has largely been due to excessive reliance on neo-liberal models of consent colliding with user indifference and impotence. It is the contention of this paper that with a positively benign model of corporate governance (as opposed to a passively non-malignant one) functional business models can be developed that do not rely upon surveillance capitalism. It should be noted that we do not consider this model a panacea for all the ills of social media: targeted misinformation and the algorithmic targeting of inciting material will become significantly more difficult under a model of true privacy by design, and whilst users will inevitably “leak”, systems should prevent their exploitation by outside forces. Finally, should this model of development prove viable, there is potential for further research into the use of algorithmic solutions to facilitate positive effects of social media; for example, digital media fragmentation and immediacy allow the identification and correction of fake news.

Digital Constitutionalism and Internet Bills of Rights: The Force of Declarations

ABSTRACT. We need a ‘constitution’ for the Internet. This is the claim that over the past few years has frequently resonated in the words of various digital literati. In the Internet, both nation-states and big technology companies restrict individual fundamental rights in multifarious ways, consequently generating a series of intrinsically ‘constitutional’ questions: how to protect human dignity in the digital society? How to preserve individual autonomy? Which essential guarantees do we need to safeguard our fundamental rights? Some scholars advocated the need to draft a ‘bill of rights’ for the Internet, a written document on the model of the ancient declarations of rights of the eighteenth century. Following this appeal, many have crafted their own decalogue of digital rights. Interestingly, this practice seems to be à la mode at the moment. We can count almost two hundred of these declarations. This paper explores what the added-value of the Internet bills of rights is in the constitutional ecosystem, ultimately unveiling the ‘force’ of these declarations, an old constitutional instrument entrusted with a truly contemporary mission. The first part of this paper will start with a reflection on the evolution of constitutionalism in the context of the digital society. It will be explained that constitutionalism is an ideology that evolved over time. Its quintessential principles gradually ripened; they were enriched by new values and changed to face the evolving complexities of society. It will be contended that, today, constitutional values designed for an analogue society struggle to deliver their prescriptive message in the digital environment. The concept of ‘digital constitutionalism’ will then be proposed to denote the new movement of thought that advocates for the translation of these principles in the context of the digital society. It will be clarified that we are not facing a radical change of paradigm. 
Digital constitutionalism will be presented as a theory that champions the evolution of core constitutional values and their translation into principles capable of informing the process of constitutionalisation of the digital society. The second part of this paper will then analyse the role of the Internet bills of rights within this complex process. It will be posited that these declarations, by adopting the typical language of constitutions, seek to be part of the current conversation on how to translate the core values of contemporary constitutionalism into the context of the digital society. However, in contrast to other constitutional instruments, the Internet bills of rights are neither legally binding sources nor the output of institutionalised processes of deliberation. They thus appear as a more ductile instrument whereby their promoters are free to experiment with new legal solutions in a gradual way, through a multitude of initiatives, and in a more democratic manner, including actors beyond the worlds of politics and business. In relation to other mechanisms of constitutionalisation, it will be argued that the Internet bills of rights ultimately play a compensatory and stimulatory role, fostering a debate on how to adapt existing norms and highlighting areas of significant discrepancy between binding legal texts and social reality.

Algorithmic Opacity, Private Accountability, and Corporate Sustainability Disclosure in the Age of Artificial Intelligence

ABSTRACT. Today, firms develop machine learning algorithms in nearly every industry to control human decisions, creating a structural tension between commercial opacity and democratic transparency. In many business applications, advanced algorithms are technically complicated and privately owned, hidden from legal regimes and insulated from public scrutiny, even though they may erode democratic norms, damage financial interests, and extend harms to stakeholders without warning. Moreover, because the inner workings and applications of algorithms are generally incomprehensible and protected as trade secrets, they can be completely shielded from public oversight. One solution to this conflict between algorithmic opacity and democratic transparency is an effective mechanism that incentivizes firms to engage in information disclosure about their algorithms.

This article argues that the pressing problem of algorithmic opacity is due to the regulatory void of U.S. disclosure regulations that fail to consider the informational needs of stakeholders in the age of AI. In a world of privately owned algorithms, advanced algorithms as the primary source of decision-making power have produced various perils for the public and firms themselves, particularly in the context of the capital market. While the current disclosure framework has not considered the informational needs associated with algorithmic opacity, this article argues that algorithmic disclosure under corporate securities law should be used as a tool to promote algorithmic accountability and foster social interest in sustainability.

First, as I discuss, advanced machine learning algorithms have been widely applied in AI systems in many critical industries, including financial services, medical services, and transportation services. Second, despite the growing pervasiveness of algorithms, the laws, particularly intellectual property laws, continue to encourage the existence of algorithmic opacity. Although the protection of trade secrecy in algorithms seems beneficial for firms to create competitive advantage, as I examine, it has proven deleterious for society, where democratic norms such as privacy, equality, and safety are now being compromised by invisible algorithms that no one can ever scrutinize. Third, although the emerging perils of algorithmic opacity are much more catastrophic and messy than before, the current disclosure framework in the context of corporate securities laws fails to consider the informational needs of the stakeholders for advanced algorithms in AI systems.

In this vein, through the lens of the SEC disclosure framework, this article proposes a new disclosure framework for machine-learning-algorithm-based AI systems that considers the technical traits of advanced algorithms, potential dangers of AI systems, and regulatory governance systems in light of increasing AI incidents. Towards this goal, I discuss numerous disclosure topics, analyze key disclosure reports, and propose new principles to help reduce algorithmic opacity, including stakeholder consideration, sustainability consideration, comprehensible disclosure, and minimum necessary disclosure, which I argue can ultimately strike a balance between democratic values in transparency and private interests in opacity. This article concludes with a discussion of the impacts, limitations, and possibilities of using the new disclosure framework to regulate private accountability and sustainability in the AI era.

15:15-16:45 Session 11F: Regulation of Technological innovation
Crossing over to the dark side: averting dark patterns through the force of consumer protection law

ABSTRACT. Dark patterns have been defined as “tricks used in websites and apps that make you do things that you didn't mean to, like buying or signing up for something”. Much of the academic scholarship on the regulation of so-called “dark patterns” has focussed on data protection legislation. Yet another area of law, consumer protection, is also aimed at fostering user empowerment. In this paper, we analyse whether and to what extent the current EU Consumer Protection acquis is well-placed to make a substantial and/or complementary contribution towards curtailing the use of dark patterns. The European Commission’s adoption of a “new deal for consumers” is aimed at strengthening enforcement of EU consumer law and modernising the EU’s consumer protection rules in view of market developments. Alongside the Unfair Commercial Practices Directive, the Consumer Rights Directive, and other legal instruments, we consider different types of dark patterns to assess the application and fitness for purpose of the EU Consumer Law acquis to act as an effective deterrent and/or sanctioning mechanism.

Common elements of consumer protection legislation include requirements of clarity and comprehensibility of information and commercial communications online, rules on truthful advertising and against misleading and aggressive commercial practices, as well as rules that go to the substance of contract law. In order to analyse dark patterns through the lens of consumer protection law, in this paper we identify common types of dark patterns used on digital platforms (e.g. forced registration, hidden legalese stipulations, etc.). This patchwork of legislation that makes up the EU consumer protection acquis is analysed in terms of its applicability to the utilisation of dark patterns, and the effectiveness of the related mechanisms of enforcement.

While assessing whether and to what extent the application of Directive 2019/2161 will strengthen tools made available by the Consumer Protection acquis to combat the use of dark patterns, we conclude that various tools in the legal arsenal should be utilised to maximum effect to ensure a trusted environment essential to the continued development and success of the digital single market. This will ensure design processes and user interfaces are developed for digital environments that respect and uphold the interests of consumers interacting therewith.

Consumer Protection, Algorithmic Creditworthiness Assessment, and the Regulation of Technological Innovation

ABSTRACT. Consumer scores describe individuals or groups in order to predict, on the basis of their data, behaviors and outcomes. Scores use information about consumer characteristics and attributes by means of statistical models that produce a range of numeric scores, and they proliferate in day-to-day interactions: in the US alone, roughly 140 scoring algorithms are implemented for a wide range of services, and the most advanced of them can elaborate up to 8,000 individual variables. In particular, credit-scoring systems are used to evaluate individuals’ creditworthiness for access to finance. Credit scores are implemented by both institutional operators and emerging P2P lending platforms: a positive credit score represents an essential means of access to credit, and therefore a tool for individual and social development. At the same time, not everyone can be allowed to access credit under the same conditions: individuals and businesses must be distinguished based on predictions regarding their likelihood to repay loans, and the characteristics of their credit must be determined accordingly. When effective, good evaluations enable lenders to respond promptly to market conditions and customer needs: both lenders and borrowers stand to benefit. These new forms of scoring provide an opportunity for access to credit to transparent and unbanked individuals without a consistent credit history, promoting forms of inclusion. Nevertheless, they entail significant risks: lenders should be attentive to avoiding disparate impacts and unfair outcomes, while at the same time considering how to comply with the obligations of disclosure and transparency towards consumers. Lastly, how these new systems (e.g. scores implementing aggregated data and scores based on indirect proxies for sensitive factors) fit in the current European regulatory framework is still largely uncertain. 
Striking the balance between conflicting interests and reaching the optimal level of access to credit poses a fundamental challenge for regulators. In order to provide a normative response to these concerns, the paper provides an overview of credit scoring algorithms and their role in promoting access to credit and financial inclusion, comparing them with already existing tools (such as the FICO score). Then, it investigates the similarities between consumer-scoring systems and the activity conducted by credit rating agencies (subject to careful examination by regulators in the US and in Europe following the 2008 financial crisis) to develop a regulatory proposal. In particular (arguing in favor of an intervention by analogy), the research illustrates a tentative regulatory model that takes advantage of the framework delineated by Regulation (EU) No 462/2013 for credit rating agencies as a matrix to develop specific obligations for companies involved in the development and use of consumer-scoring algorithms, ultimately allowing information on credit scoring methodologies and relevant data to be monitored and audited by consumers and supervisory agencies.

Is Flexible Regulation for Artificial Intelligence-Based Medical Products Appropriate? Examining the US FDA’s Proposed Use of Principles-Based Regulation

ABSTRACT. Artificial intelligence (AI) has radically accelerated the development of novel therapeutic and diagnostic software in medicine. Yet, the continuously learning and updating nature of AI defies existing medical device regulatory regimes across jurisdictions, which assume products remain stable following approval or clearance by the state. Safety and effectiveness issues arising from black-box uncertainty in AI and underdeveloped postmarket oversight for devices create an urgent need for new regulatory programs to oversee evolving risks, if regulators are to allow iteratively learning medical devices to enter the market. In April 2019, the US Food and Drug Administration (FDA) proposed a framework to perpetually manage AI-based software as a medical device (SaMD) over the product’s entire lifecycle. Without expressly indicating so, the FDA proposal adopts a principles-based regulatory model for AI-based SaMD, alongside more detailed prescriptive rules, to capture the continuously updating character of AI while decreasing agency resource consumption.

Principles-based regulation (PBR) ideally offers greater flexibility, transparency, and efficiency in its design and enforcement by synthesizing broad-based rules, an outcomes-orientation, and meta-regulatory design. However, PBR has also received significant criticism from scholars and policymakers since the 2008 global financial crisis for insufficiently identifying and managing risks. Regulated entities may not support principles-based approaches either, as the intentional imprecision in their design can confound firms’ compliance efforts and result in inconsistent enforcement outcomes. Nevertheless, the unique challenges, opportunities, and uncertainties presented by AI-based medical products may merit a reconsideration of the applicability of principles-based approaches for some intended uses of SaMD.

This paper will characterize and assess the regulatory model in the FDA’s proposed plan for AI-based SaMD, contextualize this descriptive analysis with a theoretical exposition of when and how to best deploy PBR, and consider how the application of AI to medical devices may modulate these established indications for PBR. The paper will argue that a principles-based approach is not inherently better than other regulatory designs but could offer an appealing management option for emerging AI-based medical products in an ideal institutional and organizational setting. Pragmatic considerations and real-world context, however, may require supplementing PBR with other strategies.

16:45-18:15 Session 12A: Protection of individuals and new technologies
Conceptualising copyright exceptions as user rights: the impact of contracting out

ABSTRACT. It is increasingly common for copyright exceptions to be conceptualised as user rights. In a landmark 2004 decision, the Canadian Supreme Court characterised the fair dealing defence as a ‘user right’ in CCH Canadian Ltd v Law Society of Upper Canada. In 2014, the CJEU described exceptions as ‘ancillary rights’ in Technische Universität Darmstadt v Eugen Ulmer. More recently, in 2019, the CJEU reiterated that copyright exceptions confer rights on users in Funke Medien v Bundesrepublik Deutschland. But what does it mean to characterise a fair dealing exception as a user right? And what might the consequences of such a characterisation be for understanding the relationship between exceptions and inconsistent contractual provisions?

This paper builds upon work presented at BILETA 2018, incorporating insights drawn from the recent trio of 2019 CJEU copyright cases on copyright exceptions and the now-adopted Copyright in the Digital Single Market Directive. The analysis draws upon Wesley Hohfeld’s taxonomy of legal relations and engages with Zohar Efroni’s access-right paradigm and Pascale Chapdelaine’s investigation of copyright user rights. In doing so, it attempts to provide clarity as to the nature of copyright exceptions and their potential status as rights, focusing specifically on the scope of the duties that this might imply for copyright owners. It then considers recent legislative developments in the EU and the UK which make contractual provisions unenforceable where they operate inconsistently with the exercise of specific copyright exceptions, and considers how such developments might be understood within this framework.

The Legal Response to Online Disinformation in Germany, France and Taiwan

ABSTRACT. Spreading disinformation in order to gain advantage from it is nothing new in human history, and there are various laws addressing this issue. Nonetheless, the setting of the internet has made the issue more challenging for the law to tackle. Government reports imply that, given the nature of the internet, cyberspace has become a “fifth domain” which poses unconventional threats, especially to democratic societies. Among the threats identified, online disinformation is often reported to have had an impact on democratic elections, such as the 2016 US presidential election and the Brexit referendum. Although the internet was once hailed for creating an open and fair speech environment for all users and for helping democratization, the fact that every internet user can post content accessible to the public now seems to be a problem. How to effectively curb online disinformation without jeopardizing the freedom of speech remains a very difficult question for lawmakers across the world. In this paper, I will start by presenting cases of the spreading of online disinformation that attempted to sway the 2018 local elections and the 2020 presidential election in Taiwan. This is followed by a series of discussions of various regulatory models to address online disinformation. The first one introduced is the Network Enforcement Act in Germany, followed by the French method of combating the manipulation of online information. Subsequently, I will introduce the legal amendments passed throughout 2019 in Taiwan to tackle online disinformation and discuss the main differences between these legal models. In the conclusion, I will present the outcomes of the different legal models and discuss whether they are effective and whether the freedom of speech remains intact.


ABSTRACT. Online news media are using recommender systems to personalize the news. The data-driven and automated character of recommender systems changes the extent to which news users are free in their interaction with news media. If a recommender system makes the choice for Jael on what to read next, the system might pre-empt Jael’s own free choice. But the recommender system makes these suggestions based on Jael’s behaviour, and the system aims to present Jael with the news items that they would have chosen themselves. Furthermore, Jael is free not to follow the suggestions of the recommender system and to choose other news items, at least depending on the design and user interface of the system.

The question thus arises if recommender systems promote or limit the freedom of online news users like Jael. The answer to this question depends on our notion of freedom. The concept of freedom as non-interference means that someone’s freedom is limited only when another actor interferes with their life or choices. Under freedom as non-interference, news recommender systems limit news users’ freedom only if these systems actually interfere with their life or choices. For example, if Jael receives a selection of news articles that are not what they want to read and what they need to become an informed citizen, then the system interferes with Jael’s right to receive information. According to the concept of freedom as non-interference, Jael is then less free in exercising this right. However, if Jael receives a selection of news articles that are exactly what they want, while there is maybe a small risk that they receive the wrong articles but this risk never materializes, then under a concept of freedom as non-interference, Jael’s freedom is not affected. By contrast, the concept of freedom as non-domination means that someone’s freedom is limited when they are at the mercy of the power of another actor. For example, if Jael receives a selection of news items that are exactly what they want, but Jael is also aware that there is a risk that the recommender system sends them a selection of non-relevant or manipulative articles or will not respect their privacy rights, then under a concept of freedom as non-domination, Jael’s freedom is decreased. Under a concept of freedom as non-domination, the actual interference does not need to happen to recognize a limitation on freedom.

To answer the question whether recommender systems promote or limit the freedom of online news users, this paper is structured as follows. Section 2 discusses in brief how recommender systems are part of a changing media system, in which journalistic media become more responsive to their audiences. Section 3 contrasts freedom as non-interference with freedom as non-domination. Section 4 discusses how, through the lens of freedom as non-domination, the use of recommender systems decreases the freedom of news users to exercise their rights in various ways. Section 5 is more forward-looking and considers how the law could help news users to regain some of their freedoms.

16:45-18:15 Session 12B: Online Regulation
Self-generated Imagery of Children: Reconciling the Law, Norms & Safety

ABSTRACT. This paper examines the issue of self-generated nude imagery of minors, with a view to proposing a regulatory framework that protects children and provides confidence in the law for victims. The literature on the prevalence of ‘sexting’ among older teenagers is somewhat inconsistent: some studies suggest that sexting is prevalent among older teenagers within relationships, while others argue that the threat may be exaggerated. There is, however, clear evidence that an alarming number of images identified by hotlines as child sexual abuse/exploitation (CSAE) material were self-produced by children.

The laws that criminalise the production, distribution and possession of CSAE material were originally enacted with the objective of protecting children from predatory adults. Advances in technology and shifting societal norms now mean that such content is created not just by predatory adults, but by children themselves. A strict application of the law would mean that self-generated nude imagery would be classed as ‘indecent photographs’ (under UK law), rendering its production, distribution and possession criminal offences. In the case of ‘adolescent sexting’, therefore, both the sender and the recipient of the image may be committing one or more offences of producing, distributing and possessing an indecent photograph of a child. A peculiar situation arises where the recipient re-distributes the image without the consent of the other party (as would typically be the case with ‘revenge pornography’): the victim may find it difficult to report the matter to the police, because the creator/sender of the image has also technically committed the offence of taking an indecent photograph in the first place, regardless of the fact that it was self-generated and shared within the perceived safety of a relationship. A strict application of the law can thus be counter-productive to child protection in this instance.

The time has come to examine closely the regulation of self-generated sexual imagery of children. It is clear that the law has led to unintended consequences for child safety. Prosecutorial discretion or charging practices intended to protect children go some way to mitigate the problem, but do not fully alleviate the concerns in this regard. Reforming the law to decriminalise self-generated imagery as a whole is also problematic. This paper will shed light on some of the legal, social, and policy considerations in this context and will propose a framework for reform in this area. This would include not just changes in the law, but also calls for a greater role for ‘norms’ and ‘architecture’ as regulatory tools in order to achieve meaningful and proportionate outcomes.

“Of course you have nothing to hide – but then again, it’s not about you”: A critical view of consent as “individual control” in EU data protection

ABSTRACT. In EU data protection law, the notion of individuals’ control over any personal data relating to them is central to the way in which the right to data protection is framed. Article 8 of the EU Charter of Fundamental Rights specifically provides that personal data must be processed “on the basis of the consent of the person concerned” and grants individuals rights of access to and rectification of their data.

In legal theory terms, this means that EU law gives quasi-constitutional status to the conception of an individual’s relationship to their data as one of “authority over”. In practical terms, this increasingly means that individuals exercise this control by authorising the use of their personal data for a variety of purposes and in exchange for a range of actual or perceived benefits. What follows from this is the development of a highly individualised, “neoliberal” concept of individual control, where such trade-offs are viewed, if not as the exercise of individual “property rights” in data, then as an expression of individual liberty.

However, this fails to take into account that while these trade-offs may benefit, in whatever way, the individuals themselves, they may ultimately result in long-term harm not just to those individuals but also to the interests of others and/or the public interest. Examples include the use of online tracking data for political microtargeting, and the further use of, say, health data collected by devices like “Fitbit” or services like 23andme to establish patterns that will ultimately be used for automated decision-making affecting even those who have not individually consented to the use of their own data for this purpose. The question therefore arises to what extent data protection law should be able to limit data uses based on individuals’ consent, where the negative repercussions of those data uses potentially go beyond harm to the relevant data subject.

This paper will examine whether the conceptualisation of the right to data protection as an individual’s “right to control” adequately reflects the historical objectives of “information privacy” and “informational self-determination”. What do we ultimately seek to protect when we protect those rights? And is an understanding of individual control as an absolute right actually necessary, or indeed desirable? Just because *I* may have nothing to hide, does this mean that others have nothing to fear from the data uses I authorise?