BILETA 2020: REGULATING TRANSITIONS IN TECHNOLOGY AND LAW
PROGRAM FOR TUESDAY, APRIL 7TH


10:15-10:30 Coffee Break
10:30-12:00 Session 3A: Cybercrime and Cybersecurity
10:30
Regulating Global Export Control of Cyber Surveillance Technology – The Wassenaar Arrangement and the Disrupted Triangular Dialogue

ABSTRACT. This paper investigates how states use and develop export control regimes to regulate the global transfer of cyber surveillance technology – broadly defined as software, hardware and technology used by intelligence and law enforcement agencies, or by network operators acting under their direction, to covertly monitor, exploit or analyze data that is stored, processed and transferred through ICT means. It is a test case assessing the (in)ability of conventional export control mechanisms to address cyber policy issues sitting at the intersection of security, human rights, and trade and business interests, and affecting myriad stakeholders.

The origin of the current discussion traces back to the early-to-mid 2010s. In the aftermath of the Arab Spring movement that swept across the Middle East and North Africa, the growing sales, production and trade of cyber surveillance items, which used to evade public scrutiny, were brought into the spotlight. Political controversy and civil advocacy leveled against the large-scale exploitation of cyber surveillance technology by authoritarian governments, and against the role that many commercial surveillance companies played in supplying such technology to them, led to a major reformative moment for export control regimes. In the following years, certain types of cyber surveillance items have been added to control lists both internationally and domestically. Yet mass cyber surveillance now appears in a wider variety of destination countries, often with a breadth and depth that leads to serious human rights violations. The entire process also continues to be facilitated by commercial surveillance companies – those of Western and non-Western origins alike.

Against this backdrop, the paper examines how restrictions on the transfer of cyber surveillance technology are enumerated in export control instruments at different levels of governance. Its focus is twofold: one is to analyze a series of amendments to the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, the only multilateral instrument that provides a common legal framework setting the terms and conditions under which cyber surveillance items are subject to export control. The other is to compare the relevant legal developments of three leading actors in the transfer of cyber surveillance technology – namely the United States, the European Union and China. The paper closely assesses the implications of their diverging legal developments in the period following the cyber-related amendments to the Wassenaar Arrangement.

11:00
Enhancing Consumer Interests through the Promotion of Interoperability

ABSTRACT. This submission connects Artificial Intelligence (AI), digital platform providers and consumers. AI-driven digital platforms and blockchain have been on the rise for some time. Regardless of the services offered, these AI platforms collect immense amounts of information about consumers' behaviour. The platform providers translate the data they assemble into observable and measurable data points to predict and anticipate future consumer behaviour. Data as currency can be the basis of the companies' revenue models. Providers desire to exclude uncertainty from automated processes. They aim at promoting predictive behavioural analytics and removing the 'human element' from the loop. Moreover, digital platform providers are likely to abandon responsibility for transactions altogether. Users could end up in a cul-de-sac with no room for unpredictable and spontaneous behaviour. This disruptive change in how we transact requires reflection upon the broader and long-term consequences for consumer interests on digital platforms. The approach is from the perspective of promoting, or even requiring, services interoperability to empower consumers (whilst, perhaps, stifling innovation). The European Parliament has called on the Commission to promote an open environment, with open standards, innovative licensing models, open platforms and transparency, in order to avoid lock-in to proprietary systems that restrain interoperability.

The paper attempts to (better) define and apply services interoperability on digital platforms in four steps: 1) defining interoperability (services, data, platform); 2) normalization and standardization; 3) what are consumers' rights when AI and big data are processed on a digital platform they use?; 4) an overview of current EU regulation on digital platforms – including online intermediary services. These four chapters are followed by a synthesis. (The contribution will be around 8,000 words.)

The first step is aimed at defining interoperability. What is interoperability? Next to the notion of services interoperability, a case study covers data interoperability and data portability: what do they mean, and which legal principles govern interoperability? How do interoperability requirements protect consumers on digital platforms? Is there a conceptual dichotomy between personal data and big data? What needs to be done in order to avoid consumer vendor lock-in?

The second step is high-level research into how standardization of the technology used on digital platforms would enhance consumers' interests. This results in an overview of interoperability initiatives by standardization institutes, such as the European Telecommunications Standards Institute (ETSI) and the International Telecommunication Union (ITU).

The third step aims to determine the harm that may be done to the interests of consumers, who are more and more dependent on digital platform services for their daily needs. How transparent are the terms and conditions regarding interoperability on AI-driven platforms?

The fourth step consists of an inventory of recent new regulation and initiatives in the EU. To the extent we discover a regulatory gap, this will be discussed. The research outcome is a paper mapping technology standards for digital platforms, viewed from the consumer's, the platform provider's and the regulator's perspectives.

10:30-12:00 Session 3B: Law, Legal Compliance, and AI
10:30
The Philosophical Foundations of Information Technology Law (Panel Abstract)

ABSTRACT. Presenters: Catherine Easton (Lancaster), Daithí Mac Síthigh (QUB), David Mangan (Maynooth)

Stream: Approaches to law and governance

Panel Abstract: There have been few attempts to situate information technology (IT) law within a philosophical framework. And yet, the rapid pace of innovation reveals a need for direct dialogue on grounding principles. Information technology challenges the orthodoxy of the physical jurisdictional boundaries that have been a hallmark of legal systems. To engage with this, we continue to use the term 'cyberspace' in an effort to reach an extended audience within the broader law community (judiciary, practitioners, policymakers, academics and students working directly in the area).

The presentation will critically discuss how information technology is itself an area of law. The approach taken is that IT law is both distinct and integrated: distinct insofar as IT is the genesis of particular legal issues; integrated in that IT is a backdrop to issues within more traditional legal disciplines.

Globalization remains a term often applied to this area, but this collection does not subscribe to that notion. Instead, information technology is situated as part of the evolution of human communication and interaction. It is a further step in meeting and transcending boundaries. Still, in assessing this evolution, contributors will map out current and emerging challenges through the elaboration of applicable philosophical foundations.

Conceptualising the Internet as a space requiring accessible design (Catherine Easton: abstract for discussion in the panel)

This is an investigation of the Internet as an online space to which equal access is required. It will examine the nature of the online environment, arguing that barriers to access are just as discriminatory online as they are in the "real world". It will focus upon design barriers to access and initiatives by self-regulatory bodies such as the W3C to address accessible design. It will then look to some of the approaches taken to conceptualising the Internet as a public or private space when determining whether to apply anti-discrimination measures.

10:30-12:00 Session 3C: Data Protection & Healthcare
10:30
PANEL Everything is personal data: now what? In search of new boundaries for protection against information-induced harms

ABSTRACT. This panel picks up on the criticism of data protection that everything will be personal data in the near future (Purtova 2018), that data protection law is increasingly used to tackle any data-related problem and overreaches (Koops, 'The trouble with European data protection law'), and that the system of the GDPR is not sustainable in the long run. The panel investigates – from the perspectives of law, economics, and the social sciences – where new boundaries for legal protection against information-induced harms can be drawn in addition, or as an alternative, to the current concept of personal data.

Deconstructing personal data as the organizing concept in data protection (Nadya Purtova)

Data protection is based on a causal assumption that 'privacy harms occur when personal data are processed' and that the harms will be avoided if personal data processing is regulated. This talk draws on legal theory on the structure of legal norms to deconstruct the concept of 'personal data' and reveal which values, processes, practices or technologies – in the lawmaker's view – are involved in this causal process and could possibly serve to (re)define the boundaries of data protection.

Economics on data and information: lessons for data protection (Sebastian Dengler)

This talk will address how data and information are conceptualized in economics: as an economic resource, in terms of rivalry and excludability, and otherwise. Which alternative characteristics of data and information in economic terminology are most important in understanding the nature of data in the context of the data economy and data-driven harms? What does this mean for data protection?

Charting information in science and law: lessons for data protection in machine learning (Raphael Gellert)

Studying the notion of information and its conceptual boundaries in information theories and in law may provide guidance on how data protection can be reorganised, in particular in relation to the challenges of machine learning. The concept of personal data is defined through the concept of 'information' in data protection law; hence the notion of information is central to data protection law. The talk will review how information is conceptualized across disciplines, including law (the law of cybersecurity, genetic data and biobanks). Is there a unified understanding of information across (legal) domains, and what does it tell us about how data protection should address machine learning?

Optimising systems and optimising behaviour: data-driven harms in the smart city (Evelyn Wan and Tineke Broer)

Smart systems in the city are implemented to improve the functioning and optimise the output of existing systems, whereby streamlined data gathering, processing and analysis can offer automated decisions that make existing processes more efficient. These mass data-gathering practices place users under surveillance; users are expected to adapt their habits to these systems. Concerns have been raised about the impact of these technologies on the freedoms and autonomy of citizens, as well as on democratic values. Discussing specific empirical cases from smart grid and smart city projects, this presentation investigates the potential harms of and objections to the technology, and whether (and if so, how) such projects ought to be regulated.

10:30-12:00 Session 3D: New frontiers of copyright law
10:30
Blockchain in-game collectible items and copyright law: copyright implications of a new mode of consuming creative works

ABSTRACT. Blockchain is the technology that supports cryptocurrency, as it enables the decentralised transfer of value between anonymous parties on the internet. Blockchain generates an audit trail of all cryptocurrency transactions to prevent the double-spending of crypto. In recent years, a large number of proposals have been put forward whereby blockchain is set to innovate copyright. One area of blockchain use that is witnessing promising developments is the online gaming market. In this context, blockchain technology is largely used to tokenise, i.e. to link a value representing an in-game collectible item, such as a sword or figurine, with an entry on the blockchain in such a way that the 'ownership' of that in-game item, or the transfer of 'ownership' of that item from one user to another, can always be accurately tracked and remunerated via the blockchain. This new economy for in-game items offers several facilities for developers and game players. For example, some game developers are using blockchain to introduce a degree of interoperability which enables users to carry in-game items across various blockchain games or games which support blockchain plug-ins. In this context, a player can use, for example, a sword or figurine he/she purchased in game A in game B. Other facilities are available to players of blockchain games, such as the option to sell, trade or lend in-game items to other users. Overall, players enjoy more flexibility over the use of in-game collectibles.
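To make the tokenisation mechanism concrete, the sketch below is illustrative only: real deployments use smart-contract standards such as ERC-721 rather than an in-memory Python object, and all names and values here are invented. It shows how a hash-chained ledger can record the minting and transfer of an in-game item so that 'ownership' is always traceable and double-spending is prevented.

```python
import hashlib
import json
import time

class TokenLedger:
    """Minimal hash-chained ledger tracking 'ownership' of in-game items.

    Hypothetical sketch only; real blockchain games rely on smart
    contracts (e.g. ERC-721 non-fungible tokens), not a Python list.
    """

    def __init__(self):
        self.chain = []    # append-only audit trail of all operations
        self.owners = {}   # token_id -> current owner

    def _record(self, entry):
        # Each entry embeds the hash of the previous one, so the
        # audit trail cannot be rewritten without detection.
        entry["prev_hash"] = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)

    def mint(self, token_id, item, owner):
        """Tokenise an in-game item: link it to a ledger entry."""
        assert token_id not in self.owners, "token already exists"
        self.owners[token_id] = owner
        self._record({"op": "mint", "token": token_id,
                      "item": item, "to": owner, "ts": time.time()})

    def transfer(self, token_id, seller, buyer):
        """Transfer 'ownership'; rejected if the seller does not hold
        the token, which is how double-spending is prevented."""
        assert self.owners.get(token_id) == seller, "seller does not own token"
        self.owners[token_id] = buyer
        self._record({"op": "transfer", "token": token_id,
                      "from": seller, "to": buyer, "ts": time.time()})

ledger = TokenLedger()
ledger.mint("sword-001", "Sword of Dawn", "alice")
ledger.transfer("sword-001", "alice", "bob")  # tracked and auditable
```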

From a copyright perspective, tokenised in-game items raise interesting questions. Who owns the copyright in a tokenised in-game collectible figurine? What is the scope of the licence that users receive? Under which copyright regime should in-game items be regulated? How do these new means of in-game item consumption sit with the CJEU case law, such as the rules on exhaustion? This presentation will analyse the relationship between blockchain-supported in-game collectible items and copyright law to address these issues.

11:00
You got a licence for that dragon? Exploring legal ownership of digital goods in virtual worlds

ABSTRACT. This BILETA-funded study gauges user perceptions of ownership in virtual worlds and measures understanding of the contractual terms that bind the user in relation to ownership of digital content. The study focusses on Elder Scrolls Online (ESO), a popular and long-running computer game that allows the purchase of digital items. Thirty users participated in semi-structured interviews, and an online survey gathered 100 responses; both methods address questions regarding IP ownership in online games and a variety of related topics.

Whilst users agreed to terms through a 'clickwrap' contract executed prior to accessing any content, the terms are generally weighted strongly in favour of the company and may differ from the user's perception of the terms of ownership governing digital content obtained through their subscription fees or in-game purchases.

The contractual terms in ESO reserve all IPRs arising during the use of the game to the Company. Additionally, the Company makes no guarantees or representations that any in-game material is owned by the player. The Company may therefore cease operations at any time or alter the game play to the extent that the player loses in-game assets. Despite what, in many cases, is a significant investment, the player has no recourse or remedy in the event these items are lost or removed.

As observed in other literature on online worlds, there is widespread misunderstanding as to what users own or have agreed to in the terms and conditions, either because of the complexity and length of the agreement or because users don't read the agreement and develop their own understanding based on perceptions of community norms. Users may not think of their generated material in terms of copyright, but they may expect ownership – or at least attribution – over their own authorship and artistic or literary contributions. Are any in-game creations or activities sufficient to garner even a thin copyright, for instance, as an adaptation?

Possible candidates include:

- Character design and naming
- Lore
- Fan fiction and related works
- In-game items, collected and crafted
- Guild structures
- Intangible cultural heritage

Despite the terms, the Company will often work to resolve any issues with players’ digital assets as a matter of customer service and goodwill. However, the terms stand. The legal analysis of the terms and conditions was presented at BILETA in 2017, and attendees expressed interest in seeing the outcome of the player interview data. This data could influence the treatment of the validity of these contracts, which seems to be the only way to circumvent the arbitration clauses, and the understanding of players as to what their relationship is with the Company and with the virtual property.

Scant socio-legal research exists on user perceptions in this rapidly evolving area. Some researchers have provided the foundation for a unique insight into an intricately functioning subculture that is much discussed but difficult to access or gather data on. As a participant-observer in the ESO community myself, I am familiar with the in-game structures and recurring player debates surrounding this issue and have unique access to respondents that a researcher outside the community would not have. With this understanding and access, better and more thorough information about user experiences in this subculture is explored.

11:30
Four Years After Launching the EU ODR Platform for Resolving Consumer Disputes: The Need for a Re-Design of the ODR Platform

ABSTRACT. Increasing consumer trust when purchasing goods or services online is one of the political priorities of the European Commission. In order to build trust and provide an effective dispute resolution system for consumers, the European Parliament and the Council adopted the ADR Directive and the ODR Regulation on consumer disputes on 21 May 2013. In February 2016, the European Commission launched a web-based platform (called the European ODR Platform) that enables the online submission of complaints and their transmission to the ADR entities in the Member States. The aim of the platform is to facilitate the online resolution of disputes between consumers and traders over online transactions. According to the recent report by the Commission on the functioning of the European ODR Platform, published in September 2019, a considerable share of complaints (80%) were automatically closed within 30 calendar days, the legal deadline for the consumer and trader to agree on a competent ADR body. This paper aims to provide insight into the functioning of the platform and to discuss its ongoing challenges.

10:30-12:00 Session 3E: Education and Technology
10:30
Differential Data Protection Regimes: Fundamental Rights Protection and Research Exemptions in Health Data Pools

ABSTRACT. Our study explores the existence of differential data protection regimes within the General Data Protection Regulation and validates them against the two European policy objectives of protecting fundamental rights and maximising data-driven research and innovation capabilities within the Digital Single Market. The practice of health data pools offers a perfect use case for our study. Our research journey moves from a survey of health data sharing practices, through the lens of case law, such as the rulings by the Cagliari Court of Appeal and the Italian data protection authority regarding the sharing of genetic health data of Sardinian data subjects with the UK-based for-profit corporation Tiziana Life Science plc. It then contextualizes health data sharing practices at the European policy level, with specific reference to the emerging European principle of the free movement of data, recently substantiated in the new Open Data Directive. Under these premises, the second section of the paper validates free flow of data objectives within the GDPR. In particular, it unveils the possible alignment of the GDPR's research exemption with the broader policy goals of the Digital Single Market Strategy. It thus identifies the existence of differential data protection regimes within the GDPR which are applicable to health data sharing practices. Accordingly, we identify differential data protection regimes, proposing a taxonomy along the lines of a spectrum that runs from a "pure" fundamental rights-oriented data protection regime to an enabling, market-oriented one. To this end, the regulatory framework under art. 9 GDPR, regarding special categories of data, including health data, is taken as a benchmark. Against this backdrop, we make the case for three differential data protection regimes applicable to health data pools: i) the first given by data subjects' determinations, in the form of consent or in respect of their fundamental interests, in accordance with art. 9(2) letters a) and c) GDPR; ii) the second, based on the public-interest-related nature of the processing, as under art. 9(2) letters h) and i) GDPR; and iii) the third, where the processing is necessary for scientific or historical research purposes or statistical purposes under art. 9(2) letter j) GDPR. In this last regard, the research exception given by the complex interaction between articles 5(1), 6(1), 6(4) and 89 GDPR provides significant derogations from ordinary data protection principles, such as the principles of purpose and storage limitation, as well as from rights, such as the rights to be informed and to erasure. Under the special regime for processing activities for research purposes, the GDPR appears to promote the free flow of information within the internal market. This, however, triggers the need for a clearer articulation of fundamental rights safeguards under art. 89 GDPR in the context of health data-driven research activities. Here, we explore a difficult but possible distinction between research conducted for private and for public interests, stressing the opportunity to differentiate the data protection regimes for public health data pools and private ones. The case for a minimal data protection "essence" is made in respect of public health data pools, whereas the need for a higher protection threshold, close to the ordinary "full" data protection regime, is claimed in respect of private health data pools.
Ultimately, the case of private-public health data pools is examined, identifying specific parameters for the scaling of the above-identified differential data protection regimes when private and public interests are jointly involved in a research-based data pool. Implications for competition will also be noted, although they will not be fully explored here.

11:00
The Influencers’ Republic: Where Marketing Meets Political Speech

ABSTRACT. The Cambridge Analytica scandal brought to light the dangers of using social media to promote electoral interests. In consequence, social media platforms started taking measures to limit the impact of political vehicles of speech in their online space. As an illustration, Twitter and Facebook are among the platforms that banned political advertising at the end of 2019. However, the effectiveness of social media in delivering political messages to user-citizens is undeniable. In the case of Facebook alone, the platform hosts more than 2 billion accounts, and almost all of its profits come from advertising revenue. For this reason, the self-regulatory measures imposed by various online platforms are likely not going to suffice to tame this recent phenomenon. What these bans do, in fact, is create a window of opportunity for the deployment of a different type of marketing on social media, one that has the potential to further blur the lines between commercial and political speech. This approach is influencer marketing. Influencer marketing budgets are reaching historic highs, and more citizens are turning to social media for news as well as entertainment; influencers (also sometimes called content creators) are individuals who monetize content for commercial gain. So far, the advertising laws that apply, e.g. to consumer marketing, dictate that commercial communication must be disclosed. However, this industry has raised alarming enforcement questions, as detection at scale remains very difficult for public authorities to achieve. Adding to this the potential of influencers engaging in political speech for profit, serious questions arise regarding the disappearing line between commercial and political communication. Political speech enjoys the highest degree of protection under national constitutions as well as supranational and international charters (e.g. the ECHR). Unlike commercial speech, which in some cases does not enjoy constitutional protection, political speech is the foundation of constitutional democracies. The blurring line between political and commercial speech introduces a new layer of complexity in tackling political advertising hidden behind political statements. Indeed, political speech is likely to pull commercial speech inside a broader scope of protection, with the result that potential limitations of this kind of speech (e.g. regulation) would be required to pass a very strict test, balanced against other constitutional safeguards or legitimate interests according to the criteria of necessity, legitimacy and proportionality. This could also call into question the scope of other regulation designed to govern commercial speech, such as advertising rules. This paper aims to investigate this matter by addressing the specific challenges arising from the monetization of speech by influencers on social media from the perspectives of constitutional law and consumer protection. The primary goal of this paper is to show how the convergence of political and commercial speech could not only circumvent the bans on political advertising but also become a way to obstruct regulatory interference. In addition, this paper proposes a normative framework based on procedural rights for users to deal with the rise of the republic of influencers.

11:30
Brain-Machine Interfaces and Ethics: A Transition from Wearables to Implantables

ABSTRACT. retrieved version

10:30-12:00 Session 3F: General principles of regulation & new technologies
10:30
The Taming of the Shrew: A Tale of AI Regulation, Legal Certainty and Regulatory Sandboxes

ABSTRACT. The tremendous impact of so-called 'disruptive' technologies on society creates a plethora of new risks, some of which are highly unpredictable. This has regulators dealing with unprecedented challenges that allegedly require 'novel' solutions allowing them to better comprehend, adjust to and ultimately regulate innovations. A notorious example of such new dynamic and adaptive regulation is the use of regulatory sandboxes by an increasing number of jurisdictions in different domains, from data protection to FinTech. Even though the concept of the regulatory sandbox is as yet undefined, it has become a must-mention point in virtually every conversation concerning the regulation of new technologies, especially artificial intelligence (AI). One of the main reasons for regulators to seek an evolution of the traditional 'reactive' type of regulation is allegedly to preserve the principle of legal certainty as part and parcel of the rule of law.

This essay argues that the stochastic, (self-)adaptive and learning capabilities of AI technologies, coupled with their opacity and ubiquity, set them apart from other innovations being tested in regulatory sandboxes. It is submitted that they in fact endanger legal certainty. For example, a blockchain-based technology that offers identity verification of people or documents as a service is an innovative solution, will potentially benefit consumers and is certainly a viable candidate to be tested in a regulatory sandbox by a regulator such as the UK Financial Conduct Authority ('FCA'). After the end of the process, the company developing the service would have allegedly fixed any issues flagged by the FCA, and the regulator would have gained a much better understanding of the technology, the risks it creates and possible avenues for mitigating them. This would allow consumers to benefit from the service and have confidence that it is 'compliant by design'. Applied to an AI technology evaluating loan applications from SMEs, for instance, this approach would not necessarily yield the same results. A machine-learning-based application would 'by design' continue to learn after exiting a regulatory sandbox and could potentially act in ways, and create risks, that could not have been foreseen or mitigated even through the most rigorous sandbox process.
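The point about post-sandbox learning can be illustrated with a minimal, hypothetical sketch: a scikit-learn model that is trained (and, say, approved) during a sandbox phase, then continues to update online on drifting data, so that the very same loan application may receive a different decision after deployment. The features and data are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Sandbox phase: train on historical SME loan data (synthetic here;
# columns might stand for, e.g., turnover, debt ratio, firm age).
X_sandbox = rng.normal(size=(500, 3))
y_sandbox = (X_sandbox[:, 0] - X_sandbox[:, 1] > 0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_sandbox, y_sandbox, classes=[0, 1])

applicant = np.array([[0.2, 0.1, 1.0]])
print("decision at sandbox exit:", model.predict(applicant))

# Post-sandbox phase: the deployed model keeps updating on new data
# whose distribution (and implicit decision rule) has drifted; no
# regulator re-reviews these incremental updates.
X_drift = rng.normal(loc=[-1.0, 1.0, 0.0], size=(2000, 3))
y_drift = (X_drift[:, 1] - X_drift[:, 0] > 2).astype(int)
for i in range(0, 2000, 100):
    model.partial_fit(X_drift[i:i + 100], y_drift[i:i + 100])

# The same applicant may now be decided differently.
print("decision after online updates:", model.predict(applicant))
```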

The essay aims to answer two main questions. First, should regulatory approaches to AI through regulatory sandboxes be considered at all and, if not, what could the alternative be? Second, would such an approach compromise the principle of legal certainty and, if so, how could this be mitigated; or should the principle be reconceptualized or even, rather revolutionarily, displaced in the context of adaptive technologies? The paper will explore the uncertainty created by the lack of a unified definition of AI and of regulatory sandboxing. It will consider the nature of the new, evolving anticipatory regulation, particularly regulatory sandboxes, including practical examples with AI technologies from different jurisdictions. Finally, it will analyse how the centuries-old principle of legal certainty could be preserved in a world governed by 'technological management' (Brownsword 2019).

11:00
The Tokenized Economy and the Law: From Smart Property to ICOs, STOs and IEOs

ABSTRACT. Among the many changes to the legal and economic world brought about by the blockchain revolution and decentralized ledger technologies, some of the longest-lasting ones will arguably be connected to the so-called tokenization of assets. This expression refers to the trend of converting physical property into digital tokens, which are then issued on a blockchain-based platform through a smart contract (hence the expression "smart property"). From the economic point of view, this practice opens up some very promising new avenues for the financial sector, but at the same time it raises many legal issues that need to be properly addressed. For instance: how does tokenization affect traditional property law? What protection do we want to grant to the purchasers of the tokens? How do we make sure that the person tokenising an asset has a legitimate title over it? How can the token circulate? Can transactions be undone, in the same way as they would be in the analogue world? And so forth. But tokenization also has a substantially disruptive potential in the area of investment law: Initial Coin Offerings (ICOs), Security Token Offerings (STOs) and Initial Exchange Offerings (IEOs) can indeed lead to many new opportunities for funding for startups and companies in general, and for investment by retail and institutional investors. But the legal framework in which all of this is happening is far from clear. Similarly, the tokenization of securities could effectively materialise the scenario envisioned by some, in which "eventually the acquisition of a company will just be the acquisition of its private keys. Everything else about the corporate structure (charter, cap table, payroll, financials, contracts, assets, etc) will live on a public chain rather than in a tangle of docs and spreadsheets" (Srinivasan). Clearly, this has many legal implications, which the article addresses thoroughly. Some jurisdictions internationally are at the forefront in regulating the tokenized economy. The article considers the main trends emerging from these first regulatory efforts and engages in a critical discussion of them. In fact, the article concludes by arguing in favour of a regulatory hands-off approach, one that tries to allow all possible room for innovation, avoiding the risk of stifling it in a rush to regulate. Instead, a wait-and-see approach will be advocated, on the assumption that a technological revolution does not necessarily imply a legal one: existing law might already be very well equipped to accommodate even such innovative economic practices. If any new law is required in this area, it is one that does away with the existing legal limits on the unfolding of these technologies to the fullest extent of their potential, rather than one enabling their legitimate use, something that we should arguably already be able to take for granted.
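As an illustration of the scenario in the Srinivasan quote, the following hypothetical sketch (not any real token standard; all identifiers are invented) models a tokenized cap table as nothing more than a mapping from key holders to share tokens, so that transferring control of the company reduces to moving tokens between keys.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityToken:
    """Toy model of an STO-issued share token; illustrative only."""
    symbol: str
    total_supply: int
    balances: dict = field(default_factory=dict)  # holder pubkey -> shares

    @classmethod
    def issue(cls, symbol, supply, founder_key):
        """Issuance: all shares start under the founder's key."""
        return cls(symbol, supply, {founder_key: supply})

    def transfer(self, sender_key, receiver_key, amount):
        """On-chain transfer. In practice this would be gated by a
        signature from sender_key's private key and by compliance
        rules (e.g. investor whitelisting) coded into the contract."""
        if self.balances.get(sender_key, 0) < amount:
            raise ValueError("insufficient token balance")
        self.balances[sender_key] -= amount
        self.balances[receiver_key] = self.balances.get(receiver_key, 0) + amount

acme = SecurityToken.issue("ACME", 1_000_000, founder_key="pk_founder")
acme.transfer("pk_founder", "pk_investor", 250_000)  # a 25% stake changes hands
print(acme.balances)  # the cap table is simply the token's balance map
```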

11:30
Information security of wearables, home-diagnostic tools and smart medical devices

ABSTRACT. The contribution summarises the current regulatory framework for the information security of medical devices that are based on information technology, such as Active Assisted Living (AAL) technologies.

The Medical Devices Regulation (EU) 2017/745 recognises devices that incorporate programmable systems and non-implantable active devices. Manufacturers shall set out requirements concerning IT security measures, including protection against unauthorised access. Not all devices that are used for home diagnostics, Active Assisted Living or as "fitness wearables" are marketed as medical devices. The manufacturer or distributor can choose, through its marketing strategy, whether or not to subject itself to this regulatory regime. Devices that are not marketed as "medical devices" are not subject to the regulation.

The contribution further aims to analyse and compare the legal responsibility of manufacturers, vendors and operators of registered medical devices and the legal responsibility of companies that market diagnostic devices as "hobby" devices, with potential extension to social networks.

The contribution argues that the information security standards for registered medical devices and "hobby" devices should not, in principle, be different.

12:00-13:00 Lunch Break
13:00-14:30 Session 4A: Cybercrime and Cybersecurity: perspectives from the African continent
13:00
Open Government and Data Protection: A Dilemma or A Reconciliation?

ABSTRACT. Technology not only alters the behavior of society; it also modifies government institutions. Following the freedom of information movement of the 1950s, the open government initiative has entered the agendas of administrations, policymakers, scholars, and citizens. Open government is a doctrine that gives citizens the right to access government documents and proceedings, not only to allow effective public scrutiny and oversight but also to allow citizens to reuse them under open licenses. The initiative follows a different methodology from that of the freedom of information movement: without receiving a request from a citizen, it releases information proactively, mainly via technological facilities, e.g. the internet. However, disclosing administrative data inevitably encompasses a dilemma. On the one hand, releasing a substantial amount of data can contribute to transparency, accountability, innovation, economic advancement, and citizen engagement; on the other hand, it can create privacy implications such as the risk of re-identification, discriminatory practices, and the loss of control over personal information. These two features therefore generally clash with each other: although releasing less data strengthens the protection of privacy, it hinders the objectives of open government, and vice versa. To soften this dilemma, and in order to enhance all of the advantages of openness and accessibility, it is critical to address techniques that can mitigate the privacy concerns while maximizing the utility offered by the open government initiative. This paper foresees four possible privacy-enhancing practices: (i) data minimization and data retention, (ii) data protection impact assessment, (iii) license restriction, and finally (iv) transparent transparency, which aims to increase the knowledge of data subjects with respect to open data systems.

13:30
Algorithmic transparency and the IP rights in data and databases

ABSTRACT. As a sub-branch of Artificial Intelligence, Machine Learning (ML) is an inductive method of problem solving which can accomplish tasks that once required human participation and discretion. As governments and other institutions increasingly deploy ML-based systems to predict, rate and act upon individuals' behaviour or personal traits, there is growing political and legal demand for transparency (e.g., GDPR Art. 22), so that the outcomes of these systems can be interpretable, and thus contestable where necessary.

Previous research has revealed that transparency in automated decisions entails not only openness and disclosure in the conventional sense but also further administrative and technical measures, such as ex-ante algorithmic audits or ex-post black-box testing of these systems. The implementation of such a broadened scope of transparency inevitably involves the reproduction and adaptation of the relevant data and/or datasets. Accordingly, this paper provides an analysis of the potential areas of conflict between the possible transparency measures and the relevant IP regimes (i.e. copyright, the sui generis database right and trade secret protection).
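By way of illustration, an ex-post black-box test of the kind referred to above might look like the following sketch: an auditor probes an opaque scoring model with perturbed copies of input data to measure which features flip its decisions. The model and data are synthetic and invented for illustration; note that the probing necessarily reproduces and adapts data, which is precisely where the IP questions analysed in this paper arise.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                 # 4 applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hidden decision logic
black_box = RandomForestClassifier(random_state=1).fit(X, y)  # the opaque "system"

def sensitivity(model, X_probe, feature, delta=1.0):
    """Share of probed cases whose decision flips when one feature shifts."""
    X_shift = X_probe.copy()
    X_shift[:, feature] += delta
    return np.mean(model.predict(X_probe) != model.predict(X_shift))

# The auditor's probe set is a reproduced/adapted dataset.
X_probe = rng.normal(size=(200, 4))
for f in range(4):
    print(f"feature {f}: decision flip rate {sensitivity(black_box, X_probe, f):.2f}")
# High flip rates on features 0 and 2 reveal what drives the decisions,
# without any access to the model's internals.
```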

In the introduction, we identify three categories of data for the purposes of IP analysis: i) the training data; ii) the actual data analysed for a specific outcome; and iii) the ML output (recommendations, predictions, ratings, etc.). The rest of the paper inquires to what extent automated decision-makers could rely upon IP rights to mitigate their transparency obligations.

In ML-based decision-making systems, copyright protection of data and datasets as artistic or scientific expression may be relevant in two ways. First, the individual items (text, audio, video, etc.) in a database may be eligible for copyright protection under the 2001/29 InfoSoc Directive. Second, databases (as compilations) may enjoy protection due to the creativity in the selection and arrangement of the items comprising them (Art. 3(1) of the EU Database Directive). As a novel contribution, the analysis in this part also covers the copyright eligibility of the ML output, either as an individual result or in the form of a database compiling various results or past decisions (e.g. a numeric score, a binary decision, a profile or a textual suggestion).

Sui generis database protection under the EU Database Directive is another major IP regime relevant to algorithmic transparency. This part inquires under which conditions the use of databases to scrutinise ML systems would amount to substantial extraction as provided by the Directive. It is also discussed whether machine-generated data could pass the substantial investment test regarding the obtaining, verification or presentation of the contents of the database under the spin-off doctrine as established by the ECJ.

The final part of the IP analysis inquires how trade secret protection could act as a barrier against transparency demands. Both the individual data items and the databases as a whole could satisfy the secrecy requirement under Article 2(1)(a) of the EU Trade Secrets Directive in the sense of being (i) not generally known or (ii) not readily accessible. Regarding the limitations to trade secret protection under the EU Directive, the paper discusses reverse engineering and independent discovery as legitimate paths toward learning a trade secret.

The paper concludes that data operations aiming to render ML-based decision systems transparent do not easily fit into the exceptions and limitations provided in the relevant IP regimes, and discusses further legal approaches which could provide the optimum extent of transparency with minimum prejudice to the integrity of the systems or to the legitimate interests of the stakeholders involved.

14:00
Care to explain? Defining the AI-infused explanatory relationship in terms of epistemic (in)justice

ABSTRACT. Knowledge generation is a social practice, which means that power dynamics are at play in it. The normativity that this imposes is, and should be, alive in any knowledge-generating technologies we develop. AI-infused decision (support) systems are no different. However, to assess our technologies' qualities on our terms, we need to discuss them in our terms. Where this was hard before, it has now become problematic: current socio-technical systems are seen to defy such insight.

In response to this, complementary explanation rights and duties are drafted to support those that exist in law and professional norms, and fill the gaps where they do not. But in order to provide the necessary guidance to all disciplines involved, all 'breeds' of rules still need explanation in this new territory.

A closer look at how the increased need for explanation is 'problematized' – that is, how it is presented by drafters, policy makers and researchers – reveals legal, technological, social, and philosophical concerns. These problematizations shape the solutions that are sought. Some connect technological interventions (explainability) with, e.g., human psychology and communication theory; others focus on re-appraising existing explanation rules.

But domain-specific norms have a history, an ontology, and political dimensions that need to be understood. Importantly, explanation rules influence the decisional process that is accounted for. Existing norms cannot simply be translated into what should be explained when the decision modalities change. As to the other examples, zooming in on how humans understand each other is useful when it is sufficiently clear what should be explained in a certain context. This is not to say that what needs to be explained does not depend on human sociality; it very much does. This, however, should alert us to the political dimensions of the need for explanation, which risk being neglected when the spotlight is on individual capabilities.

Recalling the sociality of knowledge (epistemic) practices, we need to ground the authority we afford to explanations in the quality of both their agential and content dimensions. To this end, the problematizations of the growing field of epistemic (in)justice are of value. Its rich literature offers norms for responsible knowledge generation and, importantly, shows this to be an ideal to continue to strive for, in light of the extensive harms that are done in their disregard. Epistemic injustice spells out how epistemic authority follows from other powers and is, intentionally and inadvertently, used to strengthen these. Examples from Critical Theory Studies reveal the challenges of deconstructing 'false' knowledge paradigms: investigations of interpretative resources, how they are used, by whom, in what ways, with what effects. Epistemic justice describes proper practices in acknowledgment of these power-knowledge dynamics. Due care, accuracy, and competence are norms that emphasize methodological dimensions. Honesty, sincerity, and trustworthiness relate to the dispositions of those engaged in the process. Both sides feed into each other, and inform what needs to be in place at the institutional level in terms of checks and balances.

A focus on the justice dimensions of epistemic practices affords the 'thinking tools' to come to a more fundamental understanding of what the explanatory relationship should afford both parties. This paper analyses these norms with the main focus on the decision maker/explainer. Their responsibility is heightened in times when explainees are challenged to understand how they are being treated. However, decision makers in the loop of AI-infused systems face equal challenges. If we want to retain their moral capabilities, we need to make sure they understand when to rebel on behalf of the explainees in their care.

13:00-14:30 Session 4B: AI and Workplace
13:00
Facilitating Techno-Legal Education in the data moment

ABSTRACT. Legal problem solving often revolves around seeking solutions to the legal interdisciplinarity that lies embedded in different causes of action. This means that lawyers must be able to collaborate with other professionals or deal with non-legal matters to address their clients' issues. The changes at the frontier of technology, with a rapid pace of growth from big data to machine learning, have pushed lawyers and law firms to build on technology-oriented outputs within legal practice. Likewise, the development of the discipline of law within scientific and technological domains, and the policies governing digital transformation, have fostered an innate need for technology training early on in the legal profession. This need for a techno-centered approach to legal education has raised the debate on the level of tech training that law schools ought to provide law students. Some law schools and universities have resorted to teaching coding and AI at a professional level, while many other traditional institutions still consider such learning beyond the precinct of legal education. In addition, many universities have reported a lack of qualified staff to teach interdisciplinary elements of technology to law students. Against this background, the fundamental questions to consider are how much technology study is required for law students and how to facilitate the transfer of technological knowledge to bridge the gap within traditional legal education. In particular, the paper aims at (i) exploring existing and potential venues for accommodating interdisciplinary approaches at the Faculty of Law and (ii) providing recommendations, in the form of best practice, for the implementation of a techno-legal curriculum for the law faculty. The paper is inspired by and rooted in the ongoing efforts at the Faculty of Law, University of Copenhagen, following the EU's Digital Agenda, to allow law students to register for and follow courses at technical institutions so as to gain knowledge within the domain of technology. The paper examines the advantages and potential drawbacks of this cross-learning opportunity to follow courses beyond the Faculty of Law, and seeks to determine the nature of intervention sought as a way forward. The legal profession is rapidly changing, driven by the impact of emerging technologies and innovation. Such changes are bound to create new challenges for future legal practice, and legal education must adapt to meet the changing needs. However, AI and data-oriented technologies should not limit the reliance on essential legal skills and human intelligence. Instead, they need to augment the combination of human skills and experience within the domain of technology.

13:30
Is e-signature legislation applicable to electronic identity?

ABSTRACT. Before the eIDAS Regulation was enacted, it was often assumed in the literature that electronic identity systems were regulated "through principles, rules and concepts borrowed from different EU legal instruments" [1], whereby the e-signature Directive (Directive 1999/93/EC) was the most important. This was based upon the fact that electronic signatures were considered to be data attached to other data and used as a method of authentication, while the term authentication has considerably varied meanings, which led to uncertainty. In fact, as explained in the IDABC study, two schools of thought could be identified, whereby one considered the e-signature Directive as applicable only to electronic signatures, and the other as applicable to all kinds of Public Key Infrastructure (PKI) technologies [2].

In the eIDAS Regulation (Regulation 910/2014) it was clarified that electronic signatures are data attached to other data and used to sign, and specific trust services next to electronic signatures were included, which shows that the first school of thought was correct. Besides, the proposal for the eIDAS Regulation [3] explains that issuing means of identification is a national privilege and can therefore not be included in the eIDAS Regulation as a normal trust service; it is therefore clear that the eIDAS Regulation cannot cover electronic identity outside a cross-border situation.

However, the implicit or explicit expression of consent can also entail that the signature is created with the intention to authenticate the signatory [4]. Therefore, an overlap between electronic identification and electronic signatures theoretically remains possible.
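The overlap can be made concrete with a short sketch: the same private-key operation that produces an electronic signature over a document also lets a verifier authenticate the key holder against a certified public key. The example uses the Python cryptography package with an Ed25519 key purely for illustration; the national eID schemes discussed below rely on comparable PKI primitives, typically with certificates on a smart card.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by the citizen
public_key = private_key.public_key()       # certified, e.g., on an eID card

# (1) Electronic signature: data attached to other data, used to sign.
document = b"I hereby accept the contract."
signature = private_key.sign(document)

# (2) Authentication: the verifier checks the signature against the
# certified public key, and thereby also verifies the signatory's identity.
try:
    public_key.verify(signature, document)
    print("signature valid: document signed AND signatory authenticated")
except InvalidSignature:
    print("signature invalid")
```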

This paper will explain the issue and will analyse whether and how the definition of electronic signatures could in some cases be applicable to electronic identification, considering the Belgian eID, the Estonian eID and the Austrian Bürgerkarte. Finally, the paper will consider to what extent the eIDAS Regulation can be applicable in this regard to the concept of Self-Sovereign Identity.

[1] Norberto Nuno Gomes de Andrade, 'Legal Aspects' in Electronic Identity (Springer London 2014) 37.
[2] Hans Graux and others, 'IDABC Study on eID Interoperability for PEGS: Update of Country Profiles Analysis & Assessment Report (D2.1 Report on Analysis and Assessment of Similarities and Differences; D2.2 Report on Impact on eID Interoperability)' (October 2009) 107.
[3] Proposal for a Regulation of the European Parliament and of the Council on electronic identification and trust services for electronic transactions in the internal market, 4.6.2012, 2012/0146 (COD), 4.
[4] European Commission and others, Feasibility Study on an Electronic Identification, Authentication and Signature Policy (IAS): Final Report (Publications Office 2013) D1.1.b, p. 11.

14:00
Financial privacy within the EU: exploring a mutating scenario

ABSTRACT. This paper explores the privacy issues raised by current trends in the financial industry, addressing the possible need for legal and/or technical adaptations. Financial privacy is one of the most controversial aspects of the wider debate on how law enforcement powers should be balanced against individuals' right to confidentiality. Regulatory and governance frameworks at the international and European level ensure that financial institutions and firms cooperate with law enforcement agencies by providing access to financial databases. Enhanced control and law enforcement over digital financial networks, however, implies linking funds to users' identities. In a digital economy, this paves the way for perfect enforcement and surveillance regimes that, when combined with the broader datasets produced by users' ubiquitous online activities, could have chilling effects on privacy and individual freedoms. While such a regime of financial surveillance has been enhanced in the aftermath of the financial crisis, technological and legal solutions have been proposed to address the problems of trust and transparency from diametrically opposite angles, with fundamentally different solutions. On the one hand, privacy-enhancing cryptocurrencies offer the possibility to bypass institutional monitoring mechanisms and to transact online using pseudonymous accounts. On the other, the adoption of the GDPR has impacted procedures and policies, as well as corporate and public awareness regarding privacy. It is still unclear whether such technical and legal developments will counter the aforementioned expansion of financial control, and how they will impact the regulatory and technological monitoring infrastructures of electronic payments. The full digitalization of payment services; the privatization of the fintech sector, incentivized by the profitability of data; the competition among public and private stakeholders to create digital financial networks under their control; the possibility to link online financial transactions to users' identifiers and online activities; and the threat of fully anonymous cryptocurrencies: these are some of the ongoing trends in the financial industry which must be scrutinized to understand the privacy-related concerns emerging in this domain. After mapping out such trends, this article explores issues and possible solutions regarding financial privacy within the EU. What are the arguments in favor of, and which ones are against, financial privacy? Which privacy rules are applicable to financial data in the EU? Are they sufficiently implemented, or do they give way to other law enforcement prerogatives such as Anti-Money Laundering policies? If a lack of privacy protection exists, is it desirable to develop technological and legal solutions that allow a greater degree of anonymity? Under which circumstances? How should solutions be imposed (e.g. through mandatory or voluntary legal provisions), and which actors should be responsible for implementing them? These questions will be tackled by: (a) assessing the relevant European legal frameworks; (b) analyzing the legal literature on the GDPR and privacy protection within the EU, in particular concerning financial data; (c) studying blog posts and technical proposals from advocates of financial privacy (i.e. cryptocurrency developers and online communities); and (d) interviewing stakeholders (financial institutions; fintech firms; law enforcement agencies; cryptocurrency developers; users).

13:00-14:30 Session 4C: The role of consent
13:00
Regulating Cultural Change: Fanfiction, Transformation and Fair Dealing

ABSTRACT. With the passage of the Copyright in the Digital Single Market Directive (DSM Directive) in 2019, much has been written about the likely impact it will have on the market for cultural works and the changes it will bring to online interactions. Much of this research has focused on doctrinal analysis, especially on Article 17 and the impact of 'upload filters' on fair dealing copyright exceptions. The law in this area has changed in advance of the technology it relies upon: there is as yet no technology that can facilitate the fair application of a fair dealing exception such as pastiche. Little empirical research has been done on the likely impact this will have on literary works. This research is novel in that it takes the requirements for fair dealing, as laid out in case law, and tests them quantitatively to show that these types of work should be protected by the pastiche exception contained within s30A CDPA 1988. Focusing on the transformative nature of fanfiction, a form of non-commercial cultural sharing and produsage, it uses a case study of posts to Fanfiction.Net, the world's largest archive of these works, to conclude that there is no evidence that these works are economically harmful to the underlying work. They merely reflect a transition in the way we interact online that should not be regulated by copyright law. Copyright holders cling to uncertain old methods of protecting their works, rather than embracing new potential markets. These UGC works will now provide an example of a permissible change in culture that will be blocked off by the new Directive. This work enables a deeper understanding of one of the types of non-commercial cultural works that will be strongly affected by the DSM Directive, and the conclusions it draws deepen the discussion around the fallacy of the possibility of successfully filtering fictional works online. As cultural markets change, so must copyright in order to keep up, and this research argues that the changes will negatively affect an important section of the population.

13:30
Risk Assessment tools in criminal justice: Is there a need for such tools in Europe and could their use be in compliance with European data protection law?

ABSTRACT. The use of risk assessment tools for offenders' classification has a long history in the criminal justice system of the United States of America (U.S.A.). These tools are used to inform courts' decisions at different stages of the criminal justice system, from pretrial services to proceedings closely related to defendants' freedom. Examples of these stages are the assignment of bail amounts to suspects for court appearance after an arrest, parole decisions, rulings related to probation, and even decisions during sentencing proceedings. Moreover, there is an ongoing and mature debate on the role that risk assessment tools play in the criminal justice system of the country. On one side of this debate lies the concern that these algorithms are racially biased and lead to discriminatory decision-making. On the other side is the view that risk assessment tools could assist in the reform of the U.S. criminal justice system, which has historically led to racial injustice and mass incarceration.

By contrast, risk assessment tools are not used by the judicial authorities in the Member States of the Council of Europe and the European Union. More precisely, the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ) notes in the adopted “European Ethical Charter on the use of artificial intelligence in judicial systems” that pilots have been carried out in some of the member countries in order to explore the potential use of these applications, but they have not yet been applied on a wider scale. Other European countries have established scientific councils examining the use of algorithms in the field of justice. The supporters of risk assessment tools present this technology as the solution to the problems of racial injustice and mass incarceration that are severely affecting the U.S.A.’s criminal justice and prison systems. However, do such problems exist in Europe, and is the use of risk assessment tools a good practice that should be adopted to solve them? What are the legal risks that arise when it comes to data protection?

The paper will offer a description of the actuarial risk assessment tools available in the U.S. and the latest developments in this field, i.e. the incorporation of machine learning into their functioning. Moreover, the paper will examine whether the use of such tools could have added value in the European criminal justice field. Finally, the paper will reflect on some of the legal issues that the use of such tools in Europe could raise under European data protection law.

14:00
Discovering the (Un)Secure Side of Non-Personal Data in a Privacy Risk-Analysis for IoT: Some Considerations on the UPRAAM Methodology

ABSTRACT. In order to promote better privacy engineering practices, there is an urgent need to implement privacy-by-design concepts in a proactive and preventive manner, thus fostering users’ trust. Emerging technologies such as the Internet of Things (IoT) collect and communicate, through an increasing number of interconnected devices and sensors, tremendous amounts of data, personal and non-personal. Users’ profiles can easily be inferred from these data collections. This was indeed the finding of the Article 29 Working Party (Opinion 8/2014), which in this regard identified two main design problems: the information asymmetry between users and data controllers, and the inability of users to have any control over their data. Nonetheless, risk analysis for data protection and privacy compliance in the IoT ecosystem often underestimates the potential dangerousness of non-personal data for the (re)identification of the individual. Indeed, the expansion of the IoT represents a major source of non-personal data. Non-personal data can ontologically be sorted into two classes: data that originally did not relate to an identified or identifiable natural person (such as data on weather conditions generated by sensors installed on wind turbines), and data that were initially personal data but were later made anonymous. This has been highlighted in the literature, showing that non-personal data may give rise to data protection and privacy concerns for two reasons: (I) the data can be de-anonymized using other publicly available information about the user, and (II) the data can be aggregated into large datasets for big data analytics in combination with the reuse of public sector information. Both practices undermine informational privacy by providing means for identifying individuals. In this paper, this problem will be addressed by taking the UPRAAM methodology as a case study. The method has been specifically designed by the Privacy Flag project to assess the privacy compliance of IoT deployments under international data protection provisions (ITU, OECD, UN, WTO), as well as national ones (the Swiss framework) and supranational ones (GDPR, Directive 2000/31/EC, Directive 2002/58/EC). The strength of this tool lies in its user-friendly approach, which enables non-specialists without legal training to assess whether a product, service, or information management system is compliant with privacy rules (in a privacy safe area) or is likely to breach some privacy rights (in a privacy risk area). End-users categorize the personal data collected and processed by the service or application they want to assess. If no personal data is collected, UPRAAM counts this as a privacy safe area. The assessment, in other words, deals only with personal data: UPRAAM does not take non-personal data into account in its risk assessment, nor, therefore, does it consider how such data may be used for personal identification, thereby threatening privacy and data protection. In conclusion, I will argue that a risk assessment of privacy in the IoT should be comprehensive: it should also look at non-personal data and mixed datasets (i.e., datasets composed of both personal and non-personal data), regardless of whether or not the personal and non-personal data can be untangled.
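To make the limitation concrete, the following minimal Python sketch mimics the decision logic described above. All names and data categories are hypothetical illustrations, not the actual UPRAAM tool or its taxonomy; the point is only to show where non-personal data falls out of the assessment.

# Hypothetical sketch of the assessment logic described above: a deployment
# is classified by the data categories the end-user reports, and anything
# collecting no personal data lands in the "privacy safe" area -- even if it
# gathers non-personal data that could later be de-anonymized or aggregated.

PERSONAL = {"name", "email", "location", "health"}             # assumed categories
NON_PERSONAL = {"weather", "machine telemetry", "anonymized logs"}

def assess(collected: set) -> str:
    """Classify an IoT deployment the way the abstract says UPRAAM does."""
    if collected & PERSONAL:
        return "privacy risk area"    # personal data triggers the full assessment
    return "privacy safe area"        # non-personal data is simply not considered

# A platform collecting only "anonymized logs" is declared safe, although such
# logs may allow re-identification when combined with other datasets -- the
# blind spot the paper aims to address.
print(assess({"anonymized logs", "weather"}))   # -> privacy safe area
print(assess({"anonymized logs", "email"}))     # -> privacy risk area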

13:00-14:30 Session 4D: IP Rights and the digital underground culture
13:00
Promoting Access to Justice against Copyright Infringements: Case Study of the Internet Courts in China

ABSTRACT. The right to access the courts is a basic human right in civilised societies. With the maturing of the Internet, copyrighted works are frequently disseminated online without the consent of the copyright owners. When a copyright holder wants to claim damages for copyright infringement, she must file a lawsuit against the website operator at the venue of the infringing website, which usually requires substantial amounts of time, energy, and money. Although each country has its own judicial system, these systems are often unfriendly and unaffordable for victims of copyright infringement seeking access to justice. In particular, it is difficult to provide solid evidence of copyright infringement on the Internet, because electronic evidence stored in current centralised databases suffers from data security and trust problems. In response to this challenge, China established three Internet Courts in 2017 to move dispute resolution for copyright infringements from the physical courts to the Internet. All proceedings in these Internet Courts are conducted on the Internet, so the time and expenses of the litigants can be greatly reduced. Most notably, these Internet Courts accept the use of blockchain as a method of securing evidence, to overcome the risk that evidence stored on the Internet can be hacked or falsified. The notion of an Internet Court, which substantially enhances popular access to justice, is a significant judicial innovation. It is of special significance for lawsuits with small-value claims and online evidence, and in which the parties are separated by long distances. However, these Internet Courts leave much to reflect on. First, defendants may not be willing to use the online court to resolve disputes, so it would be unfair and unjust to force a defendant to respond on the Internet. Because all of the arguments and evidence are presented online, the quality of litigants’ statements may be significantly affected. Second, the physical court has its air of sanctity, which is difficult to maintain in the Internet Court. Because the parties in the Internet Courts are not litigating face-to-face and are not in the same physical place as the judges, the litigants may suffer a loss of dignity during trials. Lastly, the blockchain can only prove that the hash value of the electronic evidence has not been tampered with after being placed on the blockchain; it cannot prove the truth of the original evidence. More particularly, the Internet Courts in China use a consortium blockchain for evidence preservation, whose ability to withstand internal or external attacks is far less than that of a public blockchain. Therefore, the Internet Courts do not completely eliminate the doubt that evidence may be forged. Even so, this article argues that true justice does not lie only in pursuing the absolute correctness of judgements, but also in striking a balance between the correctness and efficiency of trials. The Internet Courts may thus establish a new judicial paradigm that pursues a balance between correctness, time, and cost.
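The evidentiary limit described above (integrity after anchoring, but not authenticity before it) can be made concrete with a short Python sketch. The in-memory 'ledger' and function names below are hypothetical illustrations, not the consortium-chain systems the Internet Courts actually operate.

import hashlib
import time

ledger = []  # stand-in for an append-only blockchain ledger (hypothetical)

def anchor_evidence(document: bytes) -> int:
    """Record the SHA-256 digest of a piece of evidence on the 'ledger'."""
    digest = hashlib.sha256(document).hexdigest()
    ledger.append({"digest": digest, "timestamp": time.time()})
    return len(ledger) - 1  # the index serves as a receipt

def verify_integrity(document: bytes, receipt: int) -> bool:
    """Check that the document is byte-identical to what was anchored."""
    return hashlib.sha256(document).hexdigest() == ledger[receipt]["digest"]

receipt = anchor_evidence(b"screenshot of infringing webpage")
print(verify_integrity(b"screenshot of infringing webpage", receipt))  # True
print(verify_integrity(b"tampered screenshot", receipt))               # False

# Note what is NOT proven: a screenshot forged *before* anchoring still
# verifies. The chain attests to integrity since anchoring, never to the
# truthfulness of the original evidence.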

13:30
Cybercrime governance and internet fraud eradication: Travesty of digital rights in Africa

ABSTRACT. The African continent accounts for the fastest growth in internet penetration globally. As of 30 June 2019, internet penetration in Africa stood at 39.8%, compared with 60.9% for the rest of the world combined. Digital connectivity has tripled in the last five years; it contributes to development on the continent but also, more recently, to the ever-increasing cyber-threats perpetrated against government institutions and private-sector stakeholders. Africans perpetrated two recent massive international internet frauds (United States of America v Emmanuel Oluwatosin Kazeem & others and United States of America v. Valentine Iro & 79 others). Consequently, the international community has questioned the impact of cybercrime governance in the African sub-region. This has galvanized African governments into reappraising measures towards eradicating cybercrime. The effects of these measures are questionable, as they tend to breach the digital rights of Africans. Law enforcement agents now act on intelligence reports which often turn out to be false alarms, and Africans are being subjected to unlawful stop-and-search and invasions of their digital devices in order to clamp down on internet fraudsters. Are these measures capable of halting the upsurge of internet fraud perpetrated by Africans? What lessons can African Heads of State learn from developed countries on the eradication of cybercrime? In what ways can foreign governments help in Africa's quest to rid the sub-region, and the globe, of cybercrime? This research stems from the increasing involvement of African youths in the cybercrime scourge and its consequent impact on the global economy. Unless drastic measures are put in place, cybercitizens will continue to bear the brunt of these fraudsters, a problem currently escalating as the successful techniques of the internet fraud trade are taught in special institutions within the African sub-region. Part of the focus of this research shall be geared towards obtaining relevant data and information from internet fraudsters and cybercrime institutions in Nigeria, Nigeria having been adjudged a safe haven for cybercriminals and cybercrime perpetration. The experiences gathered would be of immense advantage in eradicating the ever-increasing menace of internet fraud globally.

14:00
Financial Intelligence Units and the question of the applicable data protection legal framework

ABSTRACT. “Money laundering is one of the key ‘engines of crime’ sustaining global criminal business worth billions of dollars” – claims Rob Wainwright, the Executive Director of Europol (Europol, 2017). Acknowledging the need to confront the flows of illegally obtained financial assets, a global anti-money laundering and counter-terrorist financing (AML/CTF) framework has been developed. Since the 1990s, one of the cornerstones of this framework lies in the sharing of knowledge of potentially suspicious transactions between financial institutions and the state’s competent authorities, including law enforcement, but also e.g. tax or customs authorities. Such sharing is facilitated by the Financial Intelligence Units (FIUs), assigned with the task of receiving and analyzing the suspicious transaction reports (STRs), issued mainly by financial institutions. FIUs then disseminate any information they consider related to money laundering or terrorist financing activities to competent state authorities. Therefore, the FIUs are not only central players in the AML/CTF landscape, but also a crucial link in the chain of the exchange of information between the private sector and the competent authorities.

Given that a lot of this information concerns personal data, this paper proposes to take a close look at the FIUs’ data processing activities from the EU data protection point of view. More specifically, it will focus on the challenges with regard to the identification of the applicable data protection legal framework, i.e. the problems with delineating between the General Data Protection Regulation (GDPR) and the so-called Law Enforcement Directive (LED).

To that end, the paper will first provide an overview of the existing FIUs. By analyzing their statutory character, it will ascertain whether differences in the adopted models of the FIUs (administrative, law enforcement, or judicial) play a role when determining the applicable law. Secondly, the paper will recall the concerns already expressed by some EU Member States during the legislative work on the European Data Protection Reform with regard to the scope of application of the GDPR and LED in the AML/CTF context, as those concerns have never been carefully addressed. Thirdly, it will analyze the issue, most recently raised by the European Commission in the report on assessing the framework for cooperation between FIUs, regarding the lack of harmonized approaches towards exchanges of information between Member States’ FIUs and the FIUs of third countries (European Commission, 2019). According to the Commission, despite a “clear obligation” to apply the GDPR when the Member States’ FIUs exchange information with third countries, most FIUs apply the LED instead, or both the GDPR and the LED (ibid.).

This paper will argue that the problem publicly acknowledged by the European Commission is in fact not limited to the exchange of personal data with foreign FIUs, but might be of a larger scale, also concerning other FIU data processing operations. The ultimate goal is to pin down the uncertainties with regard to the applicable data protection regime in the context of the execution of tasks by the FIUs, and to reflect on the possible consequences of this lack of legal certainty.

13:00-14:30 Session 4E: Government, Privacy and Data Protection
13:00
Use of AI Based Technologies in International Arbitration

ABSTRACT. Developments in science and technology in today’s world, and the rapid change brought by such developments, lead to new questions in international dispute resolution. One such question is the interaction between artificial intelligence (AI) and international arbitration. In private international law relationships, arbitration is usually preferred as a method of dispute resolution by the parties to a dispute for several reasons, including confidentiality, expertise and time duration. With the birth of AI, the question is whether AI-based technologies and arbitration may work in cooperation. Theoretically, the intervention of AI in arbitration can be enabled by an agreement between the parties who choose arbitration to settle their dispute (provided that such an agreement is valid under the lex arbitri) and can contribute to lessening the workload in the process as regards the nomination and appointment of arbitrators, communication among the parties, and the analysis of the related documents as well as of the relevant rules of applicable law and the facts of the case at hand. As such, it can be argued that AI could speed up the process and minimize costs as well as the risks of the human mind. However, the lack of data with which to develop AI-based technologies for arbitration, the potentially unpredictable results reached by AI, and due process concerns equally create doubts about the participation of these technologies in arbitration. Thus, a number of legal problems seem to arise from a potential interaction between AI-based technologies and international arbitration, e.g.: (i) What kind of intervention of AI is possible under current rules on arbitration? (ii) What are the advantages that such an intervention promises? (iii) What are the limits and risks of such an intervention? A further distinct but related question is whether the recognition and enforcement of arbitral awards given with the involvement of an AI-based technology would be possible under the 1958 New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards and/or under national legislation. This paper, which is based on the authors’ ongoing research within the project on “Artificial Intelligence and Law”, funded by the Scientific and Technological Research Council of Turkey, aims to analyse and discuss the possibility of creating AI-based technologies, as well as their advantages and shortcomings, both in settling disputes by arbitration and in the recognition and enforcement of arbitral awards.

13:30
African governments and the influence of corruption on the proliferation of cybercrime in Africa: Wherein lies the rule of law?

ABSTRACT. Recent international and domestic reports have consistently tagged Africa as a safe haven for cybercriminals and cybercrime perpetration. Most of these reports cite the absence of a holistic cybercrime legal framework, and the lack of implementation, as the basis for their conclusion. In the absence of a regional cybercrime legal framework to curtail the menace of cybercrime in the African sub-region, African state governments have arguably enacted legislation and policies in that regard. However, cybercriminals of African descent have been known to be involved in massive internet fraud cases globally. What has informed this development? This paper argues that the level of corruption in private and public institutions has a significant influence on the proliferation of cybercrime in Africa. Human greed, and the drive to attain greater heights in African society by whatever means, have led corrupt insiders in private and public institutions to relay critical information to cybercriminals to facilitate their criminal intentions, despite the availability of cybersecurity measures. These activities by corrupt insiders have gone unnoticed, thereby enabling cybercriminals to bypass the security infrastructure put in place by governments. The implication is that cybercriminals can now effortlessly victimize cybercitizens by using social networks to profile them. Possible solutions are suggested, including curbing political leaders' flamboyant lifestyles, institutionalizing an integrity workforce, surveillance, increasing workers' emoluments, and providing adequate welfare packages, among others.

14:00
The new rules on audiovisual media: a step forward or a hollow shell when it comes to vlogger advertising?

ABSTRACT. Continuous advancements in technology are at the basis of the altering media landscape and social interactions of global society. Nowadays, audiovisual content is available ‘on the go’: allowing viewers to consume video content on-demand and on portable devices, regardless of location and time. The traditional image of the passive media consumer, in front of a television, is being challenged by the 'prosumer', who consumes, creates and shares audiovisual content online. Youngsters in particular are choosing to spend their screen time online – watching their favourite ‘vloggers’ (i.e. video bloggers) on video-sharing platforms such as YouTube, Facebook Watch or Twitch.

This shift has, in turn, influenced the means and methods of online commercial communication: ‘vlogger advertising’ is exponentially gaining importance, and especially integrated forms of advertising – where the commercial message and the editorial content are intertwined – are commonplace in vlogs. Knowing that certain vloggers are able to attract more viewers than traditional television programmes, integrated vlogger advertising sparks legal and regulatory questions. Adding to these concerns is the fact that vlogger advertising is not subject to the same level of regulatory attention as traditional advertising.

The EU legislator has responded to the changing media environment by modernizing the Audiovisual Media Services Directive (2018/1808/EU) (hereinafter: ‘AVMSD’) on two levels: (1) the scope of application is stretched to video-sharing platforms, (2) user-generated content creators (such as vloggers) are included to the extent they qualify as ‘audiovisual media services’.

This research builds on my previous research, which found that existing EU and national legal frameworks are, in theory, able to arm minors against (integrated) vlogger advertising. However, legal actions against vloggers remain rare in the EU. This is remarkable considering the omnipresence of questionable advertising practices on video-sharing platforms. This observation has led to the central question of the present research: Will the new AVMSD be able to provide actionable solutions to tackle vlogger advertising, or is it rather a classic case of the increasing chasm between rights in law and rights enforced?

My analysis of the directive shows that the new rules are likely to still cause difficulties in their practical implementation. The issues I plan to discuss in this paper are the following:
- When are vloggers covered by the AVMSD? Only vloggers that carry out an economic activity in the sense of art. 56 and 57 TFEU fall under its scope. Any guidance on where to draw the line between ‘professionals’ and ‘hobbyists’ is lacking.
- How should vloggers make commercial messages recognizable? The AVMSD requires vlogger advertising to be ‘recognisable as such’.
- Jurisdictional competence issues, sprouting from the ‘country of origin principle’ underpinning the AVMSD. Children often watch vloggers from another Member State, yet media regulators have no automated referral system in place.
- Enforcement competences are dispersed amongst different governmental and self-regulatory bodies.

The results of this research, as well as possible remedies, will be discussed during the conference.

13:00-14:30 Session 4F: Financial and Tax aspect of technology regulation
13:00
Finding the Balance between Security and Human Rights in the EU Border Security Ecosystem

ABSTRACT. The lack of internal borders within the Schengen Area is one of the unique features of the European Union. However, many have raised fears that it is open to exploitation, and leaves the entire territory of the Union vulnerable, since authorities are unable to accurately monitor exactly who is within their territories at any one time.

One method which attempts to address this issue has been through the establishment of several large scale IT databases within the area of borders and security, most notably the Schengen Information System (SIS II); the Visa Information System (VIS) and the European Asylum Dactyloscopy (Eurodac). Two new databases have also been proposed, the Entry-Exit System (EES) and the European Travel Information and Authorisation System (ETIAS). What these databases do is collect certain categories of information regarding individuals who enter the territory of the EU, in order to identify whether they pose a risk to its internal security.

Recent proposals have suggested that it would be beneficial to bring these various databases together within an interoperable framework. However, this raises a number of issues, particularly with regard to human rights, and has the potential to fundamentally alter the balance that currently exists between the protection of human rights, such as privacy and data protection – already heavily restricted in the area of border security – and the necessity of collecting personal information in order to protect the EU’s internal security. Questions can be raised as to how these new proposals would affect who has the right to access this information, and when, as well as how the various pieces of information might be linked together.

It is therefore proposed that, in light of this evolving landscape, what is required is a new manner through which to consider the situation, one which enables a more holistic perspective to be taken. After all, while each individual database might be justifiable and proportionate on its own merits, when looked at from a more holistic perspective, or taken as representing a cumulative threat, this might not be the case.

How can this be achieved? In nature, the concept of the ecosystem provides a method through which to understand and study the world. It recognises the existence of a closely interconnected system of actors, who are engaged in the exchange of information and resources. In particular, it places great importance on the interconnections that exist between different actors, allowing inferences to be drawn as to how the actions of one can affect the behaviour of another. In many ways, the field of EU border security can be seen as representing an ecosystem.

This paper therefore considers whether by taking an ecosystem approach it is possible to gain a better understanding of the risks which interoperability between these databases might pose to human rights, and thus determine whether a fair balance between the competing interests of human rights and security can be found.
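One way to make the cumulative-threat point concrete is to model the databases as nodes in a graph, with interoperability links as edges: the data reachable from any one access point grows with the transitive closure of those links. In the Python sketch below, the database names follow the abstract, but the link structure and record categories are hypothetical illustrations.

# Hypothetical sketch: interoperability as a graph. Each database holds some
# categories of data; interoperability links let an authority querying one
# database reach data held in the connected ones.

holdings = {
    "SIS II":  {"alerts"},
    "VIS":     {"visa history", "fingerprints"},
    "Eurodac": {"asylum fingerprints"},
    "EES":     {"entry/exit records"},
    "ETIAS":   {"travel authorisations"},
}

# Before interoperability there are no links; a shared search portal in
# effect connects everything (the link structure here is illustrative only).
links = {db: set(holdings) - {db} for db in holdings}

def reachable_data(start: str) -> set:
    """All data categories reachable from one access point (transitive closure)."""
    seen, stack, data = {start}, [start], set(holdings[start])
    while stack:
        for nxt in links[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
                data |= holdings[nxt]
    return data

# One proportionate-looking access point now exposes the union of all
# categories -- the cumulative effect a database-by-database review misses.
print(sorted(reachable_data("SIS II")))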

13:30
A presentation of “A State of Creativity”

ABSTRACT. My presentation will discuss my monograph “A State of Creativity.” The argument is that creativity has been integral to the development of the modern State, and yet it is becoming increasingly side-lined, especially as a result of the development of new machinic technologies such as 3D printing. Arguing that inner creativity has been endangered by the rise of administrative regulation, I explore a number of reforms to ensure that upcoming regulation does take creativity into account.

Creativity is key to the regulation of a society. Capitalism has mirrored the processes of creativity, but the capitalist regulation of creativity has begun to separate from the creative process. This is because underlying technological change has led creativity to depart from capitalist norms. Ever more uses of technology cannot be represented under the capitalist paradigm. Regulation therefore needs to move away from the conception of capitalism and begin to directly accept and recognise the processes that lie behind creativity. This is not a narrow conception of creativity but a broad one which encompasses all forms of making. If regulation can reflect the ebbs and flows of creativity, then regulation will be correctly aligned with the underlying process behind society.

I investigate how the failure to incorporate creativity into administrative regulation is, in fact, adversely impacting the regulation of new technologies such as 3D and 4D printing and augmented reality, by focusing on issues concerning copyright and patents. This reveals how regulation has moved away from considerations of creativity, despite the importance of creativity for the existence of the State. In summary, if we do not embrace creativity, the future of the State is imperilled.

14:00
The role of the individual in EU data protection law

ABSTRACT. In an environment of mass digitalisation and datafication, myriad concerns arise about the effectiveness of information privacy and data protection laws. A significant line of that criticism calls into question the role of the individual. Can an individual consent in a data-saturated environment? Should we frame our legislative regime around the protection of an individual? What about group or collective concerns? Some claim that privacy and data protection laws are overly individualistic or overly burden the individual, or argue that we should turn our attention to group or collective concerns. In order to engage with this issue and consider the commonalities in these lines of scholarship, one must first examine the role of the individual in data protection law.

Through an examination of the General Data Protection Regulation, Article 8 of the Charter of Fundamental Rights in the European Union and associated case law of the Court of Justice of the European Union, this paper offers an account of the role of the individual in EU data protection law. This paper sets out to achieve two things.

First, this paper demonstrates the centrality of the individual to EU data protection law, and argues that an examination of the individual is a fruitful means of assessing the capacity of EU data protection law to reach its aims.

Second, recognising that the individual plays a number of roles within the regime, it is useful to develop a framework for understanding those roles. This paper offers a conceptual framework by which we can understand and assess the role of the individual in the regime. This paper argues that the individual serves three primary functions in EU data protection law. The individual serves as the normative foundation underpinning the regime. The individual is also the primary subject of data protection law, as the regime is organised around the protection of the individual as the data subject. Additionally, the individual is central to the enforcement of data protection law, empowered to act in their self-defence through the protection of certain individual decision-making and the grant of individual rights. This framework then forms the basis of an assessment of the merits of each of these roles.

14:30-16:00 Session 5A: The Philosophical Foundations of Information Technology Law
14:30
The questions left unanswered: Sampling, quotation and dialogue in the aftermath of Pelham

ABSTRACT. This piece addresses the implications of the ruling in C-476/17 Pelham for the practices and purposes of music sampling. The judgement in this case covered wide-ranging issues regarding the rights of phonogram producers, the potentially infringing nature of sampling, and the applicability of copyright exceptions alongside the fundamental rights of artistic freedom and protection of property contained within the European Charter of Fundamental Rights. Whilst the judgement arguably strikes a balance of sorts between these issues, it nonetheless leaves many legal questions unanswered. As the Advocate General noted, producers of music have rights in their phonograms, but can also be artists in their own right. Coupled with the potential legal consequences determined by the ‘recognisability’, and by implication the length, of a sample used, the exception of quotation (for criticism or review) can also come into play. However, this raises its own potential complexities regarding the factors expressed by the court that require the sample to enter into some form of ‘dialogue’ with the original work from which it was sampled. Obviously, issues of infringement will be fact-specific in light of these considerations; however, this does allow an opportunity to explore how ‘recognisable’ a sample needs to be, which in turn will depend on what element of a sound recording is sampled. This also relates to qualitative issues within the framework of copyright infringement itself and necessarily involves considering matters of transformation. Furthermore, dialogue is an essential part of artistic and musical composition; however, it will be asserted that this concept should not be directed ‘inward’ by reference back to the copyright work, or works, sampled, nor should it be confined to an overly textual interpretation. Music is an integral part of cultural identity, and rhythms in particular (which was the type of sample in question in Pelham) can form a foundation upon which other compositional arrangements can interact, and indeed create a dialogue with each other. It will therefore be argued that although the judgement does not seem as restrictive as might have been feared following the opinion of the Advocate General, highly significant qualifications remain, and important issues of expression, as related to quotation and dialogue, need to be appreciated.

15:00
Unpacking self sovereign identity: origins, promises, and challenges

ABSTRACT. There is no uniform rule about identity; the concept and its governing norms shift according to the legal, technological or institutional context. The creation of a new, and uniform, digital identity ecosystem is an aspiration that has progressively risen to prominence in disparate ways. Various identity management solutions are emerging in different jurisdictions, with the goal of creating a unified, privacy-preserving identity bridging the offline with the online. The market for digital identity is already quite substantial and very diverse. Within this trend, the concept of self-sovereign identity has re-emerged. No consistent definition of the concept has been established. In general terms, we can describe self-sovereign identity as an identity management system, developed by a private or public entity, whose technological design decisions are guided by a set of principles that are loosely defined and not universally accepted as a common standard. It is essentially a technological solution which translates the goals of autonomy and individual control into decentralization and “user-centric design” over the usage, storage and transfer of one’s digital data. The concept is attached to expressions of both individual control (sovereignty) and trusted verifiability – an aspiration familiar from what blockchain is promised to bring in contemporary data protection discourse. These identity management solutions rely on the use of decentralized ledgers, cryptography, and local processing of data. These technological design options aim to materialize some of the core principles: the absence of a central authority controlling the identity data, decentralized verifiability, and privacy. Due to the granular prioritization of the purported design principles, and the progressive distancing of the current technological state of the art from the self-sovereign ideological underpinnings, “decentralized identity” is being used interchangeably, although it still suffers from the same semantic uncertainties. With coinciding objectives and features, self-sovereign identity projects have become increasingly attached to blockchain technological development and mainstream adoption. At the same time, blockchain enthusiasts are counting on the success of self-sovereign identity solutions as the first implemented use case of blockchain technology. The expansion of decentralized identity solutions, the growing market, and the institutional interest all raise questions with regard to the legal framework surrounding their implementation. The eIDAS Regulation defines levels of trust services and thus provides the regulatory environment that enables the creation of different legally compliant identity system solutions. In addition, ensuring GDPR compliance is challenging. Finally, many applicable legal norms are domain-dependent, with certain areas being highly regulated (i.e. financial markets and institutions). These reconciliations are at times hard to achieve. The paper will provide an overview of the self-sovereign identity ecosystem within the current technological environment of decentralized networks, and it will trace some of the questions that surround it: What are the social, technological, and legal circumstances that have led to the self-sovereign aspiration? What are the fundamental problems surrounding individual identity that this solution is trying to address? Which actors are involved in developing self-sovereign identity solutions? What are the legal shortcomings to its implementation and adoption by current social and technical structures?
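As an illustration of the 'decentralized verifiability' these designs promise, the sketch below (Python, using the cryptography library) has an issuer sign a claim that the holder stores locally and that any verifier can check against the issuer's public key alone, with no central identity provider consulted at verification time. It is a minimal sketch of the cryptographic core only, not any particular SSI standard: DIDs, revocation and selective disclosure are all omitted.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g. a university) holds a signing key; in SSI designs its
# public key would be discoverable via a decentralized ledger.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# The credential itself is stored by the holder ("local processing of data").
credential = json.dumps({"subject": "alice", "claim": "degree: LLM"}).encode()
signature = issuer_key.sign(credential)

# Any verifier checks the claim against the issuer's public key alone,
# without calling back to a central identity provider.
try:
    issuer_pub.verify(signature, credential)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")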

15:30
Post-Conflict Environments as Regulatory Sandboxes

ABSTRACT. As the idea of off-shore banking and companies has become less attractive, the unregulated environment of post-conflict, and especially frozen-conflict, territories provides an alternative. Apart from unlimited access to do business in an unregulated market, modern and innovative approaches also appear. The lack of the rule of law makes it impossible for foreign investors to invest and have a guaranteed and precisely pre-calculated profit, with an official or unofficial guarantee from the government that the business laws and environment are not going to change. The legal loophole, and the demand to invest and make some kind of additional profit, are therefore very large on all sides, for investors and local governments alike. Accordingly, various kinds of sandboxes can appear in post-conflict countries, depending on the needs of the businesses that a partner political system and its financial elites deem necessary. After surveying the FinTech opportunities already present in post-conflict environments, we will continue with those that are more complex and bound up with today's innovations, such as cryptocurrencies. The huge success and wide variety of payment possibilities with currencies such as Bitcoin, and their present demand on the global market, place an additional burden on controlling such finances and the sources they come from or end up in. As financial markets and countries, apart from some notable examples including Switzerland, have taken a long time to bring the streams of local finances under control, it was a real relief for actors in the grey zone to be able to pay in cryptocurrencies. Case studies such as Cyprus will be used as a positive example of development from an off-shore destination into an EU member state. Ultimately, we plan to research the future role of the UK financial market after it leaves (or stays in) the EU and starts implementing its long-awaited economic and financial changes, which would benefit banks, investors and, ultimately, citizens as final consumers.

14:30-16:00 Session 5B: Regulation on advertising
14:30
Unpacking due diligence in the era of workplace robotics: perspectives from different stakeholders

ABSTRACT. Robotics has been integrated into manufacturing lines since the industrial revolution, and the primary regulation for traditional industrial robotics requires the implementor to construct a clear barrier between human operators and robots due to safety concerns. Recently, smart industrial robotics is being designed for the purpose of safe human-robot collaboration, with the aim of removing the physical barrier in the factory; examples include an autonomous robot trolley transporting goods within the factory, or a collaborative robot designed to work next to human operators, handing over equipment and parts on an assembly line. However, at least in the UK, this type of robotics is yet to be widely adopted, and in most cases the robots are still kept in the cage. Employers are obligated to perform due diligence under health and safety legislation for workplace safety, but the specifics of the level of due diligence to be performed, and of the risks in implementation, are up for interpretation, which is problematic.

To explore this issue, we took an exploratory approach and conducted an empirical study on the wider topic of the challenges in the deployment of both embodied and unembodied intelligent systems. We conducted semi-structured interviews with 13 experts (i.e. legal practitioners, technologists, academic researchers, manufacturers, and consultants). The interview questions focused on the legal, social and ethical concerns of smart technologies and the key aspects of technology adoption. Applying thematic analysis to our interview transcripts, we capture and present the main concepts revealed by the participants. Our findings show that ‘due diligence’ is one of the main challenges (see Figure 1).

[Figure 1: main challenges identified in the interview study; diagram not reproduced in this abstract.]*

Therefore, the purpose of this paper is to unpack how the term ‘due diligence’, in relation to the process of technology design and implementation, is practiced by the different experts who are also the key stakeholders in the adoption of new intelligent and autonomous systems in industrial workplaces. Our paper proposes that the level of due diligence performed is based on how the stakeholders interpret this duty, which varies from lawyers to technologists to business representatives. Their backgrounds, training, practical concerns and interests shape how they understand their role. Whilst the term ‘due diligence’ is used by various stakeholders, its meaning differs from one stakeholder to another. We find that due diligence does not only concern safety assurance or regulatory compliance, but rather involves a relationship between the understanding of law, risk management, and the design of technology. Most importantly, technology should be designed and used for the purpose of supporting human workers, so exploring the impact of technology integration on their rights, goals and professional identities is crucial.

*The diagram is subject to change. This only represents the findings from an exploratory study which still requires further development.

15:00
The post editorial control era: how EU media law develops a model of cooperative responsibility corresponding to platforms’ organisational control.

ABSTRACT. This paper explores the governance system established for platforms in the recently revised Audiovisual Media Services Directive (AVMSD). Although the AVMSD has brought platforms into European media law, it recognises that platforms’ control differs from traditional media service providers’ editorial control. Platforms outsource the production and publication of content to their users, and focus on algorithmically organising this content. This organisation of content is moreover strongly influenced by users’ (sharing, watching, and liking) behaviour. That is of course not to say platforms have less influence than legacy media organisations. Rather, control on platforms is exercised in a different manner and by multiple parties.

The AVMSD accordingly defines platforms by their organizational control over user-uploaded content. The responsibilities the AVMSD attaches to the exercise of organizational control not only involve the platform, but also the users and uploaders that are able to exercise influence on the visibility of content on platforms. By doing so, the AVMSD moves away from both its own centralized approach to the editorial responsibility of traditional media service providers, as well as the more centralized approach to the responsibility of platforms taken by the Copyright Directive.

Instead, the AVMSD lays the groundwork for platform regulation based on cooperative responsibility. Cooperative responsibility is based on the idea that public values on platforms are impacted by a variety of stakeholders, and platforms often do not have the capacity to address the impact of their service without taking account of and involving these other stakeholders. Accordingly, platforms are not only responsible for actions under their direct control, but must also place other stakeholders in a position to exercise their influence responsibly.

This paper asks how the obligations the AVMSD attaches to the exercise of organisational control can be understood and operationalised in light of cooperative responsibility. It develops its argument in three steps. It first examines platforms’ organisational control under the AVMSD. It argues that in contrast to traditional editorial control, platforms are able to exercise significant influence by putting in place the architecture through which their users interact, and organising these interactions. Secondly, the paper categorises the measures the AVMSD requires platforms to take into those that put responsibility on the platform (such as removal), and those that put responsibility on the users and uploaders (such as transparency or user control tools). Finally, it analyses the obligations the AVMSD imposes on platforms in light of cooperative responsibility. One aspect of platforms’ responsibility is that they are required to put in place and design the measures outlined in section 2 so that uploaders and users can assume responsibility. However, crucially, the AVMSD also establishes mechanisms through which the allocation of responsibility between platforms and other stakeholders can be negotiated as users adapt to changes in the platform architecture and the effectiveness of the model is monitored.

15:30
Article 17 of the CDSM Directive and Freedom of Expression: Shaping the Future of User-Generated Content

ABSTRACT. Considering the Court of Justice of the EU’s landmark decision in Glawischnig-Piesczek v Facebook, the purpose of this paper is two-fold. Firstly, to critically assess the compatibility of Article 17 of the Directive on Copyright in the Digital Single Market (the CDSM Directive) with the right to freedom of expression under Article 11 of the EU Charter of Fundamental Rights. Secondly, to suggest several procedural safeguards which ensure that automated content recognition systems, or so-called ‘upload filters’, can be implemented in a way that is compatible with this right to freedom of expression. The paper is based on an in-depth analysis of Article 17 of the CDSM Directive, Article 14 of the E-Commerce Directive including its Recitals 47 and 48, Article 11 of the EU Charter, the case law of the CJEU and the European Court of Human Rights, as well as recent academic sources. Article 17 lays down filtering obligations for online content-sharing service providers (OCSSPs) to prevent future copyright infringements. It requires user-generated content (UGC) to be reviewed by OCSSPs before it can be uploaded and made available to the public, in order to meet the requirement of making ‘best efforts.’ However, under the CDSM Directive, the liability exemption included in Article 14 of the E-Commerce Directive shall no longer apply to OCSSPs. There seems to be general agreement in the literature that the obligations set out in Article 17 override the ‘safe harbour’ framework of Article 14 of the E-Commerce Directive, as well as requiring OCSSPs to make ‘best efforts’ to prevent the uploading of unlawful copyrighted content through ‘upload filters.’ In Glawischnig-Piesczek v Facebook, the CJEU found that, upon receiving a notice of complaint, OCSSPs were required to remove and/or block ‘identical’ and ‘equivalent’ content previously found to be illegal by Member State courts. It held that this obligation could even apply globally where it is within the framework of the relevant international law. However, there appears to be a research gap on the compatibility of Article 17 with Glawischnig-Piesczek v Facebook. To fill this gap, this paper suggests that, for the CDSM Directive to comply with Article 11 of the EU Charter, upload filters must satisfy the ‘minimum criteria’ for a freedom-of-expression-compatible law on blocking measures, which were suggested by the ECtHR in Yildirim v Turkey. It concludes that, unless these Yildirim ‘minimum criteria’ are taken on board, Article 17 will breach the right to freedom of expression.
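To see why such filters sit uneasily with context-dependent exceptions such as quotation or pastiche, consider a deliberately crude Python sketch of hash-based matching (real systems use perceptual fingerprinting, but the structural point carries over): the matcher can only ask whether reference content is present, never for what purpose it is used. Everything below is a hypothetical illustration, not any deployed filtering system.

import hashlib

def chunk_hashes(data: bytes, size: int = 32) -> set:
    """Hash fixed-size chunks (a crude stand-in for perceptual fingerprinting)."""
    return {hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)}

# Hypothetical reference index built from rightsholder-supplied works.
reference_index = chunk_hashes(b"full text of a protected work " * 40)

def filter_decision(upload: bytes) -> str:
    # The only question this matcher can answer: does reference content appear?
    return "block" if chunk_hashes(upload) & reference_index else "allow"

# A wholesale copy and a lawful quotation embedding the same passage look
# identical to the matcher; purpose (criticism, pastiche, dialogue) is
# invisible to it.
print(filter_decision(b"full text of a protected work " * 40))        # block
print(filter_decision(b"my critical review quoting: " +
                      b"full text of a protected work " * 2))         # block too
print(filter_decision(b"entirely original commentary"))               # allow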

14:30-16:00 Session 5C: Ethics of AI
14:30
Contract Law in the Age of Big Data

ABSTRACT. We are living in a data era in which data drives everything we do, including business. In data-driven business models, users’ personal data is collected in order to determine consumer preferences and to tailor production and advertising to those preferences. In these new business models, consumers do not pay a price but provide their data, such as IP numbers, locations, and email addresses, in order to benefit from a digital service or digital content. Content or service providers and users have something in common in these “free” business models: each benefits from the other. Users benefit from the content or service, but they provide their personal data and/or are exposed to advertisements in return. This legal framework and trade cannot be constructed without taking contract law into consideration. Contracts naturally facilitate interactions between companies and users, and their transactions are regulated by contracts in which their agreement on data use and data processing is stipulated. In the last two to three decades, in particular, personal data has started to function similarly to money in synallagmatic transactions. In this vein, the role of contract law is becoming more significant and complicated: it no longer concerns only economic ordering and economic transactions; because personal data is involved, data protection law, which is considered part of fundamental rights protection, also comes into play. Data is always collected and processed through a contractual relationship, and in this paper I will argue that there are problems arising from contracts involving data to which contract law applies, and that contract law can map these problems. The scope of this study will be limited to cases where data is provided as counter-performance and where data is provided in addition to a monetary payment.

15:00
Smart cities and the public interest conundrum: preserving the city and citizens’ rights in the era of smart urbanism

ABSTRACT. To address urbanization challenges, cities are using the great potential of data and technology, thereby evolving into ‘smart cities’. Overall, these initiatives differ considerably in terms of scope, ambition and objectives, making it difficult to find a common definition of the smart city. Regardless of these differences, though, one can argue that the concepts of public spaces, public interests and public functions are increasingly put into question as cities become testbeds for innovation. The close collaboration of public/local authorities and private companies in the form of Public-Private Partnerships (PPPs), unclear objectives (public interest vs commercial), and the monetization of personal data collected in public spaces blur the quintessentially public character of cities. How can that character be preserved?

This is a pertinent question given that smart city solutions may constitute interferences with citizens’ fundamental rights, with privacy, data protection and (in)equality being major concerns well documented in the academic literature. While similar concerns emerge in other environments (e.g. online, the smart home), a crucial distinction is the diminished control of individuals, as they have no choice but to live in smart urban environments. This top-down approach raises a fundamental question about public/local authorities’ responsibility to safeguard fundamental rights.

This research explores their responsibilities in particular in light of (1) the EU Charter of Fundamental Rights and the conditions for permissible interferences with fundamental rights, and (2) the lawfulness principle in the GDPR. The GDPR has the objective of ensuring that all fundamental rights are protected when personal data is processed, and it is particularly relevant in a smart-city context given the sheer volumes of personal data processed. The research puts the notion of public interest at the centre. Given the importance of ‘public interest’ as a justification for limitations on fundamental rights (Art. 52(1) Charter), and as one of the most pertinent legal grounds for personal data processing in the smart city (Art. 6(1)(e) GDPR), the aim is to examine how the notion is to be interpreted and to delineate the public-private boundaries that are increasingly blurred in the smart city, by addressing the following questions:
o The limits of public interest: is it a catch-all term capturing any innovation, and urbanists looking for problems on which to test technologies? Is it, and when is it, polluted by the PPP smart city paradigm?
o How does it link with the condition of legality, mandating the existence of a law both for limitations of fundamental rights and for the grounding of the GDPR’s public interest legal basis: do we need smart city laws to ensure the legality of smart urbanism?
o Does it come into play only insofar as public authorities are directly responsible for a smart-city solution, or does it place a broader obligation on them to interfere with the invisible hand to ensure citizens are protected in public spaces? How can such intervention be exercised?

15:30
Technology, Manipulation and Human Dignity

ABSTRACT. The current era of accelerated technological progress is characterized by the rapid application of Artificial Intelligence (AI) across domains, which causes abrupt changes in society. The predictive abilities of AI support medical diagnosis, increase energy efficiency, and predict crime, forever altering our perception of the world.

However, recent studies show that AI can be used not only to predict choices, but also to bypass the cognitive autonomy of individuals and to influence human emotions and thoughts through fine-grained, sub-conscious, and personalized forms of persuasion.

This growing capacity to subliminally alter the anticipated course of another's action, coupled with the new market logic that commodifies human behavior into a prediction product, has raised the concern that companies with data can (and will) modify user behavior to act upon certain offers, services, or products.

Technology ethicists are continuously raising the concern that 'online manipulation' (using algorithms to subvert another person’s decision-making power covertly) undermines the notions of human autonomy and moral agency. This seemingly legal problem has become subject to ethical reflections because of its unprecedented nature and the limitations of the current legislative frameworks.

Consumer protection law is limited to regulating individual economic transactions, and, for now, data transactions undermining the agency of an online user outside of her awareness stay out of its scope. Also, while there has been a trend in European academia to address this issue through the lens of personal data protection legislation, AI challenges other aspects than mere personal data protection (e.g., psychological integrity, autonomy), and thus data protection cannot be the omnibus governance solution for AI. Moreover, while ethics may fill this gap, it is often ambiguous as a basis for AI governance and lacks enforceability, triggering the need for a more specific binding legal instrument.

This paper aims to internalize ethical considerations into the legal discourse by exploring the repercussions of 'online manipulation' on the general legal principle of human dignity. The concept of human dignity can be understood to entail treating human beings as 'ends' rather than 'means.' In the words of Joseph Raz and Lon Fuller, respecting human dignity entails treating humans as "persons capable of planning and plotting their future."

For some scholars, human dignity is the mother principle that grants legal recognition to human moral agency and is therefore the foundation of the three pillars of political liberalism: democracy, the rule of law, and human rights. This paper intends to re-think what human dignity entails in the context of the manipulative use of technology, in order to provide a conceptual framework for future regulation.

14:30-16:00 Session 5D: Iot & Wearables vs. Security, Privacy & Data Protection?
14:30
An Assessment of the EU Kids Model of Internet Governance: A Case Study for School Children in India

ABSTRACT. Internet governance refers to the rules, policies, standards, and practices that coordinate and shape global cyberspace. The Internet created a new environment that is complex and dynamic through connectivity. Internet connectivity has generated innovative new services, capabilities and unprecedented forms of sharing and cooperation; it has also created new forms of crime, abuse, surveillance and social conflict.

A need is felt to protect children legally, since it is estimated that 1 in 3 internet users globally is below the age of 18. The literature further indicates that the internet has caused behavioral issues among children, such as cyber-bullying and internet addiction. Around 2012, the EU Kids Online project undertook to compare and explain children’s experiences of online risk and safety. The key results of the survey indicated that 30% of 9-16-year-olds had had online contact with a stranger they had not met face-to-face; 9% had gone to a face-to-face meeting with someone they first met online; 15% of 11-16-year-olds had sent or received sexual messages on the internet; and 14% of 9-16-year-olds had seen sexual images on the internet. In 2016, UNICEF recommended embedding children’s rights into the activities and policies of internet governance, including the need for strong multi-stakeholder empirical research to ensure that important information about children’s internet access and use is collected locally, so that inequalities and problems can be addressed. The existing literature on internet governance primarily highlights the rights of children with respect to the internet. Meanwhile, India has the largest share among developing countries and ranked 9th in e-commerce sales in 2017, with total e-commerce sales of $400 billion, amounting to about a 15% share of GDP. In addition, India’s annual average expenditure per online shopper was about $1,130. Going by the number of internet users, it is time India considered internet governance for children, specifically for children who spend a lot of time in a school environment. In this context, the paper focuses on the following issue: following the EU framework, what kind of framework and guidelines can be formulated for an internet governance policy and implementation structure at the national, school, and family levels in India?

15:00
A Behavioral Perspective on Cybercrime: Complexities and Way Forward

ABSTRACT. With the advancement of technology and globalization, the nature of crime and criminals has also evolved. The recent surge in crimes such as identity theft, financial fraud, money laundering and piracy is the work of a different kind of criminal – more intelligent, less belligerent – than those studied up till now. Statistical data from the National Crime Records Bureau, Ministry of Home Affairs, India, suggest that the “incidence of cybercrimes (Information Technology Act + Indian Penal Code sections) has increased by 63.7% in 2013 as compared to 2012 (from 3,477 cases in 2012 to 5,693 cases in 2013)”. The increase in the number of cybercrimes is a cause for concern and shows that cybersecurity will remain topical and a real threat in the years to come. Moreover, anonymity has contributed to highly psychoactive experiences in the online world, and the chances of detection, apprehension and prosecution over the internet are considerably smaller. Researchers in the field have formulated and explained various psychosocial, behavioral and cultural factors that lead to such behavior. Studies have shown that crime online feels more acceptable to people than physical theft (Zhang et al., 2009). Psychologists at the University of Notre Dame have further argued that economic factors may provide such offenders with a means to justify their actions, but are not the real motivator: the computer appears to act like an ethical filter creating psychological distance (Crowell et al.). Husted (2000) demonstrated that cultural variables such as power distance, individualism, masculinity and uncertainty avoidance (the extent to which members of a culture feel threatened by uncertain and unknown situations) determine the rate of crime online in a region. Overall, the concept of cybersecurity is highly technical in nature and beyond the understanding of an ordinary individual, and most of the research in this area concentrates on the modalities of the crime and the security aspects in that context. There is a pressing need for a holistic crime prevention approach to cybercrime that balances policies spanning technology, law and behavioral science. In this context, the paper deliberates on the following issues: 1. What is the behavioral side of a cybercriminal in relation to crimes committed in cyberspace? 2. What impact do psychosocial, behavioral and cultural factors have on cybercrime?
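As a quick worked check, the quoted percentage matches the reported case counts:

\[
\frac{5{,}693 - 3{,}477}{3{,}477} \;=\; \frac{2{,}216}{3{,}477} \;\approx\; 0.637 \;=\; 63.7\%
\]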

15:30
Patentability of AI and Blockchain: The Trend of the Computer-Implemented Invention

ABSTRACT. For the past few decades, the growth of computer-implemented inventions (hereinafter “CII”) has been speedy and accelerating; nevertheless, the development of the CII patent has been comparably slow-paced. For instance, after Alice established the Two-Step Test, almost every CII is considered abstract and thus cannot acquire patent-eligibility or patentability in the United States. The boom of emerging technologies, such as Artificial Intelligence (hereinafter “AI”) and blockchain technology, has spurred the discussion and deliberation of CII patents worldwide. Currently, all authorities concerned view blockchain technology and AI inventions as CII. On May 30, 2018, the European Patent Office (hereinafter “EPO”) held a conference discussing the patentability of AI. At the conference, the chairman Grant Philpott referred to AI as a “supersoftware” and concluded that the fundamental principles of the patent system combine well with innovations in AI; thus, patent protection applies to many AI inventions. On December 4, 2018, the EPO held a conference discussing the patentability of blockchain technology, pointed out that “Blockchain Inventions = CII”, and further provided corresponding review guidelines for each of the underlying blockchain technologies. At present, the Taiwan Intellectual Property Office has released analyses of AI and blockchain patents and a report on frequent questions in AI patent applications, and has undertaken multiple discussions; however, Taiwan has not undergone any corresponding legal change for now. The European Union, the United States, Japan, and China have released updated policy documents concerning CII. The EPO published the Guidelines for Examination in the European Patent Office, November 2018 Edition; the updated 2018 edition added a whole new chapter on AI and machine learning, shedding light on how the existing legal framework applies to blockchain technology and AI. As to the United States, the United States Patent and Trademark Office (hereinafter “USPTO”) released the 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019, which split Step A of the Two-Step Test into two prongs; CII are no longer inherently abstract, and therefore patents are more likely to be granted. The Japan Patent Office also published the Revision of the Examination Guidelines and Examination Handbook for Computer Software-Related Inventions in March 2018 and co-authored a comparative study on CII/software-related inventions with the EPO. Moreover, China published the Revision of the Patent Examination Guidelines on December 31, 2019; the China Patent Office added a new chapter on mathematics and business methods, indicating how AI and blockchain technology patent applications can meet the requirements. The patentability of AI-invented inventions remains controversial. The EPO recently refused two applications naming an AI machine as the inventor, because the inventor has to be a human being. The position of other authorities concerned remains tentative, but they are open to professional input of all kinds: for instance, the USPTO released a Request for Comments on Patenting AI Inventions, and WIPO published Impact of AI on IP Policy: Call for Comments. This essay provides an overview of AI and blockchain, outlines the key legal trends of CII patents and analyzes the discussions from different perspectives.

14:30-16:00 Session 5E: IP & New Technologies: friends or foes?
14:30
Children’s data and TikTok: what’s happening in the EU and China?

ABSTRACT. TikTok is a popular short video platform, used to create and share short lip-sync, comedy, and talent videos, which quickly became popular among teenagers and children from all around the world (Zhong, 2018; Patrick, 2018). As a social media platform developed in China and aimed at young people, TikTok seems to be raising many questions, ranging from concerns about hacked accounts to the extent to which both TikTok and its sister application in China, Douyin, provide proper protection of children’s data on their networks (The Guardian, 2019; Lyons, 2020). Recently, TikTok agreed to pay $5.7 million to settle allegations by the US Federal Trade Commission (FTC) that it violated the Children’s Online Privacy Protection Act (Matsakis, 2019), and is under investigation in the UK over similar concerns (Hern, 2019). These events demonstrate the urgent need to evaluate the regulatory systems in the EU and China regarding children’s data protection and the compliance practices of TikTok (Douyin). This paper aims to detect the regulatory and implementation challenges and gaps of children’s data protection practices in the EU and China, with a case study of TikTok (Douyin). In order to achieve this research objective, the following research methods will be employed: (1) a critical-descriptive and comparative analysis of the specific requirements in Chinese and EU legislative and regulatory documents relating to children’s data protection, (2) a critical textual analysis of the separate privacy policies of TikTok and Douyin applied to EU and Chinese residents, and (3) an assessment of the compliance practices of TikTok and Douyin with the requirements developed in the first step. China released the “Measures on Online Protection of Children’s Personal Data” in August 2019, providing further clarity on how to protect children’s personal data online, together with China’s Cyber Security Law and the Law on Protection of Minors (Zhang and Yin, 2019). In the EU, the analysis will focus on the General Data Protection Regulation (GDPR) and relevant documents issued or endorsed by the European Data Protection Board. Recital 38 GDPR explicitly mentions that children merit specific protection with regard to their personal data. Our preliminary research shows that, as leading and popular social network platforms aimed at children in the EU and China, both TikTok and Douyin should invest more effort in ensuring the safety and fair processing of children’s data. With regard to the GDPR, for instance, TikTok should make its privacy policies more understandable to children, mention the use of automated decision-making processes, give details regarding transfers of personal data to third countries, clarify the requirements and methods for parental consent, and guarantee stricter enforcement of the age policy. As regards China, for instance, Douyin should be more serious about the age requirement, clarify the methods used to confirm the identity of parents, and obtain explicit consent from guardians.

15:00
A “must-carry” obligation for online platforms? Exploring Article 17 of the Copyright in the Digital Single Market Directive

ABSTRACT. Article 17 of the recent Copyright in the Digital Single Market (DSM) Directive (2019/790) introduces a new liability regime for online content-sharing service providers (OCSSPs). To escape liability for user-uploaded copyright-protected content, OCSSPs have two paths. First, they must obtain licenses from rightholders or make “best efforts” to do so. If that proves impossible, they must take preventive measures to ensure the unavailability of that content. For critics, these measures lead to the adoption of “upload filters”, in violation of the Directive’s own ban on general monitoring. However, Article 17 also mandates that preventive measures shall not prevent the availability of uploaded content that does not infringe copyright or is covered by the newly introduced mandatory exceptions for: (a) quotation, criticism and review; and (b) use for the purpose of caricature, parody or pastiche. These exceptions, explicitly grounded in freedom of expression, cannot be overridden by contract (e.g. OCSSPs’ Terms of Use) or technological protection measures. In the hierarchy of Article 17, the mandatory exceptions trump the preventive measures. This leads to the research question at the heart of this paper: does this provision impose a new form of “must-carry” or “stay-up” obligation on OCSSPs? A “must-carry” obligation is a concept traditionally used in electronic communications. It refers to the obligation of transmission services to make certain channels that serve general interest objectives available to the public. The central aim of such an obligation was to guarantee access to public service broadcasting and ensure a diverse choice of programmes, in order to effectively protect the public’s right to freedom of expression and access to information. However, from the perspective of the private entities subject to the obligation, this amounts to a limitation on their own right to freedom of expression, as they are forced to provide content that they normally would not be interested in carrying. Article 17 of the Copyright in the DSM Directive appears to put OCSSPs in an analogous position. Member States implementing the Directive can, and arguably must, impose obligations on private platforms to carry content that they are not interested in, provided it is privileged by a mandatory exception. The paper draws an analogy with communications law to explore the question from the perspective of the right to freedom of expression. In line with the theories of positive obligations and horizontal application, Article 10 ECHR does not bestow any right to a forum on private property. However, the ECtHR concluded in Khurshid Mustafa that, to comply with the obligation to protect the right to freedom of expression, States might be required to set certain limits to rules on private property. Case law of the CJEU provides several examples of situations where States impose rules on private broadcasters in the general public interest, such as to ensure access to pluralistic information. This paper examines whether the new obligations under Article 17 could lead to a similar interpretation and how that fits with the growing body of CJEU case law interpreting fundamental rights-based copyright exceptions.

15:30
Don’t Tell Them now (or at all) – End User Notification Duties under GDPR and NIS Directive

ABSTRACT. 2016 saw the adoption of two important legal instruments in the field of cybersecurity, namely the omnipresent General Data Protection Regulation and the Network and Information Systems Directive. The objective of both instruments is to ensure appropriate security and confidentiality of data. Although there is great alignment between the requirements of the GDPR and the NIS Directive in terms of risk-based security measures, the instruments have distinct interests: the GDPR covers the privacy of personal data, while the NIS Directive encompasses the confidentiality of the services covered and the underlying data. The latter in many cases is in fact personal data, meaning that the NIS Directive can be regarded as a law complementary to the GDPR. While the NIS Directive is broader in terms of the subject-matter covered, i.e. digital data including any data relating to network and information systems and their provision and continuity, it is at the same time more restrictive as regards addressees, which only include operators of essential services (OES) and digital service providers (DSP). Both instruments introduce similar notification obligations based on the assumption that security threats can only be eliminated if security risks and data breaches are communicated to public authorities. The NIS Directive requires OES and DSP to notify, without undue delay, the competent national authorities of security incidents having a significant impact on the continuity of the services they provide. Where an incident simultaneously constitutes, or becomes, a personal data breach, the provider also needs to inform the data protection regulator separately under the GDPR without undue delay and within 72 hours. In practice, two separate regulators may have to be informed about the same incident. In addition, under Art. 34(1) GDPR, the provider shall communicate a personal data breach without undue delay to the data subject if the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons. However, Art. 34(3) GDPR foresees exceptions to the notification obligations and lists conditions under which the communication to the data subject shall not be required. Member States may provide for further derogations to Art. 34(1) GDPR, some of which will be outlined. This paper will first of all map the notification obligations and the roles of the regulators. From an interdisciplinary perspective, we will additionally outline scenarios in which none of the exceptions to immediate notification applies but the provider nevertheless has a prevailing and legitimate interest in suspending end user notification. While a need to mitigate an immediate risk of damage for an individual would call for prompt communication with data subjects, there are scenarios which may justify a delay in communication, for instance where a service provider needs to analyse the current attack to prevent further attacks and assess the full impact. In the latter case, any delay in communication should fulfill the requirement of “without undue delay”.
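A minimal sketch of the overlapping duties described above, assuming hypothetical names throughout (Incident, notification_duties) and deliberately ignoring Member State derogations and legal nuance:

```python
# Illustrative triage of the overlapping GDPR / NIS notification duties.
# All identifiers are hypothetical and do not come from the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class Incident:
    significant_service_impact: bool  # NIS: significant impact on service continuity?
    personal_data_breach: bool        # GDPR Art. 33: personal data affected?
    high_risk_to_individuals: bool    # GDPR Art. 34(1): high risk to data subjects?
    art_34_3_exception: bool          # GDPR Art. 34(3): an exception applies?

def notification_duties(incident: Incident, is_oes_or_dsp: bool) -> List[str]:
    """Map one incident to the separate regulators/persons to be informed."""
    duties = []
    if is_oes_or_dsp and incident.significant_service_impact:
        duties.append("notify competent NIS authority without undue delay")
    if incident.personal_data_breach:
        duties.append("notify data protection regulator without undue delay, "
                      "within 72 hours (Art. 33 GDPR)")
        if incident.high_risk_to_individuals and not incident.art_34_3_exception:
            duties.append("communicate breach to data subjects (Art. 34(1) GDPR)")
    return duties
```

On this reading, the scenarios the paper explores concern delaying only the last of these duties (end-user communication) while still satisfying “without undue delay”; the regulator notifications are unaffected.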

14:30-16:00 Session 5F: Fundamental principles of Privacy and Data Protection
14:30
AI and new military technologies – is there a regulation method?

ABSTRACT. In my studies I deal with disruptive technologies and their impact on the law of armed conflict (LOAC, also called international humanitarian law), with special interest in lethal autonomous weapon systems (LAWS). For the purpose of the following considerations, LAWS shall be operationally defined as military combat systems that can select and engage human targets without meaningful human control, that is, without conscious human-led decision-making. Therefore, LAWS shall be understood as AI-based means or methods of warfare. Despite the fact that nowadays the development of technology has gained a new, even more accelerated pace, the creation of appropriate legal (and international) regulations is still a tedious process requiring diplomatic negotiations, compromises, specialized knowledge and, above all, time. Hence the frequent criticism that legal norms are not adjusted to contemporary reality. LAWS are usually depicted as a controversial technology which on the one hand provides numerous benefits (primarily cost-effectiveness and minimizing the risk of losing soldiers) but on the other is inherently linked with moral and legal dilemmas, collectively referred to as the dehumanization of modern military operations and the deprivation of people of control over the use of lethal force. In my paper, I am not going to discuss this matter. Instead, I am going to focus on procedural and formal issues linked with the following questions: How to regulate this kind of technology? Is it a truly novel and original issue for international law? Are there any concepts or frameworks into which LAWS can fit? And finally, is there any consistent and sustainable practice of States when it comes to the international regulation of military-oriented technologies? I want to present my conclusions deriving from research based on historical and comparative methods. I have analyzed the reasons for regulating the development and use of specific military-oriented technologies (such as chemical, biological, nuclear and conventional weapons) and the methods of their regulation (total ban, selective control, moratorium or substitution with political arrangement). Consequently, I have identified converging and differentiating points together with critical factors. This enabled me to grasp the shape and logic of international regulations of means and methods of warfare, that is, the importance of context, the current political situation and the quality of the international community. During the BILETA 2020 conference I want to outline those conclusions in the context of AI-based military technologies and their prospective international regulation. I want to combine my theoretical findings on a possible method of international regulation with the situation in place, that is, the ongoing negotiations taking place in Geneva at the CCW forum since 2013. I believe that this may bring a new perspective to the BILETA conference and extend the discussion to AI-based military-oriented technologies.

15:00
Taste Sensations: Has the IP system finally bitten off more than it can chew?

ABSTRACT. In recent years, several significant judgments at both EU and domestic level have been handed down which appear to challenge the traditional protective boundaries of intellectual property protection for non-traditional and olfactory marks. These include decisions in Hotel Chocolat v Waitrose; Toblerone v Twin Peaks; Nestle v Cadbury; and Nestle v EUIPO. These are all contentious and high-profile decisions, and the decisions in Lancome, Smilde, Vodafone and #LDNR add to the contentious and confused system of protection. The problematic decisions in the long-running Nestle v Cadbury litigation, together with the controversial decision relating to the legal protections awarded to Rubik’s Cubes in Simba Toys, have added to more recent concerns judicially voiced through cases relating to the protectability of Toblerone chocolate bars and London taxi cabs.

This paper will critically consider recent trends in case law and jurisprudence, offering comments on what these changes may mean for the future protective sphere of trade marks. These changes will be assessed in light of the amendments to the trade mark requirements introduced by the CTMR, and the paper will critique their significance. It will consider associated issues, including the role of leading case law given the change to the well-established requirements, before concluding whether the protective regime has become unconventional, or simply more straightforward, and considering whether or not the IP system has finally bitten off more than it can chew!

15:30
Home and Home Protection in the Law: A Reflection

ABSTRACT. As the center of human life, the home has been an important concept in most jurisdictions, playing a crucial role (as a legal proxy) in protecting many home-affiliated values and rights. However, recent digital and network technologies have fundamentally changed our home, home environment and home life in many ways. An eminent change, for instance, is the rise of the Home Virtual Space (HVS), which creates a virtual dimension of home and mixed reality in need of further legal protection. The current legal framework encounters many fundamental challenges in pursuing home protection as strong as that famously expressed in the common law home-as-castle doctrine.

Practically, for instance, is it problematic to extend the home protection under Article 8 of the ECHR to non-home spaces such as business premises and other non-home areas? Is it lawful for law enforcement agents to hack computers at home for criminal investigation? What is the new legal status of device and service providers in the smart home environment when the HVS becomes the center and backbone of smart living? Theoretically, where are the boundaries of the home in cyberspace, and can we have a “digital home”, as many have already claimed? How can modern law cope with the new reality that the digitalized home turns more into a data center with new communal functionalities, and occupants’ data become livestock? And finally, what is a home that deserves special protection in law’s long struggle with the growing conflict between physicality (as the cornerstone of modern legal reasoning) and virtuality?

This paper will reassess the concept of home in the law with a philosophical overview. It tries to answer a fundamental question in modern law: what is a home for a human being that deserves law’s special protection? It tries to explore how the law, as a technology itself, has responded to different technological developments in the home in different historical periods. In particular, it seeks to illustrate how the home concept, scope, (protected) values, and protecting instruments may have evolved in dynamic law-tech interactions in the past.

The paper is structured as follows. After a short introduction, Section II will focus on pre-industrialization times, analyzing how the home and law co-evolved from the primitive stage (of moving hunters and gatherers) to the sedentary stage (of an agricultural community). Section III will discuss how the industrial revolution changed home and law with new technologies, including electricity, networked water supply, sewage, telecommunication, etc., as well as the home’s central role in human life (from communal to individual). Section IV will briefly reveal modern law’s increasing struggle with protecting the new home characterized by digitalization, connectivity, automation, ubiquitous computing, and virtuality. It also tries to compare how the new home developments may differ from the previous ones in terms of legal significance. Section V will try to explain what the home really means for a human being (in the abstract), and how modern law may draw from that.

16:00-16:15Coffee Break
16:15-17:45 Session 6A: Children's rights and safety
16:15
Let them speak – regulating online abuse in secondary schools

ABSTRACT. Schools have a central role in the regulatory framework for online abuse amongst young people. Online abuse is facilitated by technology; however, it is foremost a social problem. Regulators in the United Kingdom have been responsive to the phenomenon of online abuse amongst young people, yet some policy measures are counterproductive, disincentivising adolescents from speaking honestly to adults such as teachers and police when they find themselves in trouble online. This is particularly demonstrated by policy associated with sex and sexting, which formalises and criminalises what are arguably the unremarkable yet intimate daily communications of teenagers.

School staff are likely to under-react to an episode of non-sexual online abuse. Conversely, in respect of sexting, staff may spend a disproportionate amount of time and resources addressing a sexting incident due to its policy framework and association with child exploitation. Sexting policy and Government advisory documents link sexting with a risk of harm, and the heavy-handed approach advocated may make students reluctant to seek assistance if things go wrong. Incidents of consensual sexting arguably form part of normal teenage sexual exploration, not requiring safeguarding or disciplinary resources. Students are aware that disclosing sexting to teachers risks having their parents informed, and this affects how candid they may be with teaching staff. If schools were permitted discretion to deal with sexting matters, young people might seek assistance earlier and more often, which may lead to better outcomes for victims.

The compulsory recording of sexting offences by school-based police in accordance with the National Crime Recording Standard, whether or not the behaviour was consensual, is problematic. This policy does not help maintain a relationship of trust between young people and adults. Schools are in a difficult situation when seeking the advice of a Safer Schools Police Officer (SSPO), where they are concerned that a student’s behaviour may be recorded as a crime and potentially included in a future DBS check. It may be useful if schools could access support and advice from their SSPO without the additional concern that the student will be recorded as a crime suspect.

In an attempt to respond to concerns about the harmful effects of online abuse, powers were given to schools by the Education Act (2011) to search for and delete data from student devices without consent. However, these powers may be of little practical use: there is little, if any, evidence that they are used by teachers. Although schools have been provided with police-like powers to search devices when dealing with online abuse, the police themselves lack such powers. Staff strategies for dealing with student behaviour rely upon maintaining a culture of mutual respect, and the invasive powers under the Education Act (2011) are not commensurate with such a relationship. Where serious matters arise, or if students do not cooperate, teachers prefer to defer to the expertise and skills of the SSPO. It may be useful for SSPOs to be given powers to deal with devices, rather than relying on under-trained staff to interfere with student devices in situations where students do not consent.

An effective response to online abuse is necessary due to its links with long-term psychological harm, interruption to educational attainment, and loss of enjoyment of life for young people. Policy change which supports communication and the relationships between students and school staff, and students and police, may contribute to this outcome.

16:45
Deep learning facial recognition and AI-specific regulation: making filtering rules compatible with the case-law of the Strasbourg and Luxembourg courts

ABSTRACT. This paper critically evaluates to what extent deep learning facial recognition and filtering technology could be implemented in a way which is compatible with the right of individuals to a fair trial, privacy and freedom of expression under Articles 6, 8 and 10 of the European Convention on Human Rights (ECHR), as well as the General Data Protection Regulation 2016/679 (GDPR). The analysis draws upon deep learning facial recognition and filtering materials, the case-law of the European Court of Human Rights and the Court of Justice of the European Union, and academic literature. It critically examines the compliance of deep learning facial recognition and filtering systems with the Strasbourg Court’s three-part, cumulative test to determine whether these systems can be adopted: firstly, that the measure is ‘in accordance with the law’; secondly, that it pursues one or more legitimate aims included within Articles 8(2) and 10(2) of the ECHR; and thirdly, that it is ‘necessary’ and ‘proportionate’. The paper also critically assesses the compatibility of deep learning facial recognition technology with the ECtHR principle of the presumption of innocence under Article 6 of the Convention. The paper seeks to fill a major gap in the literature. It proposes that for this kind of automated individual decision-making and profiling to be a human rights-compliant response, the use of deep learning facial recognition must be specifically targeted. Put differently, to be lawful, in addition to providing appropriate procedural safeguards, there should also be a stringent, legally compulsory requirement to register in a database a set of principles or filtering rules which fully comply with the case-law of the Strasbourg and Luxembourg courts. It concludes that unless AI-specific regulation takes place in this area and the procedural safeguards suggested in the paper are heeded, the development, procurement and deployment of deep learning facial recognition and filtering technology will violate Articles 6, 8 and 10 of the Convention and the GDPR.

17:15
Public undertakings meet the Data Economy: Implications of the Open Data & PSI Directive for the legal governance of data in the utility sector

ABSTRACT. The recently adopted Directive (EU) 2019/1024 of 20 June 2019 (“Open Data & PSI Directive”) aims to increase the availability of public and publicly funded data, which according to the European Commission are major cornerstones of a common European data space (COM/2018/232 final and COM/2018/234 final). One of the most significant changes in the new Open Data & PSI Directive is making (some of) the rules regarding access and re-use of public sector information applicable to data held by “public undertakings” performing services in the general interest, subject to certain conditions. These entities were excluded from the scope of the former “PSI Directive” (Directive 2003/98/EC), to the extent that it covered only information held by public sector bodies (national, regional or local authorities and bodies governed by public law). Utility sectors are mentioned in the Open Data & PSI Directive as key areas where the availability of data for re-use should be improved. As such, public undertakings active in sectors such as drinking water and electricity will be prominent addressees of the new Directive. With the help of sensors and other “smart” devices, these public undertakings obtain vast amounts of data that are used to develop better ways to operate and monitor their utility networks, and to predict failures or the need for maintenance. The Open Data & PSI Directive thus encourages public undertakings in the utility sectors to make the data produced for their own processes available for commercial and non-commercial re-use by third parties, with the aim of stimulating data-driven innovation. Besides incentivizing access to and re-use of public sector data, the Open Data & PSI Directive also affects the governance of data held by public undertakings active in utility sectors. For example, the Directive contains rules that determine what kind of data should not be made available under the PSI regime; rules concerning the conditions for re-use (e.g. formats, charges, licenses, etc.); and specific rules for high-value datasets. Yet, the Directive is a horizontal framework for data access and re-use that was not exclusively devised for the utility sector, and as such, it has to coexist with sector-specific rules which may have different policy objectives. This paper explores the implications of the Open Data & PSI Directive for the legal governance of data held by public undertakings active in utility sectors. To better grasp such implications, the Dutch electricity and drinking water sectors will be used as a case study. In particular, this paper will consider the repercussions of the new PSI rules for the current governance of data held by Dutch Distribution System Operators and drinking water companies, in view of the forthcoming transposition of the Open Data & PSI Directive into national law. In doing so, this study will identify possible regulatory challenges and will discuss whether the new PSI rules, together with the existing sector-specific rules, provide a consistent legal framework for the sharing of data by infrastructure managers, or whether such a framework requires further development.

16:15-17:45 Session 6B: Safeguards and Justice
16:15
The “informatory purpose” in copyright: towards a new autonomous concept of EU law?

ABSTRACT. Within the EU copyright discourse, the concept of “informatory purpose” has been somewhat left in the penumbra. Despite the fact that the notion appears verbatim in the InfoSoc Directive - one of the regulatory lighthouses of the discipline - the scholarship has not developed a specific focus on it. However, the idea of carving out a “free zone” for the dissemination of news and, in general, information across society lies at the very core of recent heated debates and highly controversial projects of reform of EU copyright law. Articles 5(3)(c) and 5(3)(f) of the InfoSoc Directive allow Member States to provide exceptions to the rights of reproduction and communication to the public of original content for the purposes of reporting current events and ensuring wide access to political speeches, public lectures and similar subject matter of high relevance for public life. In conjunction with the European restrictive approach towards exceptions and limitations and the three-step test, both provisions set the boundaries of such (optional) exceptions by establishing that (i) the source of the information reported shall be given credit, and (ii) the use of the protected material is free “to the extent justified by the informatory purpose”. On the latter aspect, the Court of Justice of the European Union (CJEU) has recently had interesting occasions to express itself, being called to decide upon the qualification of a music sample as quotation and, most importantly, the unauthorized publication of an edited book and the release of (allegedly copyrightable) military information in a newspaper. Even though engaging with key aspects of the clash between copyright protection and freedom of the press, the Court has proven wary of taking a straightforward stand on the gauge of the informatory purpose and its related implications. In light of the fact that the CJEU has not missed many occasions to foster the process of copyright harmonization across the Union, it seems proper - if not utterly necessary - to explore the reasons for this cautious approach and anticipate possible future developments, reflecting on whether the notion of informatory purpose will be next in line to be labelled an autonomous concept of EU law and, if so, what its most likely configuration will be.

16:45
The No Choice “Consent” – User consent in autonomous transport systems

ABSTRACT. Transport systems and Smart Cities are looking to the advantages of autonomous and connected mobility to collect data, gain insights, create business opportunities and optimise the safety of vehicle and road users in the future. However, the operation of these systems will often require the consent of the vehicle or road user. Obtaining consent will involve the user making a freely given, informed acceptance of risk. In the case of location data, risk relates to the potential future consequences of sharing such information, which is likely to include the production of tertiary data connected to an individual. The required parameters for valid consent in such circumstances are defined by the GDPR. In the case of transferring responsibility from vehicle to driver in partially automated vehicles (SAE Levels 3/4), consent will involve the driver accepting different levels of risk during the shared operation of the vehicle, and consenting to the associated risks and responsibilities. In such circumstances, consent must be voluntary, and made in full knowledge of the nature and extent of the dangers, and of the duties involved.

Obtaining proper consent in the context of autonomous vehicles and Smart Cities is not a simple exercise. The individual should participate in the process of considering the consequences of their decisions. What if there is no valid alternative?

Despite the requirement in law for individuals to consciously engage in the decision to give consent, a chasm is being created between providing consent via a simplified platform and the process of making an informed and considered decision regarding personal data or responsibility. This can be demonstrated by contrasting an individual’s acceptance of lengthy privacy terms on social media applications with their stated views on privacy, or by observing how a user of a ‘smart car’ selects ‘accept’ on an on-board computer before driving away with little knowledge of what this acceptance means.

The complexities associated with the potential consequences of how users will provide informed consent in the context of Smart Cities and autonomous vehicle technology demand a reliable method of obtaining consent in a meaningful way. It may be that this problem can be addressed in the form of a trust hub or dynamic consent model, as seen in the medical research sector. The difficulty of providing adequate information so that an individual can give meaningful consent has been examined in the field of medical research, where participants must understand an array of invasive actions only permitted on consent. Historically it has been difficult to alter consent, to withdraw consent, or for an individual to check what consents they have issued in the past. Taking lessons from this sector, it may be that a more helpful interactive computer-based platform could be developed to give road and vehicle users active control over their personal and sensitive data, and a more satisfactory understanding of their responsibilities when driving an autonomous vehicle.
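Purely as an illustration of the design space, a minimal sketch of a dynamic consent record of the kind found in medical research, adapted to vehicle and road users; all names (DynamicConsent, ConsentEvent, the example purpose string) are hypothetical and not drawn from the paper:

```python
# Illustrative sketch of a dynamic consent record: grant, withdraw, audit.
# All identifiers are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentEvent:
    purpose: str          # e.g. "share_location_with_city_traffic_hub"
    granted: bool         # True = consent given, False = consent withdrawn
    timestamp: datetime

@dataclass
class DynamicConsent:
    """Lets a user grant, withdraw and audit consents over time."""
    history: list = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.history.append(ConsentEvent(purpose, True, datetime.now()))

    def withdraw(self, purpose: str) -> None:
        self.history.append(ConsentEvent(purpose, False, datetime.now()))

    def is_active(self, purpose: str) -> bool:
        # The latest event for a purpose determines its current status.
        for event in reversed(self.history):
            if event.purpose == purpose:
                return event.granted
        return False  # never granted

    def audit(self) -> list:
        """Return every consent decision the user has made, in order."""
        return list(self.history)
```

The audit method addresses precisely the historical difficulty noted above: letting an individual check what consents they have issued in the past, and alter or withdraw them.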

17:15
Cybercrime and Cybersecurity in Africa: Saving African Business in Digital Age

ABSTRACT. In Africa, the last two decades have witnessed an unprecedented technological revolution. Digital technology has altered the traditional ways of doing business and of participating in social and political discourse. Internet technology is at the centre of the various information communication technologies which have transformed the African commercial landscape. For example, there has been an increase in internet usage in countries like Ghana, Kenya, Nigeria, Senegal and South Africa in the last decade, whereas usage has plateaued in advanced economies. While internet technology has supported the growth of business and investment in Africa, it has also provided a breeding ground for criminal activities. This is further fuelled by poverty and a high rate of unemployment. As in the early stages of internet development in many developed economies, activities in the African cyberspace have been carried out without a proper regulatory framework stipulating acceptable conduct or imposing punishment on offenders. National courts resorted to traditional criminal laws, which were later deemed inadequate. As a result, many countries on the continent are now initiating cyber-focused legislation to counter the menace of cybercrime. These legislative activities are imperative considering that the continent may not just be a source of cybercrime but could also be exposed to cyberattacks from other parts of the world. As more African corporations integrate electronic commerce and other ICT technologies into their businesses, this paper will primarily concern itself with the protection of businesses under cybercrime-focused legal frameworks in Africa. It will examine measures designed to deter financial crime and the duties imposed on financial institutions for the execution of electronic transactions.

16:15-17:45 Session 6C: Financial and Tax aspects of Privacy and Data Protection
16:15
NotPetya and the Fog of “Cyberwar”: Can the “Cyber Battlefield” be Insured?

ABSTRACT. The defining feature of cyberwarfare is the fact that both the weapon and the target are within the cyber network itself. In 2017 the notorious malware known as NotPetya caused global mayhem, affecting government agencies, critical infrastructure and utility providers such as power suppliers, as well as healthcare providers and other companies. NotPetya sought out vulnerabilities and used the EternalBlue exploit, generating one of the most financially costly cyber-attacks to date. Among NotPetya’s victims was Mondelez, the U.S. food company. The malware rendered 1,700 servers and 24,000 laptops permanently dysfunctional.[1] Mondelez submitted a £76 million claim because its insurance provides coverage for “physical loss or damage to electronic data, programs or software, including physical loss or damage caused by the malicious introduction of a machine code or instruction.” Zurich rejected the claim and referred to a single policy exclusion which excludes “hostile or warlike action in time of peace or war” by a “government or sovereign power; the military, naval, or air force; or agent or authority”.[2]

In this article, we analyse whether the NotPetya attack would fulfil the exclusionary clause requirements of being “hostile or warlike”. Firstly, we will question the scope of the terms, and secondly, we will address whether these definitions are applicable to cyber operations and fit for purpose for current threats. Although war exclusions are standard in insurance policies, protecting insurers from claims surrounding damage caused by war, their use is rare. Previous claims have only been based on conventional kinetic armed conflicts, as reflected in the landmark American case of Pan American World Airways, Inc. v. Aetna Casualty & Surety Co.[3] The case defined ‘war’ and ‘warlike operations’ in relation to the exclusionary clauses.[4] War was defined as “a course of hostility engaged in by entities that have at least significant attributes of sovereignty”, and under “international law war is waged by states or state-like entities”.[5] However, the use of the war exclusionary clause is unprecedented in relation to cyber-attacks.

We then look to the Law of Armed Conflict (LOAC) to see whether any clarification or guidance can be drawn in light of NotPetya. Cyber-attacks are inherently stealthy and have entered the domain of corporate affairs. While there is a consensus that international law, and more specifically in this case the LOAC, applies in cyberspace, it is not clear how the existing law applies directly, which creates uncertainty surrounding the definitions. Whilst the NotPetya attack has been linked to the ongoing alleged international armed conflict between Russia and Ukraine, concerns are raised surrounding the attribution of the attack and whether the required threshold would be met. This, subsequently, leads to the broader question of whether companies can insure against State-sponsored cyber-attacks. Furthermore, the juxtaposed nature of hostilities and war carries connotations of severe devastation and loss of life, compared to cyber-attacks that result in the loss of data and economic deprivation; this leads us to question whether the NotPetya attack would meet the high effects-based thresholds under the LOAC, and even to evaluate whether the LOAC is the best way forward for addressing cyber damage.

Consequently, Zurich's use of this sort of exclusion in a cybersecurity policy could be a game-changer that will significantly influence the future of insurance policies. This leaves the obvious question: was NotPetya an act of war or hostilities, or just another incident of ransomware?

[1] https://www.nytimes.com/2019/04/15/technology/cyberinsurance-notpetya-attack.html [2] US food giant Mondelez is suing insurance company Zurich America for denying a £76 million claim filed after the ‘NotPetya,’ attack. https://www.databreachninja.com/wp-content/uploads/sites/63/2019/01/MONDELEZ-INTERNATIONAL-INC-Plaintiff-v-ZURICH-AMERICAN-INSURANCE-COMPANY-Defenda.pdf [3] Pan American World Airways, Inc. v. Aetna Casualty & Surety Co 505 F.2d 989 (2d Cir. 1974). [4] Pan American World Airways, Inc. v. Aetna Casualty & Surety Co 505 F.2d 989 (2d Cir. 1974) at 1012. [5] Pan American World Airways, Inc. v. Aetna Casualty & Surety Co 505 F.2d 989 (2d Cir. 1974) at 1016.

16:45
Go Privacy Go: Lessons Learned for Data Protection by Design and Default from Designing a Privacy-Friendly GoPiGo Toy Robot

ABSTRACT. As smart products move between jurisdictions, their program code becomes subject to various and sometimes incompatible legal environments. Manufacturers are therefore required to create customized product variants for specific markets, which induces variance management overhead and undermines economies of scale. In our article we investigate how the legal environment of a smart product interacts with the programming of that product. Specifically, we are interested in how the General Data Protection Regulation (GDPR) principles can be mapped to legally relevant aspects of toy robots. These are of particular interest as they contain different kinds of privacy-sensitive sensors such as microphones and cameras, continuously process (personal) data, can easily be moved from one jurisdiction to another, and affect individuals, including vulnerable ones such as children, in their homes. The core goal of this article is to develop a methodology to map the GDPR’s principles to the program code of a GoPiGo3 toy robot. We describe this methodology and demonstrate a concrete mapping to GoPiGo3 (as a prototype). In this prototype, the robot’s functionality has been extended to include external face recognition services, as well as external data processing for direct advertising purposes, in order to situate the work within the research domain of privacy, and especially privacy by design. In this article, we describe how the mapping can be done in principle and plan to make first steps towards automating the mapping process. The main research questions we analyze are: How can we describe data protection law’s core principles in a way that system and software engineers can implement such norms into device firmware? What difficulties arise, and what implementation decisions have to be taken, in order to enable encoding data protection principles into systems? What are the benefits and limits of our methodology for mapping the data protection principles into a device’s program code, specifically regarding the automation potential of this process? To answer our research questions, we start by sketching the data flow emanating from GoPiGo3 and the fictional, yet realistic, additional services within our application scenario. We then investigate upon what “lawful grounds” the data processing of the device takes place (Art. 5(1)(a) GDPR) to determine what consent must be given, and by whom (depending on EU Member States’ legislation on children’s consent), and which other legal grounds can justify the processing (Art. 6 GDPR). The GoPiGo3 provides information and obtains consent from the user in accordance with Art. 13 GDPR, given the robot and user context (e.g., location and applicable jurisdiction, user age, etc.). We dive into (legally) contested terminologies, such as the term ‘fairness’, and determine their mapping into GoPiGo3’s program code. We then determine which data items are collected by the software and for which purposes that data is actually processed, in order to determine which data items are required and which ones are not. On this basis we discuss how the principles of purpose limitation, data minimization, and storage restriction should be implemented in device code.
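By way of illustration only, a minimal sketch of how two of the named principles (purpose limitation and data minimization) might be encoded in device firmware; the identifiers below (PURPOSES, ConsentState, may_process) are hypothetical and not taken from the paper’s prototype:

```python
# Illustrative sketch: gating sensor data behind declared purposes and consent.
# All identifiers are hypothetical, not the paper's actual GoPiGo3 mapping.

from dataclasses import dataclass, field
from datetime import datetime

# Data minimization: each declared purpose lists the only data items it may use.
PURPOSES = {
    "navigation": {"distance_sensor"},
    "face_recognition": {"camera_frame"},
    "direct_advertising": {"camera_frame", "usage_log"},
}

@dataclass
class ConsentState:
    granted: dict = field(default_factory=dict)  # purpose -> time consent was given

    def grant(self, purpose: str) -> None:
        self.granted[purpose] = datetime.now()

    def withdraw(self, purpose: str) -> None:
        self.granted.pop(purpose, None)

def may_process(consent: ConsentState, purpose: str, item: str) -> bool:
    """Purpose limitation: process an item only for a declared purpose to
    which the user (or parent, for children) has consented."""
    return purpose in consent.granted and item in PURPOSES.get(purpose, set())

# Every sensor read in the firmware would then be gated, e.g.:
# if may_process(consent, "face_recognition", "camera_frame"):
#     frame = camera.read()
```

On such a model, withdrawing consent for one purpose immediately cuts off the data items tied to it, which is one concrete way the abstract’s question about encoding norms into firmware could be answered.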

17:15
Loot boxes: children's opium? Private law remedies against loot boxes in video games

ABSTRACT. A loot box is a virtual item in a video game that contains a randomized selection of other virtual items that can be used in the game once the loot box is unlocked. These virtual items can take different forms, from cosmetic items (‘skins’) to items relevant for gameplay progress. In some games, loot boxes can be unlocked by paying real-world money, while in others, loot boxes are opened up using in-game (virtual) money or as a reward for in-game efforts. Loot boxes are widespread in modern video games and have become one of the principal sources of income for many video game companies, often far surpassing the income generated by the initial game sales.

When players of a video game purchase a loot box (either with real-world or in-game money), they usually do not know which specific virtual items they will receive. As recent psychological research has pointed out, many loot boxes present a number of substantial structural similarities with traditional forms of gambling. This raises the legal question as to whether or not loot boxes are to be categorized and regulated as gambling.

It is the province of the national gambling laws to determine whether loot boxes are legally equivalent to gambling. In some countries, the relevant public authorities have investigated whether loot boxes meet the national legal definition of gambling. The outcome of those investigations was divergent. While the Belgian and Dutch authorities, for example, ruled that some loot boxes legally constitute gambling, other authorities such as the French and the British held exactly the opposite.

In the worldwide societal and academic debate on loot boxes, only sparse attention has been paid so far to the question whether, and to what extent, private law can offer a framework to protect video game players who have spent money on loot boxes. Our paper aims to fill this gap, by exploring whether video game players can claim restitution of the money spent on loot boxes. Such restitution is conceivable when the purchase of a loot box qualifies as an invalid contract. In our paper, we will focus on two grounds of invalidity of contracts: incapacity (since loot boxes are often purchased by minors) and illegality (since, at least in some jurisdictions, loot boxes violate the national gambling regulation).

The availability of restitution in cases of money spent on loot boxes in video games will be the subject of a comparative analysis. For our paper, we will opt for jurisdictions in which the gambling authorities have already ruled on the issue of whether or not loot boxes are to be legally classed as gambling, viz. English common law and the civil law in Belgium, France and the Netherlands.

16:15-17:45 Session 6D: Meta-legal aspects of Privacy and Data Protection
16:15
Countering dual-use technology export to dictators by means of EU trade secrets law: Limitations to Member States’ extraterritorial obligations to uphold the right to freedom of expression under the ECHR and the ICCPR

ABSTRACT. Art.5(b) of EU Directive 2016/943 (informally known as the “Trade Secrets Directive”) provides for the whistleblowing exception to trade secrets protection “for revealing misconduct, wrongdoing or illegal activity, provided that the respondent acted for the purpose of protecting the general public interest”, where the expressions “misconduct”, “wrongdoing” and “general public interest” are left to each Member State to specify, also in light of each Member’s other regional commitments and international obligations. For example, the explanatory memorandum to the draft German implementing law comments that “misconduct” includes unethical yet not necessarily illegal behaviour, such as business activities facilitating or taking advantage of phenomena like child exploitation, tax avoidance, or environmental pollution. In other words, whistle-blowers are not liable only when they uncover corporate activities clearly violating the law, but also when those acts touch upon underregulated grey areas which may elicit moral disdain. At the same time, Annex I to EU Regulation 428/2009 (2017 Recast) lists high-risk technologies whose export is conditional upon EU authorisation. The interfaces between the trade secrets and export control regimes are relevant, as most software and related technologies which form part of dual-use exports are protected as trade secrets rather than patented, due to non-disclosure preferences and/or because those items are not patentable. This means that, by recourse to the Trade Secrets Directive, employees (or third parties) may misappropriate company secrets related e.g. to dual-use export technologies which are not (yet) listed in the Council Regulation’s Annex I, in order to sabotage such business operations and possibly prevent clients they deem “socially reprehensible” from receiving or making exclusive use of such devices. Against this backdrop, it is urgent to analyse to what extent such (employees’) stealing may be considered legitimate, what its limits are, and whether security-related public policy exceptions may constrain its operationalisation – although no provision qualifies the scope of the whistleblowing exception. Moreover, another exception, as per Art.5(a), states that trade secrets misappropriation is legal whenever executed “for exercising the right to freedom of expression and information as set out in the [Charter of Fundamental Rights of the European Union]” which, in its Art.11(1), mandates that everyone shall enjoy such a right “without interference by public authority”. One may wonder, at this point, whether the cumulative effect of the two exceptions contained in Art.5(a) and Art.5(b) makes a strong case for shielding from liability those employees who may suspect their companies of exporting dual-use technologies to dictators for the latter to violently quell protests, as in the case of Italian, French, Irish (…and so forth) software exported to Egypt, Iran, Syria, Bahrain and Tunisia during the Arab Spring and collateral uprisings. 
Scholars have suggested that States may violate human rights abroad either by direct extraterritorial conduct, or by the domestic enactment of policies bearing extraterritorial effect; although doctrinal debates on the extent to which the Charter itself provides for extraterritorial obligations are still unsettled, its Art.52(3) further requires Member States to align their interpretation of the Charter’s rights with those encapsulated in the European Convention on Human Rights (as well as, arguably, in the case-law of its Court). Ultimately, the freedom-of-expression exception to trade secrets protection is therefore subject to the limitations warranted by Art.10(2) ECHR and Art.19(3)(b) ICCPR; once this conclusion is reached, what remains to be scrutinised is the degree to which the extraterritorial reach of ECHR and ICCPR limitations may interact with the trade secrets and export control regimes in the EU, so as to apply to future scenarios modelled on the Arab Spring.

16:45
Cybersecurity in a post-data environment: Considerations on the Regulation of Code and the role of Producer / Consumer Liability in Smart Devices

ABSTRACT. The second decade of the 21st century saw the exponential growth of Smart Devices, and by the end of 2019 there were approximately 26 billion devices active globally. Smart Devices can be viewed as a great benefit to society, as they provide their owners with the ability to control their devices remotely and to use powerful cloud algorithms to forecast personal, local and professional critical events. Crucially, Smart Devices can now, either on remote instruction or by algorithmically determined decision, manipulate property and interact physically with a person; as such, their actions are no longer limited to virtual manifestation.

The furious pace at which these devices have been developed, matched only by their rate of sales, has outpaced by many orders of magnitude both legislative efforts to ensure the security of such devices and the academic discourse on their cybersecurity. This paper outlines the threats posed by the compromise and subversion of Smart Devices, and how these threats fall outside the scope of mainstream legal cybersecurity research as well as of legislative efforts.

It is also clear that most legal papers treat hacks of systems as a homogeneous outcome, as if a hack were simply a binary event (it either happens or it does not), whereas in reality the compromise of a system is a much more subtle event whose analysis involves the developer of the software, the tools of the hacker and the behaviour of the user. In order to develop a framework that allows more robust discourse on cybersecurity, this paper examines the nature of Smart Device compromises, provides a rudimentary methodology to classify hacks as either preventable or not preventable and, where a hack was preventable, proposes a systematic means of determining whether the fault lies with the producer or the consumer. This is done in a robust, technologically agnostic manner to ensure persistent relevance in the face of unrelenting technological advancement.
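
By way of illustration only – the following sketch is not the paper’s actual methodology, and every attribute and label in it is invented – such a preventability-and-fault classification might be expressed as a simple decision procedure:

```python
from dataclasses import dataclass

@dataclass
class Compromise:
    """Hypothetical attributes of a Smart Device compromise (illustrative only)."""
    patch_was_available: bool   # the producer had published a fix before the hack
    patch_was_applied: bool     # the consumer had installed that fix
    default_credentials: bool   # the device still ran on factory-default passwords
    novel_exploit: bool         # the attack used a previously unknown technique

def classify(c: Compromise) -> str:
    """Return a rough preventability/fault label for a compromise."""
    if c.novel_exploit and not c.patch_was_available:
        return "not preventable"               # no party could reasonably have acted
    if c.default_credentials:
        return "preventable: producer fault"   # shipped insecure by default
    if c.patch_was_available and not c.patch_was_applied:
        return "preventable: consumer fault"   # a fix existed but was never applied
    return "preventable: producer fault"       # a known defect with no fix shipped

print(classify(Compromise(patch_was_available=True, patch_was_applied=False,
                          default_credentials=False, novel_exploit=False)))
# -> preventable: consumer fault
```

The point of such a sketch is only that the producer/consumer fault line turns on facts about both parties’ conduct, not on the bare occurrence of a hack.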

It is also a well-understood issue that, following a failure of cybersecurity, there are legislative impediments to seeking redress from the developers of the ‘defective’ software. Protections that a consumer could expect from ‘dumb’ devices are frequently not available to the owners of Smart Devices. Third parties who have been harmed by a hacked Smart Device are likewise unable to seek redress from the device owner or developer. Using the proposed framework, the paper examines the shortcomings in both UK and American product liability legislation and proposes a number of high-level remedies to ensure that both consumers and producers of Smart Devices are treated in a manner consistent with the competing aims and objectives of product liability, consumer protection legislation and the principles of justice.

17:15
The perks of co-regulation: a standard institutional arrangement for online regulation?

ABSTRACT. The goal of this paper is to explore co-regulatory institutional arrangements as a solution to the aching question of who should be in charge of regulating online service providers. Given the different structures and procedures that can fall within this category, the paper builds on the existing literature to evaluate the varieties of co-regulation, their advantages and their shortfalls. In this sense, the ultimate goal is to provide a better understanding of what exactly co-regulation means and which of its forms would be most suitable for the digital sphere.

The efforts towards regulating online service providers have long inspired debates over the limits between public and private spheres of power. During the internet’s first years of expansion, self-regulation was pointed to as the appropriate, and possibly the only feasible, solution, given the early realization that the Internet was a private virtual space regulated in itself. The regulatory pendulum swung back at the beginning of the 2010s, when we witnessed a series of state-led regulatory initiatives aimed at conforming the digital sphere to the principles and enforcement logics that have long grounded state intervention. Lately, there has been a clear tendency to indicate co-regulation as the constitutionally responsive regulatory paradigm for services provided through the Internet content layer.

However, the general concept of co-regulation can encompass arrangements as different as the North American intermediary liability provision (Section 230 of the Communications Decency Act), the German NetzDG and the constitution of government-supervised self-regulatory bodies. The need then arises to analyse the literature and empirical experiences in order to differentiate their legal and institutional implications. This exercise should lead to a more accurate and therefore more fruitful formulation of the concept.

The paper will be divided into three sections. After the Introduction, section 1 will present the general theories that distinguish self-, state and co-regulation according to their traditional prescriptions. Section 2 will elaborate on the different online co-regulation experiences to date, and section 3 will analyze these experiences in light of the specific challenges of regulating online services.

16:15-17:45 Session 6E: Global dimension of technology regulation
16:15
DIGITAL CHILDREN AND THEIR RIGHT TO PRIVACY AND DATA PROTECTION: Does the Article on Conditions Applicable to a Child’s Consent Under the GDPR Tackle the Challenges of the Digital Era or Create Further Confusion?

ABSTRACT. Today’s new devices and technologies fascinate many of us. Technology and the Internet have become a core component of our lives. Young people are particularly active users of these technologies. Over the last few years there has been a significant increase in the number of young people using the Internet. Not only are they using the Internet more, but many children begin using it at earlier ages.

These technologies offer great opportunities, but they also bear risks that cannot be ignored. Many concerns have been raised in recent years with the advent of internet-connected toys and wearables, along with other smart devices and applications that were not actually developed for children. Leaving aside a number of observable and visible risks such as sexual abuse, insomnia, obesity, low self-esteem or addiction, there are other risks, such as privacy invasions and data protection violations arising from data sharing, data collection and data processing, which are often overlooked. Children are all the more exposed to privacy-invasive digital risks given the increasingly commercialised usage of information society services. Yet children and their parents are often unaware of the privacy and security compromises they make in order to use these new technologies and devices.

It is rather difficult for children to accurately know or predict all the possible impacts of data processing, data linkage and data aggregation that may affect their rights and freedoms. However, for a generation growing up in a digital age, it is almost impossible to isolate children from these technologies in order to protect them from possible harmful effects. In that regard, the European Union has given special attention to the privacy concerns raised by information society services offered to children and has incorporated specific provisions to strengthen the protection of children under the General Data Protection Regulation. Within this framework, this article examines the newly incorporated provisions and concepts relating to the conditions applicable to a child’s consent under the General Data Protection Regulation and analyses whether the protection envisaged for children reflects a distinct treatment of them as compared to adults. To illustrate this point, the effectiveness of the provisions regarding the processing of children’s personal data, and the challenges faced during the implementation and enforcement of such provisions in practice, will be analysed. Moreover, the effectiveness and efficiency of consent given or authorised by the holder of parental responsibility, until the child acquires the competence to consent to data collection and data processing, will be assessed. These analyses will make it possible to evaluate whether the Regulation successfully mandates that innovations remain connected to the principles of privacy and data protection or fails to reflect the changing nature of technology.
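
For concreteness, the core rule at issue – GDPR Article 8: a child may consent alone from the Member State’s threshold age, which may be set anywhere between 13 and 16 (the default being 16), while below that age consent must be given or authorised by the holder of parental responsibility – can be sketched as a simple check. The function and its parameters are a toy illustration, not anything proposed in the abstract:

```python
def child_consent_valid(age: int,
                        member_state_threshold: int,
                        parental_authorisation: bool) -> bool:
    """Toy check of the Art. 8 GDPR consent conditions (illustration only).

    Art. 8(1): the default age threshold is 16, but Member States may
    lower it to no less than 13. Below the threshold, consent must be
    given or authorised by the holder of parental responsibility.
    """
    if not 13 <= member_state_threshold <= 16:
        raise ValueError("Art. 8(1) permits thresholds between 13 and 16 only")
    if age >= member_state_threshold:
        return True                   # the child may consent alone
    return parental_authorisation     # otherwise parental authorisation is needed

# Example: a 14-year-old in a Member State using the default threshold of 16
print(child_consent_valid(14, 16, parental_authorisation=False))  # False
```

Even this trivial rendering exposes the fragmentation the article discusses: the same child’s consent may be valid in one Member State and invalid in another, depending solely on the threshold chosen.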

16:45
Automated Decision-making in Automated Driving: Striking a Balance between Individual Autonomy and General Road Safety

ABSTRACT. In an attempt to increase road safety, car manufacturers are turning their attention to the interior of the vehicle. If the driver falls asleep or is intoxicated, this will be picked up by sensors and cameras inside the vehicle. If it is deemed unsafe for the driver to continue the trip, the vehicle will pull over and bring itself to a stop so as to avoid endangering other road users. This automated decision-making process not only affects the autonomy of the driver; it also challenges the law, as it gives rise to many legal and ethical questions.

Vehicles (will) collect data on their surroundings. These data form a crucial part of so-called Advanced Driver Assistance Systems (ADAS), which support the driver in the performance of the driving task by, for instance, making sure the vehicle stays within its lane and adjusts its speed to the vehicle travelling in front of it. This should increase road safety. However, ADAS carry a risk: because the driver has to execute fewer tasks, he might pay less attention to traffic, possibly so much so that he falls asleep behind the wheel. This is where driver monitoring systems come into play. These systems – a combination of software and hardware, such as cameras and sensors – keep an eye on the driver by collecting data on the driver’s physical state. Under the General Data Protection Regulation (GDPR), these data qualify as personal data. Driver monitoring systems register, for instance, whether the driver falls asleep. That could set off an alarm (audio, shaking of the driver’s seat, etc.) to wake the driver up or, in the most extreme case, bring the vehicle to a stop so as to avoid endangering the driver as well as other road users. Volvo, for example, has announced that all of its vehicles produced from 2020 onwards will bring themselves to a stop upon detecting sleepiness, distraction or intoxication in the driver.

Recent legal literature on automated driving has mainly focused on civil liability problems concerning fully self-driving vehicles, leaving aside more pressing legal challenges regarding driver monitoring. The contribution of this research to legal scholarship consists of balancing the individual interest in personal autonomy against the general public’s interest in road safety. In doing so, this research will analyse Article 22 of the GDPR on automated decision-making. This Article sets boundaries to decisions that are based solely on automated processing, whilst also providing for some exceptions. However, these exceptions focus on the individual data subject’s rights, not on the general public interest in road safety. The European Court of Human Rights has already found states to have a positive obligation in the context of road safety under Article 2 of the European Convention on Human Rights (ECHR). So when do the autonomy of the individual and their right to data protection weigh heavier than the public interest in road safety? This research aims to answer that question and fill the existing gap in the legal literature.
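
A minimal sketch, under invented thresholds and names, of the kind of escalating, solely automated decision chain the abstract describes; it does not reflect Volvo’s or any manufacturer’s actual system:

```python
# Illustrative escalation logic for a driver monitoring system (DMS).
# All thresholds, names and actions are hypothetical.

ALERT_THRESHOLD = 0.6   # drowsiness score at which the driver is warned
STOP_THRESHOLD = 0.9    # score at which the vehicle performs a minimum-risk stop

def decide(drowsiness_score: float) -> str:
    """Map a fused camera/sensor drowsiness score in [0, 1] to an action.

    The final branch is a decision 'based solely on automated processing'
    in the sense of Art. 22 GDPR: no human confirms the stop.
    """
    if drowsiness_score >= STOP_THRESHOLD:
        return "pull over and stop"              # overrides driver autonomy
    if drowsiness_score >= ALERT_THRESHOLD:
        return "audio alarm and seat vibration"  # warning stage
    return "continue monitoring"

for score in (0.3, 0.7, 0.95):
    print(score, "->", decide(score))
```

The sketch makes the tension concrete: the safety-critical branch is precisely the one that removes the driver’s choice, which is where the Article 22 analysis bites.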

17:15
A critical analysis of (the need for) legal safeguards regarding automated decision-making

ABSTRACT. Increasingly, algorithms are being used by both public and private actors as a means of automated decision-making. This is especially relevant when such decision-making, within or outside of a legal relationship, affects the interests of counter- or third parties, and even more so if and when the said algorithms use the personal data (and subsequent profiling) of such parties. It is in this context that Article 22 of the GDPR regulates ‘automated individual decision-making, including profiling’, with the intent to offer safeguards to those ‘data subjects’ who (may) experience legal effects or who are otherwise significantly affected by such decision-making. Article 22 GDPR establishes a right of data subjects not to be subjected to such automated decision-making, albeit one that comes with various caveats. In the preparation of (Article 22 of) the GDPR, and since its introduction, there have been lively discussions among both practitioners and academics about the adequacy of the (scope of) protection of data subjects’ rights concerning automated decision-making.

This paper reflects on that discussion by taking a critical analytical view of what automated decision-making actually means, particularly in terms of basic legal concepts. It does so from the proposition that the notion of automated decision-making is, at least legally speaking, something of a container concept, which should be looked at with proper legal nuance in order to avoid both over- and underregulation in the search for legal protection. The paper recognizes that human actors (as natural and juridical persons) and technology are becoming ever more interwoven, and that decisions whereby human actors aim to (legally) influence other human actors increasingly involve the use of technology – increasingly ICT-based, and with increasing levels of sophistication, if not of intelligence then certainly of informational scope (i.e. AI and big data). This interwovenness opens up opportunities for designing in modes of sophisticated automated decision-making, such as in high-tech objects (e.g. driverless cars, care robots), platforms (e.g. internet search engines and online sharing services), and decision systems for implementing and enforcing public law rules (e.g. allocating subsidies, imposing sanctions, and smart traffic systems).

The paper aims to flesh out how technology, at various levels of sophistication, can be placed within the context of relationships between human actors – whether as a means of performing legal or merely factual acts, and whether to technologically regulate/enforce (on the sender’s side) and/or be technologically regulated (on the receiver’s side) – and on that basis to offer greater nuance to the legal sensitivity of different cases of automated decision-making. In doing so, the paper not only considers the limits of technological perfection (i.e. technology not weighing relevant individual detail, and/or categorizing suspiciously) but also looks at coercion (i.e. technology not allowing any human behaviour to escape). The findings from this more legal-theory-based approach will then be used to revisit Article 22 GDPR, draw conclusions and (perhaps) make recommendations as regards its implementation and, if considered desirable, possible future change.
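
To make the container-concept point tangible, the formal trigger of Article 22(1) GDPR (a decision based solely on automated processing that produces legal effects or similarly significantly affects the data subject) and the Article 22(2) exceptions (contractual necessity, authorisation by Union or Member State law, explicit consent) can be reduced to a short predicate. The parameter names are invented for illustration; as the paper argues, such a binary test says nothing about the many shades of human-technology interwovenness in between:

```python
def article_22_prohibition_applies(solely_automated: bool,
                                   significant_effect: bool,
                                   necessary_for_contract: bool = False,
                                   authorised_by_law: bool = False,
                                   explicit_consent: bool = False) -> bool:
    """Toy rendering of the Art. 22(1) trigger and Art. 22(2) exceptions.

    significant_effect stands for 'legal effects ... or similarly
    significantly affects him or her' in Art. 22(1).
    """
    in_scope = solely_automated and significant_effect
    excepted = necessary_for_contract or authorised_by_law or explicit_consent
    return in_scope and not excepted

# A profiling decision with a human rubber-stamp: formally out of scope,
# which is exactly the kind of edge case the container concept obscures.
print(article_22_prohibition_applies(solely_automated=False,
                                     significant_effect=True))  # False
```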

16:15-17:45 Session 6F: New Frontiers of technology regulation
16:15
Data Governance in the Cloud: Of Scarce Regulatory Resources and Tactical Delegated Enforcement

ABSTRACT. European data protection authorities (‘EU DPAs’) play crucial roles in protecting personal data rights. However, many EU DPAs lack adequate resources to be effective data privacy protectors. Although the data privacy law literature recognises that many EU DPAs operate within such constraints, to date there has been a dearth of empirical studies on how limited resources can impact enforcement.

This article makes a modest attempt to address this empirical gap by analysing selected empirical findings of a recent project which examined the investigations of multinational cloud providers by EU DPAs (‘Cloud Investigations’). It draws on the fields of socio-legal studies and regulation to interpret these findings and advances three arguments.

First, due to their fiscal constraints, some EU DPAs often have to make tactical enforcement decisions about initiating Cloud Investigations, as well as about the foci and methods of those investigations. The decision-making process can be very complex for some EU DPAs, as they have not only to consider but also, at times, to balance a broad range of factors, including external pressures, law and enforcement styles. Second, during Cloud Investigations the ‘regulatory space’ can often be complex, diffuse and diverse, as EU DPAs delegate certain regulatory tasks to private actors and to governmental actors other than EU DPAs because of their limited resources. Finally, this article suggests that delegated enforcement requires careful thought and design in order to ensure effective and robust data governance. Suggestions are made on how the ‘regulatory space’ can be designed so as to promote accountability, trust, robust data protection and effective multi-actor collaboration.

16:45
From “Release and forget” to “Release and remember”: a risk-based licensing framework for disclosing anonymised data under the Freedom of Information Act 2000

ABSTRACT. The Freedom of Information Act 2000 (FOIA) gives individuals the right to request and receive access to information held by public authorities. Under the FOIA, a public authority releasing requested information has no post-release obligations to monitor any subsequent uses of that information, nor are any specific obligations imposed on the recipient of the information. The FOIA clearly specifies, however, that in most circumstances personal data (i.e. any information relating to an identified or identifiable living individual) will be exempt from freedom of information requests. In recent years, UK courts have considered the interplay between freedom of information requests and data protection law in several interesting cases. These cases have mainly focused on issues relating to the anonymisation of personal data. Under UK and EU data protection legislation, data that have been anonymised so that they can no longer be used to identify an individual are considered anonymous and thus not personal data. As anonymous data are not personal data, they are not exempt from freedom of information requests made under the FOIA. Operating under this premise, UK courts have begun to order public authorities to release datasets containing anonymous data to individuals who have requested access.

As the FOIA imposes no post-release obligations on the releaser or recipient of requested information, its mode of disclosure is best described as a “release and forget” approach. In the context of anonymised data, however, this is problematic. Recent work in the field of anonymisation has established that infallible and irreversible anonymisation of personal data is not possible. Nominally anonymised data can often be de-anonymised, rendered “personal” once more, and used for potentially harmful purposes. As a result, the appropriateness of releasing anonymised data on a “release and forget” basis is highly dubious, and doubts have been expressed regarding the adequacy of the FOIA’s disclosure model. This paper proposes an alternative model of disclosure based on the notions of privacy and data protection by design, data licensing, and risk. Under this new framework, obligations and restrictions would be attached to anonymised data via a licensing framework, depending on the level of risk associated with their disclosure, and would travel with those data post-release.
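
One way to picture such a “release and remember” licence is as a data structure whose obligations scale with the assessed disclosure risk and accompany the dataset after release. The risk tiers, obligations and identifiers below are invented for illustration and are not drawn from the paper:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical risk tiers and the obligations attached to each.
OBLIGATIONS_BY_RISK = {
    "low":    ["no attempt to re-identify individuals"],
    "medium": ["no attempt to re-identify individuals",
               "no linkage with other datasets"],
    "high":   ["no attempt to re-identify individuals",
               "no linkage with other datasets",
               "report any accidental re-identification to the releasing authority"],
}

@dataclass
class DataLicence:
    """A licence that 'travels with' a disclosed dataset post-release."""
    dataset_id: str
    risk_level: str                                   # assessed at disclosure
    obligations: List[str] = field(default_factory=list)

    def __post_init__(self):
        # Attach obligations proportionate to the assessed disclosure risk.
        self.obligations = OBLIGATIONS_BY_RISK[self.risk_level]

licence = DataLicence("foi-request-2020-014", "high")  # hypothetical request ID
print(licence.obligations)
```

The design choice the sketch captures is that the releasing authority’s risk assessment, rather than being forgotten at the moment of disclosure, becomes a persistent attribute of the released data.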

17:15
‘Responsibilising’ the Data Subject? Reasonable Expectations of Privacy and the Status of Digital Rights in the UK post-Brexit

ABSTRACT. As we move towards the second anniversary of the EU General Data Protection Regulation’s (GDPR) entry into force, the United Kingdom is preparing to drift away from the rest of the EU at the end of January 2020. What will Brexit mean, however, for the protection of digital rights in the UK?

This paper unpacks this broad question using the example of the right to erasure of personal data under Article 17 GDPR, also known as the ‘right to be forgotten’. While sufficient preparations have been made from a data protection law perspective in the short term, the long-term impact of Brexit on the status of digital rights should not be underestimated. In the absence of EU law’s direct effect, digital rights will no longer be underpinned by the broad, informational self-determination-type notion of privacy that derives from European human rights law. The developing UK tort of misuse of private information, which relies on a narrower notion of privacy based on the ‘reasonable expectations’ of the data subject, will significantly inform the protection of such rights as the right to be forgotten. In this picture, data subjects will be legally constructed as highly responsible, well-informed and rational calculators of their prospective privacy harms, much more like the ‘average consumer’ in consumer law than the (at least potentially) vulnerable holder of a fundamental human right. I argue that this will be detrimental to the protection of digital rights owing to the heroic nature of the individual responsibility attributed to the data subject with regard to foreseeing and assessing potential interferences with her right to privacy.

I elaborate on the implications of this transition in three steps. First, I examine the impact of Brexit on the applicable legal framework for the protection of the right to be forgotten in the UK. Drawing on data protection and human rights law, as well as on the UK tort of misuse of private information, I note that significant divergence from the pre-Brexit legal picture is not to be anticipated in the short term. The Data Protection Act 2018, as well as its application in recent case law (i.e. the High Court judgment in NT1/NT2), provides sufficient assurances of continuity of legal protection. Second, I argue that divergence is more likely to arise in ‘hard’ cases which involve balancing the right to be forgotten against other legitimate objectives or the rights of third parties. I demonstrate how the UK notion of ‘privacy’ will supplant the European notion of privacy as the underpinning not only of the right to be forgotten, but of digital rights in general. Third, by reference to the literature on the economics of privacy, I sketch the particularly problematic nature of applying the standard of ‘reasonable expectations of privacy’ in an increasingly data-driven society and economy.

18:15-21:00Dinner at Eve and Prize-Awarding Ceremony