TILTING2017: TILTING PERSPECTIVES 2017
PROGRAM FOR FRIDAY, MAY 19TH

09:45-11:15 Session 11A: Data Science 8 - Data Science Partnerships and Policy
Location: DZ8
09:45
Information Security of mHealth: Weaknesses and Possible Countermeasures

ABSTRACT.

Nattaruedee Vithanwattana School of Science and Technology Middlesex University, Hendon, London NW4 4BT Email: nv166@live.mdx.ac.uk

Dr. Glenford Mapp School of Science and Technology Middlesex University London, UK Email: g.mapp@mdx.ac.uk

Dr. Carlisle George School of Science and Technology Middlesex University London, UK Email: c.george@mdx.ac.uk

This paper focuses on mobile health or mHealth (i.e. “medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants (PDAs), and other wireless devices” [1]) and information security. The paper will discuss security requirements in mHealth systems, assets in mHealth systems that need to be protected, threats which need to be protected against, vulnerabilities and weaknesses in mHealth systems, and risks that may occur as a result of threats exploiting vulnerabilities. The paper will also propose possible countermeasures to secure the confidentiality of healthcare data in mHealth systems. The use of mobile and wireless technologies in healthcare systems has an enormous potential to transform healthcare across the globe [2]. In recent years, there has been a huge increase in the number of such technologies in mHealth, the new horizon for healthcare through mobile technologies. mHealth solutions include the use of mobile devices such as mobile phones, body sensors, and wireless infrastructure. These devices are used to collect clinical health data, and to deliver healthcare information to patients, medical professionals, and researchers. They are also used for real-time monitoring of patient vital signs, such as heart rate, blood glucose level, blood pressure, body temperature, and brain activity [3]. mHealth enables users to monitor their own health status and directly facilitates access to healthcare data by healthcare professionals anytime and anywhere. It delivers more patient-focused healthcare and improves the efficiency of healthcare systems. It provides sustainable healthcare through better planning of patients’ treatment, which reduces the number of unnecessary consultations. It also provides a well-organised way of receiving guidance for treatment and medication from healthcare professionals.
Moreover, mHealth solutions can help patients take more responsibility for their health through the devices which can detect and report their vital signs, as well as mobile applications that can help them to be more focused on their diet and medication [4]. Generally, mHealth offers smart solutions to tackle challenges in healthcare. However, there are still various issues regarding the development of mHealth systems. One of the most common difficulties in developing mHealth systems is information security. mHealth systems are still vulnerable to numerous security issues with regard to weaknesses in design and data management. In mHealth systems, sensors embedded in mobile devices collect healthcare data from users via Bluetooth. Collected healthcare data are stored in different databases located on mobile devices and in Cloud storage. Healthcare data is classed as sensitive personal data under European data protection legislation, and may reveal the state of an individual’s health which he/she may not want to share with everyone [5]. In order to secure this sensitive personal data, all databases require a high level of security to protect the confidentiality of healthcare data. A recent research study by Yahya, Walters, and Wills [6] focused on developing an appropriate security framework for cloud storage by exploring existing proposed security frameworks. The study identified which security requirements can be used as baselines to protect data in cloud storage. Security requirements mentioned in this study included: Confidentiality, Integrity, Availability, Non-repudiation, Authenticity, and Reliability. The study also reported that all security experts interviewed agreed that these security requirements are important. However, Confidentiality, Integrity, and Availability (CIA) are the most common security requirements of many security frameworks (See Table 1 below). 
Only a small number of security frameworks support other requirements, such as non-repudiation, which are nevertheless important for secure systems. Table 1 below compares various security frameworks to show which security requirements are incorporated in them.

Security Requirement   Frameworks supporting it (out of 10)
Confidentiality        10 (all)
Integrity              10 (all)
Availability           10 (all)
Non-repudiation        3
Authenticity           7
Reliability            6

Frameworks compared: Firesmith [7]; Takabi, Joshi & Ahn [8]; Brock & Goscinski [9]; Zissis & Lekkas [10]; Mapp et al. [11]; CSA [12]; NIST [13]; ENISA [14]; CPNI [15]; ASD [16].

Table 1: Synthesis of Security Requirements [6]

In an attempt to specify reusable security requirements, Firesmith [7] developed a detailed specification with the aim of providing a comprehensive security framework. Firesmith also proposed Reusable Security Requirements Templates, which one can argue are reusable for most applications and application domains, including mHealth. Unlike typical functional requirements, security requirements can potentially be highly reusable, especially if specified as instances of reusable templates [7]. Although this security framework has not been directly implemented in either the context of mHealth devices or cloud computing, it provides a detailed overview of the security requirements for any secure information system [17]. Further, this security framework can be extended in order to develop an information security framework for mHealth systems. Every application, including mHealth, at the highest level of abstraction will tend to have the same basic kinds of potentially vulnerable assets. In mHealth systems, assets include healthcare information, mHealth device databases, and cloud storage. Sensitive healthcare data stored on mHealth devices and in cloud storage are vulnerable to various security threats (i.e. anything that can exploit a vulnerability, either accidentally or intentionally, and cause damage to an asset). In mHealth systems, threats include malware infections, mHealth applications, mobile devices, mHealth device users, healthcare professionals, and third parties who may or may not have a right to access healthcare data. Due to weaknesses in design and data management, current mHealth systems still have various vulnerabilities. These are weaknesses in the systems that can be exploited by threats in order to damage an asset. Unauthorised user access is one of the most serious security vulnerabilities, since it can lead to privilege escalation and to data theft or loss [18].
From previous studies, several information security frameworks for mHealth devices, as well as for cloud storage, have been proposed. However, a major challenge is developing an effective information security framework that encompasses both mHealth devices and cloud storage. This paper will conduct a risk assessment of mHealth systems by identifying assets, threats, and vulnerabilities. On that basis, possible countermeasures will be proposed as part of the development of an information security framework for mHealth systems.
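To make the asset/threat/vulnerability terminology concrete, the following is a purely illustrative sketch (not taken from the paper) of the kind of qualitative risk assessment described above, using the common risk = impact × likelihood scoring. All asset names, threat names, and scores below are hypothetical.

```python
# Illustrative sketch: a minimal qualitative risk assessment over
# hypothetical mHealth assets and threats. Risk is scored as the
# product of asset impact and threat likelihood (both on a 1-5 scale).

from itertools import product

# Hypothetical inventory; the paper's actual asset/threat lists differ.
assets = {"patient vitals DB": 5, "cloud storage": 4, "device cache": 3}   # impact 1-5
threats = {"malware": 4, "unauthorised access": 5, "third-party misuse": 3}  # likelihood 1-5

def risk_matrix(assets, threats):
    """Return (asset, threat) pairs ranked by risk = impact * likelihood."""
    scores = {
        (a, t): impact * likelihood
        for (a, impact), (t, likelihood) in product(assets.items(), threats.items())
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for (asset, threat), score in risk_matrix(assets, threats):
    print(f"{asset:18s} <- {threat:20s} risk={score}")
```

Ranking asset/threat pairs this way is one simple means of deciding which countermeasures to prioritise; a framework of the kind the paper proposes would refine both the scales and the inventory.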

References

1. European Commission (2014) GREEN PAPER on mobile Health (“mHealth”). [online] Available from: https://ec.europa.eu/digital-agenda/en/news/green-paper-mobile-health-mhealth [Accessed: 15 November 2016]

2. World Health Organisation (2011) mHealth: New horizons for health through mobile technologies. [online] Available from: http://www.who.int/goe/publications/goe_mhealth_web.pdf [Accessed: 15 November 2016]

3. Germanakos P., Mourlas C., & Samaras G. "A Mobile Agent Approach for Ubiquitous and Personalized eHealth Information Systems" Proceedings of the Workshop on 'Personalization for e-Health' of the 10th International Conference on User Modeling (UM'05). Edinburgh, July 29, 2005, pp. 67–70.

4. European Commission (2014) Healthcare in your pocket: unlocking the potential of mHealth. [online] Available from: http://europa.eu/rapid/press-release_IP-14-394_en.htm [Accessed: 15 November 2016]

5. Vithanwattana, N, Mapp, G. & George, C. (2016) “mHealth – Investigating an Information Security Framework for mHealth Data: Challenges and Possible Solutions” 2016 12th International Conference on Intelligent Environments, IEEE, London, 14-16 September 2016, p.258-261

6. Yahya, F., Walters, R., & Wills, G.B. (2016) “Goal-Based Security Components for Cloud Storage Security Framework” 2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security), IEEE, London, 13-14 June 2016, p.1-5

7. Firesmith, D. “Specifying Reusable Security Requirements” Journal of Object Technology. Vol.3, no.1, pp.61-75, 2004.

8. Takabi, H., Joshi, J.B.D., and Ahn, G.J. “SecureCloud: Towards a comprehensive security framework for cloud computing environments”. International Computer Software and Applications Conference, 2010, pp.393-398.

9. Brock, M. & Goscinski, A. “Toward a Framework for Cloud Security” in Lecture Notes in Computer Science, vol. 6082, Springer Berlin Heidelberg, 2010, pp.254-263.

10. Zissis, D. and Lekkas, D. “Addressing cloud computing security issues”. Future Generation Computer Systems. 28 (2012), pp.583-592.

11. Mapp, G., Aiash, M., Ondiege, B.,& Clarke, M. “Exploring a New Security Framework for Cloud Storage Using Capabilities”. SOSE. pp.484-489, 2014.

12. CSA, “The Cloud Control Matrix V3.0.1 White Paper” Cloud Security Alliance (CSA), 2013 [online] Available from: https://cloudsecurityalliance.org/download/cloud-controls-matrix-v3/ [Accessed: 16 November 2016]

13. NIST, “Security and Privacy Controls for Federal Information Systems and Organizations” National Institute of Standards and Technology Special Publication 800-53 Revision 4, 2013.

14. D. Catteddu,& G. Hogben, Cloud Computing: Benefits, Risks and Recommendations for Information Security White Paper. European Network and Information Security Agency (ENISA), 2009.

15. CPNI, “The Critical Security Controls for Effective Cyber Defense V5.0 Report” Centre for the Protection of National Infrastructure (CPNI), 2014 [online] Available from: https://www.cpni.gov.uk/documents/publications/2014/2014-04-11-critical-security-controls.pdf?epslanguage=en-gb [Accessed: 16 November 2016]

16. ASD, “Australian Government Information Security Manual Controls” Australian Signals Directorate (ASD), 2016. [online] Available from: http://www.asd.gov.au/publications/Information_Security_Manual_2016_Controls.pdf [Accessed: 16 November 2016]

17. Yahya, F., Walters, R., & Wills, G.B. “Modelling Threats with Security Requirements in Cloud Storage” International Journal for Information Security Research (IJISR). Vol.5, Issue 2, June 2015.

18. Cifuentes, Y., Beltran, L., & Ramirez, L. “Analysis of Security Vulnerabilities for Mobile Health Applications” International Journal of Electrical, Computer, Energetic, Electronic and Communication Engineering. Vol.9, No.9, 2015.

10:15
Website blocking in copyright cases: Russia's experience
10:45
Regulating land tenure = Regulating data flows? A discussion of innovative tools for land administration

ABSTRACT. Over the past ten years, the development of Web2.0 and Web3.0, alongside advances in remote and mobile sensor devices, has contributed to increasingly large volumes of real-time digital data about people and the spaces we inhabit, a trend accompanied by a growing multiplicity of data providers and users, and by flexible forms of data capture, integration and use. These trends are also visible in the context of land administration, where the implementation of innovative data-driven tools has gained additional momentum, especially in developing countries, due to the urgency and pressure on land governance actors in light of high rates of land conversion, urbanization, population growth and insecurities related to climate change. At the same time, complex dynamics between ‘informal’ and ‘formal,’ ‘customary’ and ‘modern,’ ‘incremental’ and ‘master-planned’ practices of land use and change make it difficult if not impossible to establish a lasting baseline of the status quo of land use, rights, and ownership; and many top-down, pre-designed styles of implementing information systems within the formal framework of land administration have stagnated or are considered failures. Recognition of the difficulties in and the high costs of implementing comprehensive, large-scale land information systems, together with new data technologies becoming available, has led to a change in the approaches being promoted, for instance ‘fit-for-purpose land administration’ and ‘crowd-sourced cadastres.’ The actors, technologies, and organizational assemblages behind these umbrella terms are diverse. They do, however, exhibit some commonalities. They leverage new mobile and Web2.0/3.0 technologies, including data capture from mobile sensors, cloud-based data storage and, most recently, Blockchain technology; and they emphasize the incremental identification of both targeted and emerging data needs at a given point in time.
These approaches explicitly aim at tenure security not only for individual private land owners, but also for marginalized and vulnerable land users and holders of diverse land rights. The initiatives are often launched and driven by non-governmental (for-profit and not-for-profit) organizations outside or inside the countries of implementation, and by supra-national agencies. Hence, in both technical and organizational terms, these projects introduce an array of new actors and technologies to more established frameworks of land administration by governments and official regulatory frameworks. This relationship is further complicated in land governance settings in the global South characterized by legal pluralism. Our aim for this presentation is, first, to present an overview of innovative land tool initiatives, outlining the actors and technologies involved, and their aims and rationales. Second, we elaborate on questions that arise from these initiatives related to sustainability in recording land data and future accountabilities pertaining to these records, transparency of the data flows and uses, and the emergence of (new) data categorizations that capture and influence the multi-faceted land-rights-person relationships.

09:45-11:15 Session 11B: PLSC 5A: Borgesius and Poort
Location: DZ6
09:45
Digital Expungement
SPEAKER: Eldar Haber

ABSTRACT. Digital technology might lead to the extinction of criminal rehabilitation. Due to big data practices, criminal history records that were expunged by the state remain widely available through commercial vendors (data brokers) who sell this information to interested parties, or simply through a basic search of the Internet. The wide availability of information on expunged criminal history records increases the collateral consequences a criminal record entails, thereby eliminating the possibility of reintegration into society. Acknowledging the social importance of rehabilitation, policymakers attempted to regulate the practices of data brokers by imposing various legal obligations and restrictions, usually relating to the nature and accuracy of criminal records and the purposes for which they may be used. These regulations have been proven insufficient to ensure rehabilitation. But regardless of future outcomes of such regulatory attempts, policymakers have largely overlooked the risks of the Internet to expungement. Many online service providers and hosting services enable the wide dissemination and accessibility of criminal history records that were expunged. Legal research websites, websites that publish booking photographs taken during investigation (mugshots), social media platforms, and media archives all offer access to expunged criminal histories, many times without charge, and all with the simple use of a search engine. Without legal intervention, rehabilitation in the digital age in the U.S. has become nearly impossible. This Article offers a legal framework for reducing the collateral consequences of expunged criminal records by offering to re-conceptualize the public nature of criminal records. It proceeds as follows. After an introduction, Part II examines rehabilitation and expungement as facets of criminal law. Part III explores the challenges of digital technology to rehabilitation measures. 
Part IV evaluates and discusses potential ex-ante and ex-post measures that could enable rehabilitation in the digital age. It argues that while ex-post measures—such as the European Union’s right to be forgotten—are both unconstitutional and unrealistic for enabling digital expungement, ex-ante measures could be a viable solution. The solution this Article promotes is based on a graduated approach towards criminal history records, narrowly tailored to serve the interests of rehabilitation-by-expungement. It is primarily based on a reconceptualization of criminal history records as private information by default, with set exceptions for specific reasons which rely, inter alia, on public safety. This approach requires a reexamination of current criminal offences (both State and Federal), relabeling each based on the potential collateral consequences of making the record of a conviction under the offence publicly available. It suggests two steps: the first would make non-conviction criminal history records private. In rare cases, where there is a strong public interest in the publication of a non-conviction criminal record that is evident and overrides the interest of rehabilitation, there could be full public disclosure. Under the second step, the state would differentiate between offences that could be eligible for expungement and those that could not. The latter would become public immediately. Specific offences that require community notification, like sex offences, could remain public. As for the former, the information would generally remain private, accessible only to governmental agencies. Upon release from prison, the state would decide whether the public interest necessitates the release of such information, depending mostly on the data subject’s eligibility for expungement. Finally, the last Part concludes the discussion and warns against reluctance in regulating expunged criminal histories.

10:15
Datafication and economic power in developing countries

ABSTRACT. The idea of datafication, intended as rendering many non-quantified processes into data, has become ubiquitous in business intelligence. Mayer-Schönberger and Cukier (2013) refer to big data as “a revolution that will transform how we live, work and think”, reflecting the idea that data have become a lens to see the world and frame it, and a means for profit-making vendors to operate in the market. Given the pervasive nature of datafication, it makes sense to ask whether and how it can affect markets in low- and middle-income countries, and the way these are structured and regulated.

As argued by Taylor and Broeders (2015), effects of datafication may transcend markets in the strict sense of the term, and affect the balance of economic power in the global South. When shared across the actors involved, data become a source of value for development policy, and the ways this affects economic power in developing nations need to be unpacked. If digitisation refers widely to the adoption of digitality in existing processes, datafication is a process in which data become the basis for regulation, hence yielding a deep rethink of the role of citizens/customers as bundles of data. This entails new ways of conceiving developing country markets and their actors, and equates power with a range of capabilities of data ownership, management and administration.

In this session, the role of datafication in low- and middle-income country markets will be discussed by three academic researchers with expertise in the field. Different geographic foci of research, as well as diverse theoretical perspectives, will lead us to re-examine the existing orthodoxy on the effect of datafication on economic power, and to propose alternative explanations for it. The session will aim to map existing knowledge on this timely theme and build cross-disciplinary synergies around it.

Suggested participants:

Linnet Taylor, Tilburg University, l.e.m.taylor@uvt.nl
Laura Mann, London School of Economics and Political Science, l.e.mann@lse.ac.uk
Silvia Masiero, Loughborough University, s.masiero@lboro.ac.uk

References:

Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.

Taylor, L., & Broeders, D. (2015). In the name of development: Power, profit and the datafication of the global South. Geoforum, 64, 229-237.

10:45
Piercing the filter bubble bubble – factors and conditions of acceptance of news personalization
SPEAKER: unknown

ABSTRACT. Personalization has been the subject of intense scrutiny ever since Eli Pariser’s extremely influential book on filter bubbles. The basic argument of the book is that personalizing algorithms are locking people in interest-based filter bubbles, which has a number of adverse effects, including the reduction of the diversity of information and opinions people are exposed to, the formation of echo chambers, the subsequent polarization and fragmentation of public discourse, and the disengagement of certain social groups. These claims found their own echo chambers and led to an alarmist discourse on ‘filter bubbles’ in general, and on the roles and responsibilities of online information intermediaries, such as Google or Facebook, in particular. In this paper we revisit some of the assumptions of the original filter bubble argument and subject them to systematic, evidence-based scrutiny.

09:45-11:15 Session 11C: Special Session - Group Privacy: New Challenges of Data Technologies
Location: DZ1
09:45
Drones and robotic surveillance

ABSTRACT. The deployment of drones for surveillance purposes is already a reality: two years ago, Defense News published an article stating that in Italy the military Reapers would be used for “crowd monitoring”. In many other States, law enforcement and intelligence agencies have foreseen the use of small drones for surveillance and anti-terrorism. Such usages pose, however, severe threats to fundamental rights and freedoms. The Art. 29 Working Party (the European Data Protection Authorities Group) issued an Opinion on drones (Opinion 1/2015) stating, inter alia, that “one should also consider the possibility of interconnecting a number of drones in order to carry out surveillance on a large area. Swarms of drones, with real-time communication channels between them and external parties, trigger yet higher data protection risks, since they could easily enable coordinated surveillance, i.e. tracking movements of individuals or vehicles over large areas.” This “coordinated surveillance” can easily be fully automated: an autonomous drone swarm, equipped with facial recognition techniques, and connected to all available sources of metadata, may be able to track (and follow) any individual in a very large area. Such systems cannot be regarded merely as “very smart” flying CCTV cameras: they have much more powerful capabilities. How can the law cope with such “smart” devices? The paper will deal with the new EU Regulation 2016/679 (on the protection of natural persons with regard to the processing of personal data and on the free movement of such data), which introduces the principles of “data protection by design” and “data protection by default” as built-in features of every device involved in the processing of personal data.
The Regulation (Recital 78) clarifies that “when developing, designing, selecting and using applications, services and products that are based on the processing of personal data or process personal data to fulfill their task, producers of the products, services and applications should be encouraged to take into account the right to data protection when developing and designing such products, services and applications and, with due regard to the state of the art, to make sure that controllers and processors are able to fulfill their data protection obligations”. How can such principles be put into practice in the context of autonomous devices? Is there any way to make advanced biometric analysis performed via algorithm-controlled drones compatible with the data minimization principle? How can we embed such regulations into the code governing drone behavior? Can drone swarms be taught to respect the key principles of data protection? The answer to such questions is crucial for the near future, for policy makers and civil society, in order to find the elusive balance between fundamental rights and security.

10:15
Data cultures at the grassroots: Alternative epistemologies and the tech

ABSTRACT. Datafication “reframes key questions about the constitution of knowledge” [1]. Big data have brought about a novel, powerful system of knowledge, with its own epistemology and specific ways of framing, packaging, presenting and activating information. They have fundamentally altered the conditions under which we make sense of the world and act upon it. But novel data countercultures emerge at the fringes of the datafied society, propelled by new forms of civic engagement and political action that I have termed “data activism”.

Data activism indicates the range of sociotechnical practices that interrogate the fundamental paradigm shift brought about by datafication. Data activism supports the emergence of novel epistemic cultures within the realm of civil society, making sense of data as a way of knowing the world and turning it into a point of intervention and generation of data countercultures. Data activism emerges from preexisting movements, such as the hacker and open source counter-cultures. This paper explores data activism as a producer of counter-expertise and alternative epistemologies, reflecting in particular on how these are articulated in relation to technology and software cultures such as the hacker ethics.

[1] boyd, danah, Crawford, K., 2012. Critical questions for Big Data. Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15, 662–679.

10:35
Ankh-Morporkian Law Review: Legal and Administrative responses to Technological Change

ABSTRACT. Ankh-Morpork is Discworld’s largest and (at least according to some Ankh-Morporkians) greatest city. Ankh-Morpork has a rich innovative tradition, mainly due to the presence of the Guild of Alchemists and the Unseen University. More recent technological innovations of a less magical and/or explosive nature have nevertheless posed a complicated challenge to city administrators, most notably the introduction of the printing press and the clacks system. This paper examines how the office of the Patrician has responded to these challenges. Those responses seem to be tailored to each specific situation, but guided by coherent principles. By and large the Patrician chooses a self-regulatory approach, in keeping with Ankh-Morporkian legal tradition. He can intervene quite forcefully, however, when ethical borders are crossed, as well as when the technology itself is deemed to be too big a (societal, health or magical) risk. In closing, this article investigates whether these principles would also be useful in Roundworld, while acknowledging fundamental differences in both the legal structure of Ankh-Morpork and the nature of reality in our separate corners of the multiverse.

NB. The Ankh-Morporkian Law Review is not associated with the Guild of Lawyers. Please refrain from informing Mr. Slant, their president, of this fact.

10:45
Copyright, Technology and the CJEU: An empirical study
SPEAKER: Tito Rendas

ABSTRACT. Since the dawn of copyright law, technological change has been enabling new ways of using copyright-protected works. These technology-enabled uses entail the prima facie infringement of one or more of the exclusive rights conferred by E.U. copyright law – typically the reproduction right and the right of communication to the public. Where an exception to these rights applies, however, third parties may use the protected works without the rightholders’ authorisation. By limiting the scope of exclusive rights, copyright exceptions may provide some breathing space to uses of copyrighted works made possible by emerging technologies. Generally, two distinct legislative techniques may be used to establish exceptions to copyright: a closed list or a vague standard. It has become an established idea among copyright scholars that the closed list of exceptions laid down in article 5 of the E.U. Information Society Directive, together with other existing interpretative constraints, like the rule of strict interpretation of exceptions and the so-called three-step test, has been narrowing down this breathing space, by preventing courts in Europe from accommodating new technology-enabled uses of copyrighted works. The stereotypical argument is that the closed list system, while ensuring legal certainty, limits judicial flexibility in a time of rampant technological change – a time in which flexibility is badly needed. Nonetheless, the authors that make this claim simultaneously point to cases in which Member State courts have refused to wear this statutory straitjacket, by resorting to ample interpretations of the applicable exceptions or to general doctrines and principles, such as abuse of right, in order to justify the accommodation of technology-enabled uses in specific cases. In other cases, courts apply the listed exceptions strictly, finding the technological uses at stake infringing.
This dyad of approaches is apparent at the CJEU level as well: whereas sometimes the CJEU applies exceptions in a strict manner, other times it engages in ample and purposive interpretations thereof, disrespecting the foregoing interpretative constraints. A somewhat counterintuitive problem thus arises: the restrictive system of exceptions in the E.U. coupled with the courts’ occasional willingness to decide cases in a pro-technology fashion seems to create significant uncertainty over the outcomes of these cases. Are there any themes and patterns in this apparently muddled case law? In which types of cases are European courts willing to circumvent the existing interpretative constraints? What factors explain courts’ decisions? While scholarly efforts have been made in the U.S. to show that the oft-repeated idea that fair use case law is unpredictable is far from being true, no study has attempted to systematically understand the European judicial approach to copyright exceptions in the face of technological change. My article seeks to remedy this lacuna in the literature, by using CJEU case law on copyright exceptions and technology-enabled uses as a case study. The relevant cases will be analysed through a qualitative coding system, in order to pursue two fundamental tasks: i) to identify the situation-types in which the CJEU renders emerging technological uses of copyrighted works infringing and non-infringing, and, relatedly, ii) to discern the factors that are motivating the CJEU’s decisions on these cases. The purpose of the article is thus to rationally reconstruct the CJEU’s approach to new technology-enabled uses of copyrighted works, thereby making an essential and unprecedented contribution to copyright scholarship: to enhance certainty qua predictability within this corpus of case law.

09:45-11:15 Session 11D: IAPP (sponsored session)
Location: DZ7
09:45
Data sharing mechanisms and privacy challenges in data collaboratives - Delphi study of most important issues
SPEAKER: unknown

ABSTRACT. In this paper we explore the concept of “data collaboratives”, which stands for cross-sector partnerships to leverage new sources of digital data for addressing societal problems. Many of these new sources of digital data – such as “data exhaust” from mobile apps, search engines, personal sensors and so on – are collected by companies. The data collaborative model invites the private sector to donate some of these data to help advance scientific research or policy interventions in relation to pressing societal problems. Data in a data collaborative scenario undergoes a certain data lifecycle: it is captured, stored, shared, analyzed, and visualized. At each of these stages of the data lifecycle certain privacy issues come into play, which may impact the extent to which the data can be shared in a data collaborative. Literature regarding privacy in this context is still sparse. The challenges highlighted in the literature so far concern: the lack of informed consent of users, the risk of re-identification of persons, ambiguous and country-specific legislation, increased risks in developing contexts, and group privacy. The objective of this paper is to identify and define the most important privacy challenges that need to be addressed in the context of data collaboratives. These challenges are identified using a ranking-Delphi method. In this paper we show the preliminary results based on input from two expert panels: 1) 20 international privacy experts, and 2) 20 international experts within the data collaboratives domain. We discuss and compare the results from these two groups. In our future research we intend to extend our study by including other expert panels such as lawyers, data scientists, and open data experts. Our research aims to provide guidance on how data can be successfully shared in data collaboratives while respecting data protection interests. 
This paper contributes to the discussion about the challenges of using new sources of data for data science and evidence-based policy making.

10:15
The calculation of private copying levies – How much harm do rightholders suffer from private copying?
SPEAKER: Mina Kianfar

ABSTRACT. The CJEU found in the Padawan case that, in principle, the private end user who reproduces the protected work is responsible for financing the compensation to be paid to the rightholder. However, national laws may provide for a private copying levy chargeable to manufacturers or sellers of reproduction equipment under the condition that there are practical difficulties in identifying private users. Since the trade sector can pass on the amount to private end users in the sale price, it is in the end the consumer who bears the costs of private copying. This concept of indirect payment has led to complex levy systems in many Member States, which have been the subject of a number of referrals to the CJEU, the first of which appeared in the Padawan case.

In this early decision the CJEU also made clear that the amount of the levy must be calculated on the basis of the criterion of the harm caused to authors of protected works by the introduction of the private copying exception. In subsequent decisions the CJEU gave further guidance on the calculation of the amount of the private copying levy and the distribution of funds received by way of the levy.

The court confirmed several times that rightholders suffer no harm where protected works are consumed via paid online services. In this regard, Advocate General Wahl just recently called the calculation of the amount of the private copying levy “a vexed question in today’s digitalized world” in his opinion in the Microsoft case. According to AG Wahl, “the levy system was put in place because, in the ‘offline world’, levies were the only way to ensure that rightholders were compensated for copies made by end users. That does not fully correspond to the digitalised online environment in which copyrighted material is used today.” […] “Today, however, as is well-known, it seems that private copying has at least partly (if not largely) been substituted by various kinds of internet-based services that allow rightholders to control the use of copyrighted material through licensing arrangements. Despite those technological developments and the arguably declining practical importance of private copying, the private copying exception is still widely applied in the European Union.”

In light of the concrete guidance provided by the CJEU on the calculation of copyright levies, one would assume that collection societies have long established an ingenious strategy to calculate how much harm rightholders suffer from private copying. The calculation of the exact amount of harm is in fact necessary in order to avoid overcompensation which according to the CJEU is not “compatible with the requirement, set out in recital 31 in the preamble to Directive 2001/29, that a fair balance be safeguarded between the rightholders and the users of protected subject-matter.”

It is difficult to understand the math behind the setting of tariffs since collection societies are not obliged to disclose any information on the methods used to determine the correct amount of compensation when they publish their tariffs. The Collective Management Directive, which imposes information and transparency obligations on collection societies, can help to shed some light on the issue. According to the Directive, collection societies at least have to define objective criteria on the basis of which their tariffs are determined.

This paper aims to compare the criteria collection societies use for the setting of tariffs with the requirements formulated by the CJEU case law. Do tariffs provide for a fair compensation that corresponds to the amount of harm rightholders suffer due to private copying?

10:45
Investigating mHealth Privacy Frameworks for managing Chronic Diseases
SPEAKER: Farad Jusob

ABSTRACT. This paper reviews and examines the challenges of preserving user privacy in the context of using mHealth to manage chronic diseases. The paper first discusses mHealth, its importance in managing chronic diseases, and the associated privacy concerns. Secondly, the paper compares existing privacy frameworks applicable to mHealth. Thirdly, the key principles gathered from the frameworks are analysed in the context of their suitability for enabling adequate privacy when using mHealth for managing chronic diseases. Finally, the paper will propose optimal specifications for developing a privacy framework for mHealth in the context of managing chronic diseases.

09:45-11:15 Session 11E: HC 4 - Social and economic dimensions of big data in health care
Location: DZ5
09:45
Reflections on the scope of mandate of European CMOs
10:15
A Reasonable Woman's Expectation of Privacy

ABSTRACT. Various doctrines in the realm of privacy law involve a reasonable expectation of privacy. In the U.S. public-law context, Fourth Amendment protections get triggered only where there is a reasonable expectation of privacy. In the U.S. private-law context, three of the four traditional privacy torts — public disclosure of private facts, intrusion upon seclusion, and false light — ask whether a particular privacy invasion would be “highly offensive to a reasonable person.” Although less dominant, the reasonable expectation of privacy concept has also arisen in the European context.

Neither courts nor scholars, however, have directly grappled with the question of whether as a normative matter courts should take into account the gender of the individual whose privacy has been invaded and the corresponding gendered privacy norms in deciding what constitutes that individual’s reasonable expectation of privacy. Put differently, are there ever situations in which courts should take into account a reasonable woman’s expectation of privacy, to the extent that it might differ from a reasonable man’s expectation of privacy in light of different existing societal gendered privacy norms?

This paper explores that question by delving into the specific case study of monitored employee drug testing, a scenario in which the reasonable expectation of privacy may differ for women versus men at least in light of current gender norms. Descriptively, existing U.S. cases have taken different approaches to gendered privacy norms in these cases, but none have justified or explained their chosen approach. The paper therefore wrestles with five possible approaches to cases in which a woman’s expectation of privacy may differ from that of a man: 1) an express gender norm approach; 2) a silent gender norm approach; 3) a gender norm floor approach; 4) a gender-irrelevant approach; and 5) a gender-blind approach.

This paper is situated within a number of literatures, including the literature on workplace privacy and drug testing, the literature on gender and privacy in prisons, and the more general literature on gender and tort law in the context of the reasonable man concept.

10:45
Copyright’s view on libraries in a connected world
SPEAKER: Vicky Breemen

ABSTRACT. A library is a library? The answer actually depends on the perspective, such as common understanding, library and information sciences (LIS), or copyright law. Against the background of European copyright reform, this paper aims to critically assess to what extent copyright’s view on libraries either reflects the concept’s evolution or sticks to a traditional understanding. The first, evolving view is signaled in LIS literature: libraries are developing functionally and institutionally, meaning that they are disentangling from physical places, operating digitally and encompassing broader spaces. The second, traditional view derives from a common understanding of libraries as physical buildings, containing collections for use. In other words, one of the contemporary challenges stemming from technological developments surfaces in the context of libraries and copyright law, connecting the disciplines of LIS and copyright law. It presents the following legal and regulatory issue: does the present design of the so-called ‘library privilege’ in copyright law sufficiently cater to the current realities of a connected world? If not, how should it be changed?

To answer these questions, the paper scrutinizes libraries through a copyright lens, mostly from a European point of view but including comparisons where relevant. In doing so, it elaborates the following lines of reasoning. First, it briefly sketches different perceptions of ‘libraries’. It turns out that the notion can be characterized institutionally and functionally. The institutional/functional approach is chosen to distill recurring factors to facilitate the legal analysis. Second, the partly non-legal findings inform the copyright assessment, which will examine the scope and rationales of selected library exceptions in copyright law. These provide carve-outs from the exclusive rights. It is apparent that the European copyright legislator has recognized libraries’ digital developments, but principally failed to translate this to the law itself. As the law seems limited and vague due to heavily relying on physical boundaries, it is no longer clear to what extent the library exceptions in copyright law extend to digital library activities. Apart from the scope of exceptions, their envisaged beneficiaries raise questions: matters become even more complicated due to the rise of new actors in the networked information environment, who may fulfill similar access-related functions. Is a ‘library privilege’ then still justified, or especially warranted? Third, libraries and copyright law are measured against each other, connecting the domains of LIS and copyright law. The concrete design of the library exceptions in copyright law sometimes appears to work against their own purposes. Therefore, taking both the evolving library concept and the rationales of the present library privilege into account, the paper concludes by discussing how the contours of the ‘old’ library exceptions should be re-adjusted institutionally and functionally to better reflect the digital side of library functions.

In conclusion, the paper is based on a set of central assumptions, which inform the story explicitly or implicitly. First, it finds that libraries and copyright law have partly shared functions and goals regarding the organization and dissemination of information, furthering free speech and culture. It fosters the assumption that copyright law then should facilitate library functioning at least to some extent, also in the digital domain. Recent examples illustrate this assertion, such as the European Parliament’s non-binding resolution (2015) and the ECJ’s preliminary ruling in VOB/Stichting Leenrecht (2016). Second, it maintains that human rights color the functioning and role of copyright law and libraries. Third, it holds that both copyright law and libraries have always responded to technological developments, which in turn shaped their relationship. Fourth, taking the foregoing assumptions together, it contends that the ‘library privilege’ in copyright law should balance all interests involved, in that sense containing a ‘minimum safeguard’ for all sides. Fifth, as the privilege needs to be delineated to some extent, it concludes that a combined institutional and functional approach is inevitable in the context of the library privilege’s contours.

Biography: Vicky Breemen graduated cum laude from the Institute for Information Law’s research master (IViR, University of Amsterdam, 2012). For her interdisciplinary research master’s thesis on the library privilege in the digital environment, she was awarded the Victorine van Schaickprijs NVB 2012 and the University of Amsterdam Thesis Award 2013. Interested in copyright law, culture & law and freedom of expression, she is currently writing her PhD thesis at IViR on the relationship between copyright law and libraries.

09:45-11:15 Session 11F: IP5: Panel: Copyright lawmaking in the EU
Location: DZ4
09:45
The canary in the data mine - the technological, legal, ethical and organizational infrastructures of research into algorithmic agents
SPEAKER: unknown

ABSTRACT. The objectives of our paper are two-fold. The first is to describe one possible approach to researching the individual and societal effects of algorithmic recommenders, and to share our experiences with the academic community. The second is to stimulate a more fundamental discussion about the ethical and legal issues of tracking the trackers, as well as the costs and trade-offs involved, e.g. for the privacy of those users we are observing. Our paper will contribute to the discussion on the relative merits, costs and benefits of different approaches to ethically and legally sound research on algorithmic governance. We will argue that besides shedding light on how users interact with algorithmic agents, we also need to be able to understand how different methods of monitoring algorithmically controlled digital environments compare to each other in terms of costs and benefits. We conclude our article with a number of concrete suggestions for how to address the practical, ethical and legal challenges of researching algorithms and their effects on users and society.

10:05
The Matrix Has You: Indoctrination, from Inception to Donald Trump and Contemporary Populism

ABSTRACT. A long-standing assumption in constitutional democracies, reflected in their legal systems, is that each individual has inalienable freedom of thought – the voter is the best-equipped to decide what is in their best interest, and the customer is always right. Recent findings in psychology and behavioral sciences have hollowed out these idealistic assumptions, and neuroscience is finishing the job. By playing on biases, pressing emotional buttons and hacking attention spans, propaganda and populism are eroding the bases of informed democracy. From manipulated Facebook feeds to the urban landscape filled with non-stop advertisements, minds are overloaded with information – but is it empowering as the utopians dreamed, or systematically structuring and framing minds, channeling them into predictable pathways?

Inception depicted how a complex idea needs to be boiled down to its most basic emotional imprint in order to take root in a mind and thereby control it. The Matrix was a visceral depiction of mass delusion, systematic control and the untrustworthiness of our own minds. Using these movies as references, I will build on my previous research concerning how religious indoctrination impairs autonomy to explore the roles of communication technologies and sophisticated psychological manipulation in contemporary societies. I will then connect this to what the role of the law should be: 1) in re-examining its basic assumptions, and 2) in tackling the challenges posed by “dirty mind tricks” and “junk food for thought”.

10:15
Waking up John Spartan: Locating resistance in Demolition Man's "smart city"

ABSTRACT. It is often said that one day we will wake up and find ourselves in George Orwell’s 1984. Instead, I submit that we are sooner racing towards 2032, to a “smart city” closer to that seen in the science-fiction film Demolition Man.

The term “smart city” obscures as much as it reveals. Yet, what is clear is that we are approaching a time when data-driven machine decision making, automated law enforcement and hybrid cyber-physical architectures become an ordinary part of the lived experience. In this paper, I aim to scratch the surface of how regulatory dynamics are changing in these urban spaces by looking at the technologies encountered by John Spartan (Sylvester Stallone) as he rediscovers San Angeles after being cryogenically frozen. As decision-making power is delegated to and exercised by automated systems like these, the capabilities of the machine extend to realms of access and opportunity that may work for the many, but are just as likely to present friction and barriers to the few. Thus, the threat exists that decision-making processes are left to exercise power in ways that are unbeknown to and unchallengeable by people. As much as conceptual questions of governance arise to which legal scholarship must apply itself, normative questions of how contestability and resistance are to operate also pervade the practice of (re-)making the city.

The exploration of what Demolition Man got right about the future is a playful one, yet contained within this sits some broader points for consideration. First, the need to accommodate methods through which to resist and challenge these new environments. Second, the question of how legal scholarship can contribute to building a critical discourse of the smart city and benefit itself from the exercise of ‘futuring’ the law.

10:25
Healthcare information systems as critical infrastructure: Toward a research agenda for cybersecurity in healthcare

ABSTRACT. Confidentiality in the medical encounter is crucial to providing adequate patient care. Health data is therefore privileged and protected by legal mechanisms. Health systems increasingly use electronic records and large-scale databases, while individuals also use IT to collect, store and share data about their daily lives and health behaviors. Sharing data via network-based systems or storing it ‘in the cloud’ produces multiple ‘digital selves,’ health ‘data doubles’ and ‘virtual patients.’ However, this produces much data in absence of clear governance structures, blurring the view of further use and handling of this data. The networked, distributed nature of health data collection and the increasing convergence of protected hospital systems with commercial collection and aggregation of data and consumer health technologies further exacerbates this problem. Moreover, it is unclear if (and how well) institutions are equipped to deal with system breakdowns – including both internal glitches and targeted external attacks. This brings the issues of confidentiality of the medical encounter and protection of patient privacy into the realm of cybersecurity. However, while health is often named as part of the “critical infrastructure” in need of strong cyber-protective mechanisms, few cybersecurity studies have specifically focused on protecting health systems. Conversely, where cybersecurity in healthcare is studied, scholars tend to limit their focus to social and organizational issues, such as professional passwords. This talk presents a research agenda for cybersecurity of healthcare information systems as critical infrastructure, exploring possible impacts on the governance of critical IT infrastructures and mitigation of threats, as well as related sociotechnical challenges of protecting large-scale HIT systems.

10:55
Online Reviews as an Alternative to Information Duties in European Consumer Law – A Critical Analysis

ABSTRACT. Online reviews have changed our everyday lives – information is one click away. Nowadays, consumers resort to online reviews when it comes to making practically any contractual decision: which hotel to book, which phone to buy or which mobile app to download. This is a prime example of how technology is changing society: Booking.com has had over 106 million reviews and TripAdvisor reports more than 385 million. But is technology also changing law? In a world where the current information paradigm in consumer law – the existence of mandatory information duties in contracts – is seen by a growing number of scholars as ineffective and unable to protect consumers in an increasingly technological society, online reviews have been suggested as a regulatory alternative. That option, however, remains unexplored. In this paper, I will answer the question ‘can online reviews be an alternative to information duties in European consumer contract law?’. I will do so by first providing a critical overview of the main arguments against the information paradigm in consumer law. Secondly, adopting an interdisciplinary perspective, I will characterize online reviews: what their social function is, what their main weaknesses are and how this type of reputation system has blossomed in the sharing economy. For that, I will draw from legal, economic and sociological literature. Finally, I will define online reviews as ‘advice-like information’ – opposed to ‘mandatory/technical information’ that presently rules in European consumer law – and explain how this is one of the strongest arguments towards an affirmative answer to the research question. This paper is a first exploratory approach to what would be a radical change in European consumer law – the possibility of replacing information duties with unregulated online reviews, designed and maintained by online platforms.

09:45-11:15 Session 11G: Privacy 7: Panel - Internet Privacy Engineering Network (IPEN)
Location: Black Box
09:45
mMoney, Mobility and Healthcare: Towards a Meta-Theory of System Diversity
SPEAKER: unknown

ABSTRACT. [NOTE with respect to the choice of track: This paper deals with ICTs more generally, but the fundamental idea does relate closely to issues indicated at the data science track. As this is work in progress, we are open to framing the work more closely towards data or positioning it towards another track if desired.]

The development and design of technology is a normative activity, informed by context, understanding and values. Epistemological awareness of the technological mediation of societies is needed for the demonstration of the normative social dimensions of technology (Scalambrino, 2016; Buskens and Reisen, van, 2016). At least two levels can be distinguished when considering the normative nature of technology: 1. context-dependent norms and assumptions of different kinds (legal, economic, cultural, political), which inform explicit and implicit requirements for technology; and 2. the choices made with respect to default operating circumstances.

The understanding of technological design and development as a normative activity proves particularly pertinent when technologies and infrastructures from the so-called developed world are used for Development, in particular with ICT4D. ICT4D aims to “help in international development by bridging the digital divide and providing equitable access to technologies.” (Unwin, 2009: 9) Unwin defines development of ICTs for underdeveloped areas in terms of their inclusion in the global processes of growth and progress, identifying the sharing of ICTs as a global public good. Underlying the paradigm of ICT4D is the recognition that the inclusion of ICTs in the global world should be a basic human right. The principal design and development mode of ICT4D is the transfer of ubiquitous technologies designed and developed in developed areas to settings that are defined as non-developed. Some authors have started to assess the (negative) impact of development under this paradigm for a localized understanding of the relevance of ICTs in their specific contexts (Gomez & Pather, 2012).

This article will demonstrate the fundamental flaw in the premise that technology can be transferred from one context to another and can be seen as a linear continuation of global design that can bring growth in another location. It will be argued that the ontology of a different infrastructure for ICTs needs to be understood and recognized as a basis for localized relevant research and design of ICTs (Johnson and Stam, van, undated). This requires a shift from the paradigm of ubiquitous computing to one that recognizes intellectually and consequentially the necessity of theories of locatedness of innovation (Dourish & Mainwaring, 2012).

The ontological variation of systems and how these differ in various settings can be described. In this article we define this as ‘System Diversity’. Constituent elements of a system in underdeveloped rural areas can include congestion, outmoding and latency (Johnson and Stam, van, undated). A denial of the variation of ontological and epistemological realities for ICTs in rural development settings is described as the result of a neo-colonial default setting (Dourish & Mainwaring, 2012; Mawere & Stam, van, 2015) that should be underpinned with alternative research paradigms (i.e. Galtung, 1971). A recognition of System Diversity can provide a basis for technological innovation at the local level, such as, for instance, rural internet networks (Greunen, van & Stam, van, 2015) or the use of TV White Spaces (Johnson and Mikeka, 2016). Corporate strategies to circumvent some of the constraints resulting from System Diversity aim to promote a model of ubiquitous development in ICTs, recognizing system constraints in local settings (Brodkin, 2016).

In this article we explore how System Diversity is the result of different ontologies that determine the use of ICTs. Our case study is the development of mobile money and the subsequent use of such technologies for cross-border remittances, which is changing the economic landscape in rural developing settings (Reisen, van, et al., 2016). Identifying the use of remittances for health care demonstrates how deeply ICTs impact on such settings, but in unexpected and unintended ways (cf. also Reisen, van, et al., 2016/2017).

This article will explore to what extent a different infrastructural ontology of ICT systems results in alternative design and development which serves local communities in ways that they understand and find useful. The example of mobile money and remittances can help to show how the diversity in the ontology, culture and norms of the systems at play (the ICT infrastructure, the local community, the technology, market mechanisms, etc.) must be fundamentally acknowledged as a theoretical paradigm. We believe that the discourse would be served by a better conceptualization of these issues. We are working towards a meta-theory of System Diversity, which connects a description of different ontologies of ICT architectures with an understanding of the epistemological embedding of ICTs in rural settings in developing countries.

References:

Buskens, I. & Reisen, van, M. (2016) ‘Theorising agency in ICT4D: Epistemic sovereignty and Transformation-in-Connection’ in Mawere, ed., Underdevelopment, development and the future of Africa, Langaa RPCIG: Bamenda, pp. 394-432.

Brodkin, J. (2016) Can’t Stop Elon Musk. SpaceX plans worldwide satellite Internet with low latency, gigabit speed. SpaceX designing low-Earth orbit satellites to dramatically reduce latency. Ars Technica (17 November 2016): http://arstechnica.com/information-technology/2016/11/spacex-plans-worldwide-satellite-internet-with-low-latency-gigabit-speed/

Dourish, P. and Mainwaring, S. (2012) Ubicomp’s Colonial Impulse. In: UbiComp’12, 5-8 September 2012, Pittsburgh, PA, USA.

Galtung, J. (1971) A Structural Theory of Imperialism. Journal of Peace Research, 8(2): 81–117.

Greunen, van, D. and Stam, van, G. (2014). Review of an African Rural Internet Network and related Academic Interventions. The Journal of Community Informatics, 10(2).

Gomez, R. & Pather, S. (2012) ICT Evaluation: Are we asking the right questions? EJISDC, 50(5), 1-14.

Johnson, D.L. & Mikeka, C. (2017) Bridging Africa’s Broadband Divide. How Malawi and South Africa are repurposing unused TV Frequencies for rural high-speed internet spectrum. IEEE Spectrum (29 August 2016), available at: http://spectrum.ieee.org/telecom/internet/malawi-and-south-africa-pioneer-unused-tv-frequencies-for-rural-broadband

Johnson, D. & Stam, van, G. (undated) The Shortcomings of Globalised Internet Technology in Southern Africa (unpublished).

Mawere, M. and Stam, van, G. (2015) ‘Paradigm Clash, Imperial Methodological Epistemologies and Development in Africa: Observations from rural Zimbabwe and Zambia.’ In: Munyaradzi Mawere and Tendai Mwanaka, editors, Development, Governance, and Democracy: A Search for Sustainable Democracy and Development in Africa, pp. 193–211. Langaa RPCIG, Bamenda.

Reisen, van, M., Fulgencio, H., Stam, van, G., Ong’ayo, A., Dijk, J. van (2016) mMoney Remittances: Contributing to the Quality of Rural Health Care. Paper presented at Africomm 2016, December 2016.

Reisen, van, M., Gerrima, Z., Ghilazghy, E., Kidane, S., Rijken, C., and Stam, G.J., (2016/2017). Tracing the Emergence of ICT-Enabled Human Trafficking for Ransom. (Rijken ed.) Handbook for Human Trafficking. Routledge: London.

Scalambrino, F. (2016) Social Epistemology and Technology. Towards Public Self-Awareness Regarding Technological Mediation. Rowman & Littlefield. London, New York.

Stam, van G., (2013), “African Engineering and Colonialistic Conditioning” in Fifth International Conference on e‐Infrastructure and e‐Services for Developing Countries (Africomm 2013), Blantyre, Malawi

Unwin, Tim (2009). ICT4D: Information and Communication Technology for Development. Cambridge University Press. Cambridge. ISBN 9780521712361.

10:15
What are the Legal Aspects in the Use of Social Robots in Therapeutic Contexts?

ABSTRACT. In 2015, Prof. Dr. Albo-Canals was working on a project called Robotic Companions and LEGO® Engineering with Children on the Autism Spectrum at the CEEO, Tufts University [1]. The main objective of the study was to measure the effect of LEGO engineering and its collaborative nature on the development of social skills in children and adolescents with Autism Spectrum Disorder (ASD). They used logger robots connected to a cloud system, combined with the traditional recording and coding system, as a facilitator of data acquisition. The robots participated actively in the classroom, playing the role of master to help students work together and achieve classroom goals.

In the course of his research, Albo-Canals faced several questions about how he could ensure the protection of the rights and interests of the children while collecting sensitive data from them. That is how he met Fosch-Villaronga. In the course of their discussions, other questions beyond the collection of data arose: how often can robots be used with the children? To what extent can researchers push the creation of an emotional bond between the robot and the child? Can this therapy negatively influence the behavior of the child in the future? The most important question, in any case, was: what are the legal aspects that we need to take into consideration in our project?

The truth is that while there has been an effort from the policy perspective to address the legal implications of drone and autonomous driving technologies, and several guidelines on their appropriate use have been released [2-7], there is currently no specification of the legal issues surrounding the use of service robots in therapeutic contexts, nor of the rules that govern them, even though a great deal of research on the technical side has been carried out [8]. Until now, any research in this domain has fallen under Institutional Review Board approval, with the Board in charge of ensuring that all the rights of the children are protected while the experiment is conducted. However, what happens when industry creates these robots?

These robots, also called Socially Assistive Robots (SAR), raise specific concerns due to their nature, precisely because they work on a different human-robot interaction basis than drones or autonomous vehicles (i.e. a cognitive one) and they do so with vulnerable parts of society. To unleash the full potential of these technologies, which are already hitting the market, while at the same time protecting the interests of the users involved, an appropriate framework should be established.

The closest example of guidelines in this regard is the Robolaw project, which in 2014 provided general guidelines on the regulation of emerging robotic technologies, including care robots [9]. It stated that definition, independent living, liability, safety and security, and privacy issues needed to be addressed in this domain. A generic framework including these aspects, however, may lack precision when applied to particular cases, which will always need further concretization [10]. An example of this can be found in a very recent report with recommendations to the European Commission on civil law rules on robotics: in May 2016 the European Parliament highlighted that human contact in care-robot contexts is fundamental and that replacing the human factor with robots could dehumanize caring practices [11]. Leaving aside the fact that this is the only provision under the “care robot” section, the truth is that most of these technologies are focused on promoting human-human interactions. The Tufts University CEEO project, for instance, focuses on promoting human-human interaction among non-neurotypical children, as these children typically lack social skills. What safeguards should such projects apply, then, if they already meet that clause?

Linked to this concretization, and due to their intended use, it remains unclear whether service robots in therapy are care robots, toy robots, or medical devices [12]. Current service standards cannot help in the provision of a framework either. The reason is that while they focus on physical human-robot interaction hazards [13], all aspects of cognitive human-robot interaction are overlooked: to what extent can the robot act upon the emotions of the user? Should robots be trustworthy? Can the robot force human-human interaction? [14]

This article provides a legal dimension to what has so far been covered only from the ethical point of view [15]. We will identify precise risk scenarios that could challenge the rights of the users (from data protection to autonomy or dignity), and we will give content to these principles and rights. Keeping the bottom-up approach in mind, the article is oriented towards creating the basis of a future framework on the use of service robots in ASD research, which is currently lacking even though the number of related European projects is growing [16-17].

References
1. See the project at http://roboautism.k12engineering.com/?page_id=2
2. Pillath, S. (2016) Automated Vehicles in the EU. European Parliament Research Service. Available at: ec.europa.eu/transport/themes/its/studies/doc/2012-its-and-_personal-data-protection_-_final_report.pdf
3. G7 declaration on automated and connected driving: ec.europa.eu/commission/2014-2019/bulc/announcements/g7-declaration-automated-and-connected-driving_en
4. Automated Driving Roadmap (2015), European Road Transport Research Advisory Council. Available at: www.ertrac.org/uploads/documentsearch/id38/ERTRAC_Automated-Driving-2015.pdf
5. For a compendium of legislation on self-driving vehicles in the U.S., visit the website of the National Conference on State Legislation: www.ncsl.org/research/transportation/autonomous-vehicles-legislation.aspx
6. For drones, see European Aviation Safety Agency EASA (2015) Introduction of a regulatory framework for the operation of drones, at www.easa.europa.eu/system/files/dfu/A-NPA%202015-10.pdf
7. For its American counterpart, see the Small UAS Rule (Part 107): Federal Aviation Agency (August 2016) Operation and Certification of Small Unmanned Aircraft Systems. Available at: www.federalregister.gov/documents/2016/06/28/2016-15079/operation-and-certification-of-small-unmanned-aircraft-systems
8. Scassellati, B., et al. (2012). Robots for use in autism research. Annual Review of Biomedical Engineering, 14, 275-294.
9. Palmerini, E., et al. (2014). Guidelines on Regulating Robotics. RoboLaw Project, Deliverable 6.2.
10. Feil-Seifer, D., & Matarić, M. J. (2011). Ethical principles for socially assistive robotics.
11. Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), May 2016. Available at: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN
12. Albo-Canals, J. (2015) Toy Robot versus Medical Device. Newfriends 2015, Conference Proceedings.
13. ISO 13482:2014 Robots and Robotic Devices – Safety Requirements for Personal Care Robots.
14. Fosch-Villaronga, E., et al. (2016) An Interdisciplinary Approach to Improving Cognitive Human-Robot Interaction – A Novel Emotion-Based Model. What Social Robots Can and Should Do, 290, pp. 195-205.
15. Coeckelbergh, M., et al. (2016). A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: Ethical acceptability, trust, sociability, appearance, and attachment. Science and Engineering Ethics, 22(1), 47-65.
16. De-Enigma project, see www.autismeurope.org/activities/projects/project-de-enigma-multi-modal-human-robot-interaction-for-teaching-and-expanding-social-imagination.html
17. DREAM project, see http://www.dream2020.eu

10:45
Assessing Information Security Regulations for Domestic and Industrial Cyber-Physical Systems

ABSTRACT. Security incidents facilitated by networked devices are occurring more frequently, across a range of different settings and cyber-physical systems. Recent instances include distributed denial of service (DDoS) attacks mediated, in part, by botnets of compromised IoT devices [1], and industrial control system (ICS) hacks compromising physical safety in factories and power plants [2]. In this paper, we will unpack where emerging regulatory responsibilities for information security lie in both domestic/personal and infrastructural/industrial contexts. We will analyse how engineering solutions can emerge that reflect the regulatory landscape and the interests of different stakeholders, particularly users. Accordingly, this paper will adopt a multidisciplinary approach, drawing on both IT law and computer science perspectives.

We will map security vulnerabilities arising from domestic and industrial cyber-physical systems such as smart homes and the industrial internet of things/industry 4.0 [3]. We will analyse the role of the new EU Network and Information Security (NIS) Directive 2016 in establishing a regulatory framework around the security of both critical infrastructure and domestic IT services. We are particularly interested in how this framework impacts different stakeholders, from users to technology manufacturers and the surrounding ecosystem of service providers. NIS has provisions for both settings; for example, the measures in Art 16 NIS around cloud services are relevant for domestic IoT. In conjunction, we will also assess the new elements of the EU General Data Protection Regulation (GDPR) 2016 on security of personal data, especially breach notification.

Most importantly, we will use our analysis to propose organisational and technical responses to address such risks, as per Chapters IV and V of NIS 2016, and the GDPR 2016 Art 30-32. In particular, the technical responses involve a turn to network and security engineers, and our paper will propose solutions that reflect both legal and technical dimensions of information security for cyber-physical systems. Accordingly, we will draw on concepts such as security by design and usable security.

As a final consideration, both EU instruments (NIS and GDPR) are set to come into force in May 2018. Given the recent UK EU referendum result, we will also examine to what extent the new UK Cyber Security Strategy 2016-2021 [4] aligns with both these legislative instruments and the wider EU Cybersecurity Agenda [5].

References:

(1) Krebs, B. (2016) ‘Hacked Cameras and DVRs Powered Today’s Massive Internet Outage’, Krebs on Security, at https://krebsonsecurity.com/2016/10/hacked-cameras-dvrs-powered-todays-massive-internet-outage/ ; Koene, A. and McAuley, A. (2016) ‘Could Your Kettle Bring down the Internet?’ The Conversation, at http://theconversation.com/could-your-kettle-bring-down-the-internet-67650

(2) Zetter, K. (2015) ‘A Cyber-Attack Has Caused Confirmed Physical Damage for the Second Time Ever’ Wired.com at https://www.wired.com/2015/01/german-steel-mill-hack-destruction/ ; Zetter, K. ‘An Unprecedented Look at Stuxnet: The World’s First Digital Weapon’ Wired.com at https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/

(3) Barnard Wills, D. et al (2015), Threat Landscape for Smart Home and Media Convergence, ENISA at https://www.enisa.europa.eu/publications/threat-landscape-for-smart-home-and-media-convergence

(4) UK Government (2016) ‘Government Cybersecurity Strategy 2016-2021’ at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/567242/national_cyber_security_strategy_2016.pdf

(5) EU Cyber Security Strategy (2013) ‘An Open, Safe and Secure Cyberspace; at http://www.eeas.europa.eu/policies/eu-cyber-security/

09:45-11:15 Session 11H: Alumni 1
Location: DZ2
09:45
The Privacy Bell: Re-Thinking Privacy Law and Loss

ABSTRACT. “What do you mean by privacy loss?” While the issue of privacy loss is central to privacy law scholarship, a clear definition of the concept remains elusive. We present a simple model that defines privacy loss in a way that can be applied to policy evaluations. In this model, when a person attempts to learn personal information about a target person, he observes a random draw from a distribution centered at the “true value” of that information. After aggregating these draws with his prior, he forms a subjective distribution, which represents his new, better-informed beliefs. Privacy is defined as the standard deviation of this distribution. The utility of the target person is assumed to be an increasing concave function of that standard deviation: people like having more privacy and, as with most things, privacy has decreasing marginal utility. This framework has the advantage of taking privacy preferences seriously while maintaining tractability for policy analyses. Because it sees privacy as a continuum rather than a binary, it is also more realistic than other models of privacy loss, and is well suited for evaluating “grey areas.” We then discuss three applications of this framework to contemporary legal and policy issues. First, we apply it to Posner’s criticism of privacy as an inefficient increase in asymmetric information, and show how privacy can actually increase social welfare. Next, we apply our model to the common law privacy tort, and argue that it both helps clarify current law and suggests a simple framework for judges to use in providing remedies going forward, mainly by distinguishing privacy interests from reputational interests in privacy law. Finally, we apply it to the third party doctrine, to show that many of its problems and insufficiencies stem from the fact that it assumes a binary and overly narrow concept of privacy (secrecy). Attempts to fix it, such as the existence of options and the concept of information fiduciaries, have been useful but insufficient.
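The model described above can be made concrete with a short numerical sketch. The Gaussian-conjugate form and the log utility below are our own illustrative assumptions for definiteness, not the authors' exact specification: the observer's posterior precision is the sum of prior and observation precisions, privacy is the posterior standard deviation, and utility is any increasing concave function of it.

```python
# Illustrative sketch of "privacy as posterior standard deviation".
# Assumed specifics (not from the abstract): Gaussian prior and draws, log utility.
import math

def posterior_std(prior_std: float, obs_std: float, n_obs: int) -> float:
    """Std. dev. of the observer's beliefs after n_obs noisy draws
    centered at the true value. Conjugate-Gaussian update: precisions
    (1/variance) add, so more draws mean a tighter subjective distribution."""
    prior_prec = 1.0 / prior_std**2
    obs_prec = n_obs / obs_std**2
    return math.sqrt(1.0 / (prior_prec + obs_prec))

def utility(privacy: float) -> float:
    """Any increasing concave function works; log is one common choice."""
    return math.log(privacy)

# Each additional observation shrinks the posterior std, i.e. reduces privacy,
# but with diminishing effect: privacy loss is a continuum, not a binary.
levels = [posterior_std(prior_std=10.0, obs_std=5.0, n_obs=n) for n in range(4)]
assert all(a > b for a, b in zip(levels, levels[1:]))   # privacy falls monotonically
losses = [a - b for a, b in zip(levels, levels[1:])]
assert all(x > y for x, y in zip(losses, losses[1:]))   # marginal privacy loss shrinks
```

On these assumptions, a "grey area" disclosure is simply one that narrows the subjective distribution somewhat without collapsing it, which the binary secrecy concept criticized in the third-party-doctrine discussion cannot express.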

09:45-11:15 Session 11I: Privacy 8: Privacy and Technology
Location: DZ3
09:45
Panel: Covert Policing in the Digital Age
SPEAKER: Gregor Urbas

ABSTRACT. Covert policing has adapted to the digital age, taking advantage of the anonymity and potential for identity manipulation associated with the online environment. To paraphrase the old cartoon line, ‘On the Internet, nobody knows you’re a cop’: the investigation of child sex predators has led police to use “sting operations”, posing as children in chat rooms and other social media, justified by the need to identify threats to children pre-emptively. In investigating online child pornography and other kinds of criminal networks, honeypot sites have been used and peer-to-peer networks have been infiltrated, through hacking or by assuming an informant’s identity. Some non-government organisations have taken these techniques further, as in the “Sweetie Project” developed by Terre des Hommes, which used an avatar-chatbot posing as a 10-year-old Filipina girl targeted for webcam sexual services. Some of these investigations have led to prosecutions and convictions, requiring courts to consider the legality of covert techniques and the admissibility of evidence thereby obtained. Furthermore, an additional problem arises when ‘policing’ action is initiated by members of the public, especially when they feel that the police have failed with regard to morally sensitive crimes such as sexual abuse involving children. There is a growing body of evidence showing that some of these latter actions have not only compromised official covert police operations, but have also led to the suicide of suspects. Clearly a balance has to be struck between lawful and effective actions, which may also involve engaging with the public to achieve acceptable policing outcomes. This panel presentation reviews the development of covert policing, its recent online manifestations, and the legal and ethical acceptability of covert online investigations.
Viewpoints from several different jurisdictions will be presented by the international panel of three cybercrime experts, as well as an invited local discussant.

11:15-11:30 Coffee Break
11:30-13:00 Session 12A: PLSC 6A: Cofone and Robertson
Location: DZ6
11:30
UNMANNED FUTURES

ABSTRACT. With the ongoing developments in artificial agency, a new kind of agent is emerging in society. The impact of having artificial agents (AA) in society is uncertain and difficult to predict. One of the expected side effects is a gap in accountability for the actions of AA. This paper looks at the legal framework and current state of AI, and at a future in which, it is assumed, AI will develop into AA that work, play and interact with non-artificial agents. Is the legal framework AI/AA-proof?

The artistic aspect of the research includes visual material that I would love to show in an accompanying exhibition if possible.

11:40
Watching our neighbours: The negotiation of privacy in neighbourhood watch messaging groups
SPEAKER: Anouk Mols

ABSTRACT. Neighbourhood watch groups monitoring and patrolling neighbourhoods have become increasingly popular in the Netherlands. Yet their growth has been surpassed by the emergence of mobile messaging neighbourhood watch groups, which both supplement and operate separately from more formal neighbourhood watch initiatives. In these messaging groups, neighbours are connected through WhatsApp (or a similar communication application) in order to exchange warnings, concerns and information about incidents, emergencies and suspicious situations in their neighbourhood. However, beyond increasing (perceptions of) neighbourhood safety and community, neighbourhood watch messaging groups can also lead to unwanted interpersonal surveillance, ethnic profiling, vigilantism, increased anxiety, communication overload and tensions among participants. Following prior research framing these groups in relation to neighbourhood safety, this paper provides a multidimensional understanding of the personal consequences of neighbourhood watch messaging groups. Based on in-depth interviews with moderators of these groups and focus groups with participants, this research provides an understanding of users’ practices and experiences and their relation to concerns about data, surveillance and privacy. It focuses on practical and mundane (privacy) activities, and our preliminary findings indicate that neighbourhood watch messaging groups are diverse in nature and content, and are experienced as both helpful and precarious. Users struggle to come to terms with the home as a private space alongside an interconnected online neighbourhood. This paper prioritises a user-oriented account of the relationship between neighbourhood safety, privacy and interpersonal surveillance within a relatively new communication phenomenon.

12:10
Cybersecurity of small and medium business networks in the world of the internet of things

ABSTRACT. The current regulatory tendency towards self-regulation of commercial internet networks that include components of the internet of things may provide a flexible landscape for the creation of general industrial standards and best practices for specifically designed and integrated systems. However, the adoption of new technologies is not exclusive to large businesses, as there are numerous opportunities for small and medium-sized businesses to implement these features in their company networks as well. Yet contrary to large-scale industrial internet projects specifically designed to meet the needs and best practices of a particular industrial solution, small and medium business solutions are in large part represented by user-assembled systems. These systems are a set of connected parts rather than an integrated structure. Due to the lack of a comprehensive approach to network architecture, the security and interoperability of such systems depend mainly on the high-quality design of the assembled components. It is thereby crucial for the integrator of such a network to be able to rely on the component producer to take potential security, interoperability, privacy, or resilience issues into consideration in the design phase.

Even if production standards meet best practices, the combination of components from numerous vendors inevitably increases the attack surface. This can be further accentuated by the incorporation of legacy devices with weak security capacity or outdated security measures, due to insufficient security expertise or a limited cybersecurity budget. All this makes widespread cybersecurity issues likely among small and medium-sized businesses, which are pushed by market forces to incorporate new technologies into their company networks with limited resources devoted to network design. Such security vulnerabilities may present attack vectors not only against the company network, but could also provide a foothold for larger-scale illicit activities: by becoming part of a remotely operated botnet, compromised devices open up possibilities of denial of service attacks, cryptocurrency mining, or even prolific short-distance malware distribution.

The widespread implementation of the internet of things at all levels of the economy raises many new security, privacy and interoperability issues. They need to be tackled in a way that does not prohibit the existence of small and medium businesses. An optimal regulatory framework should not only lead to increased security and protection of the rights and interests of affected parties, but should also reflect the limits of small and medium businesses. In this sense, industrial self-regulation might not always be an effective tool.

This contribution presents work in progress on my project on the internet of things in company networks and systems, with a specific focus on the regulatory framework in relation to the needs and possibilities of small and medium businesses. The project features multidisciplinary aspects from the areas of IT law, cybersecurity and law & economics. The first part of the contribution is devoted to identifying the specific aspects of small and medium business networks with integrated internet of things devices that should be taken into consideration by the regulator. The anticipated issue of the insufficient capacity of small and medium businesses to comply with general rules for network security represents the core of the argument. The following part then focuses on current regulatory approaches to the internet of things in company networks, reflecting upon the role of industrial self-regulation and its benefits and drawbacks. The closing remarks are devoted to defining factors that could improve the regulatory settings to better account for the specifics of securing modern small and medium business networks.

12:40
VPN citizenship
SPEAKER: Fran Meissner

ABSTRACT. This paper will commence with an ethnographic observation of a lunchtime conversation about VPN addresses and watching Netflix. Based on those observations and other examples, the paper develops the notion of VPN citizenship. Citizenship has long been associated with various forms of membership and in particular with questions of political, civic and social rights. To connect the idea of citizenship with the ability to alter one’s apparent network location through a VPN, in spite of one’s physical location, is, I propose, to identify an action that can offer participatory possibilities or be understood as a right necessary for certain forms of membership (even if such use of a VPN tends to be thought of as semi-legal). Considering VPNs as part of the (virtual) built environment, I ask whether there may be parallels to forms of insurgent citizenship in cities (see Holston, 2001), which theorise the ability to appropriate the urban environment in ways that might not have been intended as a crucial component of substantive (rather than merely formal) citizenship.

11:30-13:00 Session 12B: Data Science 7 - Data Science, Civil Rights and Activism
Location: DZ8
11:30
Mobile Phones as Surveillance Devices: Finding a Balance between Privacy and Protection
SPEAKER: Gregor Urbas

ABSTRACT. Mobile phones can be used to record conversations or film events covertly, meaning that they may constitute surveillance devices. Increasingly, courts hearing civil and criminal cases are dealing with evidence of privately recorded conversations, having to rule on the legality and admissibility of covert mobile phone recordings. Surveillance devices legislation enacted to regulate law enforcement investigations with strict warrant requirements does not adequately deal with covert recording by private citizens in a range of personal and workplace situations. In particular, the balance that has to be struck between privacy and the protection of lawful interests, including prevention of victimisation, requires different considerations from the policing context. This presentation reviews Australian laws and recent cases on surveillance devices, suggesting reforms for a comprehensive legislative framework.

12:00
Privacy, proceduralism and self-regulation in data protection law

ABSTRACT. This contribution problematizes the decisional competence of corporate controllers under the risk-based approach in the General Data Protection Regulation (GDPR). To do so, it first conceptualizes data protection law, in relation to privacy, as pertaining to the way in which the boundary between the public and the private is drawn. The GDPR regulates processing operations through a number of material and procedural rules, the latter of which accord decisional competence to three actors in particular: supervisory authorities, data subjects, and controllers. These actors are allocated a role in the decision whether or not a certain processing operation is permissible, or whether it constitutes an undue encroachment upon the private sphere of the individuals whose data is being processed.

Marking a new generation of data protection law, the GDPR attempts to provide more substantive protection through a number of extensive meta-regulatory accountability obligations. With the focus on the controller’s accountability and the risk-based approach, the GDPR awards greater discretion than before to privately held controllers, empowering them to make decisions about the legitimacy of their processing operations. We do not have one big brother to watch over privacy. Instead, we rely on the very entities which are liable to “misuse” data, expecting them to regulate themselves wisely under the watchful eye of both supervisory authorities and individuals. It is pertinent to ask: is the risk-based approach evidence of a liberal, free-market ideology, or of weak legislatures suffering from regulatory capture? The turn towards self-regulation may very well be a failure on the part of the legislature to tame computers and the way in which they are used.

12:30
The Death of 'No Monitoring Obligations': A Story of Untameable Monsters

ABSTRACT. In imposing a strict liability regime for alleged copyright infringement—it was actually parody—occurring on YouTube, Justice Solomon of the Brazilian Superior Court stated that "if Google created an 'untameable monster,' it should be the only one charged with any disastrous consequences generated by the lack of control of the users of its websites.” In order to tame the monster, the Brazilian Superior Court had to impose monitoring obligations on YouTube. This was not an isolated case. Recent case law has imposed proactive monitoring obligations on intermediaries—such as Delfi, decided by the ECHR; Allostreaming in France; the Max Mosley saga in multiple European jurisdictions; Dafra in Brazil; RapidShare in Germany; and Baidu in China. In this context, however, notable exceptions—such as the landmark Belen case in Argentina—also highlight a fragmented international response to intermediary liability. These cases uphold proactive monitoring across the entire spectrum of intermediary liability subject matters: intellectual property, privacy, defamation, and hate/dangerous speech. Legislative proposals have been following suit. As part of its Digital Single Market Strategy, the European Commission would like to introduce filtering obligations for intermediaries to close the “value gap” between rightholders and online platforms allegedly exploiting protected content.

In this paper, I suggest that we are witnessing the death of “no monitoring obligations,” a well-marked trend in intermediary liability policy. Current Internet policy—especially in Europe—is silently drifting away from a fundamental safeguard for users’ freedom of expression online, one which has guarded against any “invisible handshake” between rightholders, online intermediaries, and governments. In this respect, this paper contextualizes the recently proposed EU reform within the re-emergence of a broader move towards turning online intermediaries into Internet police. An EU “notice and stay-down” regime might soon be a further step in that direction. Meanwhile, the EU Digital Single Market Strategy has apparently endorsed voluntary measures as a privileged tool to curb illicit and infringing activities online.

As I argue, the intermediary liability discourse is shifting towards an intermediary responsibility discourse. While private parties’ self-awareness of their own duties and obligations might appear a laudable goal, the retraction of the public is not. It is for "the People"—through delegated enforcement agencies or the judiciary—to decide what should be online and what should not, rather than private parties. This process might be pushing an amorphous notion of responsibility that incentivizes intermediaries’ self-intervention to police allegedly infringing activities on the Internet. Due process and fundamental guarantees get mauled by algorithmic enforcement, silencing speech according to the mainstream ethical discourse—and bringing about novel forms of cultural imperialism. Ironically, the ongoing European reform process might end up achieving the opposite of a culturally independent European Digital Single Market. It might promote globalized control enforced by algorithms developed and controlled by major Silicon Valley companies. The upcoming reform—and the broader move that it portends—might finally slay “no monitoring obligations,” rather than the untameable monster. Proactive monitoring, voluntary measures and intermediary responsibility seem a dystopian way to go.

11:30-13:00 Session 12C: Privacy 9: Self-Incrimination, Rehabilitation, and the Right to be Forgotten
Location: DZ1
11:30
The role of citizen in distributing responsibility around Smart City technologies
SPEAKER: Merel Noorman

ABSTRACT. In recent years, policymakers, companies, government institutions, and citizen organizations in Europe have argued that Smart Cities can only fully harness their potential when their data infrastructures are sustained through the voluntary, active cooperation of citizens. At the same time, few citizens have the leverage or the technical, organizational and cultural knowledge and know-how to actively participate in and affect decision-making about how their data may be processed or used. Instead, policymakers, corporate financial and marketing departments, technical specialists and academics have dominated discussions about problem definitions, the design and implementation of data technologies, future developments, and the social and ethical concerns that these technologies raise. One such concern is the distribution of responsibility. This paper addresses the question of whether and how citizens can gain more leverage in negotiations about the distribution of responsibility around Smart City technologies, and what that means, in particular, for the accountability of data technologies.

11:50
Thinking of copyright works as black boxes. A solution for TDM and machine learning activities
SPEAKER: Marco Caspers

ABSTRACT. As a basic principle of copyright law, ideas are not protected. It is rather the original expression of ideas that copyright law seeks to protect. This principle prevents ideas from being monopolised by authors of works, thereby ensuring the free flow of ideas in society. Other individuals may reuse those ideas and express them in their own – original – way. With the advent of computer technology, the ideas expressed in digital works are not easily ascertainable without the use of a device, which itself requires the making of a reproduction. Since that reproduction falls within the scope of the copyright owner’s exclusive right, it becomes literally impossible to gain access to the ideas at the root of a work without infringing the owner’s copyright, unless the act of reproduction is authorised by law or by the owner.

This problem particularly arises in the context of text and data mining (TDM) and machine learning activities. TDM can be seen as ‘reading’ by a machine. However, TDM – and thus machine reading – cannot be carried out without the creation of reproductions. Since these reproductions fall within the ambit of the broadly defined reproduction right, such ‘reading’ is subject to the exclusive control of the right holder(s); current exceptions under (European) copyright law do not sufficiently cover such reproductions. Therefore, new ways of discovering knowledge – knowledge that can only be extracted from amounts of works too large for humans to deal with – are restricted. Hence, facts and ideas that can only be found using these technologies are controlled under copyright law and potentially monopolised by right holders.

In this paper, we will approach this issue from a new point of view, which is loosely related to the concept of non-display uses [1]. We propose that when machines ‘consume’ a copyright work, that work should be regarded as a black box and its use should not fall under the exclusive rights of the author [2]. In the case of TDM, the miner – or the machine – is only interested in the knowledge underlying the work. In fact, without the human ever taking notice of the expression, the machine reads the work and extracts the facts and ideas therefrom. So for the machine, the work is nothing more than a black box: the machine provides input, e.g. in the form of an algorithm, and receives a certain output from the (collection of) work(s). By contrast, in the context of human consumption, a copyright work can never be regarded as a black box, because the human reads, watches, listens to or otherwise perceives the expression of a work through his or her cognitive system. If the expression is poorly written or otherwise poorly constructed, a human might choose not to consume it. For a human, a good research paper is not a good research paper only because of the message it conveys – its output; it also needs a proper expression, without which the paper loses its value.

This analogy does not come out of thin air. What is more, this concept has been present in the European copyright framework since 1991, namely in the Software Directive (currently 2009/24/EC). Article 5(3) of the directive explicitly permits black box analysis to be carried out on a copyrighted computer program, subject to a few criteria. Now that technology enables us to carry out black box analysis on other copyright works, it seems time to extend this fundamental concept to the general European copyright acquis. We can design such an exception by adapting the criteria of Article 5(3) Software Directive to apply to copyright works in general. In our paper, we will elaborate on these criteria and explain why and how we can modify them to fit copyright works that do not constitute software.

This paper will thereby set out a framework that will constitute an appropriate exception not only for TDM – which is already being proposed by the European Commission – but also for other current and future means and technologies of machine reading in the fields of, inter alia, automatic information retrieval and machine learning. At the same time, it ensures that the whole copyright framework (better) complies with its underlying idea-expression dichotomy.

[1] Borghi, Maurizio, and Stavroula Karapapa, ‘Non-Display Uses of Copyright Works: Google Books and beyond’, Queen Mary Journal of Intellectual Property, 1 (2011), 21–52 https://doi.org/10.4337/qmjip.2011.01.02.

[2] Hargreaves, Ian, Lucie Guibault, Christian Handke, Peggy Valcke, Bertin Martens, Ros Lynch, and others, Standardisation in the Area of Innovation and Technological Development, Notably in the Field of Text and Data Mining. Report from the Expert Group, 2014 https://doi.org/10.2777/71122.

12:20
Insecurity in Humanitarian Cyberspace: technologies, practices and neighbors (Track: Cybersecurity and data governance)

ABSTRACT. To get a better understanding of the emergent humanitarian cyberspace as a field of social, political, economic and symbolic action, this paper argues that it is necessary to forge a more comprehensive analytical link between data security (privacy and data protection) and cybersecurity on the one hand, and human security on the other. To that end, this paper takes stock of cybersecurity in the humanitarian sector by mapping out three clusters of cyberinsecurity knowledge in the domains of technology, social practices and neighborly relations. I begin by surveying the literature on cybersecurity in humanitarian action in order to map out the particular risks associated with the use of specific humanitarian technologies and data. I then look at three aspects of what humanitarians think and do. The first concerns the circulation of rumors about alleged or imminent catastrophic failure resulting in harm to humanitarian operations and beneficiaries. The second concerns risk perceptions in the humanitarian community and how these perceptions can be read in light of the particularities of the humanitarian space (physical danger to beneficiaries, aid workers and material objects) and the more generalized concerns of the cybersecurity field (cyberattacks, surveillance issues, technical aspects of connectivity). The third part examines humanitarian cybersecurity preparedness on the level of policy, advocacy and field implementation, again by paying attention to humanitarian action, cybersecurity and the interface between the two. Finally, I compare and contrast perceptions of data collection, cybersecurity and risk in adjacent human security/rescue fields, including development, human rights and international criminal justice.

12:50
GRUMPY CAT IZ CLUELESS – techno-regulation is not the solution to GloPhoneDriving accidents
SPEAKER: Ronald Leenes

ABSTRACT. title says all (for now).

11:30-13:00 Session 12D: HC 5 - Privacy and Information Security in mHealth: lessons from case studies
Location: DZ5
11:30
Panel: Exploring Data Portability

ABSTRACT. The goal of the panel is to put the current regulatory interventions in the area of data portability in a wider context. What economic models does data portability respond to? Which existing areas of law enhance the portability of data? And how? In a series of short contributions, the panelists will explore how data protection, competition law, consumer law and intellectual property law try to achieve portability of data and to what purpose. They will provide a high-level overview of the individual areas with the goal of drawing up a map of how economic theories and the existing legal instruments support or regulate different notions of data portability.

11:30-13:00 Session 12E: IP6: Data Mining and DRM
Location: DZ4
11:30
Privacy in the Era of Binge-Watching Videos Online

ABSTRACT. Nick Gross Ph.D. candidate University of North Carolina, Chapel Hill

Streaming TV and movies online has become an Internet phenomenon as a result of subscription video-on-demand (“SVOD”) services like Netflix, Hulu, and Amazon Prime Video. Indeed, on-demand video and audio streaming dominates peak-period internet activity on fixed networks, consuming around 71 percent of downstream bytes in 2016, with Netflix, Hulu, and Amazon Prime together accounting for about 42 percent of this traffic flow. With approximately 60 percent of households subscribing to SVOD services, consumers were expected to spend approximately $6.62 billion on such services in 2016.

Yet, these consumers pay more than just cash for the access and convenience of online streaming in the form of sensitive personal information, such as their video preferences, social media activity, and geolocation. In Netflix’s streaming-only blockbuster House of Cards series, the Machiavellian politician Frank Underwood says, “There’s a value in having secrets . . . After all, we are nothing more or less than what we choose to reveal.” While SVOD customers may not necessarily have secrets to hide, their video-watching habits may reveal more personal or sensitive information than they would expect or prefer to divulge. For instance, nearly half of adults have expressed concerns regarding the privacy of their online activity on video sites.

In the analog era of brick and mortar video stores, concerns of personal privacy in regard “to the rental, purchase, or delivery of video tapes or similar audio visual materials” spurred Congress to pass the Video Privacy Protection Act (“VPPA”) in 1988. Specifically, the VPPA aims to protect video-consumers’ intellectual freedom and growth—namely, their intellectual privacy—by prohibiting video service providers from knowingly disclosing a consumer’s personally identifiable information, except in specific, limited circumstances. Even though the VPPA reflects the concern that knowledge of what an individual watches, reads, or thinks is “nobody’s business,” such knowledge is, in fact, a large, ever-expanding business comprised of advertisers, marketers, and data brokers in the digital ecosystem. While the VPPA applies to Internet-based on-demand video service providers, the extensive collection, use, and sharing of sensitive information about consumers’ video consumption habits by SVOD services and their affiliates or partners undermines the VPPA’s relevancy and safeguards in the digital age. To ensure that the VPPA reflects modern business practices and optimizes privacy protections for online video streamers, this paper examines two weaknesses in the VPPA and proposes two solutions based on apparent circuit court splits.

First, the meaning of personally identifiable information (“PII”) under the VPPA should include IP addresses and other unique static digital device identifiers in light of technological advances that more readily ascertain Internet users’ real-world identities. This argument examines the conflicting decisions on the scope of PII in the First Circuit case Yershov v. Gannett Satellite Information Network, Inc., and the Third Circuit case In re Nickelodeon Consumer Privacy Litigation. Second, courts should broadly interpret the definition of “subscriber” under the VPPA to encompass users of video streaming services even where the users do not pay, or technically register, for the app or web service, because consumers pay for the app or service with their personal data that the provider monetizes through advertising and other uses related to big data. This argument analyzes the disagreement over the meaning of “subscriber” between the Eleventh Circuit in Ellis v. Cartoon Network, Inc. and the First Circuit in Yershov. In short, just as video lag reduces the quality of consumers’ streaming experience, the lag in certain courts’ interpretation of the VPPA may undermine online streamers’ privacy interests, from their intellectual privacy to their control over their own personal data.

This paper provides the only in-depth scholarly analysis of the VPPA, a statute that is arguably growing in importance given technologies that can monitor consumers’ media consumption across devices, track the geolocations where they view content, and potentially gauge their interest in that content. The paper explores the VPPA and its legislative history, the scant literature on the VPPA, the SVOD market and its practices in regard to personal data, including the role and operation of digital identifiers, the benefits and risks of Big Data in the SVOD market (i.e., the four “V’s”: volume, velocity, variety, and videos), and the few U.S. federal cases to address the VPPA in relation to digital streaming services. Finally, in light of the online monitoring by edge-providers, third-party advertisers, and ISPs, this paper briefly examines how the VPPA fits within the sectoral approach to privacy in the U.S. and whether the U.S. should enact omnibus privacy legislation, as in the EU, in order to protect consumers’ personal data.

12:00
Towards Identity Bankruptcy

ABSTRACT. The growth of the Internet, of data storage, and of information search capabilities has combined to make large numbers of facts about the lives of ordinary people in industrialized countries far more accessible than ever before. This has had unanticipated consequences, several of which appear on balance to be undesirable. This article proposes the concept of “Identity Bankruptcy.” Identity Bankruptcy would be a formal judicial or administrative procedure, inspired by existing personal bankruptcy law, that would allow some persons, upon good cause shown, to make a fresh start as to their electronic identity on social media and the internet more generally. Identity Bankruptcy would be a means to combat the potentially lifetime consequences to employment, housing, and credit that can be caused by identity theft, cyberstalking, revenge porn, or the documentation of childhood—or even possibly adult—indiscretions. A declaration of Identity Bankruptcy would forbid employers, lenders, landlords, and other market and financial participants from making decisions about financial and market transactions based in any part on the matters covered by the declaration. If successful, the concept might be extended to cover other problems caused by the permanence of data and the easy accessibility of information about people. The Identity Bankruptcy procedure, which targets the usage of online information in particular economic contexts rather than the existence of online information, might be feasible in the US (unlike the right to be forgotten) as it offers a compromise between free speech and privacy concerns.

Relation to the literature:

The Identity Bankruptcy proposal is a somewhat US-centric attempt to achieve some of the benefits of the right to be forgotten (RTBF) while avoiding any infringement of the strong form of the First Amendment protection of speech.

While close to the RTBF literature, it is not of it, but rather a counterpoint to it -- or, if you support RTBF, a very pale shadow of it, since the rights granted are much weaker. The article also draws lightly on scholarly work about bankruptcy and discrimination. But it does not fit perfectly in any existing literature other than general attempts to achieve privacy goals in the difficult environment of U.S. law.

12:30
Copyright Infringement and Website-Blocking in India

ABSTRACT. My paper is divided into three sections.

First, a section explaining the law on website-blocking in India. Here, I will discuss provisions of the Indian Copyright Act and Information Technology Act, contrasting the grounds for blocking websites on copyright versus other grounds (discussed in MySpace v T Series).

Second, an analysis of the spate of "John Doe" orders passed against pirated websites by Indian courts in recent years. Here, I will discuss tensions between copyright and civil liberties, such as instances where entire websites were blocked instead of infringing URLs. I will refer to a recent decision of the Bombay High Court highlighting some of these tensions.

Third, a discussion of whether website-blocking is really an effective anti-piracy strategy for the Indian film industry. I will refer to the industry's view on why it sees website-blocking as necessary to curb piracy. I will argue that numerous loopholes make the strategy ineffective. For example, many such websites are based overseas, where enforcement can be expensive. Here, I will refer to a demand by the Indian government to the US government, during talks between President Obama and Prime Minister Modi, to place curbs on YouTube and Dailymotion. I will also examine a case study where Viacom India filed a complaint with police authorities in Latvia.

My conclusion, therefore, is that website-blocking cannot exist in isolation of other strategies such as making film content more accessible to the public in India, and that the industry needs a separate strategy to enforce its rights abroad.

11:30-13:00 Session 12F: Privacy 10: Money, Media Consumption, and Video Analytics
Location: DZ3
11:30
Limiting text and data mining exception: interplay between law and technology
SPEAKER: Matej Myska

ABSTRACT. In the proposed Directive on copyright in the Digital Single Market, the Commission introduced a new exception to the exclusive rights of reproduction and extraction for text and data mining (“TDM”). Although the creation of such an exception is in itself positive, its actual wording was criticized for being too limited and too narrow in scope, and for not actually bringing much legal certainty.[1] A further negatively observed feature [1,4] of the exception is the proposed ability of rightholders to limit access to the mined content in order to “ensure the security and integrity of the networks and databases where the works or other subject-matter are hosted” (Art. 3 para. 3, Recital 12 of the proposed Directive). Even though such measures shall “not go beyond what is necessary to achieve that objective” (Art. 3 para. 3 of the proposed Directive), a clearer limitation on their employment is missing from the proposed normative text. Furthermore, Recital 7 and Art. 6 of the proposed Directive provide for the further application of the rules of the InfoSoc Directive on technological protection measures (“TPM”) (Article 6(4) of Directive 2001/29/EC), as well as the relatively vague[2] limitation thereof. As has already been found, TPM, and the lack of limitations thereof, have in general not helped to achieve balance in the application of exceptions to the exclusive rights,[3] and it can be assumed that this will also be the case in the area of security and integrity measures (“SMI”). On the other hand, the link between TPM and SMI is not clear enough and leaves open the question whether the (albeit vague and ambiguous) limitations on TPM shall also apply to SMI. Furthermore, the Member States shall encourage rightholders and research organizations to agree on a definition of SMI in best practices concerning their application (Art. 3 para. 4 of the proposed Directive).
Such a voluntary solution to the limitation of SMI, however, does not provide enough incentive for the Member States (and the stakeholders) to actually reach such agreement, and does not provide for any substitute solution (such as Art. 6 para. 4 of the InfoSoc Directive). As a consequence, such broadly and vaguely construed SMI could lead to a de facto neutralization of the exception[4] and, in the long run, to decreased usage of TDM techniques. In the proposed paper I argue in favor of explicit limitations on SMI. A well-balanced wording of the exception should provide more legal clarity and certainty for researchers applying TDM. In the descriptive first part of the paper I analyze the proposed wording of the TDM exception and its interplay with the TPM provisions of the InfoSoc Directive. Building on the existing studies dealing with the transposition of Art. 6 para. 4 InfoSoc Directive in the Member States,[5] I evaluate how the limitations on SMI could be practically implemented and further analyze whether a seamless application of TDM could actually be ensured. Next, I compare and analyze the current subscription agreements of the major scientific publishers (Wiley, Springer and Elsevier) as regards the regulation of TDM and the potential technical limitations contained therein. These major scientific publishers were chosen due to their impact on market practice. Finally, I examine in detail the suggested solution to the SMI problem[6] – namely the analogical implementation of the safeguards regulated in Art. 3 para. 3 of the Telecoms Single Market Regulation [EU 2015/2120][7] – and how it should be reflected in the current wording of the TDM exception, the future commonly-agreed best practices, and the analyzed subscription agreements of the publishers.

[The proposed paper builds upon and extends my work on this topic that has already been presented at the 2016 Annual Meeting of the Association for Information Science and Technology, Copenhagen, Denmark (Oct. 14-18, 2016) and the 9th Conference on Grey Literature and Repositories, Prague, Czech Republic (Oct. 19, 2016). In these presentations I focused on the legal aspects of the proposed TDM exception in general (Copenhagen) and on the TDM of grey literature (Prague).]

12:00
Smart ethics for algorithmic surveillance and big data: The case of behavioural video analytics
SPEAKER: unknown

ABSTRACT. Recently, there has been an increase in research and development of video surveillance systems that deploy algorithms to automatically spot 'abnormal' or 'suspicious' behaviour. Behavioural video analytics technology is said to help security and law enforcement officers spot threats before they develop by detecting abnormal or suspicious behaviour in real time. This technology can be situated within the larger trend towards predictive or pre-emptive policing and security (van Brakel & de Hert, 2011) and has emerged as a response to a number of issues identified with traditional video surveillance, such as lack of effectiveness in crime prevention, ‘operator overload’, interoperability issues, quality of the images, and human rights issues.

The main purpose of this paper is (1) to discuss the different existing types of behavioural video analytics systems and look for meaningful characteristics of, and differences amongst, these systems; (2) to discuss technical differences between these newer systems and older video surveillance in order to (3) update and complement traditional analyses of these technologies in the literature (Warnick, 2007; Norris and Armstrong, 1999; and Marx, 1998); and (4) to explore which ethical safeguards need to be put in place to minimise the negative impact of video analytics systems on civil liberties and society.

12:30
Expert systems and medical malpractice: reframing the notion of negligence

ABSTRACT. Expert systems promise to radically change the way medical consulting is performed. Indeed, such devices, using artificial intelligence and complex data mining techniques, promise to provide an accurate diagnosis of the illness affecting the patient by elaborating relevant medical information, together with data – both structured and unstructured – accessible to the system, ultimately elaborating a curing strategy. Producers claim that such AI systems will, on the one hand, reach an overall accuracy above 95% in their diagnostic capabilities; on the other hand, however, they maintain that they will not replace medical practitioners in their role as doctors and care providers. This in turn implies that medical doctors will remain the only persons in charge and hence the subjects liable for all – or at least most – negative consequences suffered by the patient. To determine what will happen to medical practice as a consequence thereof, it is necessary to proceed along two different lines. Firstly, we shall describe in sufficient detail how such applications are expected to function from a technical perspective, in order to question whether they can really be deemed a mere diagnostic tool, similar to those already commonly employed in current medical practice, or else to identify the point of change that forces a different kind of analysis. We believe that such an element might be identified in the possibility of processing complex – including non-structured – data, as well as in providing a complete diagnosis with the highest levels of accuracy. Secondly, it is necessary to frame this technological characteristic within existing liability rules, in particular negligence. To do so we will refer to general principles of negligence, applicable to most – if not all – legal systems, while concrete examples will be derived from EU legal systems (the Italian, French and German systems) and US tort law.
We will focus on the ascertainment of negligence conceived as a two-pronged test, whereby it is necessary to establish first whether even just one single person in the world possessed the knowledge that might have avoided the harm suffered by the plaintiff, and second to determine whether such knowledge could have been demanded of the specific agent. With respect to the first prong of the test, it is likely that access to an expert system might play a relevant role, completely modifying the standard against which the single practitioner is to be measured; while, with respect to the second prong, the high level of reliability of the system – once deployed onto the market as promised by manufacturers – could give rise to strategic conduct on the side of the doctor. The analysis will thus take into account how liability might be apportioned in light of existing rules, namely negligence and product liability, tracing responsibility back to the manufacturer (or service provider) who deploys the system, and how this would influence the strategic behaviour of both. Finally, we will consider how reform might operate by replacing current rules with clear-cut solutions pointing either to the doctor or to the manufacturer, assessing what incentives such alternatives would provide and whether those would be desirable, leading to some policy considerations.

11:30-13:00 Session 12G: Gikii 1
Location: Black Box
11:30
Migrant mortality rates along southern EU external borders, 1990-2013
SPEAKER: Tamara Last

ABSTRACT. People have been dying while attempting to cross the external borders of the EU for three decades. The paucity of data available on deaths and arrivals over this period has meant that the mortality rates cited by academics, NGOs, journalists, and government and EU offices are based on news media-sourced death data and Frontex apprehension data. The Deaths at the Borders Database is the first officially-sourced dataset on migrant deaths along the southern EU external borders. Although it captures only part of the total death toll, case-by-case comparison with the two main media-sourced datasets provides a measure of the degree to which fluctuating media attention to the issue affects the reporting of deaths. Meanwhile, a meta-analysis of irregular migration flows across the southern EU external border provides comprehensive estimates of the population at risk of death. As a result of these data analyses, this paper presents more accurate mortality rates for the period 1990-2013, on which further analysis and discussion can be based.

12:00
The Googlization of health research. Challenges and possible ways forward

ABSTRACT. As digital data take on an increasingly central role in biomedicine, the health research ecosystem is expanding to include new types of data, new methods of research and new stakeholders. This expansion has opened the way for large consumer tech companies – including Google, Apple, Amazon, IBM, even Facebook – to enter the health research domain in significant ways, by virtue of their data expertise. This “Googlization of health research” may advance biomedical research in an unprecedented way. But as a unique socio-technical phenomenon that lies at the intersection of data-intensive science and digital capitalism, it also raises new challenges: How do the methods for collecting, storing and analyzing data in new models of mobile-technology enabled research introduce new biases? How open will these new datasets be? And how may the emergence of new power asymmetries in this space shape future research agendas? Drawing on critical data studies and bioethics, this talk maps out the main concerns in the Googlization of health research and begins outlining some ways of addressing them.

11:30-13:00 Session 12H: Alumni 2
Location: DZ2
11:30
The design of privacy by design

ABSTRACT. A key ingredient of the EU reform of data protection rules is privacy by design, a methodological approach that supports the inclusion of privacy into the whole lifecycle of IT systems, from the early stages to their ultimate deployment. While several scholars have pointed out that privacy by design principles leave many open questions about their concrete application, computer scientists have been attempting to design privacy-preserving techniques for many years, for example, by introducing privacy concerns in data release and processing, figuring out different settings (social networks, adversarial scenarios, privacy-aware learning, etc.) and developing general theoretical frameworks, such as k-anonymity and differential privacy. All these efforts have produced an exceptional stock of tools for those who would be willing to take up the privacy by design approach and, considering the increased awareness about privacy issues within the computer science community, we would expect further progress on this front. However, a growing literature on the social dimensions of privacy is making clear that privacy is a far richer notion than the one often implied in computer science research. Drawing on areas as diverse as social science, anthropology and philosophy, several scholars have pointed out that privacy is a social process whose functionality is inextricably tied to the development of society and specifically to the ability to manage interpersonal boundaries in different spheres of action. This poses several questions for the articulation of privacy by design: Can privacy problems be properly approached from a purely computational perspective? Should we take the computational approach as the default method for incorporating privacy into information and communication technologies?
Starting from these provocative questions, we will argue that there are more fundamental reasons than practical failures (e.g., those pointed out by re-identification attacks) to rethink the design of privacy solutions. As we will see, one of the main challenges underlying this critical review is how to frame the design of privacy by including both technical and social constraints, that is, how to combine the social requirements of privacy with the design of IT systems. To explore this issue we will make reference to some conceptual tools developed in the aforementioned literature, such as Helen Nissenbaum’s theory of Contextual Integrity and the notion of “networked privacy”, and try to understand how these could extend and improve the application of privacy by design. Our ideas will be further evaluated in the light of specific case studies of privacy by design applications in specific data-driven technologies.

12:00
Regulating Inscrutable Systems
SPEAKER: Solon Barocas
12:30
EU law perspectives on the blurring line between health goods and services and corresponding challenges to ensuring the protection of personal data

ABSTRACT. The internal market embraces four mutually exclusive fundamental freedoms. In many cases, the distinction between goods and services can easily be drawn. In some situations, more than one fundamental freedom could be triggered; the applicable fundamental freedom is then determined by a centre of gravity assessment. However, there are also situations where the determination of the applicable freedom is a policy choice that the CJEU makes due to such factors as the genuine nature of the trigger issue and the relevance of the applicable legal requirements. In the course of the provision of health services in a healthcare setting, the privacy of an individual is generally safeguarded by the confidentiality obligation as well as by data protection measures. However, advances in technology and medicine allow services that traditionally had been offered in a clinical setting to be provided through information communication technology. The situation where healthcare is provided outside a clinical setting through ICT may entail a switch of the applicable fundamental freedom, as well as present challenges to ensuring the protection of the privacy of the technology users. This research examines how direct-to-consumer genetic testing, a healthcare service that has obtained a considerable goods dimension, transforms the protection of an individual’s spatial and informational privacy. This analysis is carried out from the EU internal market, public health and data protection perspectives.

11:30-13:00 Session 12I: PLSC 6B: van Hoboken, Fahy, van Eijk, Liccardi, and Weitzner
Discussants:
Location: DZ7
11:30
Collecting data through public Wi-Fi: big data, privacy and data protection in smart cities

ABSTRACT. Internet broadband policies constitute a central issue of research agendas on smart cities. However, policymakers and scholars often put much emphasis on the single goal of providing broad coverage by integrating so-called ‘anchor institutions’ (universities, libraries, hospitals, schools, etc.) with public broadband infrastructure, but say little about the data that may be collected via public Wi-Fi and its potential for enhancing the provision of public services. On the one hand, big data expands policymakers’ capacities for mapping trends and planning the deployment of public services, greatly contributing to the development of smart cities; on the other hand, it raises worries that the secrecy and privacy of users’ data may be at risk, a concern aggravated, in Brazil, by the lack of a consistent data protection legal framework. In São Paulo, the largest city in Brazil, the “Wi-Fi Livre SP” programme, implemented in 2016, refrained from collecting any data from users, and, by doing so, refused to address this conundrum. How may governments employ big data to enhance public services in smart cities? How can municipal administrations reconcile such policies with securing users’ privacy and ensuring data protection? With these questions in mind, this essay aims to expose the potential of collecting big data through public Wi-Fi policies in São Paulo, the opportunities this might represent for its development as a smart city, as well as the main challenges related to data protection and users’ privacy. Part I builds a framework from a review of state-of-the-art literature on the conjunction of public Wi-Fi, big data, privacy and data protection. Part II builds on the reflections elaborated in Part I to analyze the solutions given to these problems by foreign privacy law, as well as foreign data protection frameworks. 
Part III presents a case study on two specific public services in São Paulo that have been integrated with public Wi-Fi, analyzing both how they may benefit from using big data and the data protection and privacy-related worries that might arise from this use: (i) Wi-Fi provided in public transport and (ii) Wi-Fi provided in public libraries. Finally, Part IV draws brief conclusions on the case study.

12:00
‘It is known’: power, pornography and the drawbacks of the Dothraki referencing system as an ethical guide for data analytics
SPEAKER: Linnet Taylor

ABSTRACT. The concept of open data is rapidly evolving due to innovations in the way people search for and analyse information. These shifts in what is considered ‘known’ will increasingly create problems both for data governance and for data ethics more broadly. Is accessibility the same as openness, and does visibility equate to consent to view? In some cases, such as revenge pornography, it is clear that information’s visibility cannot be equated with its openness. However, there is a growing area of overlap where data produced with no expectation of universal visibility has been made visible by changes in technology or the law. One example is the archive of tweets acquired by the US Library of Congress, which will result in a public archive holding, though agreeing not to share, both deleted and private tweets. This paper interrogates the use of the Dothraki referencing system – the axiomatic statement that certain information ‘is known’ and is therefore both true and useable – by the public sector, and its implications for data ethics and governance. I draw on two cases: first, the broad definition of ‘volunteered’ data such as social media content, which is now used as a supplement to sensor data for public-sector predictive analytics; and second, the emergence of new search engines for the Internet of Things, which make anything from power plants to refrigerators searchable. I argue that public-private partnerships’ use of the Dothraki referencing system, rather than a ‘slow data’ model based on acknowledging sources, constitutes an assertion of power over data and its producers, and an assertion of immunity from the data protection principles of privacy and nondiscrimination that will, over time, prove problematic for public-sector partners in such projects.

12:10
Fake Surveillance, Fake News and Real Surveillance, Real News: Some Thoughts on the rich connections between Tools and Truth
SPEAKER: Gary T. Marx

ABSTRACT. Gary T. Marx will discuss “Fake Surveillance, Fake News and Real Surveillance, Real News: Some Thoughts on the rich connections between Tools and Truth”.

The paper will offer some conceptual distinctions to help in understanding the relationships between new surveillance and communication technologies as they apply to truth and meaning. While the concepts will be illustrated with examples from current political rhetoric, the issues go much deeper, to fundamental questions about the nature of knowledge and truth and how these are discovered and communicated. A related question involves the ironic vulnerabilities and unanticipated (and often even unimagined) consequences of new surveillance and communication tools.

13:00-14:00 Lunch Break
14:00-15:30 Session 13A: PLSC 7A: Klabunde
Location: DZ6
14:00
Making decisions fair and square? On regulating algorithms.

ABSTRACT. These are times of uncertainty and complexity. Several interrelated global developments make it hard to know what to expect and prepare for, even in the rather near future (e.g. globalization, rapid technological developments and their impact on everyday life and society, climate change, the role of religion, shifting regimes). On different levels, people are looking for ways to regain epistemic control and trust. Some of the mechanisms they use to deal with this are conscious (building models or narratives, applying theories), some unconscious (such as cognitive biases - e.g. Kahneman, Slovic and Tversky, 1982).

Trust can be conceptualized as a strategy to reduce complexity and deal with risks: "Trust begins where predictions end" (Keymolen 2016). On the other hand, the paradigm of so-called Big Data promises a strengthening of predictive power through the increasing availability of data and data processing power. This should help us take better decisions under risk and uncertainty, by helping us discover patterns without the need for a full understanding in terms of a traditional scientific theory. But this introduces another level of complexity and uncertainty: how can we trust these algorithms? Are algorithm-based decisions indeed better, and if so, in what sense?

With the increasing role algorithmic data analysis plays in decision processes across society, the call for regulation of these algorithms grows louder (WRR 2016, WHR 2016). Behind this lies the recognition that focusing on data collection and data management alone, as most existing regulation does, falls short of addressing all types of negative consequences these practices may have for citizens, consumers and the principles underlying our modern society as a whole. In particular, the use of the COMPAS recidivism algorithm in the American justice system and the resulting racial bias has recently sparked a discussion: is COMPAS a biased algorithm (Angwin et al, 2016), or is the algorithm itself unbiased but so powerful that it requires stronger ethical principles (Gong, 2016)?

In this contribution, I propose an integral typology of the different types of bias and unfairness associated with (semi-)automated decision making: their interaction, interpretation and possible solutions based on emerging literature in this area (cf. FATML 2016).

I conceptualize a data-driven decision-making model, and clarify the terms bias and unfairness. I will use bias as a more technical term indicating some systematic inclination, and separate it from any potential moral load. For example, in text-based analysis, bias in word embeddings can be seen as implicit meaning attached to those words. Whether or not this connotation of the words is "correct" or fair is a separate question (Caliskan-Islam et al., 2016).

I analyze the relationship between data, algorithms and humans in algorithmic decision making. I will argue that there is no decision without humans as actors, be it in producing the data, designing the algorithm, interpreting its outcome or deciding how to act upon it. As O'Neil (2016) puts it with respect to the predictive power of data: "Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something only humans can provide." A consequence of this necessary human involvement is that, in some way or another, human prejudices and cognitive biases may always slip into the decision. I will highlight how this may manifest in different types of decisions with different degrees of automation.

Given the conceptual analyses above, I will reflect on what it would mean to regulate algorithms, and to what extent this may help mitigate unfairness in algorithm-based decision making. Whereas the above suggests that no algorithm can completely eradicate human bias from the decision-making process, algorithms can play an important role in highlighting its existence and impact (even when they are black boxes: Ribeiro, Singh and Guestrin 2016).

REFERENCES

Kahneman, Daniel, Slovic, Paul, Tversky, Amos (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press. See also the Cognitive Bias Codex infographic by John Manoogian III, available at https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18#.4himerqam (last consulted 20 November 2016).

Esther Keymolen, Trust on the Line: A Philosophical Exploration of Trust in the Networked Era. PhD thesis, Erasmus Universiteit Rotterdam, April 2016.

Wetenschappelijke Raad voor het Regeringsbeleid (WRR): Big Data in een vrije en veilige samenleving, April 2016.

White House Report: Cecilia Muñoz, Megan Smith, DJ Patil: Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President of the United States, May 2016.

Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, May 23, 2016. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last consulted: 20 November 2016).

Abe Gong, Ethics for Powerful Algorithms (1, 2 and 3 of 4). Medium.com, 12 July (1 and 2), 2 August (3) 2016. See https://medium.com/@AbeGong/ethics-for-powerful-algorithms-1-of-3-a060054efd84#.2hce3ylo9

FATML 2016: Fairness Accountability and Transparency in Machine Learning, New York University 18 November 2016. See http://www.fatml.org/schedule/2016

Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora necessarily contain human biases. ArXiv, draft dated August 30, 2016, available online: https://www.princeton.edu/~aylinc/papers/caliskan-islam_semantics.pdf

Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144

Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.

14:30
Governing Data Flows in the Logical Layer of Internet Governance: The case of the Next Generation Registration Directory Services to replace WHOIS

ABSTRACT. The logical layer of internet governance, that is to say the code that underpins the core internet infrastructure, nests and triggers a number of data flows. In particular, the domain name system (DNS) and the corresponding ‘community’ mechanisms for the allocation of Internet Protocol (IP) addresses and numbers, as well as internet standards, harbor specific ideas (and ideologies) about how (and by whom) these data should be managed. These ideas and rules are in continuous evolution, even more so after the Snowden revelations and the global fight against terrorism.

This paper focuses on the logical layer of the internet harbored by the Internet Corporation for Assigned Names and Numbers (ICANN). It explores the privacy and surveillance implications of the data flows that follow from names and numbering, looking in particular at the ongoing Policy Development Process on the Next Generation Registration Directory Service (RDS). RDS is designed to replace WHOIS, the global database of registered users of internet resources such as domain names, which over the years has attracted the concerns of privacy advocates. The ongoing bottom-up policy development process, initiated in January 2016, aims at proposing a model for registration directory services for generic top-level domain names able to simultaneously "address accuracy, privacy, and access issues". The paper tracks the current controversies around RDS and proposes three scenarios for its future.

15:00
Intellectual property, human rights and indigenous peoples’ traditional cultural heritage in a connected world: legal diversity and interconnectedness
SPEAKER: Kelly Breemen

ABSTRACT. Increasing globalisation and growing interest in the heritage of indigenous peoples, such as their traditional knowledge (TK) and traditional cultural expressions (TCEs), have led to concerns and discussions over their protection. This paper aims to bring together a conceptual and a legal diversity approach to analyse TCE protection in a connected world.

Essentially, many discussions and challenges concern disputes of ‘meaning’. That is, the meaning of such concepts as protection, the public domain, cultural heritage and sharing. These meanings differ not only per legal angle, but also per societal angle: mainstream understandings on the one hand, and indigenous understandings on the other. Protection, for example, extends beyond intellectual property-like exclusivity to also include preservation and promotion of TCEs and indigenous peoples’ cultures. The understanding of the public domain depends on the perception of protection conditions and acceptance of their applicability, for example the duration of protection in copyright law. This specific condition is generally problematic for indigenous peoples, who do not perceive their works as in the public domain. The concept of cultural heritage, and culture generally, can be understood in diverging ways. It can, for example, be perceived as either static cultural manifestations from the past, the material output or the dynamic present of a cultural group. Alternatively, it can be seen as a mix of all three. Indigenous peoples often emphasise the dynamic nature of their cultural heritage and their heritage as manifestations and crucial aspects of their present ways of life. Globalisation and out-of-context use might compromise or replace the meaning that indigenous peoples have traditionally assigned to their heritage. Sharing and access are further concepts that are prone to different interpretations depending on one’s worldview.

For a long time, the protection of indigenous peoples’ TK and TCEs has been analysed primarily from an intellectual property (IP) perspective. This perspective is generally found to be problematic, as many characteristics, rationales and requirements of this legal domain are not compatible with the main issues at stake in TCE protection. Regardless of this persistent ‘default setting’, however, upon closer inspection the issue clearly involves cultural heritage and human rights aspects as well. Such aspects include dynamic preservation of (intangible) cultural heritage and cultural diversity, self-determination over cultural development and participation in and exercise of cultural (ways of) life. In other words, the protection of TCEs clearly involves a significant degree of legal diversity, with various legal domains that are intertwined. The issue takes on different nuances depending on the legal angle it is assessed from. What is more, to a certain extent all three legal areas are interconnected and share various central underlying values. So much so that this paper argues the following: TCE protection is a multi-dimensional issue and requires a correspondingly multi-dimensional approach. Identifying shared central values can give direction to this approach.

In sum, the structure for the paper will be as follows. First, there will be a conceptual analysis of various (legal) key terms, their diverse interpretations and the consequences for the protection question of struggles over meaning. Next, the legal diversity involved with the topic will be set out on the basis of three relevant legal domains: copyright law, cultural heritage law and human rights law. More specifically, their strengths and weaknesses and, ultimately, their interconnectedness are highlighted. Finally, this interconnectedness is tied together with the identification of shared central values.

Biography – Kelly Breemen is a PhD candidate at the Institute for Information Law (IViR, University of Amsterdam). She graduated from the Institute for Information Law’s (IViR, University of Amsterdam) Research Master’s programme in Information Law (cum laude, 2012). Her master’s thesis on sui generis rules for the protection of traditional cultural expressions (TCEs) was nominated for the UvA Thesis Award and her PhD thesis concerns the protection of TCEs from three legal perspectives. Her research interests lie in the sphere of art, culture and law, copyright law and freedom of expression. She aims to bring together topics at the intersection of the legal domains of copyright law, cultural heritage law and human rights law.

14:00-15:30 Session 13B: Data Science 9 - Epistemologies of Data: Media, Markets and Research
Location: DZ8
14:00
Smart toys, privacy and trust

ABSTRACT. Almost all children in the Western world are online—and so, increasingly, are the artifacts that surround them. Not merely computers—including smart phones and tablets—but the mundane physical things that surround us, such as thermostats, televisions, and washing machines, are increasingly connected to the network of networks to make our lives more fun and convenient. These interconnected artifacts develop into what are called “smart devices”, which are capable of predicting individuals’ preferences and needs based on their online and offline behavior. The internet of things—as we call this phenomenon—is also extending to children’s toys, i.e. smart toys. Smart toys are Wi-Fi-enabled toys that connect to the internet and interactively engage with children while they are playing with the toy. The most well-known example to date is Hello Barbie, an internet-connected Barbie doll that talks to children and answers questions they may have. The smart play experience can be augmented by apps on smart phones and tablets. Obviously, smartifying toys influences children’s experiences of play, which can have an impact on the privacy of children, and their (and their parents’) trust in these technologies. The purpose of this paper is to provide a philosophical and legal exploration of how the smartification of children’s toys impacts the concept of trust, using as conceptual lenses the four C’s—Context, Construction, Curation and Codification—as put forward in the book Trust on the Line by Esther Keymolen.

14:30
Democratic governance of public health challenges via health data sharing: an opportunity for the citizens?

ABSTRACT. Last year, the first Europe-wide citizen science campaign was launched: the iSPEX-EU Project for engaging citizens in the measurement of air pollution. The attractive slogan of the project announced: “iSPEX is a great innovation, involving citizens in air quality and health. By measuring air pollution, you are contributing to scientific insights and solutions”. The idea was to transform personal smartphones into optical sensors able to assess tiny atmospheric particles and evaluate their impact on human health. New opportunities for citizens’ participation created by technological progress, like the iSPEX Project, call for new ways of conceiving the traditional concept of democratic governance (Franck, 1992). If this principle implies that the public’s contribution must influence the decision, then it is worth questioning how citizens actually shape policies, e.g. by means of their data sharing. Arguably, in the near future it could even become appropriate to reformulate the Athenian word “demokratia” from “rule by the people” into a more up-to-date “rule by the data produced by the people”. A perfect example of a data-driven community is the so-called “smart city”. One of the central promises of smart city rhetoric is the tackling of pressing urban issues via the participation of a more informed and empowered citizenry (Cohen, 2012). In this scenario of smart communities, democratic governance is assuming new and appealing forms. The pervasiveness of sensors, devices and systems spreading all over our cities has the potential to create new channels for citizens’ e-participation, ultimately leading to radical transformations of our urban space and its governance (Fuller, 2008). In particular, I investigate how the increasing connectivity, defined as any-everything connectivity, of our living environments and our bodies is reshaping the concept of democratic governance (Part I). 
In parallel with the transformation of city governance, the way of monitoring the human body is changing, becoming more and more based on self-tracking and on patient-generated data. A daring comparison could be drawn between how the smart city concept is revolutionizing city governance and how eHealth and mHealth are altering the treatment of illnesses. These two processes are somewhat interrelated if we consider that one of the smart city’s pillars is “smart health care”. Moreover, both approaches share the element of the person’s (citizen’s/patient’s) empowerment and engagement in shaping a process, in one case the city’s decision-making, in the other the course of care. The democratic and participatory value enshrined in this person-centric shift is striking (Part II). The central question of this study is how the use of health devices on people’s bodies, of health apps, and of sensors in the urban setting affects citizens’ participation in shaping public health policies. The study explores the way data streams could become tools for people taking decisions within and with the city, in combination with an analysis of the traditional right to democratic participation contextualized in the smart city. This right serves as the theoretical framework for assessing the forms of citizen engagement and drafting models representing the levels of citizens’ participation (Part III). I envisage great potential in the combination of smart health care and smart environments. Indeed, IoT-powered devices inserted in the city and around/on the citizen generate impressive volumes of data every day on both human and environmental conditions that could inform public policies. This process is supported by people’s growing aptitude to share information on their health and on their surroundings. At the same time, they are being increasingly involved not only with regard to their own treatment, but also in broader public health decisions. 
The shift from the individualistic conception of health data to the belief that they could be treated as collective property (health data commons, Purtova, 2016), shared with public actors and researchers for community benefit, encourages participation. Indeed, the willingness to participate relies on the individual’s perception of the importance their contribution to society could have in creating a better living environment (Aicardi, Del Savio, et al., 2016) (Part IV). A sound scenario for people’s contribution to orienting public health policies is the governance of environmentally caused diseases. In this scenario, health data deriving from personal self-tracking of the human impact of external factors are shared with public bodies through smart devices and apps. The citizens are also able to monitor efficiently the environmental hazards present in urban settings, as they are the ones who know the local reality best. By combining the data coming from the smart city apparatus (e.g. data on environmental factors) with citizen-reported information (e.g. the health consequences of such factors), a more aware and participative approach to public health challenges could be achieved (Part V). Overall, citizens are producing data that become part of the way the city is governed (Taylor, Richter, et al., 2016), and the democratic potential of this trend is indisputable. Nevertheless, the scenario presents some dark sides that must be analysed. Firstly, new paths for participation do not necessarily mean more opportunities for all. Indeed, there is a risk that the new channels for contributing via advanced technologies could even worsen the inequality in the representation of citizens already present in our cities. This study will investigate whether these connecting technologies create a level playing field for contribution for every citizen or only for the smart, connected citizens. 
Secondly, the data shared by citizens could lead to governments’ abusive exercise of the power derived from the data, e.g. undue control over people’s behaviours. It is worth questioning whether people are shaping policies through their data sharing, or whether governments and other actors are instead orienting citizens’ choices via pervasive devices and apps, converting a democratic smart city system into a technocratic one (Part VI). All these adverse outcomes (misrepresentation, technocratic abuses) are exactly the opposite of democratic governance. Given the implicit risks of democratic deficits and technocratic abuses in the smart city, it is urgent to reflect on regulatory measures aimed at protecting genuine civic engagement from detrimental effects and promoting technological progress in the city as an arena for democratic and responsible innovation.

15:00
Copyright Protection and Fair Use: The Evolution of Institutions Governing Music-file sharing in 1997-2017 America
SPEAKER: Kunbei Zhang

ABSTRACT. Technology has made content sharing between consumers viable since the early 1980s, when video and audio recording on tape became affordable to consumers. Early copyright issues were immediately discussed and settled: time shifting (for individual use) became part of the fair use exception in the USA. Yet content-sharing issues became contentious again in 1999, when Napster saw the light of day in a networked world. Since then, IP regulation of file-sharing behaviors in a networked world has tried to keep up with the pace of technological innovation. We look at how they co-evolved in the USA.

In practice, content-sharing technology has linked the world together through the distribution of files, and the networked technology behind it has broken through territorial barriers. In law, content sharing in a connected world is mainly regulated through intellectual-property laws. In the USA, judges have, roughly speaking, two choices in copyright issues: (i) protect as property, or (ii) allow as fair-use behavior. These two choices have been observed in our study of a comprehensive database of fair-use case law. We translate these observations into hypotheses about the evolutionary trajectories of the two strategies as they emerged from the innovation of file-sharing technology and services in the 1990-2017 period. In particular, we focus on the relationships between judicial choice and the social environment, such as the culture (sharing as cooperation versus sharing as theft), the technology (multiple facilitator innovations) and the economy (the collapse of the traditional music industry, the rise of the streamers). Thus we try to understand the underpinnings of different choices and judicial opinions -- not simply by looking at the rules governing copyright issues, but also at the driving forces that influence how the fair-use principle is understood and determined. (Perhaps most sobering to the legal scholar will be our findings on fair-use oriented litigation frequencies.)

With our initial empirical descriptions we have thus set the stage for the follow-on analysis of what regulatory choices have been made, what forces have induced them and what efficacies have emerged from them. A critical political factor is the degree to which copyright law is committed to or bound by new technologies. Situations that technology can readily revise differ significantly in their implications for the law's performance from situations where technology is not subject to frequent revision. The more likely it is that the technology will change, the lower the expected returns from copyright exploitation, and judges will anticipate this.

Our study aims to make three contributions to the copyright literature. Our first is theoretical: based on empirical evidence we will show that there are difficulties in viewing the social and technical environments as entirely ‘exogenous’ influences on the legal discipline, and that it is more useful to see the legal system (at least the legal IP-arrangements in the USA during the 1990-2017 period) as both shaping and being shaped by environmental factors, an approach we label ‘co-evolutionary’.

Our second aim is methodological: we demonstrate how the combination of empirical description with agent-based model simulations can be useful for legal theory (see below).

Our third aim is normative, in the sense of evaluating how far evidence and analysis of this study could or should be used to inform the process of legal reform to this principle.

Considering the methodology mentioned, we adopt an approach that is innovative to legal research, agent-based modeling and simulation, in order to investigate how the hypotheses mentioned will play out in action. We first build a model and calibrate it so that it usefully replicates the empirical data that were collected and used in the first part. Then we perform simulations under different (environmental) parameter values. Finally, we analyze the results for answers to the question of what the judiciary could have done better, considering both the enormous potential of content sharing and the chilling effects of defensive business models that prioritize the rigid protection of copyrights (a protection that is, in non-USA eyes, often excessive, viz. the application of statutory liability).

Conclusions. First: it is our impression that legal scholars have a natural tendency to overestimate the influence that the law on file sharing actually has on file-sharing behavior. Second: co-evolutionary processes are essentially dynamic and have quite diverse cycle times that need to be taken into account. Third: the resulting knowledge also concerns counterfactuals. This is an asset, even if not rocket science, as it can be used as base-line (boundedly) rational argumentation in legal policy making.

14:00-15:30 Session 13C: Health Care Panel - Comparative reflections on health policies and health research
Location: DZ1
14:00
Can privacy engineering eliminate data subject’s rights under the GDPR?

ABSTRACT. The paper explores the assumption that the GDPR introduces an incentive for controllers to circumvent their obligations with regard to the rights of the data subjects, by letting them create processing operations designed to allow the controller to demonstrate that she cannot identify individuals according to Article 11. To this end, the paper analyses the concepts of “data protection by design” and “identifiability” under the GDPR and the interaction between Articles 11 and 25. “Privacy by design”, “data protection by design”, “privacy engineering” and “privacy enhancing technologies” have been discussed as ways of using technology, which is often seen as the main driver for increasing risks to privacy and other fundamental rights and freedoms, in order to control and mitigate these risks or even find solutions that improve privacy for the individuals whose data is processed in IT systems. This paper focuses on the concept of “data protection by design” as laid down in Article 25 of the GDPR. In its first part, it will briefly analyse the elements of this concept and the guidance the legislator has provided for its application, as well as its relationship to the data protection principles and safeguards for which the GDPR provides. A second section will look more closely at the concept of identifiability. It analyses the modifications to the definition of personal data in Article 4(1) of the GDPR compared to that in Article 2(a) of the previous EU Data Protection Directive 95/46/EC, as well as the relevant recitals (26) of both instruments. It then assesses the possible interpretations of Article 11 of the GDPR which exempts controllers from the obligation to honor requests relating to data subjects’ rights if they can demonstrate that their processing operations do not allow identification of the individuals. 
A third section will try to determine whether Article 11 could effectively work as an incentive for controllers to design systems for processing operations in such a way that they fail the test of identifiability, thereby relieving controllers of their obligations towards data subjects. It will strive to determine which safeguards the GDPR provides to prevent deliberate circumvention of the data subjects’ rights by “identifiability engineering”. In its concluding section, the paper will develop a consistent interpretation of the apparently contradictory provisions of the GDPR and determine some principles for an engineering approach to data protection by design.
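The circumvention scenario the paper examines can be made concrete with a minimal Python sketch (hypothetical data and function names, not drawn from the paper): a controller irreversibly pseudonymises records by hashing identifiers with a random salt that is then discarded, leaving the controller with no means of re-linking a record to an individual and thus, arguably, in a position to invoke Article 11.

```python
import hashlib
import secrets

def pseudonymise(records, id_field="user_id"):
    """Replace direct identifiers with one-way tokens.

    A random salt is generated, used for hashing, and then discarded
    when the function returns, so the controller retains no means of
    re-linking tokens to the individuals behind them.
    """
    salt = secrets.token_bytes(32)  # never stored; lost after this call
    out = []
    for rec in records:
        token = hashlib.sha256(salt + str(rec[id_field]).encode()).hexdigest()
        cleaned = {k: v for k, v in rec.items() if k != id_field}
        cleaned["token"] = token  # stable within this batch, unlinkable after
        out.append(cleaned)
    return out

# Hypothetical health records for illustration only.
records = [
    {"user_id": "alice@example.com", "blood_pressure": 120},
    {"user_id": "bob@example.com", "blood_pressure": 135},
]
result = pseudonymise(records)
```

Whether such engineered non-identifiability genuinely satisfies Article 11, or merely circumvents data subjects’ rights, is precisely the question the paper pursues.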

14:00-15:30 Session 13D: PLSC 7B: Macenaite
Location: DZ7
14:00
Panel: Ownership of non-personal data: the impact on scientific research
SPEAKER: Matej Myska

ABSTRACT. Panel proposal for the Conference TILTing Perspectives 2017: 'Regulating a connected world' 17-19 May 2017, Tilburg University, Tilburg, the Netherlands

Panel title: Ownership of non-personal data: the impact on scientific research Co-organized by: Institute for Information Law, NL; Institute of Law and Technology, CZ; Open Knowledge International, UK

Confirmed speakers - input presentations:
Prof. Andreas Wiebe - ownership of raw non-personal research data - de lege lata & de lege ferenda
Dr. Lucie Guibault - re-use of non-personal raw research data - de lege lata & de lege ferenda

Panel: Prof. Radim Polčák; Matěj Myška, Ph.D.; Chris Hartgerink, MSc; Freyja van den Boom, LLM

Moderator: Marco Caspers, LLM

One sentence abstract: The proposed panel shall discuss the issue of non-personal data ownership and the re-use thereof in scientific research.

Expected time-plan (90 minutes):
Input presentation - 15 minutes
Input presentation - 15 minutes
Remarks by the panelists - 15 minutes
Panel discussion - 45 minutes

Abstract: Data is often labelled the “oil for the 21st century”[1] and is currently regarded as a basis for innovation.[2] In the area of research, data are the cornerstone of any scientific activity, or as Hanson et al. pertinently put it, “science is data and data are science”.[3] Even though the legal issues concerning IPR protection of raw data were already being discussed 20 years ago,[4] the development of ICTs and the digital economy has reinvigorated the discussion of legal protection of data on the syntactic level.[5] A thriving data-driven economy shall, according to the European Commission, “contribute to the well-being of citizens as well as to socio-economic progress through new business opportunities and through more innovative public services.” However, in its Communication “Towards a thriving data-driven economy” the Commission identified the legal aspects of the cross-border free flow of data, data ownership and data access as worthy of further research and consideration. In the area of research data, the issue of “owning” raw data is even more pertinent, albeit regarded as a grey area.[6] In this specific field the Commission sees open access to raw research data as a prerequisite for “optimal circulation, access to and transfer of scientific knowledge”.[7] The current debate revolves around the question whether there is a need for specific protection merely for data created during scientific research as such, i.e. data where no substantial investment was made in its obtaining, verification or presentation, but merely in its creation (C-203/02 British Horseracing Board v. William Hill Organization, C-46/02 Fixtures Marketing Ltd v. Oy Veikkaus Ab). For the sake of argument, if a sui generis data right is to be introduced, there are many ways to design and construct such a protection regime. In particular, thought should be given to the scope and limits of the framework, e.g.
the duration of rights, the exclusivity of rights (i.e., individual exclusive rights, collective licensing, etc.), the protected subject matter, the (initial) right holder, as well as the threshold for protection (and what the test will be). The task for this panel is to discuss what we can learn from other IP rights in this context. To what extent can we apply the idea-expression dichotomy to data rights as well? If originality is the test for copyright, and substantial investment that for sui generis database rights, what test should apply to a sui generis data right? Furthermore, this panel will discuss the design of such a sui generis data right in the context of promoting current and future technologies enabling the re-use of research data, for example for discovering new knowledge and insights from data by means of text and data mining (TDM) and machine learning. It is also relevant to discuss whether a distinction should (and could) be made between ‘public’ research data and industrial data. Currently, the use of content mining is significantly lower in Europe than in some countries in the Americas and Asia.[8] This can partly be explained by the fact that copyright laws in Europe are very restrictive towards TDM.[9] The UK created a copyright exception for non-commercial research in 2014, but the legal certainty it was supposed to induce is not without its own uncertainties. This highlights the importance of a well-thought-out design for any future protection regime for non-personal data. Finally, another problematic question is how such a sui generis right should work and co-exist “with the idea that scientific data should be freely (re-)usable by the scientific community”[10] and with the open research data concept as supported by the Commission.[11]

Suggested reading for the participants: KERBER, Wolfgang, 2016, A New (Intellectual) Property Right for Non-Personal Data? An Economic Analysis. Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int). 2016. Vol. 65, no. 11, p. 988–998. WIEBE, Andreas, 2016, Protection of industrial data – a new property right for the digital economy? Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int). Vol. 65, no. 10, p. 877–884. ZECH, Herbert, 2016, A legal framework for a data economy in the European Digital Single Market: rights to use data. Journal of Intellectual Property Law & Practice. 1 June 2016. Vol. 11, no. 6, p. 460–470. DOI 10.1093/jiplp/jpw049. ________________ [1] See e.g.: TOONDERS, Joris and TOONDERS, Yonego Joris, [2014], Data Is the New Oil of the Digital Economy. WIRED [online]. [Accessed 20 November 2016]. Available from: https://www.wired.com/insights/2014/07/data-new-oil-digital-economy/ [2] See e.g.: OECD, 2015, Data-Driven Innovation: Big Data for Growth and Well-Being [online]. Paris : OECD Publishing. [Accessed 20 November 2016]. ISBN 978-92-64-22934-1. Available from: http://www.oecd-ilibrary.org/science-and-technology/data-driven-innovation_9789264229358-en. [3] HANSON, Brooks, Andrew SUGDEN and Bruce ALBERTS, 2011. Making Data Maximally Available. Science [online]. 11 February 2011, vol. 331, no. 6018, p. 649. ISSN 0036-8075, 1095-9203. Available from: doi:10.1126/science.1203354 [4] See e.g.: REICHMAN, Jerome H. and SAMUELSON, Pamela, 1997, Intellectual Property Rights in Data? Vanderbilt Law Review. 1997. Vol. 50, p. 49–166. [5] See e.g.: KERBER, Wolfgang, 2016, A New (Intellectual) Property Right for Non-Personal Data? An Economic Analysis. Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int). 2016. Vol. 65, no. 11, p. 988–998. WIEBE, Andreas, 2016, Protection of industrial data – a new property right for the digital economy? 
Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int). Vol. 65, no. 10, p. 877–884. ZECH, Herbert, 2016, A legal framework for a data economy in the European Digital Single Market: rights to use data. Journal of Intellectual Property Law & Practice. 1 June 2016. Vol. 11, no. 6, p. 460–470. DOI 10.1093/jiplp/jpw049. [6] CLEARY, Michelle, JACKSON, Debra and WALTER, Garry, 2013, Editorial: Research data ownership and dissemination: is it too simple to suggest that “possession is nine-tenths of the law”? Journal of Clinical Nursing. 1 August 2013. Vol. 22, no. 15–16, p. 2087–2089. DOI 10.1111/jocn.12140, p. 2087. [7] COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS A Reinforced European Research Area Partnership for Excellence and Growth. COM/2012/0392 final. [8] Cf. HANDKE, Christian, GUIBAULT, Lucie and VALLBÉ, Joan-Josep, 2015, Is Europe Falling Behind in Data Mining? Copyright’s Impact on Data Mining in Academic Research. In : New Avenues for Electronic Publishing in the Age of Infinite Collections and Citizen Science: Scale, Openness and Trust (Proceedings of the 19th International Conference on Electronic Publishing) [online]. Washington, DC : IOS Press. p. 120–130. [Accessed 20 November 2016]. ISBN 978-1-61499-562-3. Available from: http://www.medra.org/servlet/aliasResolver?alias=iospressISBN&isbn=978-1-61499-561-6&spage=120&doi=10.3233/978-1-61499-562-3-120 [9] For example, see: TRIAILLE, Jean-Paul, MEEÛS D’ARGENTEUIL, Jérôme de, FRANCQUEN, Amélie de, EUROPEAN COMMISSION, DIRECTORATE-GENERAL FOR THE INTERNAL MARKET AND SERVICES and DE WOLF & PARTNERS, 2014, Study on the Legal Framework of Text and Data Mining (TDM) [online]. Luxembourg : Publications Office. [Accessed 20 November 2016]. ISBN 978-92-79-31976-1. Available from: http://bookshop.europa.eu/uri?target=EUB:NOTICE:KM0313426:EN:HTML; CASPERS, Marco and GUIBAULT, Lucie. 
2016, Deliverable D3.3: Baseline Report of Policies and Barriers of TDM in Europe. [online] FutureTDM. [Accessed 20 November 2016]. Available from: http://www.futuretdm.eu/knowledge-library/?b5-file=2374&b5-folder=2227. [10] As observed in: GUIBAULT, Lucie, MARGONI, Thomas and SPINDLER, Gerald, 2013, Conclusions and recommendations. In : Safe to be open study on the protection of research data and recommendations for access and usage [online]. Göttingen, Germany : Universitätsverlag Göttingen. p. 161–165. [Accessed 20 November 2016]. ISBN 978-3-86395-147-4. Available from: http://webdoc.sub.gwdg.de/univerlag/2013/legalstudy.pdf, p. 162. [11] EUROPEAN COMMISSION, 2016, Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 [online]. European Commission. [Accessed 20 November 2016]. Available from: http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf.

14:00-15:30 Session 13E: IP7: Technology, Courts and Copyright
Location: DZ4
14:00
The interaction among health promotion, soft technology, urban planning and the legal system for the building of healthy cities*

ABSTRACT. Introduction In 2050, the world population will surpass 9 billion individuals. According to the United Nations (UN), several problems will emerge, including difficulties in accessing water, which affect eating habits and food quality. Urban and community gardens may contribute, in the long run, to strategies for poverty reduction, increasing individual income and access to healthy food, leading to social inclusion and consequently improving the social determinants of health. The gardens also address 5 of the 17 Sustainable Development Goals (UN, 2015), including (i) End poverty in all its forms everywhere (Goal 1); (ii) End hunger, achieve food security and improved nutrition and promote sustainable agriculture (Goal 2); (iii) Ensure healthy lives and promote well-being for all at all ages (Goal 3); (iv) Make cities inclusive, safe, resilient and sustainable (Goal 11); and (v) Ensure sustainable consumption and production patterns (Goal 12). These five goals, among other objectives, depend on the structures of cities and the countryside, as well as on the local political capacity to implement them and to produce public policies that meet these objectives. Additionally, urban and community gardens may represent a multidisciplinary opportunity to meet local needs in a collective and grassroots way, while respecting the knowledge of rural tradition. They also promote life, protect health and prevent disease, besides serving as an urban planning tool that revitalizes the social use of cities while occupying urban voids.

Objective To describe the implementation of urban community gardens in a Brazilian county, focusing on both the right to health and the right to the city, with the aim of developing healthy cities.

Methodology The study is based on a review of the National Policy for Health Promotion, the National Health Policy and the City Statute, and on the registration and monitoring process of urban community gardens. From among 60 experiences in Brazil, coordinated by LABINUR - FEC - Urbanism and Architecture - UNICAMP - SP and by the Potentially Healthy Counties Network, one case study will be presented: the Planalto Community Garden (GUARNIERI, 2013) in the city of Conchal, Sao Paulo State - Brazil, a member of the Potentially Healthy Counties Network (SPERANDIO, 2015), with 25,000 inhabitants. Interviews were conducted and photograph series were taken during the field visits. Through a qualitative methodology and based on the aforementioned documents, the following aspects, principles and values were selected as priorities for the case study: transversality, equity, respect for diversity, appreciation of culture, sustainable and healthy local and human development, social participation, intersectoriality, individual and collective empowerment, and production of knowledge.

Expected Results The study methodologies have shown that urban community gardens may contribute to the choice of a healthy lifestyle by means of healthy eating, while stimulating well-being as per the City Statute (Brazil, 2001), improving the quality of life within specific city spaces and promoting local development in accordance with the principles of the National Policy for Health Promotion (Brazil, 2014) and with the guidance of the World Health Organization (2010). The case study identified that, through intersectoriality and social participation, the urban community gardens were implemented as healthy and sustainable public policies, contributing to environmental protection, the shaping of physical spaces, the promotion of social relationships and the promotion of health. The urban gardens initiative has lasted across different political administrations, proving to be resilient and sustainable. The project also identified: a decrease in the use of pharmaceutical medicines, reported in interviews with the participating population; a feeling of belonging to the place; the building of a unified governmental agenda; the strengthening of networks; social responsibility related to the project; the intention to create new gardens; the intensification of intra-neighborhood relationships; the promotion of alliances; and the happiness arising from participation in the project, along with well-being and income generation. The implementation of urban community gardens is a soft, low-cost technology that demonstrates how the interaction among the legal system, urban planning and health promotion may contribute to the achievement of the healthy cities principle (SPERANDIO, 2016).

Bibliography BRASIL. Estatuto da Cidade – Lei n. 10257, de 10 de julho de 2001. Regulamenta os arts. 182 e 183 da Constituição Federal. Estabelece diretrizes gerais da política urbana e dá outras providências. Diário Oficial da União. Brasília, DF, 10 jul. 2001. BRASIL. Ministério da Saúde. Secretaria de Vigilância em Saúde. Secretaria de Atenção à Saúde. Política Nacional da Promoção da Saúde: PnaPS: revisão da Portaria MS/GM nª 687, de 30 de Março de 2006, Ministério da Saúde, Secretaria de Vigilância em Saúde, Secretaria de Atenção à Saúde – Brasília: Ministério da Saúde, 2014. 32 p. GUARNIERI, J. C. Convergências das políticas de planejamento urbano e saúde na construção de espaços urbanos saudáveis. Dissertação de Mestrado UNICAMP, 2013. SANTANA, P. A Geografia da Saúde da população: Evolução nos últimos 20 anos em Portugal Continental. CEGOT, Universidade de Coimbra, Portugal, 2015. SPERANDIO, A.M.G., et al. Política de promoção da saúde e planejamento urbano: articulações para o desenvolvimento da cidade saudável. Ciência e Saúde Coletiva, 21(6):1931-1937, 2016, DOI: 10.1590/143-81232015216.10812016 SPERANDIO, A.M.G., et al. Ocupação de vazio urbano como promotor do planejamento para cidade saudável. PARC Pesq em Arquit. e Constr., Campinas, SP, v.6, n.3, p.205-215, set. 2015, ISSN 1980-6809. UNITED NATIONS. Sustainable Development Goals. 17 Goals to Transform our World. 2015. Available from: www.un.org/sustainabledevelopment/

* Prof. Dr. and Researcher in Urban Planning and Healthy Cities at the Urban Investigation Laboratory – LABINUR, Civil Engineering, Architecture and Urbanism Faculty - Unicamp – SP – Brazil, and coordinator of the Interdisciplinary Study and Research Center – NEP – Jaguariuna Faculty – Jaguariuna – SP - Brazil

14:30
Self-incrimination and authentication: on something you are in knowledge-based world

ABSTRACT. Encryption has taken on a crucial role in protecting one’s privacy in today’s interconnected and information-driven world. On the other hand, it also presents an increasing threat to the efficacy of law enforcement agencies (LEAs). We are currently witnessing growing pressure on communication service providers to surrender master keys to authorities under key disclosure laws in various countries. At the same time, we see increasing pressure from LEAs to ban the use of specific end-to-end encryption measures that are capable of successfully preventing LEAs from acting upon issued warrants. Warrants may become useless pieces of paper and, in the words of FBI Director James Comey, LEAs risk actually “Going Dark”. This issue is firmly rooted in current affairs and challenges the most fundamental conceptions of effective investigation of crimes and prevention of terrorist threats. Moreover, this issue concerns both data-in-motion and data-at-rest.

The proposed paper predominantly focuses on the issue of encryption of data-at-rest and its interplay with the right against self-incrimination. If the data are encrypted, what are the options of an LEA? Various approaches to this question have materialized in U.S. practice, ranging from In re Boucher (U.S. District Court for the District of Vermont, 2009 WL 424718) to United States v. Doe (11th U.S. Circuit Court of Appeals, 670 F.3d 1335). Both cases interpreted the Fifth Amendment to the U.S. Constitution differently when facing the need to decrypt a potential source of evidence. The interpretation is unclear even when distinguishing between various authentication mechanisms, where some judges seem to distinguish between a motion to compel the password and a motion to compel biometric-based authentication (such as a fingerprint) – as seen in Commonwealth of Virginia v. David Charles Baust (docket No. CR14-1439).

The proposed paper aims to transpose this discussion to Europe and answer the following questions: 1) What is the exact scope of protection granted by self-incrimination laws in the face of encryption? 2) Is this scope different with regard to the various authentication methods in use, such as passwords compared to various biometrics?

Both questions are increasingly important in today’s world, since the right against self-incrimination seems to be aimed at something we know and not something we are. Getting the answer to this question right can and will have a significant impact upon the field of personal encryption.

The paper is not intended to answer any of these questions with regard to a specific national law. It is intended to provide a desk-based theoretical overview of existing tendencies as summed up under the European Convention on Human Rights, specifically under its Article 6 in the extended scope established by Funke v. France in 1993 and shaped by subsequent case law.

15:00
Electronic payments and privacy: in need of a digital right to cash?

ABSTRACT. Electronic payments have undeniable benefits. Payments are processed quickly and securely, there is no longer any need to carry a wallet full of cash and small change, and they drive e- and m-commerce. However, electronic payments also tend to leave trails of personal information. More and more, financial actors are applying data mining techniques to their customers’ financial data. One example is a bank providing a visual representation of its clients’ spending habits, allowing clients to see what percentage of their income they spend on housing, food, or shopping. At the same time, financial actors are looking to monetize this data. There are, for instance, several examples of banks selling this data to advertisers to provide advertisements tailored to these users’ spending habits. Furthermore, the European Commission has turned its attention to the few remaining methods of (semi-)anonymous electronic payment: prepaid cards and virtual currencies. While cash has always provided a certain degree of privacy, electronic payments do no such thing. As a reaction, many are trying to defend their ‘right to use cash’. However, it is clear that the usability of cash will only decrease in a world driven by electronic transactions. This paper aims first to analyze the precise scope of this so-called ‘right to cash’. While what is referred to as cash mostly equates to legal tender, this does not necessarily mean that there is a fundamental right to use legal tender in any scenario. Second, this paper aims to assess whether the ‘right to cash’ could be translated to the digital realm. Here, specific attention must be paid to how to frame this in recent legislative developments against anonymous electronic payment instruments.
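The spending-habits visualisation described above boils down to a simple aggregation over transaction trails. A minimal Python sketch with hypothetical data (any real bank’s pipeline would of course be far more elaborate):

```python
from collections import defaultdict

def spending_breakdown(transactions):
    """Return each category's share of total spending, as a percentage."""
    totals = defaultdict(float)
    for t in transactions:
        totals[t["category"]] += t["amount"]
    grand_total = sum(totals.values())
    return {cat: round(100 * amt / grand_total, 1) for cat, amt in totals.items()}

# Hypothetical transaction trail left by electronic payments.
transactions = [
    {"category": "housing", "amount": 800.0},
    {"category": "food", "amount": 150.0},
    {"category": "shopping", "amount": 50.0},
]
breakdown = spending_breakdown(transactions)
# {'housing': 80.0, 'food': 15.0, 'shopping': 5.0}
```

That such a profile falls out of a few lines of aggregation underlines how readily payment trails reveal spending habits, which is precisely the privacy concern the paper addresses.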

14:00-15:30 Session 13F: Gikii 2
Location: Black Box
14:00
Data Governance in Healthcare: Investigating data quality dimensions within a big data context
SPEAKER: Suraj Juddoo

ABSTRACT. With the increased use of networks, sensors, transaction processing systems and social media, amongst others, organisations are facing a deluge of data which is estimated to reach a worldwide volume of 40 ZB by 2020 [1]. The term ‘Big Data’ characterizes data not only by its volume, but also by its velocity and variety. The velocity aspect refers to the speed with which collected data is analysed such that timely use is made of it, whereas variety refers to the different formats, both structured and unstructured, of the data being collected and analysed. Data governance concerns measures to manage and control the use of data, and should enhance the quality, availability and integrity of data. Data governance measures should provide adequate protection to data while not being so restrictive as to prevent organizations from unlocking the power of the data [2]. One of the major challenges in big data governance is managing the quality and uncertainty of data [3]. Data quality is a key aspect of the most well-known data governance frameworks, where the standards for data quality with respect to dimensions such as accuracy and timeliness are well-established [4]. In the health industry, the use of data is of growing importance, in particular since data standardization has become the norm in e-health contexts. Results coming from more complex and elaborate analyses are bringing manifold benefits to various stakeholders of the industry, ranging from patients and doctors to pharmaceutical companies and insurance groups, amongst others. Gradually, the health industry is adopting Big Data [7]. Examples are the Google Flu system and the use of IBM Watson for medical diagnosis. Data quality for Big Data is a much debated field; initially, it was thought that analytics from Big Data would apply only to the principal patterns emerging from the data, such that the impact of improper data would be insignificant [5].
However, more contemporary research indicates that the impact of poor data quality depends upon its quantity and type [8]. The type and description of data quality are expressed in terms of data quality dimensions. Well-known dimensions are accuracy, completeness and consistency, amongst others [6]. As data quality depends upon how the data is expected to be used, the most relevant and adequate data quality dimensions are contextual. Thus the specific industries and technologies revolving around data production are two major factors impacting which quality dimensions are most relevant, taking into account the impact and consequences (from an ethical perspective) of treating health data as ‘big data’. This paper investigates the most important data quality dimensions for big datasets used in the healthcare industry. An inner hermeneutic cycle approach is used to review the literature related to data quality for big health datasets in a systematic way, and a list of justified data quality dimensions is provided. These dimensions can potentially be used to inform future big data governance frameworks specifically for the health industry.
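For illustration, two of the well-known dimensions named above, completeness and consistency, can be operationalised as simple ratios. A minimal Python sketch over toy health records (hypothetical data and rule; the paper derives its dimensions from the literature, not from code):

```python
from datetime import date

def completeness(records, fields):
    """Fraction of required fields that are populated across all records."""
    filled = sum(1 for r in records for f in fields if r.get(f) is not None)
    return filled / (len(records) * len(fields))

def consistency(records, rule):
    """Fraction of records satisfying a domain rule (e.g. a plausible range)."""
    return sum(1 for r in records if rule(r)) / len(records)

# Toy e-health records: one missing value, one implausible reading.
records = [
    {"patient": "A", "heart_rate": 72, "recorded": date(2017, 5, 1)},
    {"patient": "B", "heart_rate": None, "recorded": date(2017, 5, 2)},
    {"patient": "C", "heart_rate": 300, "recorded": date(2017, 5, 3)},
]

comp = completeness(records, ["patient", "heart_rate", "recorded"])  # 8/9
cons = consistency(
    records,
    lambda r: r["heart_rate"] is not None and 30 <= r["heart_rate"] <= 220,
)  # 1/3
```

Which such metrics matter most, and with what thresholds, is exactly the contextual question the paper investigates for big health datasets.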

References:

[1] Jurate Daugelaite, Roy D. Sleator and Aisling O’Driscoll, "‘Big data’, Hadoop and cloud computing in genomics," Journal of Biomedical Informatics, pp. 774-781, 2013. [2] Paul P. Tallon, "Corporate Governance of Big Data: Perspectives on value, risk and cost," IEEE Computer Society, 2013. [3] P. Malik, "Governing Big Data: Principles and Practices," IBM Journal of Research and Development, 2013, vol. 57, no. 3/4. Doi: 10.1147/JRD.2013.2241359. [4] Vijay Khatri and Carol Brown, "Designing data governance," Communications of the ACM, January 2010, vol. 53, no. 1, 148-152. Doi: 10.1145/1629175.1629210. [5] Sunil Soares, "Big Data quality," in Big Data Governance: An emerging imperative. MC Press, 2012, pp. 101-112. [6] Merino, J., Caballero, I., Rivas, B., Serrano, M., & Piattini (2016) A Data Quality in Use model for Big Data. Future Generation Computer Systems, Vol. 63, Issue C, October 2016, 123-130. Elsevier Science Publishers B.V., Amsterdam, The Netherlands. Doi: 10.1016/j.future.2015.11.024. [7] Sonja Zillner, Heiner Oberkampf, Claudia Bretschneider, and Zaveri Amrapali, "Towards a Technology Roadmap for Big Data Applications in the healthcare domain," in IEEE IRI 2014, San Francisco, 2014. 
[8] De Sushovan, Hu Yuheng, Chen Yi, and Kambhampati Subbarao, "BayesWipe: A Multimodal System for Data Cleaning and Consistent Query Answering on Structured BigData," in IEEE International conference on Big Data, 2014.

14:30
Inspector Clouseau at the Olympics: Was the Pink Panther Doped?
SPEAKER: Mara Paun

ABSTRACT. The Pink Panther’s methods in winning the Olympics in 2010 [1] were at least suspicious, given the stringent control WADA exercises over athletes. The Pink Panther, as a professional athlete, has to submit to doping controls in accordance with the World Anti-Doping Code (WADC). This means the Pink Panther has to provide whereabouts information and submit to in- and out-of-competition testing by Doping Control Officers (starring: Inspector Clouseau). There have been ongoing debates among fans on whether the Pink Panther is male or female – an athlete blood passport and blood sample analysis could certainly shed some light on this question and much more. The new WADC and its five additional international standards have been condemned as violating privacy and the principles of personal data processing in the EU, by requiring an excessive amount of information as well as being particularly intrusive during sample collection. Thus, issues have arisen for Member States that have to comply with both the WADC and data protection legislation.

[1]Pink Panther and Pals: Gold, Silver, Bronze and Pink (2010) https://www.youtube.com/watch?v=ZoYPAPjYVz0

14:40
Trusting the Trust Environment: The Human Rights Glitch in the Sharing Economy Trust Dataset
SPEAKER: Gail Maunula

ABSTRACT. The various sharing economy business models and the platforms from which they operate feed on and produce an enormous amount of data. The data generated by this economic, digital and social trifecta not only contribute to these platforms’ success, but also provide useful knowledge and insight about their users. However, none of this success and information would be possible if the data were not first used to build a level of trust between strangers that makes them comfortable interacting in such a personal way. While data drives the sharing economy, trust is the fuel that powers the engine. Without this level of trust, the broad, modern form of the sharing economy we enjoy could not exist.

Sharing economy giants like Uber and Airbnb have created an effective trust environment by requiring certain data from users. This trust dataset includes user profiles, photographs, links to other social media profiles (such as Facebook) and reputational data through a comprehensive rating and review system. While these elements are certainly smart uses of available technology to unveil the user behind the screen, there may be a tendency for some users to exploit this data to implicitly and explicitly discriminate against others. The paradox that this innovative use of technology, branded with monikers like sharing and collaboration, could somehow play host to discriminatory behaviors that jeopardize human rights seems strikingly incongruous. Surprisingly, the innovative nature of this industry has not helped to advance society beyond some of the social ills we have historically faced. Even industry developers admit that they did not foresee the scale of the problem when designing these platforms. The result is that many users cannot place their complete trust in the digital trust environment. It is possible that the structure cannot be trusted to protect their fundamental human rights, barring them from full access to and enjoyment of these services.

When the data requirements to facilitate trust become the data exposure that facilitates discrimination, the necessity for a new strain of regulatory analysis emerges. The current regulatory attention directed towards the sharing economy relates to issues of data privacy law, labor law and competition law. This paper looks at the resulting discriminatory behavior facilitated by the trust dataset in this socio-digital environment through the lens of protections afforded under EU anti-discrimination laws, and assesses the ability of these laws to evolve to address the potential problems that arise from the unique structure of the sharing economy and other socio-digital environments. The impact of this behavior has shaken the market, created a public relations nightmare and resulted in a class-action lawsuit. More important than the industry’s reputation, however, the growing problem of discrimination demands that focus realign with the obligation to protect the human rights of these human actors. Under the current conditions in the sharing economy, a real threat to human rights exists; a threat that cannot be left to the hope that platform reputational concerns and market indicators will correct it.

Discrimination is a problem for the sharing economy. It is not a small problem and it is not a passing problem. Racial, ethnic, gender and religious discrimination have all cropped up in sharing economy transactions. The problem may stem from the structure of the necessary trust dataset that makes its existence possible. But, this dataset may also prove valuable in the fight against discrimination in the EU. There is great reciprocal value in sharing economy platforms’ heightened awareness and commitment to track and combat instances of discrimination. The data collected on discrimination in the sharing economy environment could assist the European Commission in its renewed commitment to fight discrimination by strengthening the effectiveness of the anti-discrimination legal framework. The Commission has recognized a deficit in the available data necessary to assess the scale and nature of discrimination. This sharing of data could enable the Commission to better respond to contemporary challenges.

15:00
Exhaustion of Copyright in Digital Objects
SPEAKER: Dan Burk

ABSTRACT. Copyright produces a curious condition of dual ownership: physical objects in which the copyrighted work is embodied are typically treated as personal property, but the expressive work fixed in the medium constitutes a separate article of intellectual property. Because restraints on the alienation of chattel property are disfavored, copyright law has long provided for purchasers of copyrighted works to dispose of their lawfully purchased copies by resale or other transfer, effectively limiting the entitlement of the copyright owner. This doctrine of “exhaustion” or “first sale” has created opportunities for the development of secondary markets, which are common for used copies of books and other tangible expressive goods.

But the trend toward digitization of books, music, and other expressive works wreaks havoc with this solution. It is unclear what it might mean to "resell" an intangible digital copy, and businesses hoping to deal in used digital goods have encountered legal obstacles under a copyright regime designed for the transfer of material goods. Courts on both sides of the Atlantic have wrestled with this issue, producing a welter of formalist and purposive decisions with no coherent policy trajectory. In this paper I examine recent developments in the United States and in the European Union that will determine whether there can be a market for used digital products, analyzing the comparative approaches of the courts that have engaged this issue, and offering some more sensible policy frameworks for considering exhaustion of rights in digital objects.

14:00-15:30 Session 13G: Alumni 3
Location: DZ2
14:00
Cross Device Tracking: Measurement and Disclosures

ABSTRACT. Internet advertising and analytics technology companies are increasingly trying to find ways to link behavior across the various devices consumers own. This cross-device tracking can provide a more complete view into a consumer’s behavior and can be valuable for a range of purposes, including ad targeting, research, and conversion attribution. However, consumers may not be aware of how and how often their behavior is tracked across different devices. We designed this multidisciplinary study to try to assess what information about cross-device tracking (including data flows and policy disclosures) is observable from the perspective of the end user. Our paper demonstrates how data that is routinely collected and shared online could be used by online third parties to track consumers across devices.

This paper was published in PETS 2017, volume 2, earlier this year. We have also conducted follow-up research, including research into data collection and disclosures by smart TVs, so my talk will also discuss how these factor into the cross-device ecosystem.

14:30
Bringing a child’s home to the Hospital. The use of Virtual Reality in a hospital environment
SPEAKER: Frank Rutgers

ABSTRACT. This paper looks into the use of virtual reality (hereafter: VR) by (young) patients in a hospital setting. A Dutch hospital is currently experimenting with the use of VR systems to give young patients the feeling of being at home whilst lying in the hospital. The first outcomes of the experiment suggest that patients recover faster when they are able to be virtually at home. This paper looks into the legal and sociological implications of bringing the home of patients to the hospital by means of VR. One of the impacts of bringing the home of a patient to the hospital is that the hospital is also brought to the home of the patient. This paper raises the question: what could the implications be of bringing the hospital to the home of patients?

The first part of the paper will examine the privacy and data protection issues surrounding this topic. The central questions of interest are: is Dutch healthcare law robust against the use of this innovation in a hospital setting? And what are the privacy implications of a camera in the middle of a living room? The second part will elaborate on the implications that the use of VR could have on the people closely related to the patient, doctors and other patients in a hospital.

15:00
Mobile health apps, privacy and autonomy: An empirical, legal, and ethical analysis
SPEAKER: Marijn Sax

ABSTRACT. In this paper, we look at privacy and autonomy in the context of mobile health apps, and we examine to what extent the General Data Protection Regulation (GDPR) and consumer protection law can foster these values. Mobile health apps could enhance autonomy as they help people to manage their health and fitness. But mobile health apps could also decrease autonomy. For instance, commercial companies could tell, or nudge, people how to manage their health. These companies may have other interests (such as profit) in addition to an interest in helping people stay healthy (see Petersen et al. 2015). In addition, mobile health apps raise privacy questions, as such apps often enable companies to collect large amounts of data about app users.

We develop a taxonomy of different facets of autonomy and privacy in the mobile health context. We build the taxonomy on the basis of (i) empirical findings from a survey we conducted among a large representative Dutch sample (N > 1,000) on users’ concerns regarding mobile health apps, and (ii) an ethical investigation into the concept of autonomy in the health context.

Next, we examine whether and how the GDPR could foster privacy and autonomy. For instance, the GDPR aims to ensure that personal data are processed only fairly and transparently, and contains stricter rules for health-related data. In addition, we examine to what extent consumer protection law, in particular the Unfair Commercial Practices Directive, can help to protect privacy and autonomy.

We integrate legal, ethical, and empirical research to identify potential problems associated with mobile health apps. We explain why these problems are normatively important, and explore how the law deals with these problems. The overall research question is: How can the GDPR and consumer protection law contribute to safeguarding autonomy and privacy of mobile health apps users?

We contribute to the literature in three ways. First, we take an interdisciplinary approach. The paper is written by legal scholars, a philosopher, and a communication scientist. We combine normative and legal analysis with rigorous survey research. Such empirical-legal research is rare, especially in the context of health apps. Second, this is among the first papers that assesses the relevance of consumer protection law for mobile health apps (see for an exception: Norwegian Consumer Council 2016). Third, few authors have assessed the relevance of European consumer law for privacy and autonomy-related problems (see for exceptions: EDPS 2014; Helberger 2016; Rhoen 2016; Zuiderveen Borgesius 2015).

Structure of the paper

We start by explaining what mobile health apps are and what they are technically capable of. Mobile health apps can collect large amounts of data about users, especially when used in combination with wearables such as the Fitbit activity tracker. Moreover, big data analytics can be used to extract valuable insights from the data that mobile health apps collect.

Next, we introduce the concepts of autonomy and privacy and explain their value in the context of health. We substantiate our normative claims with empirical data from the large representative Dutch sample we collected. Based on our normative arguments in this section, and the discussion of mobile health app technology in the previous section, we map out the most important concerns regarding privacy and autonomy.

Then we turn to law. We assess to what extent the GDPR and the Unfair Commercial Practices Directive can protect privacy and autonomy of mobile health app users. We conclude with policy suggestions and suggestions for further research.

1. Introduction
2. Mobile health apps
3. Privacy and autonomy
4. Mobile health apps: promises and concerns
5. Data protection law
6. Consumer protection law
7. Suggestions for further research
8. Concluding thoughts

Literature

Consumer Council Norway (2016). Fitness wristbands violate European law, 3 November 2016, http://www.forbrukerradet.no/siste-nytt/fitness-wristbands-violate-european-law

Daly, A. (2015). The Law and Ethics of 'Self Quantified' Health Information: An Australian Perspective. International Data Privacy Law (2015), 5(2), 144-155.

Dutch Data Protection Authority (2015). ‘Nike modifies running app after Dutch DPA investigation’, 10 November 2015. https://autoriteitpersoonsgegevens.nl/en/news/nike-modifies-running-app-after-dutch-dpa-investigation

EDPS (2014). European Data Protection Supervisor, Privacy and competitiveness in the age of big data: The interplay between data protection, competition law and consumer protection in the Digital Economy. https://secure.edps.europa.eu/EDPSWEB/webdav/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

Helberger N (2016) 'Profiling and Targeting Consumers in the Internet of Things–A New Challenge for Consumer Law' (Digital Revolution: Challenges for Contract Law in Practice Nomos Verlagsgesellschaft mbH & Co. KG, 2016) 135.

Petersen, C., Adams, S. A., & DeMuro, P. R. (2015). mHealth: Don’t Forget All the Stakeholders in the Business Case. Medicine 2.0, 4(2).

Rhoen, M. (2016). Beyond consent: improving data protection through consumer protection law. Internet Policy Review, 5(1). DOI: 10.14763/2016.1.404

Zuiderveen Borgesius FJ (2015), Improving Privacy Protection in the Area of Behavioural Targeting, Kluwer Law International 2015.

14:00-15:30 Session 13H: Privacy 11: Tracking, personalisation and transparency
Location: DZ3
14:00
Should the use of DRM systems to protect lawful consumption of digital works remain absolute in scope?

ABSTRACT. From a strictly technological perspective, copyright holders can today truly enforce their rights ex ante by creating closed environments in which the consumption of digital works is strictly controlled by DRM systems. The scope of such control is, however, usually attributable to a given business model rather than legal compliance, because the copyright framework (particularly Art. 6 InfoSoc Directive and Art. 11 WCT) does not regulate the use of self-enforcement technology other than generally prohibiting others from circumventing it. Although such protection of DRM systems is, arguably, needed to deter piracy and encourage rightholders to distribute content digitally, once a digital work has been acquired from a (legitimate) source, there is little to nothing that an acquirer (consumer) can do to arrange consumption around their own preferences. Instead, their consumption is subject to complicated, sometimes equivocal, terms regulating the legal relationship, and to device limitations imposed by the DRM system. The reasonable expectations of acquirers can often become misaligned with what they are actually allowed to do (both legally and technologically), and the potential remedies available vary across the EU. Copyright law, both international and EU, currently does little to address this and to maintain a balance between the legitimate interests of rightholders and the legitimate interests of bona fide digital consumers (note, for example, the discretionary character of Art. 6(4) paragraph 2).

Against this background is a framework of consumer protection law whose underlying aim is indeed to maintain a balance between contracting parties where one is in a clearly weaker (bargaining) position. The intersection of copyright and consumer protection law is, however, especially problematic: not only is there insufficient, or a general lack of, subject-specific legislation, but the high costs of litigation, in conjunction with the small sums at stake in disputes, do not incentivise pursuing legal action. This potentially creates market failures on many levels and reinforces the unrestricted use of DRM systems, even though certain informational requirements were introduced by the Consumer Rights Directive in 2011.

Although the EU Commission's copyright reform in this context is commendable to the extent that it purports to bring digital consumers into the consumer protection framework (proposal for a Directive on the supply of digital content) and clearly fill an existing gap, the question that transpires is whether the response ought to lie in the 'simple' modernisation of consumer protection law only, or instead in a clearer alignment of consumer protection law and copyright law (as is indirectly done in the current proposal for a Regulation ensuring the cross-border portability of online content services), for example through more direct regulation of the use of DRM systems.

The paper first addresses consumer expectations and the change in consumption patterns before the arrival of the 'digital market proper', then the functioning and use of DRM systems in the 'digital market proper', the regulation of the use of DRM systems in copyright law, and finally the EU Commission's recent proposals in light of some of the failures of the legal framework concerning the use of DRM.

See generally:

Kubesch, A.S., Wicker, S., Digital Rights Management: The Cost to Consumers (2015) 103(5) Proceedings of the IEEE 726

Dusollier, S., The protection of technological measures: Much ado about nothing or silent remodelling of copyright? In Dreyfuss, R.C. and Ginsburg, J.C. (eds), Intellectual Property at the Edge: The Contested Contours of IP, Cambridge University Press (2015), pp. 253-268

Loos, M., et. al., Analysis of the applicable legal frameworks and suggestions for the contours of a model system of consumer protection in relation to digital content contracts, Final Report Digital Content Contracts for Consumers, CSECL, IViR and ACLE, 2011, Chapter 6

Bradgate, R., Consumer Rights in Digital Products, Research and Analysis Report, Department for Business Innovation and Skills, 2010

European Commission, Staff Working Document, Report to the Council, the European Parliament and the Economic and Social Committee on the application of Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the information society SEC(2007) 1556

Guibault, L., et. al., Study on the Implementation and Effect in Member States’ Laws of Directive 2001/29/EC on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society, IViR, 2007, Chapter 4

Helberger, N., Hugenholtz, P. B., No place like home for making a copy: Private copying in European copyright law and consumer law (2007) 22(3) Berkeley Tech. L.J. 1062

Guibault, L., Accommodating the Needs of iConsumers: Making Sure They Get Their Money’s Worth of Digital Entertainment (2008) 31 J Consum Policy 409

Helberger, N., Digital Rights Management from a Consumer's Perspective [2005] 8 IRIS Plus


14:30
Privacy Impact Assessment for m’health HIV prevention and education apps: the case of Brazil
SPEAKER: unknown

ABSTRACT. I - Summary of the Proposal
The use of technology in the field of health (m'health) is not new. On the contrary, doctors have been assisted by technology for many years, but now they can use apps designed specifically to inform about diseases, help them diagnose patients through smartphones, or even remotely monitor their patients. M'health also allows patients to take an active role in their health. For example, it allows them to keep up with diet and exercise programs, check symptoms and even self-diagnose. There are also apps designed for sick consumers that allow them to self-manage conditions such as hypertension, diabetes, asthma or AIDS (HELM; GEORGATOS, 2014). Especially in a country of the dimensions and diverse challenges of Brazil, m’health promises to enhance the universality of its public health service system, e.g. through mobile assistance by health agents and access to basic diagnosis and technologies. Since the 1990s, Brazil has had a broad public national health service (SUS) that grants universal access and treatment free of charge. As a consequence, the country has one of the most effective public policies to fight AIDS, being one of the first non-developed countries to provide free treatment for its citizens and having reduced the number of deaths caused by the HIV virus (MINISTÉRIO DA SAÚDE). Even though the Brazilian numbers in the fight against AIDS are good (especially compared to the numbers worldwide), there are still some groups within society (e.g. drug users, prostitutes, MSM) which are less exposed to prevention and treatment. In this sense, they are the focus of many public policies. To fight HIV, besides regular projects, federal and local governments have begun investing in medical mobile phone apps. Those initiatives aim to inform the population about the disease, help HIV-positive people control the virus and prevent infections in general.

While some apps only present information to users, one can argue that access to this information is sensitive per se; meanwhile, other apps collect highly sensitive personal data and may present threats to privacy and data protection. The aim of this article is to analyse mobile health apps developed by the Brazilian government to prevent HIV and help HIV-positive people in their treatment, from the perspective of data protection: assessing whether their data collection is adequate, whether patients' data are adequately secured, and the privacy impact of such policies. Given the sensitivity of this data, the spillover effects of misuse and abuse of its collection and treatment are all the more significant.

II - Methodology
Brazil has no comprehensive privacy or data protection regulation. We therefore started by mapping the rules governing privacy, data protection and information security in the sector, examining how doctors' confidentiality norms and bioethics codes actively protect patients' privacy rights, and whether they are efficient and sufficient to guarantee those rights in the digital sphere. Since health data falls under the 'sensitive data' category found in many data protection frameworks, and is therefore subject to a special regime, this served as a general guideline for assessing the compliance of the services with fundamental rules.

ii. Selection of the apps
The app "Tá na Mão" was designed by Metasix, a company hired by São Paulo's municipality. In June 2015, the application became available for download in the app store. Its main goal is to raise awareness of sexual activities that can transmit the HIV virus. The app also features an "HIV FAQ" section, which presents users with answers to general questions about HIV: where to find free condoms, where someone can get tested for the disease, where to initiate treatment and how to use male and female condoms. Besides those features, it offers a "quiz" mechanism to calculate the user's risk of having been exposed to the virus. The algorithm is fed with personal data provided by the user.
The "Viva Bem" app was created by the Federal Government of Brazil in 2015, aiming to improve patients' adherence to HIV treatment. It reminds users to take their anti-retrovirals, as well as other medicines (such as birth control pills or headache medicines) that can be registered in the app if the user wishes to do so. "Viva Bem" warns the user to take his/her medicine using a notification system. It is a simple application that can also function as a diary, allowing the user to monitor his/her CD4 and viral load exams using graphics. The app also notifies the user of scheduled medical appointments.
The "Close Certo" campaign was started in July 2016 by the Federal Government of Brazil, in partnership with the dating app "Hornet". By placing volunteer health agents on the platform, the campaign aimed to promote online instruction about HIV prevention, testing and treatment among one of the most vulnerable groups in society, young gay/MSM people, during Rio's Olympic Games. The Brazilian government analysed HIV statistics and found that HIV-positive cases had increased sharply among the younger population. It therefore pursued public policies that would have greater impact among young people.
"Close Certo" seeks to raise awareness of post-exposure prophylaxis, the need for condom use, and other sexually transmitted diseases by sending Hornet users informational messages. Moreover, 18 young users are prepared to talk to users in order to resolve any doubts regarding the project or HIV transmission.

i. Analysis
We divided the analysis of data collection and treatment in two: (1) the data collected and treated for the purpose of the public policy, and (2) the data automatically collected and treated by the developer, for the two apps ("Viva Bem" and "Tá na Mão") and the "Close Certo" campaign run in partnership with the app "Hornet".
i. a) FoI requests
In the absence of any Privacy Policy or Terms of Use, we issued three requests under Law n. 12.527/11 (the Brazilian Freedom of Information Act) demanding information on: (i) the data collected, (ii) sharing and interconnection of databases, and (iii) data retention.
i. b) Technical Analysis
We studied how the apps collect and treat data technically: whether they rely on a cloud service, which clients and APIs they use, the possible presence of trackers, and the permissions requested on the phone.

15:00
The Use of Knowledge and the Role of Free Market Mechanisms in the Era of Big Data

ABSTRACT. Since the appearance of Friedrich Hayek's seminal article “The Use of Knowledge in Society” in 1945, the superiority of the market economy over the centrally planned system has often been explained by the fact that knowledge and information are distributed among numerous economic actors and do not exist in a concentrated form, which makes their efficient use by any single central authority impossible. From this point of view, market mechanisms facilitate decentralised decision-making and the coordination of individuals' actions, and thereby solve the problem of utilising dispersed bits of knowledge and information. The data understood as unreachable for any centralised planner include not only the individual preferences of economic actors and the available resources and technologies, but also knowledge of particular circumstances of place and time that could be considered insignificant but that nevertheless plays an important role in the functioning of the entire system. Moreover, economic analysis has often underlined the costs of acquiring and communicating information, which also led to the conclusion that decentralisation can provide better results than bureaucratic hierarchical systems. The modern technology that is leading us into the era of the Internet of Things and Big Data openly challenges the assumptions highlighted above. Information is becoming more and more concentrated, and it is quite probable that even “knowledge of particular circumstances of place and time” will soon be collected in centralised datasets. The costs of acquiring and communicating information are rapidly decreasing, and the ubiquitous sensors of the Internet of Things will transform this procedure into an entirely mechanical process.
This picture of the new world is something completely different from the one used in the economic analysis of the twentieth century, and it raises questions about our understanding of what the market economy is and what might be the best way of governing it. Meanwhile, the new “centralised authority” of our economy is represented by a group of private entities, which induces reasonable concerns from pundits and the academic community; but can the market itself provide any feasible response when even neo-classical arguments support the establishment of the new order? If we understand the free market not only as the laissez-faire system, which never actually governed the new economy, but as an indistinguishable part of Adam Smith’s concept of the “obvious and simple system of natural liberty”, then it is possible to argue that even in the era of Big Data, when arguments about "dispersed bits of knowledge" become less relevant for explaining the superiority of the market economy, market mechanisms are still necessary for the solution of not only economic but also social issues. From this point of view, the issue might be represented as a problem of power concentration, and thus the ways to avoid or reduce this concentration in the new economy should be analysed.

14:00-15:30 Session 13I: IP8: Ownership of non-personal data: the impact on scientific research
Location: DZ5
14:00
Implementation of Data Protection by Design and by Default: Framing guiding principles into applicable rules

ABSTRACT. Data Protection by Design and Data Protection by Default (DPbD) are regularly endorsed by European policy-makers, but have found limited practical application. In 2016, they left the realm of “buzz words” and entered that of legal obligations, once the European General Data Protection Regulation (‘GDPR’) was adopted. Moreover, failure to comply with the new obligations could have a serious impact on companies due to the prospect of large fines. The principles underlying these obligations aim at integrating privacy throughout the lifecycle of the various technologies and applications that process personal data. At the same time, the practical implementation of DPbD is tremendously complex because of the uncertainty surrounding the meaning of these principles. Challenges for engineers include the need for contextualisation, ambiguous legal language, and the diverse ethical values and social perceptions that accompany the fundamental rights at stake: the right to respect for private life and the right to protection of personal data. In parallel, big data applications, such as predictive analytics in consumer marketing, intensify the interference with the right to personal data and create the need for ‘by design’ and ‘by default’ protection. But how could such an elusive concept be enforced? This paper aims at bridging the gap between legal requirements and technical solutions for the conceptualisation of DPbD best practices, by answering the question: what are the elements of the DPbD obligations under the new Article 25 GDPR, and how could they be applied in practice? Our paper is developed in two parts. The first part will translate legal obligations into enforceable prerogatives by setting out the framework of what the GDPR understands by DPbD according to its Article 25. It will explore the various ways to demonstrate compliance with the DPbD obligations.
Our interpretation will also explain the relation of DPbD to other concepts essential to the EU Data Protection Framework and to Privacy Enhancing Technologies (PETs), extending their meaning beyond the notion of information security. In the second part, we will apply and validate the requirements explored above in three concrete scenarios. In particular, we will focus on predictive analytics in consumer marketing, mHealth applications, and search engines that have to comply with right-to-be-forgotten requirements. The results will be discussed in the concluding section, claiming that, ultimately, the entire Regulation is a meta-privacy-by-design system.

14:30
Robot Doctors and Algorithm Therapists – The Limits of Automated Decision-making in Healthcare

ABSTRACT. This paper probes questions surrounding the decision-making capacity of autonomous systems in healthcare from a legal perspective: to what extent is it legal for autonomous assistant systems to replace the decisions of doctors and therapists? If this happens, various areas of law are affected, including classical Medical Law as well as Data Protection Law and Basic Rights.

The contribution looks into the rising significance of autonomous assistant systems in healthcare. It introduces two specific examples of therapy systems. The systems in question are currently being developed for stroke patients and scoliosis patients and operate with sophisticated ‘assist-as-needed’-functions. On the basis of these examples, I first discuss how the new technologies affect the right not to be subject to automated decisions regulated in Art. 22 General Data Protection Regulation. Second, I examine whether the technologies are compatible with the duties of practitioners as regulated in Medical Associations’ codes of conduct. In particular, this contribution focuses on the legal challenges we face in the case of healthcare environments which are ‘embedded’ in assistant systems and the importance of transparency for both patients and doctors/therapists.

15:00
Uncovering the Invisible: Studying Algorithmic Online Copyright Enforcement

ABSTRACT. The Notice & Takedown (N&TD) regime, facilitated by the Digital Millennium Copyright Act (DMCA), has turned online intermediaries into a copyright enforcement arm. The law offers online intermediaries immunity from liability in exchange for the removal and blocking of allegedly infringing materials in response to takedown notices. N&TD offers copyright owners an efficient procedure for enforcing their rights in digital networks, without necessitating prosecution and conviction of infringers in a court of law. N&TD procedures have become widespread. Consequently, the N&TD regime is mostly implemented by algorithms, which are embedded in the architecture of online intermediaries. These algorithmic enforcement mechanisms are applied by private entities, for the most part commercial players, with only limited transparency and almost no legal oversight compared to the traditional enforcement arena, the courts. Unlike copyright enforcement in courts, where decisions become public, we know very little about the players in the online enforcement regime: Who are the filers of notices? What materials are being removed? What legal claims are raised? How often are they challenged? The proposed study seeks to uncover this robust copyright enforcement scene, which has so far remained invisible, by offering empirical data on the algorithmic implementation of N&TD, based on an analysis of removal requests filed with Google. Google regularly receives an immense number of notices from copyright owners to remove links to infringing materials from its search results (removal requests). Removing links to such content from the search results can significantly reduce the traffic to a website. The concern is that the robust copyright enforcement offered by Google may also result in censorship. Indeed, Google voluntarily publishes information on removal requests in periodical reports (the Google Transparency Report, or GTR).
Yet these reports fail to provide a comprehensive picture of the enforcement actions. Google also forwards full removal requests to Chilling Effects, a database run by a nonprofit organization that collects requests for removal of online materials. In this study we compiled a unique database of about 10,000 removal requests sent to Google over a period of six months regarding allegedly infringing materials residing in .il domains. The notices were systematically analyzed and coded. The proposed study will analyze the database to explore the nature of algorithmic copyright enforcement actions: Who are the blockbuster filers? Who is being targeted? What types of copyrighted works are most likely to be subject to removal requests? What are the outcomes of the removal procedure? To what extent does this mechanism comply with fair use? Initial findings of the notice analysis indicate that the N&TD regime is used not only for copyright enforcement, but also to enforce the controversial "right to be forgotten". The analysis of this database could offer empirical evidence on this non-transparent, and now robust, online copyright enforcement. Such a deeper understanding of N&TD in practice might be essential for policy-making pertaining to copyright enforcement by online intermediaries.

15:30-17:00 Drinks