TILTING2017: TILTING PERSPECTIVES 2017
PROGRAM FOR THURSDAY, MAY 18TH


09:30-11:00 Session 7A: PLSC 3A: Yakovleva
Location: DZ7
09:30
Building Agency in Digital Systems
SPEAKER: Sean McDonald

ABSTRACT. Building Agency in Digital Systems will explore legal theories and mechanisms for agency in the governance of digital platforms and companies.

10:00
(Ac)counting risks, producing publics: exploring the role of vaccination registries in public health

ABSTRACT. During flu season, vaccination rates are prominent objects of media interest, and the likelihood of peaks (of infection rates) and all-time lows (of vaccine uptake) makes for frequent conversations. These projections suggest that vaccination data is collected, structured, and exchanged between individuals, administrators of vaccinations, and local and regional authorities with great care and efficiency and in a harmonized fashion. Yet, as this paper shows, these methods of data collection are not universal, but rather contingent upon the governance systems in place in a particular regional or national context. More specifically, this paper reports on early findings regarding the ways in which vaccination data is collected (or not) and shared (or not) in Austria and the Netherlands in so-called “vaccination registries”. In doing so, we document the different ways in which notions of public and private health risk are counted and accounted for, documented, captured, and reproduced. Methodologically, we draw on policy documents as well as secondary literature and publicly available visualizations of epidemiological data. Our findings suggest that vaccination registries, first, order state-society relations in particular ways and, second, reproduce and embody notions of compliance and deviance.

10:30
Open Governments and Transparency in an Age of Data Analytics
SPEAKER: Teresa Scassa

ABSTRACT. In October 2016, Geofeedia made the news when it was reported that police services in North America had contracted with it for data analytics based upon georeferenced information posted to social media websites such as Twitter and Facebook. This information is treated as “public” in the United States. Geofeedia is not the only data analytics company to mine social media or other “volunteered” data and to market its services to government authorities. In fact, it is part of a growing trend which sees the private sector harvesting, analyzing and packaging consumer data for government clients. While the specific Geofeedia example raises important issues around the transparency of state surveillance activities and the targeting of protestors exercising their constitutional rights to free speech, the broader phenomenon offers a window onto a developing shift in how governments source both the data and the analytics upon which they rely for decision-making. This shift raises important challenges to the ‘open government’ movement and to ‘right to know’ or access to information legislation by juxtaposing commitments to transparency with a layer of proprietary private sector data and algorithms that governments are not permitted to share. The impact on transparency will likely be felt not just through a lack of access to (reusable) government data, but also through proprietary restrictions that inhibit a critical assessment of both the data and the algorithms used to analyze it. Using an interdisciplinary approach that adopts a critical data studies lens, this paper will examine the relationship between ‘open government’ principles and the growing use by governments of outsourced data and data analytics.

09:30-11:00 Session 7B: Privacy 4: Privacy and Data Protection by Design
Location: DZ1
09:30
Transparency and Privacy in Smartphone Ecosystems: A Comparative Perspective
SPEAKER: unknown

ABSTRACT. This is the first paper to come out of a two-year multidisciplinary research project of the University of Amsterdam and MIT on transparency in smartphone ecosystems, funded by NWO and NSF. It will address the question of how transparency requirements in data privacy law map to the smartphone context, looking at the way in which different regulatory environments for data privacy (EU and US) shape transparency about the collection and use of personal data in the dominant smartphone ecosystems (Android and Apple iOS). The project is unique in the way it combines detailed technical insight from the field of Human Computer Interaction (HCI) with legal and policy analysis.

Billions of people use smartphone apps for a variety of purposes, and both free and paid-for apps have become a lucrative business. Apps tend to request access to people’s personal data, which in some cases is not strictly required for functionality. Even if there is a legitimate need for access to personal data, this access may not be transparent to users, and once access is granted, the data may be reused for other purposes (e.g. marketing or advertising) in ways that escape the attention of smartphone users. Because of the importance of smartphones and the pervasive collection and use of personal data, mobile privacy has become an important topic in regulatory and policy debates. European and US regulators [1], [2], [3], [4] have addressed privacy in the smartphone context and have offered guidance for app store markets and app developers to improve mobile privacy. Transparency, an important goal of data privacy law [5], is one of the key privacy issues in the smartphone context. Scholarship notes the challenge of offering meaningful transparency in practice [1], [2], [6].

Previous research has shown that many smartphone users lack the knowledge needed to change privacy control settings and mistakenly trust that apps will protect the privacy of their data [7]. Only a few users actually read and understand the implications of transparency mechanisms in Android apps [8], [9]. Many people may hold unrealistic beliefs about how well their data are protected by regulation, consider information on their smartphones to be private, and would overwhelmingly reject the collection of their data if they were better informed [9]. Users’ privacy expectations often do not reflect current practices. Many apps transmit to third parties sensitive data that users intend to use only on-device [10], and such secondary usage often happens without users even realizing it. Even when secondary usage is anticipated (e.g. location being used for advertising purposes), applications often request this information far more often than users expect them to [14], [15]. Given the disparity between users’ expectations of privacy [11] and the opaque practices of some data collectors [12], researchers have sought to address transparency by making privacy part of the app selection decision process [13], and by exposing data leakage [14]. Robust approaches to transparency in the smartphone context are largely still lacking.

The paper will address the different legal requirements and policy guidance, as well as the technical implementations, shaping transparency in the respective ecosystems. Transparency mechanisms are provided as part of specific applications, in the form of targeted notices as well as through privacy policies, and have shifted from pre-install notices to requests for access during use. The paper will identify the applicable transparency mechanisms and requirements on apps within the two ecosystems (Android and iOS) and seek to understand how these map to the relevant regulations (EU and US) and policy developments.

Specifically, the paper will address the research question: what are the transparency mechanisms in iOS and Android, and how do they map to transparency requirements in the EU and US data privacy frameworks, respectively? To answer this question, the paper will build on an analysis of US and EU legal requirements on application providers and ecosystem providers, focusing on regulations aimed at mobile apps and transparency about data collection and use practices. On the EU side, the analysis will include a discussion of EU law as well as regulatory guidance and enforcement action. On the US side, it will include an analysis of US federal laws, case law, state laws, and regulatory enforcement action (including by the FTC). Specific attention will be paid to cross-jurisdictional data flows and regulatory arrangements such as the recently adopted ‘Privacy Shield’.

The legal analysis will be grounded in a technical analysis of how transparency about personal data flows is operationalized in practice in Android and iOS, including through the use of privacy policies, notices and authorizations. This technical analysis will also address how access to personal data (at the level of individual methods) is reflected and enforced in the respective ecosystems (for example, through dialog boxes or permission lists). On this basis, an analysis will be made of the differences and similarities between transparency mechanisms on Android and iOS devices and how they map to the different legal requirements in the EU and the US.
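By way of illustration, the following minimal Python sketch shows the kind of technical analysis described above: enumerating the data-access permissions an Android app declares in its manifest, as a starting point for comparing declared access against the transparency offered to users. The file path and the subset of permissions are illustrative; this is not part of the project's published tooling.

```python
# Minimal sketch: list the permissions an Android app declares in a
# decompiled AndroidManifest.xml (e.g. extracted with apktool).
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative subset of permissions that gate access to personal data
# (Android labels these "dangerous" permissions requiring runtime consent).
PERSONAL_DATA_PERMISSIONS = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_SMS",
}

def declared_permissions(manifest_path: str) -> set:
    """Return all <uses-permission> names declared in the manifest."""
    root = ET.parse(manifest_path).getroot()
    return {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }

if __name__ == "__main__":
    perms = declared_permissions("AndroidManifest.xml")  # illustrative path
    sensitive = perms & PERSONAL_DATA_PERMISSIONS
    print(f"{len(perms)} permissions declared, "
          f"{len(sensitive)} touching personal data: {sorted(sensitive)}")
```

Such a declared-permission inventory can then be set against what the app's privacy policy and runtime dialogs actually disclose to the user.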

Outline of the Paper:

1. Introduction
2. Background and Research Design
3. Transparency Mechanisms in Android and iOS
4. Transparency Requirements in EU and US Data Privacy Law and Policy
5. Transparency Requirements in Comparative Perspective: Discussion
6. Conclusion

References:

[1] O. Ben-Shahar, A.S. Chilton, 'Simplification of Privacy Disclosures: An Experimental Test' (2016).
[2] N. van Eijk, R. Fahy, H. van Til, P. Nooren, H. Stokking, H. Gelevert, Digital Platforms: An Analytical Framework for Identifying and Evaluating Policy Options, 2015.
[3] S. Meurer and R. Wismüller, Apefs: An infrastructure for permission-based filtering of Android apps. In Security and Privacy in Mobile Information and Communication Systems, volume 107, pages 1–11, 2012.
[4] Article 29 Working Party, ‘Opinion 02/2013 on apps on smart devices’ (WP 202) 27 February 2013.
[5] P. De Hert, S. Gutwirth, ‘Privacy, Data Protection and Law Enforcement. Opacity of the Individual and Transparency of Power’ in Claes E, Duff A and Gutwirth S (eds), Privacy and the Criminal Law, Intersentia 2006.
[6] M. Hildebrandt, 'The Dawn of a Critical Transparency Right for the Profiling Era' in Bus, J. and others (eds), Digital Enlightenment Yearbook (IOS Press 2012).
[7] Y.J. Park, S.M. Jang, Understanding privacy knowledge and skill in mobile communication. Computers in Human Behavior 38 (2014), 296–303.
[8] A.P. Felt, E. Ha, S. Egelman, A. Haney, E. Chin, D. Wagner, Android permissions: User attention, comprehension, and behavior. In Proc. of ACM SOUPS (2012), 3.
[9] I. Liccardi, J. Pato, D.J. Weitzner, H. Abelson, and D. De Roure. 2014. No Technical Understanding Required: Helping Users Make Informed Choices About Access to Their Personal Data. In Proc. of ACM MobiQuitous ’14, 140–150.
[9] J. Urban, C. Hoofnagle, S. Li, Mobile phones and privacy. In UC Berkeley Public Law Research Paper (2012).
[10] W. Enck, P. Gilbert, B.-G. Chun, L.P. Cox, J. Jung, P. McDaniel, and A.N. Sheth. TaintDroid: an information-flow tracking system for realtime privacy monitoring on smartphones. In Proc. of OSDI ’10, pages 1–6.
[11] K. Shilton, K.E. Martin, Mobile privacy expectations in context. In Proc. of Communication, Information and Internet Policy (2013).
[12] I. Liccardi, J. Pato, and D.J. Weitzner. Improving Mobile App Selection through Transparency and Better Permission Analysis. Journal of Privacy and Confidentiality: Vol. 5: Iss. 2, Article 1, pages 1–55, 2014.
[13] P.G. Kelley, L.F. Cranor, and N. Sadeh. Privacy as part of the app decision-making process. In Proc. of ACM CHI ’13, pages 3393–3402, 2013.
[14] R. Balebako, J. Jung, W. Lu, L.F. Cranor, C. Nguyen, “Little Brothers Watching You”: Raising Awareness of Data Leaks on Smartphones. In Proc. of ACM SOUPS (2013), 12.
[15] J. Lin, S. Amini, J.I. Hong, N. Sadeh, J. Lindqvist, J. Zhang, Expectation and purpose: understanding users’ mental models of mobile app privacy through crowdsourcing. In Proc. of ACM Ubicomp (2012), 501–510.

09:30-11:00 Session 7C: HC 3 - Governance challenges of smart technologies for healthy communities
Location: DZ5
09:30
Privacy spaces

ABSTRACT. This paper proposes a definition of privacy as ‘having spaces in which you can be yourself.’ Although I realise that yet another definition of privacy is not what privacy scholarship is waiting for, I hope to demonstrate in the paper that conceiving of privacy in relation to the spaces in which privacy can be experienced has added value in two respects. First, it allows bringing together a wide range of privacy scholarship. By identifying connections between seemingly disparate approaches in the literature (in itself already a useful exercise), it can be shown that privacy is a more unified notion than is often claimed. Second, it sheds new light on the challenge of digital or online privacy. By focusing not on personal data but on the digital spaces that contain personal data, the interest(s) underlying the need for online privacy can be better understood. The paper will start by defining privacy as a combination of ‘spaces’ and ‘being able to be yourself’, building on existing conceptualisations of privacy in terms of (boundary management of) spaces (e.g., Altman) and identity-building (e.g., Agre), with references to related literature. The paper then discusses how this definition relates to major conceptualisations of privacy, trying to show how the notions of Warren and Brandeis (being let alone), Westin (control over information), Johnson (freedom from judgment), Petronio (boundary management), Nissenbaum (contextual integrity) and Cohen (room for play) have significant overlap with (or may sometimes even be equivalent to) the notion of identity-fostering spaces. Subsequently, I will discuss the main spaces in which it may be important to be able to be yourself: body, mind, private conversation space, home, other private places, publicly accessible places, and public space. Finally, the paper discusses the problem of digital space. The challenges of digital privacy are partly related to the fact that digital space comprises (and collapses) different types of spaces (personal, intimate, semi-private, and public), which are less visibly separated than their equivalent traditional physical spaces. Moreover, digital space is usually conceived of as an extension of traditional communications space, which overlooks the fact that part of digital space is an extension of the home (private or intimate space) rather than of telecommunications. With reference to recent legal debates on protecting digital containers (BVerfG 2008; Riley v. California; the Italian doctrine’s ‘cybernetic home’), I will show the complexity of capturing the privacy interest(s) associated with digital space(s), and discuss how framing the problem of digital privacy in spatial terms may at least help to better understand the challenges, which is a precondition for addressing them.

09:30-11:00 Session 7D: IP3: Collective Management of Copyright
Chair:
Location: DZ4
09:30
Panel: New safeguards of data protection law: a critical outlook

ABSTRACT. In order to achieve its goals of fair, lawful and transparent processing of personal data, EU data protection law has traditionally relied upon a number of mechanisms and instruments, otherwise referred to as safeguards, such as consent, data minimization, or purpose limitation.

However, it seems that evolutions in processing practices, such as so-called big data analytics and the new business models that go along with them (e.g. platform capitalism stemming from the digital economy, or mass surveillance operated by security agencies), have called into question their practical implementation and their efficacy in ensuring effective protection of data subjects. For instance, the data protection literature has already underscored the contradictions between minimizing data and collecting sufficient data to avoid discriminatory outcomes. Equally, the difficulties of adequately implementing consent (which is often seen as the prime basis for legitimizing the processing of data) have also been outlined.

In the face of these computing developments, new safeguards have been proposed, and some have been enshrined in the newly adopted General Data Protection Regulation (GDPR) and Data Protection Directive (DPD). The Regulation has, for instance, updated its provision on the right not to be subject to a decision based solely on automated profiling, providing for explicit safeguards when such processing is undertaken (e.g., the right to human intervention or to contest the decision). A much talked-about safeguard, which is not explicitly enshrined in the GDPR, is that of “algorithmic transparency”. One can see it as an update of the data subject's right to be informed, applied to a specific case, namely algorithms, the functioning of which is increasingly opaque to all. Finally, the Regulation also adopts a somewhat broader perspective insofar as it provides for a new method for determining the acceptability of envisaged processing operations and the associated safeguards: the risk-based approach (whose spirit is evident in the new provisions concerning data protection impact assessments and data protection by design).

Thus, as we witness the development of yet another “new generation” of data protection instruments, and far from adopting positions that would simply discard already existing safeguards on the grounds that they are not adapted to the contemporary data processing reality, it is important to investigate the promises these new safeguards bear, how they operate in practice, what exactly can be expected from them, and how they articulate with already existing safeguards.

Finally, and looking at the broader picture, it remains to be seen whether and how these safeguards will be implemented in practice. This is particularly important in the light of developments in the digital economy and in the field of national and international security. The problematic and monopolistic role of platforms, and the approach to data protection of national and international security actors, which are becoming the prime actors for operationalizing both “old” and “new” safeguards, deserve further analysis.

09:30-11:00 Session 7E: Data Science 4 - Algorithmic futures
Location: DZ8
09:30
Managing GDPR on the Ground – Implementing the requirements of the Regulation.
SPEAKER: Paul Jordan

ABSTRACT. TBD

10:00
A FAIR Data architecture for data protection of m-Health and e-Health solutions in East Africa

ABSTRACT. Authors:

Prof. Dr. Mirjam van Reisen (Leiden Center for Data Science/LIACS & Tilburg University, Department of Culture Studies) mirjamvanreisen@gmail.com

Dr. Erik Schultes (Dutch Tech Center for Life Sciences, Leiden University Medical Center, Leiden Center for Data Science) erik.schultes@dtls.nl

Prof. Dr. Gibson Kibiki (East Africa Health Research Department) gkibiki@eachq.org

Prof. Dr. Barend Mons (Leiden University Medical Center) barend.mons@dtls.nl

Dr. Albert Mons (Dutch Tech Center for Life Sciences) albert.mons@dtls.nl

Abstract

Health services in rural areas in Africa have declined substantially in the past decades; brain drain, population growth and a lack of investment in primary health care facilities have further contributed to the decline (World Bank, 2014).

The development of ICT-based solutions for health care is regarded as a new way of revitalising the health care services delivery chain within these contexts, as the use of ICTs can redefine the required capacities within health care systems, bring new efficiencies of data collection and analysis, and create data exchange and interoperation capacities to analyse health trends and their localisation.

Countries in East Africa, such as Kenya and Rwanda, have set their future health policy entirely in the context of advancing e-Health and m-Health solutions (Government of Rwanda, 2009; Government of Kenya, 2011; Ventures Africa, 2015).

In order to assess the potential of such solutions, two foundational concerns need to be considered. The first is system limitations and the diversity of ICT architectures, such as latency, congestion and outmoding (Johnson & Mikeka, 2016; Mawere & van Stam, 2015; Brodkin, 2016). The e-health policies for East Africa still lack approaches to address problems of system diversity; this need is addressed separately at the TILTing conference (van Reisen & Dechesne, 2016/2017).

A second area of consideration is the question of data privacy and data protection, given the sensitive personal health data involved. This problem is not limited to East Africa but is increasingly recognised as a challenge for digitalised health care developments in general. The Netherlands' Minister of Health, for example, was recently criticised for violating privacy rights and failing to propose a credible approach to protecting personal health care data in her plans to link such data to insurance companies (NOS, 2016). FAIR data in a Personal Health Train setting (www.personalhealthtrain.nl) are believed to mitigate this risk.

Here, we consider the potential of emerging FAIR Data infrastructures, in which the producer of data maintains fine-grained control over the long-term use of these data. FAIR data refers to architectural principles that permit data to be Findable, Accessible, Interoperable and Re-usable (Wilkinson et al., Scientific Data 3, 160018 (2016)) by both humans and machines. The FAIR Data principles propose an architecture for the deployment of existing technologies (Wilkinson et al., PeerJ Preprints 4:e2522v1) to create core elements of the “Internet of Data”: that is, a protocol for data exchange and interoperation and an encryption-permission system that protects data from use unless the producer and owners of such data explicitly agree to their use (DTL, 2016). The FAIR Data principles have been adopted as a main feature of the Open Science Cloud (European Commission, 2016a) and the Data Commons of the US National Institutes of Health (NIH, 2016), and have been endorsed by the G20 (European Commission, 2016b).
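As a rough illustration of the consent-gated access idea behind this architecture, the Python sketch below models a FAIR-style record whose owner must explicitly permit each use. All names, fields and the API are hypothetical, not an actual FAIR or Personal Health Train implementation.

```python
# Minimal sketch: data stays with its producer, and a request is only
# served if the owner's consent policy explicitly covers the stated use.
from dataclasses import dataclass, field

@dataclass
class FairRecord:
    identifier: str      # Findable: globally unique, persistent ID
    metadata: dict       # Findable/Interoperable: machine-readable description
    owner: str           # Accessible: who controls reuse
    permitted_uses: set = field(default_factory=set)  # Reusable: explicit licence

    def request_access(self, requester: str, purpose: str):
        """Serve the record only for uses the owner explicitly agreed to."""
        if purpose not in self.permitted_uses:
            raise PermissionError(
                f"{requester}: use '{purpose}' not authorised by {self.owner}")
        return self.metadata  # in a real system: a pointer to the data, not a copy

record = FairRecord(
    identifier="https://example.org/dataset/vaccination-uptake-2016",
    metadata={"topic": "vaccination uptake", "region": "East Africa"},
    owner="clinic-042",
    permitted_uses={"epidemiological-research"},
)

print(record.request_access("eahrc", "epidemiological-research"))  # allowed
# record.request_access("insurer", "premium-pricing")  # raises PermissionError
```

The design point is that consent travels with the record itself, rather than being negotiated per database or per data copy.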

East Africa is regarded as a good setting for FAIR Data because: (i) the East African Community is already integrating into an open market of goods and services with free mobility of people; (ii) the East African Community has placed a high priority on the development of regional health policies to support deeper integration in the region; (iii) the East Africa Health Research Department is charged with supporting policy development through regional research, and the aim of establishing an East African Open Science Cloud is a key priority of the regional body.

The article will present the following arguments: (i) the relevance of m-health and e-health for the development of health care in the East African region; (ii) the challenges of such a development from the perspective of personal data protection and control; (iii) the architecture of a FAIR Data Open Science Cloud; (iv) an assessment of the potential of this architecture to provide a solution for data protection; (v) the potential of applying the Open Science Cloud in an East African context.

The article will conclude with an assessment of a FAIR Data East African Open Science Cloud architecture in providing an adequate basis for personal data protection while offering a foundation to expand e-health and m-health applications in this region.

References:

Brodkin, J. (2016) Can’t Stop Elon Musk. SpaceX plans worldwide satellite Internet with low latency, gigabit speed. SpaceX designing low-Earth orbit satellites to dramatically reduce latency. Ars Technica (17.11.2016): http://arstechnica.com/information-technology/2016/11/spacex-plans-worldwide-satellite-internet-with-low-latency-gigabit-speed/

DTL (2016) Fair Data Personal Health Train. http://www.dtls.nl/fair-data/personal-health-train/

European Commission. (2016a) Open Science Cloud. Available at: http://ec.europa.eu/research/openscience/index.cfm?pg=open-science-cloud

European Commission (2016b) article 12. Press Release. Available at: http://europa.eu/rapid/press-release_STATEMENT-16-2967_en.htm.

Government of Kenya (2011) Kenya National E-Health Strategy 2011-2017, Ministry of Medical Services, Ministry of Public Health and Sanitation, Kenya. Available at: https://www.isfteh.org/files/media/kenya_national_ehealth_strategy_2011-2017.pdf

Government of Rwanda (2009) The National E-Health Strategic Plan 2009-2013, Ministry of Health, available at: https://www.isfteh.org/files/media/rwanda_national_ehealth_strategy_2009-2013.pdf

Johnson, D.L. & Mikeka, C. (2016) Bridging Africa’s Broadband Divide. How Malawi and South Africa are repurposing unused TV Frequencies for rural high-speed internet spectrum. IEEE.org International (29aug 2016), available at: http://spectrum.ieee.org/telecom/internet/malawi-and-south-africa-pioneer-unused-tv-frequencies-for-rural-broadband

Mawere, M. and Stam, van, G. (2015) ‘Paradigm Clash, Imperial Methodological Epistemologies and Development in Africa: Observations from rural Zimbabwe and Zambia.’ In: Munyaradzi Mawere and Tendai Mwanaka, editors, Development, Governance, and Democracy: A Search for Sustainable Democracy and Development in Africa, p. 193–211. Langaa RPCIG, Bamenda.

NOS (2016) Minister Schippers krijgt Prijs voor Grootste Privacy Schender [Minister Schippers receives award for biggest privacy violator]. http://nos.nl/artikel/2143135-minister-schippers-krijgt-prijs-voor-grootste-privacy-schender.html

Reisen, van, M., Fulgencio, H. T., van Stam, G., Ong’ayo, A. O., & van Dijk, J. H. (2016). mMoney Remittances: Contributing to the Quality of Rural Health Care. In Africomm 2016, 6-8 Dec 2016, Ouagadougou, Burkina Faso.

Reisen, van, M. and Dechesne, F. (2016/2017) mMoney, Mobility and Healthcare: Towards a Meta-Theory of System Diversity. TILTing conference 2017, University of Tilburg.

Ventures Africa (2015) Rwanda tasked with pioneering eHealth for East Africa. Available at: http://venturesafrica.com/rwanda-tasked-with-pioneering-ehealth-for-east-africa/

US National Institutes of Health (NIH) (2016) The NIH Commons. Commons Overview, Framework and Current Pilots. Available at: https://datascience.nih.gov/commons

Wilkinson, M.D. et al. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3, 160018 (2016).

Wilkinson MD, Verborgh R, Bonino da Silva Santos LO, Clark T, Swertz MA, Kelpin FDL, Gray AJG, Schultes EA, van Mulligen EM, Ciccarese P, Thompson M, Kaliyaperumal R, Bolleman JT, Dumontier M. (2016) Interoperability and FAIRness through a novel combination of Web technologies. PeerJ Preprints 4:e2522v1 https://doi.org/10.7287/peerj.preprints.2522v1

World Bank (2014) Kenya Economic Update : Take off Delayed? - Kenya's economy facing headwinds in 2014 with a special focus on delivering primary health care services (Vol. 2) : Main report. Available at: http://documents.worldbank.org/curated/en/849741468273022634/Main-report

10:30
The adoption of drones within data-farm realities in Brazil: risks and opportunities for regulation

ABSTRACT. Following international trends towards the opportunities and possibilities of precision agriculture (PA), Brazil, as a global agricultural producer, has endeavored to keep up with this process in terms of technology, legal framework and economic features. Projections of higher global demand for food and heated debates about more sustainable practices in agriculture, combined with a growing integration of Information and Communications Technologies (ICTs) on the farm for various purposes, have mobilized and materialized this new kind of data-driven reality, intended to provide better management of agricultural activities. However, and no different from other contexts, the country faces big challenges in regulating PA technologies. Exploring the case of Unmanned Aerial Vehicles (UAVs), also known as drones, this paper addresses key opportunities and challenges related to the adoption of drones for agricultural purposes in Brazil. Much emphasis has been given to the fact that Brazil is known as a relevant “drone actor” for farm and land management, and technological advances expand the possibilities for the use of unmanned aircraft in agriculture. In this scenario, private companies (e.g. Qualcomm Incorporated), third-sector organizations (e.g. the Institute of Socioeconomic Solidarity, ISES) and public research institutions (e.g. Embrapa) have been working together to develop drone technologies that support farmers in reducing environmental impacts and increasing crop productivity. While great effort has been dedicated to the discussion and regulation of certification and authorization for commercial flight, under the responsibility of the National Civil Aviation Agency (ANAC), the country still lacks a regulatory framework specifically focused on the collection, storage and processing of personal data, as well as access to and control over such data. Against this background, the paper first maps the key actors, arguments and indicators associated with the drone market for agricultural uses in Brazil. Second, the key technologies and data required to achieve the sector's goals are described. These are then discussed with regard to data protection and privacy concerns, not only in Brazil but also in other Latin American countries experiencing similar processes. Specific attention is given to the concept of data ownership in the adoption of UAVs for precision agriculture services and technologies. Empirically grounded in interviews with stakeholders and in a literature review, the analysis is guided by the question: how may data ownership and personal data regulation be considered a challenge or an opportunity in the expanding use of drones on farms? Finally, controversies and research opportunities concerning data control and the privacy implications attached to drones are also outlined.

09:30-11:00 Session 7F: Privacy 5: Robots, Drones, and Privacy
Location: DZ3
09:30
The Role of Ethical Data Frameworks in ensuring Algorithmic Accountability

ABSTRACT. As the volume of user data being collected and generated keeps growing, an increasing number of decisions regarding its selection, classification and filtering are being automated for all kinds of purposes, from granting loans and recruiting new employees to ranking search results and translating texts. The creation, design and deployment of machine-learning algorithms are behind this new reality of automated decision-making, and are crucially important for providing better and innovative products and services to society.

This panel will discuss what transparency and fairness mean in this new world of machine learning, and how companies can approach algorithmic accountability and fairness through ethical frameworks and principles.

Speakers: • Norberto Andrade, Privacy & Public Policy Manager, Facebook, nandrade@fb.com • Experts from Microsoft, academia and the European Commission to be confirmed.

09:30-11:00 Session 7G: PLSC 3B: Koops
Location: DZ6
09:30
Internet Privacy Engineering Network (EDPS)

ABSTRACT. The IPEN (Internet Privacy Engineering Network) initiative was founded in 2014 by the European Data Protection Supervisor, with the support of several national Data Protection Authorities and universities. It supports the creation of engineering groups working on (re)usable building blocks, design patterns and other tools for selected Internet use cases where privacy is at stake. IPEN invites participants from different areas, such as data protection authorities, academia, open source and business development, and other individuals who are committed to finding engineering solutions to privacy challenges. The objective of the work is to integrate data protection and privacy into all phases of the development process, from the requirements phase to production, as is most appropriate for the development model and the application environment. IPEN also supports networking between engineering groups and existing initiatives for engineering privacy into the Internet. This network facilitates exchange in order to coordinate work and avoid duplication, in addition to discussing which privacy-oriented use cases should be addressed with priority. The purpose of the IPEN panel is to stimulate the discussion on privacy engineering in the light of the General Data Protection Regulation, and to present the latest developments in privacy engineering. https://secure.edps.europa.eu/EDPSWEB/edps/EDPS/IPEN

11:00-11:15 Coffee Break
11:15-12:45 Session 8: Keynotes: M. Szpunar & John M. Golden
Location: DZ1
11:15
Online Price Discrimination and EU Data Privacy Law

ABSTRACT. Online shops could offer each website customer a different price. Such personalised pricing can lead to advanced forms of price discrimination based on individual characteristics of consumers, which may be provided, obtained, or assumed. An online shop can recognise customers, for instance through cookies, and categorise them as price-sensitive or price-insensitive. Subsequently, it can charge (presumed) price-insensitive people higher prices. This paper studies personalised pricing from a legal and an economic perspective. From an economic perspective, there are valid arguments in favour of price discrimination, but its effect on total consumer welfare is ambiguous. Regardless, many people regard personalised pricing as unfair or manipulative. The paper analyses how this dislike of personalised pricing may be linked to economic analysis, and to other norms or values.
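The mechanism described here can be made concrete with a small Python sketch. The cookie IDs, the profile store, and the 10% mark-up are hypothetical, chosen only to illustrate the logic of cookie-based price personalisation.

```python
# Illustrative sketch: a shop recognises a returning visitor via a cookie,
# looks up an inferred price-sensitivity profile, and adjusts the quote.
BASE_PRICE = 100.00

# Toy "profile store" keyed by cookie ID; a real shop would infer the
# segment from browsing history, device type, location, etc.
profiles = {
    "cookie-abc": {"price_sensitive": True},
    "cookie-xyz": {"price_sensitive": False},
}

def quote(cookie_id: str) -> float:
    """Return a personalised price: presumed price-insensitive
    customers are charged a mark-up."""
    profile = profiles.get(cookie_id)
    if profile is None or profile["price_sensitive"]:
        return BASE_PRICE          # unknown or price-sensitive: base price
    return round(BASE_PRICE * 1.10, 2)  # presumed insensitive: 10% mark-up

print(quote("cookie-abc"))  # 100.0 (price-sensitive)
print(quote("cookie-xyz"))  # 110.0 (presumed price-insensitive)
```

Note that the cookie lookup is exactly the kind of processing of personal data that brings such pricing within the scope of data protection law, as discussed next.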

Next, the paper examines whether European data protection law applies to personalised pricing. Data protection law applies if personal data are processed, and this paper argues that this is generally the case when prices are personalised. Data protection law requires companies to be transparent about the purpose of personal data processing, which implies that they must inform customers if they personalise prices. Subsequently, consumers have to give consent. If enforced, data protection law could thereby play a significant role in mitigating any adverse effects of personalised pricing. It could help to unearth how prevalent personalised pricing is, and how people respond to transparency about it.

12:45-13:45 Lunch Break
13:45-15:15 Session 9A: IP4: Digital Copyright
Location: DZ4
13:45
Transcending the Promises and Fears: Governmentality as a new angle in research on wearable-technologies

ABSTRACT. An expected contribution to cost reductions, the optimization of health outcomes and the empowerment of patients: through the eyes of policymakers and private industry, the added value of wearables is clear. Self-tracking technologies are thus put forward as the great solution for our healthcare system. However, inspired by the work of Michel Foucault on biopower and opposed to these techno-optimist visions, a remarkable growth in the number of critical analyses by social scientists can be noticed. Consequently, current research on wearables risks ending up in a polarisation between techno-optimists on the one hand and critical social research on the other. Out of this concern, this paper proposes Foucault’s notion of governmentality as an alternative angle for future research on wearable developments. Why could this term provide us with a new perspective on recent self-tracking technologies? Firstly, the concept of ‘governmentality’ enables researchers both to look empirically at the way wearables are used in Belgian healthcare practices and to examine how they are embedded in a broader network of institutions, producers and users. As a consequence, it makes it possible to overcome the polarisation between techno-optimists on the one hand and critics on the other. Secondly, governmentality makes it possible to analyse how this recent wearable movement goes together with the questioning of fault-lines long considered incontestable, such as private vs. public, lifestyle vs. medical, expert vs. layman, and the conceptualisation of normal vs. abnormal or healthy vs. ill. Because wearable devices are firmly rooted in today’s policies and moralisations, their introduction will have an impact on social and juridical questions concerning privacy, the role of the doctor, solidarity and responsibility. By making it possible to study these challenges, the concept of governmentality enables us to study the interaction between the production of knowledge, policymaking and subjectivation.

14:15
“We Only Spy on Foreigners”: The Myth of a Universal Right to Privacy and the Practice of Foreign Mass Surveillance
SPEAKER: Asaf Lubin

ABSTRACT. The digital age brought with it a new epoch in global political life, one neatly coined by Professor Howard as the “Pax Technica”. In this new world order, government and industry are “tightly bound” in technological and security arrangements that serve to push forward an information and cyber revolution of unparalleled magnitude. While the rise of information technologies tells a miraculous story of human triumph over once-binding physical constraints, these very technologies are also the cause of grave concern. Intelligence agencies have recently been involved in the exercise of global indiscriminate surveillance, which purports to go beyond their limited territorial jurisdiction and sweep in “the telephone, internet, and location records of whole populations”. Today’s political leaders and corporate elites are increasingly engaged in these kinds of programs of bulk interception, collection, mining, analysis, dissemination, and exploitation of foreign communications data, which are easily susceptible to gross abuse and impropriety.

While the human rights community continues to adamantly uphold the myth system of a universal right to privacy, in actuality the Pax Technica has already erected and solidified a different operational code, one in which “our” right to privacy and “theirs” are routinely distinguished. This distinction is a common feature in the wording of electronic communications surveillance laws and in the practice of signals intelligence collection agencies, and it is further legitimised by the steadfast support of the general public (e.g., in a March 2015 survey conducted by Amnesty International among 15,000 people from 13 countries, “in all surveyed countries more people were in favour of their government monitoring foreign nationals (45%) than citizens (26%)”).

In this piece I offer some pushback to the human rights agenda, trying to justify, in a limited sense, certain legal differentiations in treatment between domestic and foreign surveillance. These justifications are not rooted, as has been argued in the past, in xenophobic prejudice and historical biases; rather, they are grounded in practical limitations in the way foreign surveillance is conducted, both generally and in the digital age more specifically. In particular, I consider (1) the disparity in the political-jurisdictional reach of state agencies; (2) the disparity in the technological reach of state agencies; and (3) the disparity in harms from potential abuse of power.

I further make the controversial claim that, in fighting this absolutist battle for universality, human rights defenders are losing the far bigger war over ensuring some privacy protections for foreigners in the global surveillance context. Accepting that certain distinctions are in fact legitimate would create an opportunity to step outside the bounded thinking of a one-size-fits-all European Court of Human Rights surveillance jurisprudence. We could begin a much-needed conversation on what tailored human rights standards might look like for foreign surveillance operations.

13:45-15:15 Session 9B: PLSC 4A: Erdos
Location: DZ7
13:45
Privacy-by-Design-Beyond-the-Screen
SPEAKER: Tjerk Timan

ABSTRACT. Privacy by Design Beyond the Screen: (how) is it possible? When Internet of Things (IoT) devices become a reality, we may enter a society in which we interact with devices in ways other than merely at a screen-based level (think voice commands, ‘sensorial’ interaction, gestural interaction, face and emotion recognition, etc.). One of the consequences is that new types of privacy threats and/or harms might occur that we could not imagine or foresee within the current framework of thinking about and with ‘smart’ devices. Whereas almost all legal scholarship regarding privacy and privacy-by-design is based on the idea that we interact with our smart devices in a rational, text-based manner, in which a ‘user’ can actually read a set of requirements or regulations and provide ‘informed consent’ (although this, too, is highly questionable), in this research we ask what happens if we start interacting with devices in different, richer and less rational-text-based ways. Whereas the adage of privacy-as-data-protection is highly prevalent in both legal privacy protection and mechanisms of mitigation (e.g. the legal obligation to perform privacy-by-design in the upcoming GDPR), when interaction becomes based on other modes of input, processing and output, such legal frameworks start falling apart, and privacy-by-design as a design principle or process needs serious rethinking. In this paper/talk, I will provide a theoretical analysis of the concept of PbDBtS (!), based on the history of interaction design and the PbD literature. I will also set out a research line on how to study PbDBtS.

14:15
New EU Developments in Collective Management of Copyright

ABSTRACT. Collective management of copyright plays a role at each crossroads of copyright reform (multiterritorial licenses, the position of publishers, opt-out schemes…). The EU intervenes in this field today more than ever: Directive 2014/26/EU came into force on 10 April 2016; the CJEU upset both an old deal (cf. C-572/13, HP v Reprobel) and bold lawmakers (cf. C-301/15, Soulier/Doke); and the EU Commission rushed into the house of cards with its copyright package of 14 September 2016. The draft directive on copyright in the digital single market, for example, introduces the possibility of extending collective licences to out-of-commerce works (art. 7), while the Court of Justice has just questioned the legitimacy of ECL schemes in the Soulier/Doke decision. Similarly, art. 12 of the draft directive apparently intends to anchor the symbiosis of authors and publishers after the CJEU broke it in the HP v Reprobel case. Collective management of copyright in Europe today therefore raises many topical questions that interest national and European judges, stakeholders and lawmakers alike.

14:45
Digital Goods Directive, Portability Regulation and their Impact on License Contracts
SPEAKER: Pavel Koukal

ABSTRACT. In 2015 the European Commission presented the first part of the Digital Single Market package, which contained not only the “Portability Regulation Proposal” (Regulation of the European Parliament and of the Council on ensuring the cross-border portability of online content services in the internal market), with its well-known fiction localising the accessibility of online content services (Shapiro, 2016, p. 353), but also two proposals for directives which aim to regulate various aspects of e-commerce. Primarily, the paper aims to analyse the impact of the Proposal for a Directive of the European Parliament and of the Council on certain aspects concerning contracts for the supply of digital content (hereinafter “Digital Goods Directive Proposal”) on the law of obligations in selected EU Member States, especially the impact on licence contracts. Although the Digital Goods Directive will not have a direct impact on copyright laws and other intellectual property laws (Digital Goods Directive Proposal, recital 21), it will strongly influence the contractual relations between the “digital goods supplier” and the “digital goods consumer” (user). In the paper, the author will try to answer three main questions: 1) To what extent can EU law regulate intangible assets in the same way as tangible goods (especially when assessing the conformity of the digital content with the contract)? 2) Focusing on the use of software, in which respects will the Digital Goods Directive affect the wording of EULAs in EU countries? 3) What will be the relations between the Portability Regulation and the national laws of the EU Member States after the transposition of the Digital Goods Directive, especially regarding the duties of the digital goods supplier concerning the accessibility and functionality of digital content? The author will defend the thesis that it makes little sense to regulate portability issues at the level of an EU regulation and at the same time to adopt a directive governing the same issues (obligations arising from the “conformity of the digital content with the contract”, Art. 6 Digital Goods Directive). If EU legislation is to be coherent, it would be more sensible for all rules relating to the “functionality, interoperability and other performance features such as accessibility, continuity and security” [Art. 6 para. 1 (a) Digital Goods Directive Proposal] to be contained in the same piece of legislation.

13:45-15:15 Session 9C: Data Science 5 - Price discrimination
Location: DZ5
13:45
The influence of news personalization on the realization of the right to receive information
SPEAKER: Sarah Eskens

ABSTRACT. Online news consumers increasingly find news through personalized services, which means they automatically receive news stories that match their personal interests. For example, in the Netherlands the online news platform Blendle sends out an email newsletter every morning that is tailored to the preferences of each subscriber. On social media like Facebook, an important news source for many people nowadays, the selection of stories in people’s newsfeeds is also adapted to their likes.

Much of the legal research addressing news personalization, or research into related developments such as profiling and algorithms in communication, focuses on its privacy and data protection aspects [1]. Furthermore, legal writings and media policy documents often start from the concern that personalization causes filter bubbles [2], even though empirical support for the filter bubble hypothesis is low [3].

This paper shifts the focus from privacy, data protection, and filter bubbles, to the right to receive information. Individuals have a fundamental right to receive information, which is a part of their fundamental right to freedom of expression. The paper deals with two questions: First, what is the import of the right to receive information in the context of news? For instance, does the right to receive information imply a right to be exposed to a diversity of sources, or even a diversity of content? Second, in what ways may news personalization positively or negatively affect the capability of news consumers to exercise their right to receive information?

To answer the first question, we look at case law of the European Court of Human Rights and the Court of Justice of the EU concerning the right to receive information, EU and Council of Europe (soft) law material in the field of media, and various theories of the right to freedom of expression. We analyse these materials to understand the legal position of news consumers in this development (are they right holders, or just passive factors considered in regulation?), and to find the underlying principles of the right to receive information. To answer the second question, we integrate this legal analysis with empirical research into the practice of news personalization, among other things research into potential chilling effects when news consumers experience personalization [4].

We conclude, first, that news consumers do not have a claim or subjective right to receive particular news content (by contrast, individuals may have a claim to receive official documents held by public authorities). Instead, they have a freedom to receive news. More importantly, we find that the right to receive information can be conceptualized as encompassing eight different facets, which are oriented to individual or societal interests. These facets encompass, among others, the interest of individuals in accessing and finding general interest content, and the importance for the public of being informed of different perspectives on a given situation. In answer to the second question, we describe specific situations in, or conditions under, which news personalization could positively or negatively influence news consumers’ ability to exercise their right to receive information. These conditions relate back to the eight facets described.

Our conclusions set out an agenda for further legal and empirical research on news personalization. The paper thereby responds to the intuition that personalization warrants attention, but it replaces the alarmist filter bubble framework with an information rights framework for discussion. The conclusions also suggest that policy action might be geared towards promoting news personalization, in ways that benefit news consumers’ rights.

[1] See a.o. Walden and Woods 2011; Zuiderveen Borgesius 2015; Van der Sloot 2014. For a different approach, see Helberger 2016.

[2] See a.o. European Commission 2013; Vīķe‐Freiberga et al. 2013; Commissariaat voor de Media 2015.

[3] See a.o. O’Hara and Stevens 2015; Zuiderveen Borgesius et al. 2016; Flaxman et al. 2016; Bakshy, Messing, and Adamic 2015; Garrett, Weeks, and Neo 2016.

[4] Our paper is part of a larger ERC funded research project that studies the implications of news personalization for the democratic role of the digital media, user rights and public information policy (see www.ivir.nl under “research” > “projects”). Within this project, we conduct empirical research that will feed back into the current paper, see a.o. Möller et al. 2016. For previous empirical research on chilling effects (in a government surveillance context), see a.o. Penney 2016; Stoycheff 2016.

References

E. Bakshy, S. Messing and L.A. Adamic (2015), 'Exposure to ideologically diverse news and opinion on Facebook', 348 Science 6239, p. 1130-1132.

Commissariaat voor de Media (2015), Toezichtbrief 2016, 22 december 2015, Hilversum.

European Commission (2013), Green paper - Preparing for a Fully Converged Audiovisual World: Growth, Creation and Values, COM(2013) 231 final.

S. Flaxman, S. Goel and J.M. Rao (2016), 'Filter Bubbles, Echo Chambers, and Online News Consumption', 80 Public Opinion Quarterly S1, p. 298-320.

R.K. Garrett, B.E. Weeks and R.L. Neo (2016), 'Driving a Wedge Between Evidence and Beliefs: How Online Ideological News Exposure Promotes Political Misperceptions', 21 Journal of Computer-Mediated Communication 5, p. 331-348.

N. Helberger (2016), 'Policy Implications from Algorithmic Profiling and the Changing Relationship Between Newsreaders and the Media', 23 Javnost - The Public 2, p. 188-203.

J. Möller, D. Trilling, N. Helberger, K. Irion and C. De Vreese (2016), 'Shrinking core? Exploring the differential agenda setting power of traditional and personalized news media', 18 info 6, p. 26-41.

K. O'Hara and D. Stevens (2015), 'Echo Chambers and Online Radicalism: Assessing the Internet’s Complicity in Violent Extremism', 7 Policy & Internet 4, p. 401-422.

J.W. Penney (2016), 'Chilling Effects: Online Surveillance and Wikipedia Use', 31 Berkeley Technology Law Journal 1.

B. van der Sloot (2014), 'Do data protection rules protect the individual and should they? An assessment of the proposed General Data Protection Regulation', 4 International Data Privacy Law 4, p. 307-325.

E. Stoycheff (2016), 'Under Surveillance: Examining Facebook's Spiral of Silence Effects in the Wake of NSA Internet Monitoring', 93 Journalism & Mass Communication Quarterly 2, p. 296-311.

V. Vīķe‐Freiberga, H. Däubler-Gmelin, B. Hammersley and L.M. Maduro (2013), ‘A free and pluralistic media to sustain European democracy, The Report of the High Level Group on Media Freedom and Pluralism’, European Commission.

I. Walden and L. Woods (2011), 'Broadcasting Privacy', 3 Journal of Media Law 1, p. 117-141.

F.J. Zuiderveen Borgesius (2015), Improving Privacy Protection in the Area of Behavioural Targeting, Wolters Kluwer.

F.J. Zuiderveen Borgesius, D. Trilling, J. Möller, B. Bodó, C.H.d. Vreese and N. Helberger (2016), 'Should we worry about filter bubbles?', 5 Internet Policy Review 1.

14:05
From contested to shared responsibility: online platforms and the transformation of publicness
SPEAKER: unknown

ABSTRACT. Algorithmically mediated online platforms, from Facebook to YouTube, and from PatientsLikeMe to Coursera, have become deeply involved in a wide range of public activities, including journalism, civic engagement, policing, health care, and education. As such, they have started to play a vital role in the realization of important public values and policy objectives associated with these activities: freedom of expression, public discourse, consumer protection, and the accessibility of basic public services. These platforms do so by enabling and shaping the active participation of users in public life, heavily assisted by algorithms and the collection of large amounts of data about participating users. This paper develops a conceptual framework for the governance of platforms with a public role, building on a concept of cooperative responsibility for the realization of critical public policy objectives and taking into account broadly acknowledged societal values such as privacy, autonomy, equality and diversity.

Throughout the twentieth century, state institutions were primarily responsible for the organization of public space and for safeguarding public value. This societal arrangement has come under growing pressure as a result of economic liberalization and privatization of public institutions and services. The rapid rise of online platforms both accelerates and further complicates this development. These platforms appear to facilitate public activity with very little aid of public institutions, using instead computing power and technologies to organize publics. As such they are celebrated as instruments of the ‘participatory society’ and the ‘sharing economy’. Most platforms are, however, owned and technologically developed by large corporations, which have strong commercial interests in how public activities take shape on their platforms. These commercial interests and corresponding strategic motives do not always align well with those of public institutions, which, despite the dominant rhetoric, remain important organizational and regulatory actors. Equally complicated is the new active role of users, as creators, producers, sellers, and semi-experts, also with their own skills, interests and motives. Consequently, the integration of platforms in public space has been characterized by ongoing confrontations regarding the role and governance of platforms and their users.

Developing a framework to resolve such confrontations, we need to consider the multiple stakeholders at play: platform corporations, platform users, governments, civil society organizations, the advertising industry, international governance bodies and various other actors. In this paper we focus on the particular ‘responsibilities’ of the first three key stakeholders: corporations, users and governments. Starting with the platform corporations: how do they currently regulate and steer public activity on their platforms? What kinds of responsibilities could they reasonably be expected to take? And how can these responsibilities be encoded into the technological architecture and user policies of platforms? Subsequently, we turn to the platform users. Through viral online processes, personal interactions, preferences, and interests can suddenly attain public meaning in the form of trending topics and personal recommendations. To what extent can users be held responsible for their intended and especially also unintended contributions to public communication? Finally, the role of governments is discussed. In different sectors, public space no longer necessarily coincides with public institutions and strictly regulated commercial actors. From this perspective, we consider whether it would be helpful or even necessary to shift the focus from protecting public space to advancing public value and the societal responsibility of platforms. What would be the implications of such a shift for public policy and public law, and how could it be given concrete form?

To analyze the interplay – and power distribution – between these three stakeholders, we explore three scenarios in which there is conflict over the particular responsibilities of each stakeholder. In the first scenario, we look at the contestation over the involvement of sharing economy platforms in public sectors, such as health care (PatientsLikeMe, 23andMe), education (Coursera, edX), hotels (AirBnB, Couchsurfing) and transport (Uber, BlaBlaCar). Here we investigate how the involved stakeholders deal with key public issues like data protection, transparency, and labor rights. A second scenario discusses the circulation of unlawful content through online platforms. In such cases, the three stakeholders are expected to take up different responsibilities regarding values like non-discrimination, freedom of speech and open access. Here we specifically look at hate speech and piracy. Finally, we take a closer look at questions regarding the diversity of content on platforms: in what ways do the different stakeholders have a role in realizing, and should they take up responsibilities regarding, values like cultural diversity, pluralism and inclusiveness?

Based on an exploration of these three types of scenarios, we provide insight into the overarching patterns of how risks and responsibilities are contested and/or shared between the three types of stakeholders. An important point of attention in this respect is the difference in skills, possibilities and power between the different actors. Drawing on risk management theory, three approaches can typically be identified in the different scenarios: reducing, shifting and spreading risks and responsibilities. The question is to what extent, and how, the spreading or distributing of responsibilities can offer a sustainable solution for the conflicts that arise. In addition, we consider the need to combine mechanisms of shared responsibility with effective strategies for monitoring and controlling moral hazard and for safeguarding a desirable outcome for society at large.

Based on these insights from theories about ‘risk sharing’ and the ‘problem of many hands’, we sketch the contours of a framework of shared responsibility for the realization of public values in societal sectors that centrally involve online platforms. Thompson’s concept of ‘prospective design’ plays an important role in this framework (Thompson, 2005). In developing this framework, we draw on three research projects on platforms, public space and citizen empowerment. The combination of different areas of expertise (communication studies, media studies, law and policy) allows us to understand the dynamic context in which platforms operate and the behavior of users, as well as the possible implications for public policy. The key idea of our proposal is that the realization of core public values in platform-organized public activities should be the result of the dynamic interaction between platforms, users, and public institutions. To guide this interaction, we propose a number of key mechanisms to regulate the distribution of responsibilities between stakeholders.

The paper brings together insights that the authors have collected in three major research projects:

Helberger, N., ERC project ‘Profiling and targeting news readers – implications for the democratic role of the digital media, user rights and public information policy’ (PersoNews) (2015-2019).

Pierson J. & L. Bleumers, FWO-SBO ‘FLAMENCO - FLAnders Mobile ENacted Citizen Observatories’ (2016-2019).

Poell, T. & J. van Dijck, KNAW-‘Over Grenzen’ research program on ‘Social Media and the Transformation of Public Space’ (2013-2016).

Thompson, Dennis F. (2005). Restoring Responsibility: Ethics in Government, Business, and Healthcare. Cambridge: Cambridge University Press, 349.

14:25
I’ll take a picture if you scare me! How emerging mobile personal safety technologies and fear are restructuring privacy negotiations in public spaces

ABSTRACT. Recently, scholars have argued that smartphones make privacy very hard work, or even an outright impossibility (e.g. Smith 2016). However, others have argued that they represent an opportunity to retreat into a private sphere whilst in public (e.g. Hatuka and Toch 2016). Such ongoing debates highlight the various ways in which mobile computing and communication technologies are rapidly changing the nature of privacy. These debates carry extra weight with regard to public spaces, because such spaces are characterized by interactions with strangers who may cause fear or be afraid themselves.

Various mobile personal safety technologies are emerging in order to alleviate such fears. They range from applications which can sound a loud siren to devices which promise to stream pictures of assailants to local law enforcement agencies while showering them in pepper spray (Pangaea Services Incorporated 2016).

This paper will investigate how such technologies are intervening in the processes by which we negotiate privacy in public spaces. The social nature of privacy in public makes the theoretical framework which Lefebvre developed in ‘The Production of Space’ (1991) highly suitable for examining the changes brought about by emerging personal safety technologies: such tools empower their users in privacy negotiations, which reshapes how we can use, think about, and experience public places. Using Lefebvre’s framework shows that this empowerment also lowers privacy expectations and leads to a hollowed-out right to privacy in public: you can remain private, unless I fear you and stream your picture to the police.

Hatuka, Tali, and Eran Toch. 2016. ‘The Emergence of Portable Private-Personal Territory: Smartphones, Social Conduct and Public Spaces’. Urban Studies 53 (10): 2192–2208. doi:10.1177/0042098014524608.

Lefebvre, Henri. 1991. The Production of Space. Translated by Donald Nicholson-Smith. Malden, MA: Blackwell.

Pangaea Services Incorporated. 2016. ‘What Is Defender 24/7’. Defender 24/7. www.getthedefender.com/what-is-defender/.

Smith, Gavin J. D. 2016. ‘Surveillance, Data and Embodiment: On the Work of Being Watched’. Body & Society 22 (2): 108–39. doi:10.1177/1357034X15623622.

14:55
The Effect of IPR on Disruptive Innovations in Wireless Communications

ABSTRACT. In economics, the optimal design of the institution of intellectual property is often framed as a proper balance between providing incentives for innovation and the resulting monopoly prices and deadweight losses. However, the effect on prices is not the only economic effect generated by this institution, and some argue that there is a trade-off between the benefits of IPR and the costs of the centralisation of decision making. From this point of view, the most important economic effect of IPR is on industry structure, and thus it is possible to assume that such centralisation of decision making affects technological change and influences the processes of adoption and diffusion of innovations. In other words, the market players that have been able to concentrate in their hands the property rights to the solutions most essential for technological development have the power to affect the rate and direction of technological change and the total outcome of the industry. While it is quite understandable how IPR protect the positions of the leaders when the industry evolves in a sustaining way, their role when the industry faces the appearance of disruptive innovations is less clear. If we understand that disruptive innovations can threaten the established order of a market not only from the “low end” of this market but also from other markets, then we might assume that new entrants introducing such innovations have a better chance of overcoming the protective role of the incumbents’ IPR. However, the example of wireless communications shows that even when a disruptive innovation comes from another market, the incumbents of the field still have good opportunities to use their intellectual property to suppress this threat to their positions. In my research, I analyse the evolution of wireless technology and argue that the institution of intellectual property provides opportunities for the incumbents to protect the established order even from disruptive threats that come from outside the mainstream part of the industry. As a result, we observe that the industry becomes more concentrated, and this concentration expresses itself at different layers of the industry: there were a number of sets of 2G standards, then just two sets of 3G standards, and there is now only one standard considered to be 4G technology. Regulatory interventions in the free market, such as the institution of intellectual property, thus contribute to the development of the mainstream part of the industry and help the leaders to defeat the alternatives.

13:45-15:15 Session 9D: Panel 3: Covert Policing in the Digital Age
Location: DZ3
13:45
Unfair Commercial Practices: an alternative approach to privacy protection
SPEAKER: Nico van Eijk

ABSTRACT. In the European Union, enforcement of privacy rules takes place almost solely through national enforcement authorities. They typically apply sector-specific rules based on the European Data Protection Directive. Responsibility lies primarily with the independent national data protection authorities. In the US, the Federal Trade Commission is the primary enforcer of privacy, using its power to prevent unfair and deceptive acts and practices. In this paper, the American legal system will be discussed and compared to the European legal framework, which grounds our finding that, in the EU, rules on unfair commercial practices could be enforced in a similar manner to protect people’s privacy. In the EU, the many frictions concerning the market/consumer-oriented use of personal data form a good reason to actually deal with these frictions in a market/consumer legal framework.

In this paper, we will first set forth how the US addresses privacy issues through the application of the Federal Trade Commission’s general power to prevent unfair and deceptive trade practices. Using two examples, we explain how the FTC’s authorities, developed as a general tool to police business behavior, are applied in the privacy field. Particular attention will be given to the use of ‘consent agreements’: an instrument used by the Federal Trade Commission to bind violators of its rules to long-lasting obligations under the risk of substantial (though rarely realized) financial penalties. Thereafter, the European framework of unfair commercial practices, as set forth in the Unfair Commercial Practices Directive, will be discussed. This paper does not seek to give an exhaustive description of the US and European legal systems; its aim is to give a first view of an alternative approach to privacy protection in the EU based on a US example. Finally, in the analysis, a comparison will be made between the European and American legal frameworks. From this comparison, it will be clear that essential features of the frameworks correspond. Following from this, European rules regarding unfair commercial practices can be applied in a comparable manner.

13:45-15:15 Session 9E: PLSC 4B: Lubin
Location: DZ6
13:45
Should fundamental rights to privacy and data protection be a part of the EU’s international trade deals? The dynamic between fundamental rights protecting personal data and international trade law

ABSTRACT. Inspired by recent developments in labour standards, environmental protection and sustainable development, this contribution argues that privacy and data protection should be a part of the international trade deals of the European Union (EU). To preserve its autonomy to maintain the current framework of privacy and data protection, and especially the rules on transfers of personal data to third countries, the EU should negotiate future international trade agreements in a manner that allows it to reflect the normative foundations of its privacy and personal data protection. This article suggests a way to do this. The inherent tension between the dignitary and economic aspects of personal data leads to conflicting regulatory goals. Economic regulation, as compared to regulation protecting privacy and personal data as intrinsic values, calls for a lower optimal level of protection and a less restrictive model of regulation. Analysis shows that the mechanisms envisaged in the General Agreement on Trade in Services (GATS) and in more recent bilateral free trade agreements (FTAs) concluded by the EU to accommodate privacy and data protection push such protection squarely into the field of economic regulation. As a result, they limit the regulatory autonomy of the parties to international trade agreements, such as the EU, to protect privacy and personal data as fundamental rights. Although fundamental rights driven protection of privacy and personal data is rooted in international and regional human rights instruments, the latter play only a limited role in international trade law. Hence, reliance on human rights is unlikely to prevent the subordination of the fundamental rights to privacy and personal data protection to the liberalisation of trade in the context of trade dispute settlement.

13:45-15:15 Session 9F: Data Science 6 - Perspectives on Cybersecurity
Location: DZ8
13:45
Intermediary Publishers and European Data Protection: Searching for Balance?
SPEAKER: David Erdos

ABSTRACT. Online intermediary publishers are, by definition, not responsible for the original publication of information. Nevertheless, they are almost always practically critical both for the exercise of freedom of expression online and for the perpetration of many of the legal harms linked to publication. Until now, most of the debate concerning the liability of ‘intermediaries’ has focused on legislation such as the e-Commerce Directive, which limits and regulates the liability of types of intermediary activity such as hosting, conduiting and caching. However, the growing threats to privacy, reputation and personal integrity posed by publication online have prompted a renewed focus on the role of the data protection framework (often couched under the rubric of the ‘right to be forgotten’). A passive intermediary publisher such as a blog host fits best within the category of data processor, not controller; it should therefore only be liable for data protection infringements in so far as this is compatible with the e-Commerce Directive intermediary shields. However, increasingly many types of intermediary publisher – from rating websites to general search engines – engage in activity related to publication which can be “distinguished from and is additional to that carried out by” original uploaders of information (Google Spain at [83]). Google Spain recognised that such actors are ‘controllers’ in this regard and, given this, they will also generally fall outside the e-Commerce Directive’s limiting shields. However, many of data protection’s default provisions – notably, as regards intermediary ‘controllers’, the presumption of effective ex ante control together with onerous substantive conditions for the lawful processing especially, but not only, of ‘sensitive’ data – are in great conflict with freedom of expression. The new freedom of expression clause in the forthcoming General Data Protection Regulation (Art. 85 (1)) has a key role to play in reconciling these competing rights going forward.

High-Level Outline: I provisionally intend that my paper be divided into three principal sections. Section one will explore the social and legal foundations in this area, namely the concept of ‘intermediary publisher’ and the increasing differentiation within it, together with the provisions and legislative history of the Data Protection Directive 95/46, the e-Commerce Directive 2000/31 and the General Data Protection Regulation 2016/679. Section two will then examine the relevant case law, both at pan-EU Court of Justice level (the principal cases decided as regards the e-Commerce Directive shields and the Data Protection Directive) and at national level (more detailed cases exploring the data protection obligations of a range of intermediary publishers, including social networking sites, rating websites and search engines). Section three will then engage in an explicit normative analysis of this area and a consideration of the role which the GDPR’s new freedom of expression provision (Art. 85 (1)), set out alongside but separately from the journalistic and other special expressive purposes derogation (Art. 85 (2)), needs to play in this area.

Indicative Literature: de Azevedo Cunha, Mario Viola, Luisa Marin, and Giovanni Sartor. "Peer-to-peer privacy violations and ISP liability: data protection in the user-generated web." International Data Privacy Law 2.2 (2012): 50-67.

Kulk, Stefan, and Frederik J. Zuiderveen Borgesius. "Freedom of expression and ‘right to be forgotten’ cases in the Netherlands after Google Spain." European Data Protection Law Review 2 (2015): 113-125.

Peguera, Miquel. "In the aftermath of Google Spain: how the ‘right to be forgotten’ is being shaped in Spain by courts and the Data Protection Authority." International Journal of Law and Information Technology 23.4 (2015): 325-347.

Van Hoboken, J. "The Proposed Right to be Forgotten Seen from the Perspective of Our Right to Remember, Freedom of Expression Safeguards in a Converging Information Environment." Prepared for the European Commission, Amsterdam (2013).

13:45-15:15 Session 9G: Privacy 6: Panel - The Media and Privacy in Public
Location: DZ1
13:45
Understanding the notion of risk in the GDPR

ABSTRACT. The topic of the risk-based approach to data protection has stirred quite some controversy. In particular, its main criticism is that it runs directly counter to the fundamental rights nature of the right to personal data protection and the “rights-based approach” that this right conveys, not least because it would offer an uneven level of protection based upon the harms suffered by data subjects.

The EU has adopted a risk-based approach in the GDPR. Taking this controversy into account, it seems to have followed the opinion of the Article 29 Working Party, according to which the EU risk-based approach is limited to issues of compliance.

The overall goal of this contribution is therefore to explore the meaning of risk within the GDPR. It will proceed in two steps.

First, it aims to clarify a number of notions, such as that of a data protection risk, that of the “risks to the rights and freedoms of the data subject” enshrined in Art. 35.1 GDPR, the notion of “high risk”, and the types and number of harms covered, as well as the difference between the notions of risk and harm, which is of paramount importance for data protection impact assessments. It also reflects upon the overall role of the notion of risk in the ecology of the Regulation as a whole.

Second, it aims to address several caveats concerning the risk-based approach as enshrined in the GDPR. What exactly is meant by such a compliance-oriented risk-based approach, how does it differ from traditional approaches to compliance, what is its added value, and what can it achieve? Also, in what way(s) does it differ from other “approaches to the risk-based approach”, in particular those providing for differentiated levels of protection? Finally, can these insights contribute to addressing the long-standing debate on the difference between privacy and data protection impact assessments?

15:15-15:30 Coffee Break
15:30-17:00 Session 10: Keynotes: F. Henwood & I. Schluender
Location: DZ1
15:30
Panel: Privacy in the 2020s (a conversation about the future of privacy)
SPEAKER: unknown

ABSTRACT. In this roundtable discussion, panelists will engage with a number of questions about the future of privacy and its connections to technological development, including what privacy (and privacy-related regulation) might look like over the coming decade, and how technology and privacy ought (or ought not) to be regulated to anticipate likely technological developments in the near- to mid-term future. Panelists will present their insights into the direction in which technology is being developed and how this (possible) trajectory raises interesting and/or problematic questions of privacy and data protection, and will discuss whether their vision of the near-term future requires regulation now, or at least in advance of new technologies emerging in the marketplace, and (if so) what they think that regulation should look like (or what considerations need to be accounted for).