
09:30-10:00 Session 2: opening Sally Wyatt
Location: DZ1
BLACK BOX TINKERING: Beyond Disclosure in Algorithmic Enforcement
SPEAKER: Maayan Perel

ABSTRACT. The pervasive growth of algorithmic enforcement magnifies current debates regarding the virtues of transparency. Not only does using code to conduct robust online enforcement amplify the familiar problem of magnitude, or “too-much-information,” often associated with present-day disclosures; it also imposes additional practical difficulties on relying on transparency as an adequate check on algorithmic enforcement. In this Essay we explore the virtues of black box tinkering methodology as a means of generating accountability in algorithmic systems of online enforcement. Given the far-reaching implications of algorithmic enforcement of online content for public discourse and fundamental rights, we advocate active public engagement in checking the practices of automatic enforcement systems. Initially, we explain the inadequacy of transparency in generating public oversight. First, it is very difficult to read, follow and predict the complex computer code which underlies algorithms, as it is inherently non-transparent and capable of evolving according to different patterns of data. Second, mandatory transparency requirements are irrelevant to many private implementations of algorithmic governance, which are subject to trade secrecy. Third, algorithmic governance is so robust that even without mandatory transparency it is impossible to review all the information already disclosed. Fourth, when algorithms are called on to replace humans in making determinations that involve discretion, transparency about the algorithms’ inputs (the facts) and outputs (the outcomes) is not enough to allow adequate oversight, because a given legal outcome does not necessarily yield sufficient information about the reasoning behind it. Subsequently, we establish the benefits of black box tinkering as a proactive methodology that encourages social activism, using the example of a recent study of online copyright enforcement practices by online intermediaries.
That study sought to test systematically how hosting websites implement copyright policy by examining the conduct of popular local image-sharing platforms and popular local video-sharing platforms. In particular, different types of infringing, non-infringing and fair use materials were uploaded to various hosting facilities, each intended to trace choices made by the black box system throughout its enforcement process. The study’s findings demonstrate that hosting platforms are inconsistent, and therefore unpredictable, in detecting online infringement and enforcing copyrights: some platforms allow content that is filtered by others; some platforms strictly respond to any notice requesting removal of content even when it is clearly non-infringing, while other platforms fail to remove content upon notice of alleged infringement. Moreover, many online mechanisms of algorithmic copyright enforcement generally do very little to minimize errors and to ensure that interested parties do not abuse the system to silence legitimate speech and over-enforce copyright. Finally, the findings indicate that online platforms do not make full efforts to secure due process and to allow affected individuals to follow, and promptly respond to, proceedings that manage their online submissions. Based on these findings, we conclude that black box tinkering methodology could offer an invaluable grasp of algorithmic enforcement practices on the ground. We hence evaluate the possible legal implications of this methodology and propose means to address them.
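The study design described above (upload labelled test materials, then observe each platform's enforcement decision) can be sketched as a small test harness. This is a purely illustrative sketch, not the authors' actual tooling: all names, the `observe_outcome` probe and the outcome labels are hypothetical stand-ins for the platforms' real upload and notice-and-takedown interfaces.

```python
# Hypothetical sketch of a black-box tinkering protocol: upload labelled test
# materials to several platforms, record each enforcement decision, and flag
# materials that platforms treated inconsistently. All identifiers are
# illustrative assumptions, not a real platform API.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class TestUpload:
    material_id: str
    category: str  # "infringing", "non-infringing", or "fair-use"


def run_tinkering_study(platforms, uploads, observe_outcome):
    """observe_outcome(platform, upload) -> "removed" or "retained" (assumed probe)."""
    outcomes = defaultdict(dict)
    for platform in platforms:
        for upload in uploads:
            outcomes[upload.material_id][platform] = observe_outcome(platform, upload)
    # Materials on which platforms disagree are evidence of inconsistent,
    # and therefore unpredictable, enforcement.
    inconsistent = {mid: by_platform
                    for mid, by_platform in outcomes.items()
                    if len(set(by_platform.values())) > 1}
    return dict(outcomes), inconsistent
```

Comparing how the same fair-use item fares across platforms is then a dictionary lookup, which is what makes the methodology systematic rather than anecdotal.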

10:00-11:30 Session 3: Keynotes: Sean McDonald & Solon Barocas
Location: DZ1
When Google wasn’t there. The right to be forgotten in the pre-Internet era

ABSTRACT. A much debated privacy-related right is the right “to be forgotten”. While this right has been hotly disputed among experts for years, it was only with the famous CJEU case “Google Spain” that it gained worldwide attention. But was there a right to be forgotten in the pre-Internet era? Was it possible to dive into someone else’s past? Taking the scenes of some movies as its starting point, this paper tries to highlight how microfilms and microfiches used to be a way to violate someone else’s right to be forgotten, with libraries acting as the “search engines of the past”. The ultimate aim of the paper is to demonstrate that violating the right to be forgotten has been possible for decades; the Internet only (sic!) makes it (much) easier.

Turning users into consumers: Why “free” services should be regulated on price discovery

ABSTRACT. A highly important effort for Europe, the implementation of the digital single market strategy is progressing. The strategy’s focus on directing digital innovation, instead of limiting it, has been an important principle. In this paper, we try to envision a decade ahead, attempting to foresee a direction for future policies and legislation following the same principle. In this future environment, autonomous intelligent agents direct our lives and our physical life has become digitized through IoT electronics. Some researchers have argued that this development will lead, over the coming decades, to the elimination of almost half of current jobs. These challenges will likely be amplified by the current demographic shift in most developed countries. There is a great need for research into how digitalization will affect the welfare state and, perhaps most importantly, what policies exist to mitigate the effects of the coming challenges to the welfare state.

In this paper, we restrict our discussion to a reflection on the problem and problem setting, with the nature of personal data and its processing and refinement through the use of intelligent algorithms. We focus on the ownership and valuation of personal data. We then turn to the dilemma that the use of predominantly free digital services creates for maintaining the welfare state as our society becomes digitalized. Current incumbent digital platforms often employ tax schemes that do not contribute tax revenue to the state where said services are consumed. At the same time, these platforms view the data subject as a user and not as a consumer with corresponding rights. Today, the true customers of these platforms are companies that want to sell something. The primary type of service the platforms provide for the customer is a direct marketing service, which connects the marketing message with what is presumed to be an interested data subject. The service offered to the data subject thus becomes secondary. The value of the platform can be considered directly linked to the degree of exploitation of user data, the complete history of a number of data subjects, and a perceived future exploitation value. This can be observed, for example, in M&A activity such as Facebook’s acquisition of WhatsApp, or Microsoft’s acquisitions of Skype and now LinkedIn. The underlying cash flow in the acquired companies has been almost non-existent compared to the valuation, yet the companies command a significant premium compared to ordinary M&A activity.

Determining the value of personal data is of great importance to many stakeholders, for example the data subject, tax authorities, competition authorities, and start-ups. Some companies have started offering both a premium and a free ad-supported version of their platform. One EU company paving the way is Spotify, whose service offering includes both versions. Examples such as this may offer a price discovery process for determining a price for free services in general. Determining such a price may in the long run also improve trust in the sector, giving companies an incentive not to exploit the consumer.

11:30-11:45 Coffee Break
13:15-14:15 Lunch Break
14:15-15:45 Session 5A: Privacy 3: Panel - Privacy in the 2020s (a conversation about the future of privacy)
Location: DZ3
The never ending lifecycle(s) of health data: regulatory challenges of data mining practices

ABSTRACT. The outburst of information technology as an integrative support of businesses’ activities has come to shape the global economy in new and still relatively unexplored ways.
Health data has become a gold mine for a relentlessly growing field of both public and private enterprise, which collects, processes and resells data scraped together from different information sources. With health information becoming a central asset in nearly every business operation, the threats to the protection of patients’ and, more generally, users’ personal data appear to be quantitatively and qualitatively more complex, and thus more thought-provoking, than ever before. The combination of different datasets through the use of mining techniques makes it possible to extract hidden information and productive correlations, which circulate silently through the hands of different stakeholders who incorporate these derivative data into their decision-making processes.
By virtue of the countless possible health-related connections between the most different aspects and expressions of individuals’ lives and lifestyles, potentially every type of data spread on the web could become health-inflected. Indeed, data mining technologies have caused information to be exploited for the generation of newly extracted information and to be rearranged into collective clusters. The unpredictability of the results caused by the convergence of large datasets and strong analytical tools means, from a legal standpoint, unpredictability of the legal consequences and risks connected to the business of big health data.
Our analysis starts from the acknowledgement of the unsuitability of two fundamental assumptions of traditional data protection law: first, the assumption that the right to data protection has an individual scope; second, the assumption that the right to data protection is an end in itself. After describing the features of (and thus the criticalities arising from) the methods of big data collection and processing, this paper inquires into the actual tenability of the right to data protection in the face of health data mining practices, with the ultimate purpose of detecting the actual nature and function of that right in the era of big health data analytics. This will be done through a careful evaluation of both traditional categories and “second-generation” rules shaping the data protection framework vis-à-vis the massive data processing capacities of algorithms and predictive analytics. The generative potential of correlation patterns is indeed blurring the borders between the categories of personal and non-personal information, and between those of sensitive and non-sensitive data. The enhancement of technological portability capacities and the (subsequent) blossoming of data brokers moreover make it increasingly difficult to identify the actual data controllers and processors. Finally, together with the fading of the distinction between data generation and data use, the notion of the data subject must also be revised: in the passage from individual profiling to group profiling (“pigeonholing”), the data subject is the one who is either included in or banned from cluster-based choices.
Against this backdrop, even the new, more sophisticated rules provided by the General Data Protection Regulation, such as the right to explanation and the right not to be subject to automated decision-making, raise doubts as regards their effectiveness, given their reliance on individuals’ autonomous action, and thus appear incapable of effectively governing the fluidity of the present information environment. In the whirl of corporations’ data exchanges, the weak hold of both old and new data protection rules suggests that the time may have come for a shift from an individual to a collective data protection paradigm: one centred on the active agents of mass-scale processing activities rather than on the passive subjects bearing their legal and economic consequences, and working as a framework for assessing the risks to the other fundamental rights that end up being threatened in the rising classifying society.

Behind closed doors. How personal intelligent robots influence the way we see, and protect, our private sphere.

ABSTRACT. The paper focuses on the legal implications of the presence of Intelligent Personal Robots in the “sacred precinct” of our private sphere: the home. Keeping individuals, and their control, at the centre of the analysis, this paper identifies the effects of that presence on the very concept of the private sphere. This work identifies three main influences exerted by Intelligent Personal Robots on the traditional meaning of the private sphere. It also analyses how these three influences shift the distinction between the private and public spheres, creating new, aggregated layers of privacy. Home and personal assistant robots are a subcategory of service robots for personal or consumer use, as identified by the IFR. They embody the idea of a digital personal assistant that eases the daily life of individuals or families. To perform, the assistant must know its owners: their voices, their faces, their schedule, their friends. To perform efficiently, the assistant must also understand that information, learn from it, elaborate it, and extract the habits and tastes of its owners. In a word, the assistant must be provided with (artificial) intelligence. In order to enhance their computational and learning capabilities, these robots, which can be embedded in devices or simply consist of an app on a smartphone, take advantage of cloud computing, sharing experiences and crossing information with other databases. They can thus bring to bear knowledge beyond their own capabilities and provide ever more refined responses to their owners: for this reason, home and personal assistant robots will be defined in this paper as “Intelligent Personal Robots” or IPRs. Examples of IPRs that have recently entered the market are Amazon Echo and the newly presented Google Home. Several other devices are now in the production stage and about to enter the market, such as Cubic, Jibo and Buddy, all of them the objects of extremely successful crowdfunding campaigns.
Except for Buddy, none of the above-mentioned devices has movement or kinetic capabilities. The familiar image of the humanoid robot with wheels and arms carrying out chores around the house is replaced by decorative desk units which interact with their owners through voice commands (“Alexa, play the morning playlist”, or “OK Google, increase the room temperature of…”) and, sometimes, complete their tasks by means of other devices connected to them (in the examples above, a speaker and Google Nest). IPRs feature a significant employment of artificial intelligence, based on information and data collected from the owners and the environment surrounding them. Some of those data are actively provided, for example when inserting memos or creating playlists. Others are the result of the spontaneous, repeated sharing of information by the owner, for example by using the IPR to order dinner, set the room temperature, or watch a movie. In addition, the sensors can detect sounds or visual elements which provide the IPR with further information. This circumstance gives rise to a passive form of sharing, resembling the one occurring in public spaces, with the difference that, in the case of IPRs, it now occurs in the area devoted to the expression of the private sphere par excellence: the home. Furthermore, the cloud-based intelligence of IPRs implies the capability of transferring the data collected. From this perspective, the private sphere, traditionally secluded from the eyes of strangers behind the door of the house, is now scanned and analysed, the main information processed at both individual and aggregated level to obtain further, enhanced information that is then re-inserted into the sphere in the form of suggestions or completed tasks. The user’s individual sphere is, therefore, included by the IPR in a multilayered structure of information collected from other individuals’ spheres.
From this structure, data can be taken, exchanged, elaborated, aggregated, separated, and possibly returned. The presence of IPRs does not necessarily expose our private life to the public, thanks to encryption mechanisms and anonymisation, but it inserts the house – and the home – into an informational “hive”. IPRs respect users’ private sphere by providing them with aggregated privacy. In addition, IPRs, with their voice-command control and artificial intelligence, can give rise to a dialogue-based setting of the users’ preferred privacy protection. This possibility, which is already being tested with chatbots and messenger applications, has been named “Conversational Privacy”. The term is used only with reference to the way information is disclosed to users, but it carries a much deeper meaning. The dialogue between an IPR and its user concerning security measures can influence the way the underlying principles are perceived and defined. In these terms, Conversational Privacy with IPRs implies more than just a way of administering information, reaching the very interests on which such provisions are based. Together, passive sharing, aggregated privacy and Conversational Privacy can deeply influence individual definitions of the private sphere. The shift in the perception and definition of the private sphere can in turn significantly influence the implementation of the rules created to protect it. For this reason, identifying the way IPRs affect the very definition of the private sphere is crucial to identifying the criticalities that can be expected in the application of the European GDPR. This paper aims at filling a void in the existing literature by describing the implications of the presence of IPRs for the sharing practices of their users, for the role of the household environment as a place of protection and expression of the private sphere, and for the individual construction of privacy and its protection.

Making human intervention work. Securing effective application of human intervention in automated data processing

ABSTRACT. Human intervention is deemed necessary from the point of view that human dignity or personal autonomy should not be precluded by any technology - while under some circumstances human intervention might in turn disturb the safety or efficiency of a network or system, or harm the effective protection of the (fundamental) rights (of others). Is it possible to identify specific characteristics or borderline areas, or perhaps criteria for situations where the right should, or should not, be applicable? It is broadly shared that machines should as a rule not be in a position to decide fully autonomously over humans where these decisions have significant or legal consequences, for instance in law enforcement or in defence matters. On the other hand, allowing human intervention in smart cars may at some point endanger the lives or physical integrity of others on the road. Another example: some expect that the exclusion of human intervention in selecting letters of application might at some point in the future lead to less discrimination on the basis of ethnic or racial origin in selection procedures (but the machine needs to be digitally blindfolded first - by humans...).

Regulatory tools that guarantee human intervention already exist. Current European legislation (Article 15 of EU Directive 95/46/EC on privacy, Article 22 of the EU GDPR and Article 11 of the EU Directive in matters of law enforcement, as well as the Council of Europe Recommendation on Profiling and its Convention 108 on data protection) has created a right to human intervention in the automated processing of personal data that applies to all sectors. These norms therefore necessarily have a relatively open character. The general norm is in need of a more accurate interpretation, since automated interactions between humans and machines, and between machines that take decisions over humans, are rapidly becoming the rule in the Internet of Things (IoT) and in Artificial Intelligence (AI) applications. Especially AI, which is generally based on deep-learning systems that function by means of neural networks, will in the near future require a clear and robust stance on the application of the requirement of human intervention in automated processing that leads to decision-making over humans.

Objectives: 1) to contribute to the discussion on the correct interpretation of Article 22 of the GDPR (and other EU legislation) and of Article 8(1) of the Convention for the Protection of Individuals with regard to the Processing of Personal Data; 2) to develop a clear and robust stance on the situations (criteria?) in which the requirement of human intervention has to be applied (and perhaps where it can, or even should, deliberately be ignored); and 3) to suggest tools to make, or to keep, the requirement of human intervention in automated decision-making effective.

14:15-15:45 Session 5B: HC 2 - Personal Data Protection in mHealth and eHealth: current challenges
Location: DZ5

ABSTRACT. Why are certain forms of algorithmic engagement with news readers perfectly acceptable, and why should we be concerned about others? How can we define the dividing line between using algorithms to offer people more personally relevant news and encapsulating them in “filter bubbles”? Is it acceptable that the news media make access to their websites conditional upon the unconditional acceptance of cookies? And are there some players that we would rather not see engaged in profiling and targeting news readers, such as political parties, religious groups or governments? The news media is one of those sectors where experimentation with Big Data and algorithms is in full swing. And probably one of the most difficult and most pressing questions concerning the news media sector, for scholars, policy makers and the media alike, is: what kinds of algorithmic practices are useful for users and acceptable for society, and when do profiling and targeting lead to undesirable outcomes and unfair situations? So far, the scholarly debate on profiling and targeting has taken place first and foremost in the privacy and data protection arena. Privacy and the rules on data protection are the central benchmarks against which profiling and targeting the user are assessed. This article argues that looking at data protection law alone is not enough. The situation of a news reader is different from that of someone buying a pair of shoes or a Fitbit user. News readers have different concerns, and different demands for privacy. What is more, the news media sector is subject to its own public policies, values and constitutional guarantees: freedom of expression, non-discriminatory access to information, media diversity and freedom from censorship.
And this is why we suggest here that the profiling and targeting of media users, and more generally algorithmic news making, should also be considered from the perspective of media law and policy, and the essential values and principles that guide them (such as impartiality and independence from commercial influences, diversity, inclusiveness and non-discrimination, but also the responsible use of the “power to influence”). Thus far, this perspective has been somewhat underrepresented in the ongoing debate on profiling and targeting users. This paper demonstrates that it is not only privacy concerns that need to figure more prominently in the media policy debate. The values and objectives enshrined in media law and policy also provide an important source of inspiration for a more general debate about “fair algorithmic media practices” and the protection of news readers’ privacy. The paper demonstrates and explains that the use of algorithms and Big Data, and the way the news media use and engage with the personal data of news readers, can have profound implications for the realisation of important media policy objectives, such as media diversity, non-discriminatory access to information and equal chances to communicate and participate in the “marketplace of ideas”. The discussion about filter bubbles is instructive: when the media profile and target the user to offer more personally relevant services, not only privacy concerns are at stake, but also broader societal concerns about diversity, information access and the democratic role of the media. This is why algorithmic profiling and targeting in the news media needs to be placed in the broader media policy context. The insights from media law and policy can also help to define “fair algorithmic media practices” and algorithmic ethics in the media sector.
For example, a central policy consideration for the media is how to balance editorial and commercial influences – an issue that is also of great relevance in algorithmic news making. Media law gives users a concrete right to know when the media are informing them and when they are selling ideologies or products. Being transparent about the motives behind personalised messages (whether that is to offer more personally relevant information or to persuade) could also be an important way of ensuring fairness in the algorithmic profiling of media users. To give two other examples: media law and policy have long acknowledged that some groups, such as minors, warrant a higher level of protection than other groups in the population because of their credulity. Again, it is worth exploring to what extent that reasoning could beneficially be extended to the profiling and targeting of media users. Finally, media law acknowledges that some types of content are so critical to the functioning of our democracy, or so intimate, that they cannot be subject to commercial influences at all (for example, current affairs news or religious content). Are there reasons to argue that, following a similar line of thought, profiling and targeting media users is more acceptable for some types of content than for others, for example in order to discourage commercial influences on some categories of content?

Policing and The Cloud

ABSTRACT. Christopher Slobogin will discuss law enforcement’s burgeoning use of databases maintained by third parties, ranging from internet service providers and commercial establishments to public utilities and other government agencies. Police access to these databases can come in at least five different guises: suspect-driven, profile-driven, event-driven, program-driven, or volunteer-driven. Each of these investigative endeavors is different from the other four, and each calls for a different regulatory regime.

14:15-15:45 Session 5C: IP2: Copyright, Patents and Licensing
Location: DZ4
Towards child-specific privacy impact assessments

ABSTRACT. European data protection law has recently witnessed significant changes, as the General Data Protection Regulation (2016/679) took a risk-based turn and embraced risk identification and management practices in order to respond to the new risks that scientific and technological advances (e.g. profiling, data mining, IoT) create for individuals. Following in the footsteps of traditional risk regulation in the environmental, human health and safety policy domains, the GDPR assigns a central position to the data protection impact assessment (DPIA). These assessments are charged with identifying the salient risks to privacy, data protection and fundamental freedoms before the creation and deployment of a new product or service that processes personal data. Although the DPIA shares some elements with existing environmental and technology impact assessments, it is essentially new as a concept and as a practice in the data protection domain. A growing body of literature on DPIAs and privacy impact assessments (PIAs) is emerging, including manuals and guidance documents by academics (1), data protection authorities (2) and other bodies (3). Two practice-oriented DPIA frameworks cover specific technologies (4). All of these sources, however, focus on the methodology of assessing the “risks to the rights and freedoms of data subjects”, viewing data subjects as a homogeneous group. Yet general DPIA frameworks are inadequate to fully address the risks to vulnerable data subjects, such as children. The paper proposes to tailor the DPIA to data subjects’ needs and vulnerabilities and to develop a conceptual framework for the evaluation and assessment of digital privacy risks for children as data subjects.
The development of this child-specific framework is based on a two-fold premise: 1) the calculation of risks to children’s privacy online differs from that for adults’ privacy, due to the enhanced potential negative impact (harm) and the (often) higher probability of such impact occurring; 2) a specific catalogue of fundamental rights and freedoms that can be infringed by new online products and services should be used for children in DPIAs (i.e. the UN Convention on the Rights of the Child). Drawing upon multidisciplinary literature in risk studies, the social sciences (developmental psychology and media studies) and legal scholarship, in particular child rights and data protection law, the article: first, provides an overview of the potential online privacy risks and harms for children and shows the extent to which they differ from the privacy risks and harms listed in general for all individuals in the GDPR; second, reviews various existing (D)PIA methodologies and distinguishes the main criteria and risk-related elements to be evaluated; third, based on this review, elaborates a set of child-tailored DPIA criteria that allows online service providers to understand more fully the child rights impact of their activities. In constructing the criteria, the paper uses the following international documents: the UN Convention on the Rights of the Child (1989), the UN Children’s Rights and Business Principles (2013), the UNICEF Mobile Operator Child Rights Self-Impact Assessment Tool (MO-CRIA) and the UNICEF Business Child Online Safety Assessment tool (COSA).

Sources:
(1) David Wright, ‘Making Privacy Impact Assessment More Effective’, The Information Society, 29(5):307-315, 2013; David Wright, Rachel Finn and Rowena Rodrigues, ‘A Comparative Analysis of Privacy Impact Assessment in Six Countries’, Journal of Contemporary European Research, 9(1), 2013; Sourya Joyee De and Daniel Le Métayer, Privacy Risk Analysis, Synthesis Lectures on Information Security, Privacy, and Trust, 8(3), 2016; Wright, D. and P. De Hert (eds.), Privacy Impact Assessment, Dordrecht, The Netherlands: Springer, 2012.
(2) Information Commissioner’s Office, ‘Conducting Privacy Impact Assessments’, Code of Practice (2014); Agencia Espanola de Proteccion de Datos, ‘Guía para una Evaluación del Impacto en la Protección de Datos Personales’ (2014); Commission Nationale de l’Informatique et des Libertés (CNIL), ‘Methodology for Privacy Risk Management’ (2012) and ‘Measures for the Privacy Risk Treatment: A Catalogue of Good Practices’ (2012).
(3) Sourya Joyee De and Daniel Le Métayer, ‘PRIAM: A Privacy Risk Analysis Methodology’, Research Report RR-8876, Inria - Research Centre Grenoble - Rhône-Alpes (2016); ‘Privacy Risk Management for Federal Information Systems’ (2015).
(4) Privacy and Data Protection Impact Assessment Framework for RFID Applications (2011); Data Protection Impact Assessment Template for Smart Grid and Smart Metering Systems (2014).

14:15-15:45 Session 5D: PLSC 2A: Kannekens, Hoofnagle, and van Eijk
Location: DZ6
Panel: Price Discrimination

ABSTRACT. Price discrimination

Panel proposal for TILTing Perspectives 2017

Online shops could offer each website customer a different price. Such personalised pricing can lead to advanced forms of price discrimination based on individual characteristics of consumers. An online shop can recognise customers, for instance through cookies, and categorise them as price-sensitive or price-insensitive. The shop can charge (presumed) price-insensitive people higher prices.

This panel discusses such online price discrimination practices. We discuss price discrimination from different angles, in a panel representing the perspectives of law, economics, ethics, machine learning, regulation and digital civil rights.

At the start of the panel we present some preliminary empirical results from a large-scale survey we are currently conducting on people’s attitudes towards online price discrimination.

Questions that will be discussed during the panel include:

- Under which conditions is online price discrimination considered fair? Acceptable? Welfare enhancing?
- Is it possible to determine ‘red lines’ which online price discrimination should not cross?
- How can de facto statistical discrimination of protected groups be avoided or dealt with?
- Are companies too afraid of consumer backlash to engage in large-scale price discrimination?
- In 30 years, will we encounter personalised prices all the time?

Proposed speakers:

• Dr. Frederik Zuiderveen Borgesius, Institute for Information Law (moderator) (confirmed)
• Dr. Joost Poort, Institute for Information Law (confirmed)
• Dr. Solon Barocas, Princeton / Microsoft Research (confirmed)
• Prof. dr. Beate Roessler, University of Amsterdam (t.b.c.)
• Estelle Massé, Access Now (t.b.c.)
• EU legislative/regulatory representative (t.b.c.)

14:15-15:45 Session 5E: PLSC 2B: Gellert
Location: DZ7
The “Rule of Law” implications of data-driven decision-making: A techno-regulatory perspective

ABSTRACT. The proposed paper intends to identify certain “rule of law” implications of Big Data analysis from a techno-regulatory perspective. The main idea is that automated decision-making (DM), governed by algorithms of varying degrees of complexity, shares a common aim with law – namely the control and/or steering of institutional practices and individual behaviour within society. Combined with other regulatory features of ICTs, Big Data ushers in a new prospect of techno-regulatory settings capable of achieving goals common with human-governed normative systems. Accordingly, Part I sets the scene by providing an account of predictive analytics and data mining (Big Data practices) as regulatory tools and modalities in the sense set forth by Lessig in Code and Other Laws of Cyberspace. This Part develops the idea that, when complemented and reinforced by data analysis capabilities, rule-based systems could overcome the rigidity of pre-set architectures in implementing norms. Part II carries on with an analysis of the implementation of data-driven DM in several regulatory spaces/implementation fields – e.g. detection of wrongdoing and financial status; relevance, risk and credit scoring; content filtering; sentiment analysis; performance testing and other optimization tasks such as traffic management. The overall aim is to define and theorise data-driven DM and data mining from a techno-regulatory perspective, and thus to develop a taxonomical approach with regard to implementation spaces and the legal effects they bring about. In other words, a consolidation of “code as law” in view of emerging data-driven practices and their regulatory relevance. Part III identifies two mutually reinforcing dynamics/features of data mining as the main challenges giving rise to rule of law concerns.
Accordingly, bias/discrimination and informational asymmetries are elucidated as the kernel of the harmful consequences specific to data-driven DM as an implementation of norms by and through architecture/software. We raise the questions, first, of how to approach and evaluate the different types of biases inherent in ML and their “true” causes, and second, of whether such a distinction has any value for our intended legal analysis and theoretical framework. Part IV refines certain “rule of law” implications derived from the above analysis – namely, (i) the collapse of the normative enterprise, (ii) the erosion of the moral enterprise and (iii) the replacement of causative bases with correlative calculations. Although these implications are not entirely specific to the Big Data space, but rather of a general nature regarding techno-regulation, each of them becomes aggravated, and extends into deeper dimensions, when techno-regulation is implemented through data-driven systems. Taking the notion of autonomy – supposedly the ultimate aim of human existence – as the touchstone, the paper concludes by engaging with the question of whether, and to what extent, a theorisation of data-driven DM from a “systemic” perspective – with reference to findings from systems theory and complexity studies, a.k.a. cybernetics – could contribute to the above analysis.

Will Data Protection Eventually Kill Privacy?

ABSTRACT. It has often been stipulated that data protection focuses on the informational part of privacy. It is narrower than privacy as such, but gives more specific rules and requirements for the legitimate processing of personal data. These requirements include adequate measures for the protection of data, at an organizational as well as a technological level. This is where information security touches upon data protection directly. Privacy, as said, has a broader scope, and its roots are closely related to individual autonomy, human dignity, and bodily integrity (e.g. Koops et al. 2016). This is why the adoption of certain technologies has long faced backlash, as they were considered too infringing upon privacy. A telling example is the use of biometrics, such as fingerprint scanning and facial recognition (e.g. Sherman 2005). Strikingly, biometrics are now gradually becoming more commonplace. Applications in the field of national security are well known. Think, for instance, of US border customs, where fingerprints and a photo are taken when entering the country (Dimitrova 2016). But these technologies are also being used for information security purposes, such as fingerprints for unlocking smartphones, and face images (selfies) in relation to eID solutions. Biometrics become a technological means to protect personal data by controlling access to information and accounts. What we see is that a technology which has always been considered highly infringing upon privacy is now being used as a technological measure for data protection. And if, despite the sometimes disputable quality of biometrics, this becomes a state-of-the-art measure for the protection of personal data, data protection laws may require an infringement of privacy for the benefit of data protection. This is paradoxical, and may in the end result in data protection becoming the prevalent standard, pushing privacy more and more to the background.
A tension arises between data protection and privacy, both of which are recognized as fundamental rights in their own right in the EU Charter of Fundamental Rights. There is a need to rethink data protection practices with privacy as the underlying concept in mind, and to work on innovative solutions that protect personal data as well as privacy. This paper contributes to the existing literature by taking the privacy versus security debate as a starting point and then moving to security measures being used for data protection purposes. The resulting tension between data protection and privacy has not been described so far.

Technological Disruption in Privacy Law
SPEAKER: unknown

ABSTRACT. Perhaps the only premise that privacy scholars can agree on is that the field abounds with controversy. Some assert that privacy faces dire threats from new surveillance technologies—from web cookies to drones and GPS tracking—and the government and private actors who have capitalized on new opportunities to track us. Others assure us that there is little new here and that contemporary privacy law will soon catch up to new technologies, much as the law ultimately adapted to the portable cameras that incensed Samuel Warren and Louis Brandeis in their 1890 article, The Right To Privacy.

Scholars caught in this debate are asking the wrong question. The question of whether a new surveillance technology disrupts privacy law tells us little about how to treat the new technology. The more important project is to identify how the new technology might disrupt the law. We identify three categories of legal disruption. First, some technologies present discrete but manageable problems for privacy law. Drones might, for example, allow voyeurs to intrude on their neighbors’ seclusion without technically violating existing “Peeping Tom” laws, but the disruption is manageable insofar as courts and legislators are equipped to extend the laws to cover the new practices. Second, some technologies create more difficult problems because they challenge the underlying assumptions on which privacy laws are based. Consider how GPS tracking impacts policing. While police officers have long had the right to follow suspects’ vehicles, the costs of doing so impose a soft constraint against dragnet surveillance. We must now grapple with the question of whether the same lax standards for police monitoring should apply even when these de facto constraints disappear. And third, some technologies are disruptive because they raise deeper challenges for the very process of lawmaking. The growing internet of things presents a unique difficulty because of the convergence of several ordinarily distinct regulatory concerns. Even the humble John Deere tractor—which now includes sensors and proprietary software—raises a host of privacy, security, intellectual property, and consumer protection concerns that span the jurisdiction of a dozen different agencies in the United States alone. The resultant political jockeying complicates the regulation of privacy.

Clearly delineating these different types of legal disruptions allows us to break the impasse in the current privacy debate and shift to the more productive discussion of how to best respond to each type of disruption. In doing so we can also open privacy law to insights from other fields that have adapted to extensive technological change, such as labor law’s response to the industrial revolution and intellectual property’s response to home-copying technologies.

14:15-15:45 Session 5F: Data Science 3 - Datafication and economic power in developing countries
Location: DZ8
Panel: The Media and Privacy in Public
SPEAKER: Kirsty Hughes

ABSTRACT. This panel will discuss the ways in which the media and laws applicable to the media create, shape and challenge conceptions of public and private, in particular ideas of public and private spaces and people. Drawing upon theory, practice and comparative experiences across the common law world the panel will discuss the development and contemporary relevance of these binary tools.

The panel will consist of three 20 minute presentations leaving time for discussion and questions:

Kirsty Hughes The Public Figure Doctrine as a Conceptual Device in Human Rights Law

Gavin Phillipson, The Media and Privacy in the Cyberworld: “My presentation will focus on the recent decision of the UK Supreme Court in PJS (2016) to continue a privacy injunction even in the face of publication of the relevant couple’s identities on social media and in other jurisdictions, especially the US. The decision – which was widely attacked and derided in the press – was therefore an example of the maintenance of privacy in the midst of some publicity: the cyber equivalent, in some ways, of protecting privacy in physically public spaces.”

Nicole Moreham Comparative Perspectives on the Public/Private Divide in Media Privacy Law

14:15-15:45 Session 5G: Panel 1 - Data Portability Panel
Location: DZ1
Family privacy in the internet age: Family photographs as a case study

ABSTRACT. Abstract and Outline


In the modern internet age significant challenges are posed to the privacy of families and individuals. Online sharing of family photographs poses particular challenges, most notably for the children who are often the subjects of those photos. Although parental use of social media to share photographs of their children online (known as ‘sharenting’) has attracted much media comment, there has been little academic discussion of the legal ramifications of sharenting. A recent article by Steinberg (2016) provides the first analysis of the impact of sharenting on children’s privacy, considering the potential legal solutions in American law. This paper seeks to further develop the discussion started by Steinberg.

In 2016 media reports suggested that an eighteen-year-old Austrian girl had brought a claim against her parents for violating her privacy by posting embarrassing childhood photos on Facebook. Whilst the story has since been revealed to be untrue, it is not inconceivable that a child might bring such a claim against their parents. Part one of this paper considers how such a claim by a child against her parents for unauthorised online disclosure of images of her childhood might be decided before the English courts. Whilst English law recognises no enforceable general right to privacy, the child is not without a potential remedy. Three legal regimes are clearly relevant to such a scenario: the law of confidentiality, the tort of misuse of private information (MOPI) and the Data Protection Act 1998. The child might, under these provisions, potentially obtain an injunction to prevent continued publication and to obtain the removal of images posted online. They might also obtain monetary compensation. As recognised in this paper, however, such a claim raises many difficult questions, not least how the courts might answer a parent’s argument that, in the twenty-first century, a child cannot reasonably expect not to have their photographs displayed on social media.

In part two of this paper recognition is given to the fact that some commentators have questioned whether the state should ever intervene in such a family dispute between a child and their parent(s). The discussion is thus set in the context of a wider examination of the notion of family privacy. Whilst there is some academic disagreement about what family privacy is and what it protects, it is evident from analysis of the literature that family privacy may be seen both to protect parental rights to determine how family life operates (Cahn, 1999) and to protect the family from state intervention (Fineman, 1999; Lacey, 1998). Family privacy has also been described as an ethic which ‘transcends law as such and informs the way that laws are interpreted and understood’ (Fineman, 1998). Analysis of the tort of misuse of private information certainly suggests that in determining MOPI claims the judiciary continue to be influenced by an ideology of family privacy and view protection of the private family sphere as important (ETK v News Group Newspapers [2011] EWCA Civ 439; Weller v Associated Newspapers Ltd [2015] EWCA Civ 1176; R v Broadcasting Complaints Commission [1995] EMLR 163). MOPI case law suggests also that the courts consider that parents should be able to determine what happens to their children’s information, even if their decisions have a negative impact on those children (AAA v Associated Newspapers Ltd [2013] EWCA Civ 554). There does appear, therefore, to be some strength to the argument that the notion of family privacy is still relevant in the twenty-first century, and that it might allow parents to post family photographs online without fear of state or court sanction. This paper puts forward a counter-argument, however, one which would lend strength to the child’s contention that their claim should be heard.
This paper will argue that many of the assumptions which underpin the notion of family privacy are no longer appropriate in the twenty-first century, and that the child in a sharenting claim is deserving of the court’s protection.

Ultimately, in exploring the tensions between parental rights and children’s rights that lie at the heart of the sharenting scenario, this paper highlights a need to reconsider notions of family privacy and their influence upon the legal regimes which protect private and personal information.

I The Law of Confidence

(a) Do parents owe a duty of confidence to their children? (Consider by reference to the three-stage test in Coco v A N Clark (Engineers) Limited [1969] FSR 415)
(b) Remedies available: damages and/or an injunction. But note that if the information has been shared so widely that it is effectively public information, no injunction is likely to be made

II Misuse of private information (MOPI)

(a) Again, damages and/or an injunction available. A two-stage test to be satisfied (Campbell v MGN [2004] UKHL 22)
(b) Stage 1: Reasonable expectation of privacy. See Campbell; Murray v Big Pictures (UK) Limited [2008] EWCA Civ 446. (The key question is whether the child has a reasonable expectation that their photographs will not be shared online in the social media age. This will be a value judgment to be taken by the court.)
(c) Stage 2: Balancing of parental Article 8 and Article 10 rights against the child’s Article 8 rights. (The MOPI action was designed for disputes between individuals and the media – consider how the courts have used it in parent-child disputes of this nature, i.e. In re S (a child) [2004] UKHL 47; [2005] 1 AC 593; OPO v (1) MLA and (2) STL [2014] EWCA Civ 1277 (CA) and the Supreme Court decision in the same case, Rhodes v OPO by his litigation friend BHM and another [2015] UKSC 32, focusing on the Wilkinson v Downton tort. Consider dicta in Weller v Associated Newspapers Ltd [2015] EWCA Civ 1176 on the importance of children’s interests.)

III Data Protection Act 1998

(a) Section 13 enables individuals to claim compensation where damage is suffered by reason of contravention of the DPA. Consider the interpretation by the Court of Appeal in Google Inc v Vidal-Hall and others [2015] EWCA Civ 311 – a potential remedy even if the only damage caused to the child is distress, or ‘moral damage’
(b) But problems are posed by the personal and household exemption (s 36 DPA) – a potential argument by parents that the DPA does not apply since the processing is for the purposes of their personal, family or household affairs (ICO (2014), Social networking and online forums – when does the DPA apply?)


I What family privacy is and what it protects – exploration of different notions of family privacy and the potential arguments as to why the notion of family privacy supports an argument that parents should be allowed to post family photographs online without fear of state or court sanction

(a) Family privacy as ‘privatisation’ – preservation of parental rights to determine how family life operates (Cahn, 1999).
(b) Family privacy as protection of liberty interests – protecting the family from state intervention (Fineman, 1999; Lacey, 1998).
(c) Family privacy as ‘the idea that there exists for families, as for individuals, an area in which they may expect to be let alone’ (Peterman and Jones, 2003)

II Evidence of an underpinning ideology of family privacy in English law

(a) MOPI case law: ETK v News Group Newspapers [2011] EWCA Civ 439; Weller v Associated Newspapers Ltd [2015] EWCA Civ 1176; AAA v Associated Newspapers Ltd [2013] EWCA Civ 554; R v Broadcasting Complaints Commission [1995] EMLR 163
(b) Personal and household exemption in the Data Protection Act 1998 (s 36)

III Arguments for revisiting the notion of family privacy and respecting children’s privacy

(a) Assumptions that underpin notions of family privacy
i. The family is a unique unit, the best place to bring up children, and deserves and requires protection
ii. The family is synonymous with the private sphere; the home is a private place separate from society and state
(b) The changing nature of the family and of society
i. Families increasingly share intimate details of family life online – the family no longer inhabits a separate family sphere
ii. Increasing individualisation of the family (Beck and Beck-Gernsheim; Giddens) justifies reconsideration of the family as a cohesive unit meriting protection from outsiders
iii. Recognition in the jurisprudence of the English courts and the European Court of Human Rights that parents do not always act in their children’s best interests, may cause significant harm to their children, and that it may be justified to interfere in family life in the best interests of the child

CONCLUSION
• Family privacy as a notion needs reconsideration – it should not be used as an excuse to prevent the courts from providing a remedy to children when their parents share their information.
• Children have the same rights to privacy as adults and should be able to enforce their rights in the same way as adults, even where the very individuals who have breached their rights are their parents.
• There may be force in arguments that parents should be able to take decisions about how their children’s information is used – consider parents’ Article 8 right to family life. Parents also have their own Article 10 right to freedom of expression, to be balanced against children’s rights.
• But there are good reasons why parents should also be cautious about sharing their children’s information online – the impact on privacy and potential harm, and the issue of children’s best interests.
• When courts balance parents’ and children’s rights they should not be afraid to provide children with a remedy, whether damages or an injunction.

15:45-16:00 Coffee Break
16:00-17:30 Session 6: Keynotes: Christopher Slobogin & Gary T. Marx
Location: DZ1
Panel: Copyright lawmaking in the EU
SPEAKER: Ana Ramalho

ABSTRACT. This panel will address the topic of copyright lawmaking in the EU from different perspectives, as follows:

Chair – Ana Ramalho

Keynote and introduction (Ana Ramalho) "Copyright law making in the EU: constraints and opportunities" (15’) This introduction will provide some background on the topic of the panel, focussing inter alia on competence issues and constitutional guidelines in relation to copyright lawmaking. At the end of the introduction, the speaker will briefly present the other speakers and their topics.

Political perspective (Ben Farrand) Political difficulties in achieving the single market strategy (20’)

Legal perspective (Christina Angelopoulos) "Normative guidance of human rights in copyright law – the example of intermediary liability" (20´)

Methodological perspective (Bodo Balazs) "A new type of evidence in the age of limited enforceability: testing the public support for copyright policy alternatives" (20’)

Q&A (15')

18:30-21:30 Conference Dinner