FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
CSF ON WEDNESDAY, JULY 11TH

09:00-10:30 Session 60B: Information flow
Location: Maths LT2
09:00
A Permission-Dependent Type System for Secure Information Flow Analysis
SPEAKER: Hongxu Chen

ABSTRACT. We introduce a novel type system for enforcing secure information flow in an imperative language. Our work is motivated by the problem of statically checking potential information leakage in Android applications. To this end, we design a lightweight type system featuring the Android permission model, where permissions are statically assigned to applications and are used to enforce access control within them. We take inspiration from a type system by Banerjee and Naumann (BN) to allow security types to depend on the permissions of the applications. A novel feature of our type system is a typing rule for conditional branching induced by permission testing, which introduces a merging operator on security types, allowing more precise security policies to be enforced. The soundness of our type system is proved with respect to non-interference. In addition, a type inference algorithm is presented for the underlying security type system, reducing the inference problem to a constraint-solving problem in the lattice of security types.
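
To make the idea of merging security types concrete, here is a minimal Python sketch of a two-point security lattice with a join operator used to combine the types of the two branches of a permission test; the lattice, the names, and the rule shown are illustrative assumptions, not the paper's actual calculus.

    # Illustrative sketch only: a tiny security lattice (LOW <= HIGH) and a
    # join ("merge") operator of the kind used when typing the branches of a
    # permission test. Not the paper's calculus; all names are assumptions.
    from enum import IntEnum

    class Level(IntEnum):
        LOW = 0   # public
        HIGH = 1  # secret

    def join(a: Level, b: Level) -> Level:
        """Least upper bound in the two-point lattice."""
        return Level(max(a, b))

    def merge_branch_types(guard: Level, then_ty: Level, else_ty: Level) -> Level:
        """Type of a conditional: both branch types joined with the guard's
        level, so data computed under a secret test cannot reach a public sink."""
        return join(guard, join(then_ty, else_ty))

    # Branching on a HIGH permission check forces the result type to HIGH.
    print(merge_branch_types(Level.HIGH, Level.LOW, Level.LOW))  # Level.HIGH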

09:30
Types for Information Flow Control: Labeling Granularity and Semantic Models
SPEAKER: Vineet Rajani

ABSTRACT. Language-based information flow control (IFC) tracks dependencies within a program using sensitivity labels and prohibits public outputs from depending on secret inputs. In particular, the literature has proposed several type systems for tracking these dependencies. At one extreme are fine-grained type systems (like Flow Caml) that track dependence at the level of individual values, by labeling all values individually. At the other extreme are coarse-grained type systems (like HLIO) that track dependence coarsely, by associating a single label with an entire computation context and not labeling all values individually.

In this paper, we show that, despite their glaring differences, both these styles are, in fact, equally expressive. To do this, we show a semantics- and type-preserving translation from a coarse-grained type system to a fine-grained one and vice versa. The forward translation isn't surprising, but the backward translation is: it requires a construct to arbitrarily limit the scope of a context label in the coarse-grained type system (e.g., HLIO's "toLabeled" construct), as well as a small change to how labels are interpreted in coarse-grained type systems. Along the way, we show how to build logical-relation models of both fine-grained and coarse-grained type systems (with full higher-order state). We use these relations to prove the two type systems and our translations sound.
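
As a rough dynamic analogue of the coarse-grained style (not the static type systems studied in the paper), the sketch below keeps a single context label for the whole computation and uses a toLabeled-like construct to limit how far a raised label propagates; the function names and two-point label set are assumptions for illustration.

    # Sketch of coarse-grained label tracking: one context label for the whole
    # computation, plus a toLabeled-style construct that restores the outer
    # label afterwards. A dynamic analogue only; all names are illustrative.
    CONTEXT_LABEL = "Low"
    ORDER = {"Low": 0, "High": 1}

    def raise_label(to: str) -> None:
        """Reading secret data raises the context label for everything after it."""
        global CONTEXT_LABEL
        if ORDER[to] > ORDER[CONTEXT_LABEL]:
            CONTEXT_LABEL = to

    def with_scoped_label(thunk):
        """toLabeled-like scoping: run thunk, record the label it finished at,
        then restore the outer context label so the taint stays local."""
        global CONTEXT_LABEL
        saved = CONTEXT_LABEL
        result = thunk()
        computed_at = CONTEXT_LABEL
        CONTEXT_LABEL = saved
        return result, computed_at

    def secret_lookup():
        raise_label("High")
        return 42

    value, label = with_scoped_label(secret_lookup)
    print(label, CONTEXT_LABEL)  # High Low -- the outer context stays Low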

10:00
Inductive Invariants for Noninterference in Multi-agent Workflows

ABSTRACT. Our goal is to certify the absence of information leaks in multi-agent workflows, such as conference management systems like EasyChair. These workflows can be executed by any number of agents, some of which may form coalitions against the system. Therefore, checking noninterference is a challenging problem.

Our paper offers two main contributions. First, a technique is provided to translate noninterference (in the presence of various agent capabilities and declassification conditions) into universally quantified invariants of a new, instrumented workflow program. Second, general techniques are developed for checking and inferring universally quantified inductive invariants for workflow programs. In particular, a large class of workflows is identified where inductiveness of invariants is decidable, as well as a smaller, still useful class of workflows where the weakest inductive universal invariant implying the desired invariant is effectively computable. The new algorithms are implemented and applied to certify noninterference for workflows arising from conference management systems.
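
For a feel of what a universally quantified invariant over agents looks like, here is a toy, brute-force check on a small conference-style state; the workflow, the agents, and the particular invariant are invented for illustration, and the paper's technique works symbolically rather than by enumeration.

    # Toy illustration: a conference-style workflow state and a universally
    # quantified invariant over agents and papers, checked by brute force on a
    # small instance. All data and the invariant are illustrative assumptions.
    from itertools import product

    agents  = ["alice", "bob", "carol"]
    authors = {"paper1": {"alice"}}                     # paper -> set of authors
    reads   = {("bob", "paper1"), ("carol", "paper1")}  # who has read reviews of which paper

    def no_author_reads_own_reviews() -> bool:
        """forall agent a, paper p: a authored p  =>  (a, p) not in reads."""
        return all(not (a in authors.get(p, set()) and (a, p) in reads)
                   for a, p in product(agents, authors))

    print(no_author_reads_own_reviews())    # True for the state above
    reads.add(("alice", "paper1"))          # simulate a leak to an author
    print(no_author_reads_own_reviews())    # False: the invariant is violated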

10:30-11:00 Coffee Break
11:00-12:00 Session 64A: CSF Invited Talk: Catuscia Palamidessi
Location: Maths LT2
11:00
Local Differential Privacy on Metric Spaces: optimizing the trade-off with utility

ABSTRACT. Local differential privacy (LDP) is a distributed variant of differential privacy (DP) in which the obfuscation of the sensitive information is done at the level of the individual records; in general, it is used to sanitize data that are collected for statistical purposes. LDP has the same properties of compositionality and independence from the prior as DP, and it has the further advantages that (a) each user can choose the level of privacy they wish, (b) it does not need to assume a trusted third party, and (c) since all stored records are individually sanitized, there is no risk of privacy leaks due to security breaches. On the other hand, LDP in general requires more noise than DP to achieve the same level of protection, with negative consequences for utility. In practice, utility becomes acceptable only on very large collections of data, which is why LDP is especially successful among big companies such as Apple and Google, which can count on a huge number of users. In this talk, we propose a variant of LDP suitable for metric spaces, such as location data or energy consumption data, and we show that it provides a much higher utility for the same level of privacy. Furthermore, we discuss algorithms to extract the best possible statistical information from the data obfuscated with this metric variant of LDP.
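
As a minimal sketch of the metric variant (assuming the standard Laplace-style construction on the real line, which may differ from the mechanisms discussed in the talk), a numeric reading x can be reported as x plus Laplace noise of scale 1/eps, so that the output distributions on two readings x and x' differ by at most a factor exp(eps * |x - x'|):

    # Minimal sketch of a metric-LDP (d-privacy) mechanism on the real line:
    # report x + Lap(1/eps). Nearby readings remain almost indistinguishable,
    # far-apart readings less so, which is where the utility gain over "flat"
    # LDP comes from. Shown only as an illustration of the metric variant.
    import math
    import random

    def laplace_noise(scale: float) -> float:
        """Zero-centred Laplace sample via the inverse-CDF method."""
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def sanitize(reading: float, eps: float) -> float:
        """d-private report of a numeric reading (e.g. an energy consumption value)."""
        return reading + laplace_noise(1.0 / eps)

    random.seed(0)
    print(sanitize(230.0, eps=0.1))  # a locally sanitized reading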

12:00-12:30 Session 65A: Privacy
Location: Maths LT2
12:00
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
SPEAKER: Samuel Yeom

ABSTRACT. Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role.

This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.
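
The simplest attack in this spirit is a loss-threshold membership test: examples on which the model's loss is unusually low are guessed to be training members. The sketch below is illustrative only; the threshold choice (an estimate of the average training loss) and the numbers are assumptions, not the paper's experiments.

    # Sketch of a loss-threshold membership-inference test. Overfitted models
    # have much lower loss on training points than on fresh points, which is
    # what the test exploits. Threshold and numbers are illustrative.
    import math

    def cross_entropy(prob_true_class: float) -> float:
        return -math.log(max(prob_true_class, 1e-12))

    def membership_guess(prob_true_class: float, threshold: float) -> bool:
        """Guess 'member' iff the per-example loss falls below the threshold."""
        return cross_entropy(prob_true_class) < threshold

    avg_train_loss = 0.05                          # assumed known to the attacker
    print(membership_guess(0.99, avg_train_loss))  # low loss  -> likely member
    print(membership_guess(0.60, avg_train_loss))  # high loss -> likely non-member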

12:30-14:00 Lunch Break
14:00-15:30 Session 66A: FLoC Keynote Lecture: Shafi Goldwasser
Location: Maths LT1
14:00
Pseudo-deterministic algorithms and proofs

ABSTRACT. Probabilistic algorithms for both decision and search problems can offer significant complexity improvements over deterministic algorithms. One major difference, however, is that they may output different solutions for different choices of randomness. This makes correctness amplification impossible for search algorithms and is less than desirable in settings where uniqueness of output is important, such as the generation of system-wide cryptographic parameters, or distributed settings where different sources of randomness are used. Pseudo-deterministic algorithms are a class of randomized search algorithms that output a unique answer with high probability. Intuitively, they are indistinguishable from deterministic algorithms by a polynomial-time observer of their input/output behavior. In this talk I will describe what is known about pseudo-deterministic algorithms in the sequential, sub-linear, and parallel settings. We will also describe an extension of pseudo-deterministic algorithms to interactive proofs for search problems, where the verifier is guaranteed, with high probability, to output the same answer on different executions, regardless of the prover strategies. Based on joint work with Goldreich, Ron, Grossman and Holden.
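
A toy example of the definition (not drawn from the talk): a randomized search algorithm whose output is, with high probability, one canonical value, so that independent runs almost always agree.

    # Toy pseudo-deterministic algorithm: the canonical answer is the value
    # taken by at least 2/3 of the entries; random sampling plus a majority
    # vote returns it with probability 1 - exp(-Omega(samples)), so different
    # runs (different randomness) almost always output the same answer.
    import random
    from collections import Counter

    def pseudo_det_majority(xs, samples: int = 100):
        picks = [random.choice(xs) for _ in range(samples)]
        return Counter(picks).most_common(1)[0][0]

    data = [7] * 700 + [3] * 300                          # 7 is the canonical answer
    print({pseudo_det_majority(data) for _ in range(5)})  # almost surely {7}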

15:30-16:00 Coffee Break
16:00-17:00 Session 67A: Electronic voting
Location: Maths LT2
16:00
Alethea: A Provably Secure Random Sample Voting Protocol
SPEAKER: Lara Schmid

ABSTRACT. In random sample voting, only a randomly chosen subset of all eligible voters vote. We propose Alethea, the first random sample voting protocol that satisfies end-to-end verifiability and receipt-freeness. Our protocol makes explicit the distinction between the human voters and their devices. This allows us to make more fine-grained statements about the required capabilities and trust assumptions of each agent than is possible in previous work. Consequently, we can make more precise statements about the adversary models under which our protocol is secure.

Alethea is the first formally verified random sample voting protocol. This necessitates defining new security properties, in particular that the sample group is chosen randomly and that the chosen voters remain anonymous. We model the protocol and its properties in a symbolic model amenable to tool support. We verify most of the properties automatically and complete our analysis with traditional hand-written proofs, either when automatic analysis is infeasible or when we desire a more fine-grained analysis than our symbolic abstractions allow.
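
As one way to make "the sample group is chosen randomly" publicly checkable, a generic hash-based sortition from a published random seed lets anyone recompute the sample; this sketch is not Alethea's mechanism (in particular it does nothing for the anonymity of the chosen voters) and all names are illustrative.

    # Generic hash-based sortition sketch, NOT Alethea's actual protocol:
    # rank voters by H(seed || id) for a published random seed and take the
    # smallest ranks, so the sample is both pseudorandom and publicly
    # recomputable. Anonymity of the chosen voters is not addressed here.
    import hashlib

    def select_sample(voter_ids, public_seed: bytes, sample_size: int):
        ranked = sorted(
            voter_ids,
            key=lambda vid: hashlib.sha256(public_seed + vid.encode()).hexdigest(),
        )
        return ranked[:sample_size]

    voters = [f"voter{i}" for i in range(1000)]
    print(select_sample(voters, b"public-beacon-output", 5))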

16:30
Machine-checked proofs for electronic voting: privacy and verifiability for Belenios

ABSTRACT. We present a machine-checked security analysis of Belenios, a deployed voting protocol that has already been used in more than 200 elections. Belenios extends Helios with an explicit registration authority to obtain eligibility guarantees.

We offer two main results. First, we build upon a recent framework for proving ballot privacy in EasyCrypt. Inspired by our application to Belenios, we adapt and extend the privacy security notions to account for protocols that include a registration phase. Our analysis identifies a trust assumption missing from the existing (pen-and-paper) analysis of Belenios: ballot privacy does not hold if the registrar misbehaves, even though the role of the registrar is seemingly only to provide eligibility guarantees. Second, we develop a novel framework for proving strong verifiability in EasyCrypt and apply it to Belenios. In the process, we clarify several aspects of the pen-and-paper proof, e.g., how to deal with revote policies.

Together, our results yield the first machine-checked analysis of both privacy and verifiability properties for a deployed electronic voting protocol. Last, but not least, we identify several issues regarding the applicability of existing definitions of privacy and verifiability to systems other than Helios. While we show how to adapt the definitions to the particular case of Belenios, our findings indicate a need for the development of less specific security notions for electronic voting protocols with registration authorities.

19:00-21:30 FLoC banquet at Examination Schools

Drinks and food available from 7pm (pre-booking via the FLoC registration system required; guests welcome).