10:45 | The Complexity of Estimating Systematic Risk in Networks SPEAKER: unknown ABSTRACT. The risk of catastrophe from an attack is a consequence of a network's structure, formed by the connected individuals, businesses and computer systems. Understanding the likelihood of extreme events, or, more generally, the probability distribution of the number of compromised nodes, is an essential requirement for providing risk mitigation or cyber-insurance. However, previous network security research has not considered the higher moments of this distribution, while previous cyber-insurance research has not considered the effect of topologies on the supply side. We provide a mathematical basis for bridging this gap: we study the complexity of computing these loss-number distributions, both in general and for special cases of common real-world networks. In the case of scale-free networks, we demonstrate that expected loss alone cannot determine the riskiness of a network, and that this riskiness cannot be naively estimated from smaller samples, which highlights the importance of, and current lack of, topological data in security incident reporting. |
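As a point of reference (standard definitions, not results from the paper), the abstract's distinction between expected loss and riskiness can be made concrete through the first two moments of the loss-number distribution L over n nodes:

\[ \mathbb{E}[L] = \sum_{k=0}^{n} k \,\Pr[L = k], \qquad \operatorname{Var}[L] = \mathbb{E}[L^2] - \mathbb{E}[L]^2. \]

Two networks can share the same expected loss while differing sharply in variance and in the tail probabilities \Pr[L \geq k]; this is why the higher moments matter when pricing cyber-insurance.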
11:15 | Automated Generation of Attack Trees SPEAKER: Roberto Vigo ABSTRACT. Attack trees are widely used to represent threat scenarios in a succinct and intuitive manner, suitable for conveying security information to non-experts. The manual construction of such objects relies on the creativity and experience of specialists, and therefore it is error-prone and impracticable for large systems. However, the automated generation of attack trees has so far only been explored in connection with computer networks, leveraging rich models whose analysis typically leads to an exponential blow-up of the state space. We propose a static analysis approach in which attack trees are automatically inferred from a process algebraic specification in a syntax-directed fashion, encompassing a great many application domains and avoiding a systematic exponential explosion. Moreover, we show how the standard propositional denotation of an attack tree can be used to phrase interesting quantitative problems that can be solved through an encoding into Satisfiability Modulo Theories. The flexibility and effectiveness of the approach are demonstrated on the study of a national-scale authentication system, whose attack tree is computed using a Java implementation of the framework. |
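To make the propositional reading concrete, here is a minimal, illustrative sketch (my own toy example, not the paper's framework or its Java implementation): an AND/OR attack tree is denoted as a propositional formula over basic attacks, and one quantitative question, the minimum-cost attack, is encoded for an SMT solver via Z3's Python bindings. The attack names and costs are hypothetical.

# Toy AND/OR attack tree and a minimum-cost attack query via SMT (Z3).
from z3 import Bool, And, Or, If, Sum, Optimize, sat, is_true

# Hypothetical basic attacks (the tree's leaves) with assumed costs.
costs = {"steal_password": 3, "phish_admin": 5, "exploit_vpn": 7}
atoms = {name: Bool(name) for name in costs}

# Propositional denotation of the tree:
# root = AND( OR(steal_password, phish_admin), exploit_vpn )
root = And(Or(atoms["steal_password"], atoms["phish_admin"]), atoms["exploit_vpn"])

opt = Optimize()
opt.add(root)  # the attack goal must be achievable
objective = opt.minimize(Sum([If(atoms[n], c, 0) for n, c in costs.items()]))
if opt.check() == sat:
    model = opt.model()
    chosen = [n for n in costs if is_true(model.evaluate(atoms[n]))]
    print("cheapest attack:", chosen, "cost:", opt.lower(objective))

The same denotation supports other quantitative queries, for instance listing all attacks below a fixed budget, by replacing the objective with a constraint.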
11:45 | Mignis: A semantic based tool for firewall configuration SPEAKER: Pedro Adão ABSTRACT. The management and specification of access control rules that enforce a given policy is a non-trivial, complex, and time-consuming task. In this paper we aim to simplify this task at both the specification and verification levels. To that end, we propose a formal model of Netfilter, a firewall system integrated in the Linux kernel. We define an abstraction of the concepts of chains, rules, and packets present in Netfilter configurations, and give a semantics that mimics packet filtering and address translation. We then introduce a simple but powerful language that permits specifying firewall configurations which are unaffected by the relative ordering of rules and which do not depend on the underlying Netfilter chains. We give a semantics for this language and show that it can be translated into our Netfilter abstraction. We present Mignis, a publicly available tool that translates abstract firewall specifications into real Netfilter configurations. Mignis is currently used to configure the whole firewall of the DAIS Department of Ca’ Foscari University. |
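As a rough illustration of the idea (not Mignis's actual language or translation; zone names, interfaces and rules are hypothetical), the following sketch flattens an order-independent allow-list with a default-deny policy into an ordered list of iptables commands, the level at which Netfilter rule ordering normally matters.

# Hedged sketch: from an unordered abstract specification to ordered Netfilter rules.
ALLOWED = {
    ("lan", "internet"),   # hosts in the LAN may reach the internet
    ("lan", "dmz"),        # and the DMZ
}
INTERFACES = {"lan": "eth0", "dmz": "eth1", "internet": "eth2"}

def to_iptables(allowed, interfaces):
    # The abstract specification is a set, so reordering its entries cannot change
    # its meaning; ordering only appears in the generated low-level rule list.
    rules = ["iptables -P FORWARD DROP"]  # default-deny policy
    for src, dst in sorted(allowed):
        rules.append(
            f"iptables -A FORWARD -i {interfaces[src]} -o {interfaces[dst]} -j ACCEPT"
        )
    # allow return traffic for established connections
    rules.append("iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT")
    return rules

print("\n".join(to_iptables(ALLOWED, INTERFACES)))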
12:15 | Provably Sound Browser-Based Enforcement of Web Session Integrity SPEAKER: Stefano Calzavara ABSTRACT. Enforcing protection at the browser side has recently become a popular approach for securing web authentication. Though interesting, existing attempts in the literature only address specific classes of attacks, and thus fall short of providing robust foundations for reasoning about web authentication security. In this paper we provide such foundations, by introducing a novel notion of web session integrity, which allows us to capture many existing attacks and spot some new ones. We then propose FF+, a security-enhanced model of a web browser that provides a full-fledged and provably sound enforcement of web session integrity. We leverage our theory to develop SessInt, a prototype extension for Google Chrome implementing the security mechanisms formalized in FF+. SessInt provides a level of security very close to FF+, while keeping an eye on usability and user experience. |
14:30 | Privacy in the Age of Augmented Reality SPEAKER: Alessandro Acquisti ABSTRACT. I will present the results of a series of studies connecting research on privacy, behavioural economics, and online social networks. First, I will present the results of a series of behavioural experiments aimed at investigating consumers' privacy valuations and the adequacy of "notice and consent" mechanisms for privacy protection. Then, I will discuss security and privacy challenges in the context of social media. In particular, I will focus on the feasibility of combining publicly available Web 2.0 data with off-the-shelf face recognition software for the purpose of large-scale, automated individual re-identification. I will discuss the implications of the inevitable convergence of face recognition technology and increasing online self-disclosures, the emergence of "personally predictable" information, and the future of privacy in an "augmented" reality world in which online and offline data will seamlessly blend. |
16:30 | TUC: Time-sensitive and Modular Analysis of Anonymous Communication SPEAKER: Praveen Manoharan ABSTRACT. The anonymous communication protocol Tor constitutes the most widely deployed technology for providing anonymity for user communication over the Internet. Several frameworks have been proposed that show strong anonymity guarantees; none of these, however, are capable of modeling the class of traffic-related timing attacks against Tor, such as traffic correlation and website fingerprinting. In this work, we present TUC: the first framework that allows for establishing strong anonymity guarantees in the presence of time-sensitive adversaries that mount traffic-related timing attacks. TUC incorporates a comprehensive notion of time in an asynchronous communication model with sequential activation, while offering strong compositionality properties for security proofs. We apply TUC to evaluate a novel countermeasure for Tor against website fingerprinting attacks. Our analysis relies on a formalization of the onion routing protocol that underlies Tor and proves rigorous anonymity guarantees in the presence of traffic-related timing attacks. |
17:00 | Differential Privacy: An Economic Method for Choosing Epsilon SPEAKER: Justin Hsu ABSTRACT. Differential privacy is becoming a gold standard for privacy research; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of interesting problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ϵ, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role that these parameters play in concrete applications of differential privacy, identifying the key challenges that must be addressed when choosing specific values. This choice requires balancing the interests of two different parties: the data analyst and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters. We illustrate the model with several scenarios, including a medical study and a study of movie ratings data. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss a number of simplifying assumptions in our model and outline a research agenda for possible refinements. |
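For context, the guarantee that ϵ quantifies (the textbook definition, not this paper's economic model): a randomized mechanism M is ϵ-differentially private if, for all adjacent databases D and D' and every set S of outputs,

\[ \Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S]. \]

A smaller ϵ gives each prospective participant a stronger guarantee but typically requires more noise, degrading the analyst's accuracy; that tension is the tradeoff the paper's model prices.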
17:30 | Proving differential privacy in Hoare logic SPEAKER: unknown ABSTRACT. Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution over outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach is based on transforming a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy. |
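A minimal sketch of the transformation idea, in the spirit of self-composition (my own toy rendering, not the paper's construction): the random sample becomes an explicit input, the program runs on both adjacent databases at once, and a counter tracks the privacy cost of aligning the two executions; Hoare-style assertions then state the differential privacy bound.

# Toy rendering of "simulate two executions" for the Laplace mechanism.
EPSILON = 0.5

def count_query(db):
    # a sensitivity-1 counting query over hypothetical records
    return sum(1 for row in db if row.get("has_condition"))

def paired_laplace(db1, db2, noise):
    """Non-probabilistic simulation of count_query(db) + Lap(1/EPSILON) on adjacent inputs.

    `noise` stands for the sampled value, passed in explicitly. The second run's
    noise is shifted so both runs produce the same output; shifting a Lap(1/EPSILON)
    sample by d changes its density by at most exp(EPSILON * |d|).
    """
    shift = count_query(db1) - count_query(db2)
    assert abs(shift) <= 1                      # adjacency: inputs differ in one record
    out1 = count_query(db1) + noise
    out2 = count_query(db2) + (noise + shift)   # coupled noise for the second run
    privacy_cost = EPSILON * abs(shift)
    # Hoare-style postcondition: equal outputs and cost bounded by EPSILON
    assert out1 == out2 and privacy_cost <= EPSILON
    return out1, out2, privacy_cost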
18:00 | Surveillance and Privacy SPEAKER: Jonathan Mayer |
08:45 | VSL Keynote Talk: Verification of Computer Systems with Model Checking SPEAKER: Edmund Clarke ABSTRACT. Model Checking is an automatic verification technique for large state transition systems. The technique has been used successfully to debug complex computer hardware and communication protocols. Now, it is beginning to be used for complex hybrid (continuous/discrete) systems as well. The major disadvantage of the technique is a phenomenon called the State Explosion Problem. This problem is impossible to avoid in the worst case. However, by using sophisticated data structures and clever search algorithms, it is now possible to verify hybrid systems with astronomical numbers of states. |