09:00 | PRESENTER: Petar Paradžik ABSTRACT. We propose conditional observational equivalence --- a variant of observational equivalence that is more flexible since it can be made dependent on arbitrary safety trace properties. We extend an existing method for verifying observational equivalence in the multiset rewriting setting with the ability to handle conditions. Our extension can automatically verify conditional observational equivalence for a simple class of conditions that depend only on the structure of the execution. By using conditional observational equivalence, we give the first method for verifying off-line guessing resistance in the multiset rewriting setting and apply it to analyze and verify the properties of EAP-EKE, a password-authenticated key exchange (PAKE) protocol. |
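As a rough illustration of the off-line guessing problem this talk targets (a sketch of mine, not the paper's multiset-rewriting method or EAP-EKE itself; all names are made up): if a protocol transcript contains any value that is recomputable from a password guess alone, a recorded transcript lets an attacker brute-force the password without further interaction. A PAKE is designed precisely to deny the attacker such a verifier.

    import hashlib

    def leaked_verifier(password, nonce):
        # A flawed protocol step: a value derivable from the password alone
        # appears on the wire, acting as an off-line password verifier.
        return hashlib.sha256(nonce + password.encode()).digest()

    def offline_guess(observed, nonce, dictionary):
        # The attacker tests every guess against the recorded transcript;
        # no further interaction with honest parties is needed.
        for guess in dictionary:
            if leaked_verifier(guess, nonce) == observed:
                return guess
        return None

    nonce = b"session-nonce"
    observed = leaked_verifier("hunter2", nonce)
    print(offline_guess(observed, nonce, ["letmein", "hunter2", "qwerty"]))  # hunter2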
09:30 | PRESENTER: Reynaldo Gil-Pons ABSTRACT. Various modern protocols tailored to emerging wireless networks, such as body area networks, rely on the proximity and honesty of devices within the network to achieve their security goals. However, no existing security framework supports the formal analysis of such protocols, leaving the door open to unexpected flaws. In this article we introduce such a security framework, show how it can be implemented in the protocol verification tool Tamarin, and use it to find previously unknown vulnerabilities in two recent key exchange protocols. |
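As a hedged aside on the proximity assumption (my toy illustration, not the framework from the talk): proximity is typically established physically, by bounding the distance a challenge-response round trip can have covered at the speed of light.

    C = 299_792_458.0  # speed of light in m/s

    def distance_upper_bound(rtt_seconds, processing_delay=0.0):
        # The challenge and the response each cross the prover-verifier
        # distance once, so: distance <= c * (rtt - processing_delay) / 2.
        return C * max(rtt_seconds - processing_delay, 0.0) / 2

    print(distance_upper_bound(20e-9))  # a 20 ns round trip bounds the prover to ~3 m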
10:00 | A small bound on the number of sessions for security protocols PRESENTER: Stephanie Delaune ABSTRACT. Bounding the number of sessions is a long-standing problem in the context of security protocols. It is well known that even simple properties like secrecy are undecidable when an unbounded number of sessions is considered. Yet, attacks on existing protocols only require a few sessions. In this paper, we propose a sound algorithm that computes a sufficient set of scenarios that need to be considered to detect an attack. Our approach can be applied to both reachability and equivalence properties, for protocols with standard primitives that are type-compliant (unifiable messages have the same type). Moreover, when equivalence properties are considered, else branches are disallowed, and protocols are assumed to be simple (an attacker knows from which role and session a message comes). Since this class remains undecidable, our algorithm may return an infinite set. However, our experiments show that on most basic protocols from the literature, our algorithm computes a small number of sessions (about a dozen). As a consequence, tools for a bounded number of sessions, such as DeepSec, can then be used to conclude that a protocol is secure for an unbounded number of sessions. |
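A back-of-envelope illustration of why a small sufficient set of scenarios matters (my sketch, not the paper's algorithm): even for a toy role with two ordered steps, the number of distinct interleaved execution scenarios a bounded-session tool must explore grows very quickly with the number of sessions.

    from itertools import permutations

    def scenarios(num_sessions, steps_per_session=2):
        # A scenario fixes which session acts at each step; steps within one
        # session carry identical labels, so the order inside each session
        # is preserved automatically.
        labels = [s for s in range(num_sessions) for _ in range(steps_per_session)]
        return set(permutations(labels))

    for k in range(1, 5):
        print(k, "sessions:", len(scenarios(k)), "interleavings")
    # 1 -> 1, 2 -> 6, 3 -> 90, 4 -> 2520: exhaustive exploration blows up quickly.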
Lunches will be held in Taub lobby (CAV, CSF) and in The Grand Water Research Institute (DL, NMR, ITP).
14:00 | PRESENTER: Liyi Li ABSTRACT. We present a formal model of Checked C, a dialect of C that aims to enforce spatial memory safety. Our model pays particular attention to the semantics of dynamically sized, potentially null-terminated arrays. We formalize this model in Coq, and prove that any spatial memory safety errors can be blamed on portions of the program labeled unchecked; this is a Checked C feature that supports incremental porting and backward compatibility. While our model’s operational semantics uses annotated (“fat”) pointers to enforce spatial safety, we show that such annotations can be safely erased. Using PLT Redex, we formalize an executable version of our model and a compilation procedure to an untyped C-like language, and we use randomized testing to validate that generated code faithfully simulates the original. Finally, we develop a custom random generator for well-typed and almost-well-typed terms in our Redex model, and use it to search for inconsistencies between our model and the Clang Checked C implementation. We find these steps to be a useful way to co-develop a language (Checked C is still in development) and a core model of it. |
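A minimal sketch of the "fat pointer" idea the model's semantics uses (in Python, purely illustrative; Checked C itself is a C dialect, and the class and names below are mine): the pointer carries bounds metadata and every dereference is checked against it, so a spatial violation becomes a trapped error rather than silent corruption.

    class FatPtr:
        # A pointer annotated with the bounds of the region it may access.
        def __init__(self, memory, base, count):
            self.memory, self.base, self.count = memory, base, count

        def deref(self, offset):
            # Every access is checked against the annotation.
            if not 0 <= offset < self.count:
                raise MemoryError("spatial safety violation at offset %d" % offset)
            return self.memory[self.base + offset]

    heap = list(range(100))
    p = FatPtr(heap, base=10, count=4)
    print(p.deref(3))  # ok: 13
    p.deref(4)         # raises MemoryError instead of reading out of bounds

Dropping the (base, count) metadata recovers a plain pointer, which is the shape of the erasure result the abstract refers to.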
14:30 | SecurePtrs: Proving Secure Compilation Using Data-Flow Back-Translation and Turn-Taking Simulation PRESENTER: Roberto Blanco ABSTRACT. Proving secure compilation of partial programs typically requires back-translating an attack against the compiled program to an attack against the source program. To prove back-translation, one can syntactically translate the target attacker to a source one---i.e., syntax-directed back-translation---or show that the interaction traces of the target attacker can also be emitted by source attackers---i.e., trace-directed back-translation. Syntax-directed back-translation is not suitable when the target attacker may use unstructured control flow that the source language cannot directly represent. Trace-directed back-translation works with such syntactic dissimilarity because only the external interactions of the target attacker have to be mimicked in the source, not its internal control flow. Revealing only external interactions is, however, inconvenient when sharing memory via unforgeable pointers, since information about shared pointers stashed in private memory is not present on the trace. This made prior proofs unnecessarily complex, since the generated attacker had to instead stash all reachable pointers. In this work, we introduce more informative *data-flow traces*, combining the best of syntax- and trace-directed back-translation in a simpler technique that handles both syntactic dissimilarity and memory sharing well, and that is proved correct in Coq. Additionally, we develop a novel *turn-taking simulation* relation and use it to prove a recomposition lemma, which is key to reusing compiler correctness in such secure compilation proofs. We are the first to mechanize such a recomposition lemma in the presence of memory sharing. We use these two innovations in a secure compilation proof for a code generation compiler pass between a source language with structured control flow and a target language with unstructured control flow, both with safe pointers and components. |
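A toy rendering of trace-directed back-translation (my own, far simpler than the paper's Coq development; the trace format and names are invented for illustration): only the externally visible events of the target attacker are replayed by a synthesized straight-line source attacker.

    trace = [("call", "prog.f", 1), ("ret", 2), ("call", "prog.g", 2), ("ret", 4)]

    def back_translate(trace):
        # Emit straight-line source code reproducing the external
        # interactions; the target attacker's internal control flow
        # (unstructured jumps, loops) never has to be reconstructed.
        lines = []
        for event in trace:
            if event[0] == "call":
                lines.append("x = %s(%r)" % (event[1], event[2]))
            else:
                lines.append("assert x == %r  # observed return value" % event[1])
        return "\n".join(lines)

    print(back_translate(trace))

The paper's data-flow traces enrich exactly this kind of event log with the information about shared pointers that a plain external trace misses.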
15:00 | PRESENTER: Thomas Van Strydonck ABSTRACT. Assembly-level protection mechanisms (virtual memory, trusted execution environments, virtualization) make it possible to guarantee security properties of a full system in the presence of arbitrary attacker-provided code. However, they typically support only a single trust boundary: code is either trusted or untrusted, and protection cannot be nested. Capability machines provide protection mechanisms that are more fine-grained and that do support arbitrary nesting of protection. We show in this paper how this enables the formal verification of full-system security properties under multiple attacker models: different security objectives of the full system can be verified under a different choice of trust boundary (i.e., under a different attacker model). The verification approach we propose is modular and robust: code outside the trust boundary for a given security objective can be arbitrary, unverified, attacker-provided code. It is based on the use of universal contracts for untrusted adversarial code: sound, conservative contracts which can be combined with manual verification of trusted components in a compositional program logic. Compositionality of the program logic also allows us to reuse common parts in the analyses for different attacker models. We instantiate the approach concretely by extending an existing capability machine model with support for memory-mapped I/O, and we obtain full-system, machine-verified security properties about external effect traces while limiting the manual verification effort to a small trusted computing base relevant for the specific property under study. |
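A schematic sketch of what a memory capability is (not the machine model from the talk; class and function names are illustrative): an unforgeable token bundling an address range with permissions, checked on every access, and only ever restrictable, which is what lets protection domains nest.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Capability:
        base: int
        end: int            # exclusive upper bound
        perms: frozenset    # subset of {"R", "W", "X"}

        def restrict(self, base, end, perms):
            # A capability can only shrink: untrusted code is handed a
            # smaller, weaker capability, nesting the protection domains.
            assert self.base <= base and end <= self.end
            assert frozenset(perms) <= self.perms
            return Capability(base, end, frozenset(perms))

    def load(memory, cap, addr):
        if not (cap.base <= addr < cap.end and "R" in cap.perms):
            raise PermissionError("capability check failed")
        return memory[addr]

    memory = bytearray(64)
    root = Capability(0, 64, frozenset({"R", "W", "X"}))
    sandbox = root.restrict(16, 32, {"R"})
    print(load(memory, sandbox, 20))  # ok: 0
    load(memory, sandbox, 40)         # raises PermissionError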
16:00 | Interpreting Epsilon of Differential Privacy in Terms of Advantage in Guessing or Approximating Sensitive Attributes PRESENTER: Peeter Laud ABSTRACT. Differential privacy is a privacy technique with provable guarantees, typically achieved by introducing noise to statistics before releasing them. The level of privacy is characterized by a numeric parameter ε > 0, where smaller ε means more privacy. However, there is no common agreement on how small ε should be, and the actual likelihood of data leakage for the same ε may vary across different released statistics and different datasets. In this paper, we show how to relate ε to the increase in the probability of an attacker's success in guessing something about the private data. The attacker's goal is stated as a Boolean expression over guessing particular categorical and numerical attributes, where numerical attributes can be guessed with some precision. The paper is built upon the definition of d-privacy, which is a generalization of ε-differential privacy. |
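A standard back-of-envelope reading of ε along these lines (a textbook Bayesian bound, not the paper's d-privacy analysis): for an ε-differentially-private release, the likelihood ratio of any output on neighbouring datasets is at most e^ε, so Bayes' rule caps how much an attacker's prior belief p in a sensitive fact can grow after seeing the release.

    import math

    def max_posterior(prior, epsilon):
        # Likelihood ratio on neighbouring datasets is <= e^eps, hence
        #   posterior <= e^eps * p / (e^eps * p + (1 - p)).
        r = math.exp(epsilon)
        return r * prior / (r * prior + 1 - prior)

    for eps in (0.1, 1.0, 5.0):
        print("eps=%.1f: prior 1%% -> posterior at most %.2f%%"
              % (eps, 100 * max_posterior(0.01, eps)))
    # eps=0.1: 1.10%   eps=1.0: 2.67%   eps=5.0: 59.99%

That the same 1% prior barely moves at ε = 0.1 but can reach about 60% at ε = 5 is one concrete sense in which a single ε value has no fixed interpretation: the attacker's advantage also depends on the prior.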
16:30 | DPL: A Language for GDPR Enforcement PRESENTER: Farzane Karami ABSTRACT. The General Data Protection Regulation (GDPR) regulates the handling of personal data, including that personal data may be collected and stored only with the data subject’s consent, that data may be used only for the explicit purposes for which it was collected, and that it must be deleted after those purposes are served. We propose a programming language called DPL (Data Protection Language) with constructs for enforcing these central GDPR requirements and provide the language’s runtime operational semantics. DPL is designed so that GDPR violations cannot occur: potential violations instead result in runtime errors. Moreover, DPL provides constructs to perform privacy-relevant checks, which enable programmers to avoid these errors. Finally, we formalize DPL in Maude, yielding an environment for program simulation, and verify our claims that DPL programs cannot result in privacy violations. |
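A hypothetical sketch of this enforcement style (Python, not actual DPL syntax, which the paper defines; the PersonalData class and purpose names are mine): every use of personal data is guarded by a consent and purpose check, so a violation surfaces as a runtime error instead of silently proceeding.

    class PrivacyViolation(RuntimeError):
        pass

    class PersonalData:
        def __init__(self, value, consented_purposes):
            self._value = value
            self._purposes = set(consented_purposes)

        def use(self, purpose):
            # Purpose limitation: data may serve only the purposes
            # for which the data subject gave consent.
            if purpose not in self._purposes:
                raise PrivacyViolation("no consent for purpose %r" % purpose)
            return self._value

        def erase(self):
            # Deletion once the purposes are served.
            self._value, self._purposes = None, set()

    email = PersonalData("alice@example.com", consented_purposes={"billing"})
    email.use("billing")    # ok
    email.use("marketing")  # raises PrivacyViolation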
17:00 | Privacy as Reachability PRESENTER: Sébastien Gondron ABSTRACT. We show that privacy can be formalized as a reachability problem. We introduce a transaction-process formalism for distributed systems that can exchange cryptographic messages (in a black-box cryptography model). Our formalism includes privacy variables chosen non-deterministically from finite domains (e.g., the candidates in a voting protocol), supports long-term mutable state (e.g., a hash-key chain), and allows one to specify deliberately released information (e.g., the number of votes and the result). We discuss examples, such as linkability problems and the core of the privacy-preserving proximity-tracing system DP-3T. |
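One way to picture "privacy as reachability" (a finite toy of mine, not the paper's transaction-process formalism): a non-deterministically chosen secret stays private iff no state in which the observer can distinguish the possible choices is reachable, which a plain state-space search can decide.

    from collections import deque

    def reachable(initial, step, bad):
        # Breadth-first search of the state space: the privacy goal fails
        # iff some distinguishing ("bad") state is reachable.
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            if bad(state):
                return True
            for succ in step(state):
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
        return False

    # Toy system: a secret vote plus the sequence of published observations.
    def step(state):
        vote, published = state
        if len(published) < 2:                      # keep the toy state space finite
            yield (vote, published + (vote,))       # leaky: publishes the raw vote
            yield (vote, published + ("count+1",))  # safe: releases only the tally

    violated = any(
        reachable((v, ()), step, lambda s: v in s[1]) for v in ("A", "B")
    )
    print("privacy violated:", violated)  # True: the leaky transition is reachable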