09:00 | ABSTRACT. We apply formal methods to lay and streamline theoretical foundations for reasoning about Cyber-Physical Systems (CPSs) and cyber-physical attacks. We focus on integrity and DoS attacks on the sensors and actuators of CPSs, and on the timing aspects of these attacks. Our contributions are threefold: (1) we define a hybrid process calculus to model both CPSs and cyber-physical attacks; (2) we define a threat model of cyber-physical attacks and provide the means to assess attack tolerance/vulnerability with respect to a given attack; (3) we formalise how to estimate the impact of a successful attack on a CPS and investigate possible quantifications of an attack's chances of success. We illustrate our definitions and results by means of a non-trivial engineering application.
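To fix intuitions about impact estimation, here is a toy of our own (far simpler than the paper's hybrid process calculus): a discrete-time water-tank controller whose level sensor an attacker offsets during a time window. The impact of the attack can be read off as the deviation of the physical state from the unattacked run; all parameters are illustrative.

```python
# Minimal sketch, not the paper's formalism: a water-tank CPS where a
# controller reads a level sensor and drives a pump. A sensor-integrity
# attack adds an offset to the reading during a time window.

def run(steps=50, attack_window=range(20, 30), offset=-2.0):
    level = 5.0                 # true water level (illustrative units)
    history = []
    for t in range(steps):
        reading = level + (offset if t in attack_window else 0.0)
        pump_on = reading < 5.0                    # controller's set-point logic
        level += (1.0 if pump_on else 0.0) - 0.5   # inflow minus constant outflow
        history.append(level)
    return history

safe = run(offset=0.0)
attacked = run()
# One possible impact metric: worst-case deviation of the physical state.
print(max(abs(a - s) for a, s in zip(attacked, safe)))
```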
10:00 | LISA: Predicting the Impact of DoS Attacks on Real-World Low Power IoT Systems SPEAKER: Luca Arnaboldi ABSTRACT. Organizations and researchers alike have widely recognised the advantages of adopting Network Intrusion Detection Systems (NIDS) as the norm for monitoring systems against DoS attacks. Although effective, implementing them within the Internet of Things (IoT) is complicated, as setups and protocols vary, so data collection must be bespoke to an individual system. Standard approaches to training a NIDS are: 1) use a database of known attacks, or 2) test the system to establish a "benchmark" behaviour and flag any anomaly as a potential attack. It is not feasible to establish a benchmark behaviour in dynamic IoT systems, where devices may constantly shift, new devices may join, and behaviours may change; the IoT is by its very nature ubiquitous and therefore time-consuming to benchmark, so we focus on the first approach. This approach has its own drawbacks, amplified in IoT systems: 1) collecting data unique to a system, and for each attack, is time-consuming, and 2) some system changes require re-collecting the data, or part of it, from scratch (e.g. in interactive smart homes, where devices can change frequently). To address these limitations we present a novel modelling approach, Lightweight IoT System under Attack (LISA), to represent the effects of power-drain and DDoS attacks on IoT systems. We begin with a precise formalization of the properties of IoT devices using measurements from the real system, and run verification on the model to ensure matching behaviour. We then model specific attacks on the systems and generate synthetic datasets.
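As a hedged sketch of what such synthetic data might look like (our own toy, not the LISA tool): simulate a node's per-second packet counts and battery drain under benign load and under a flooding DoS, and emit labelled rows a NIDS could train on. The rates and the drain model are invented for illustration.

```python
import random

def node_trace(seconds, pkt_rate, mj_per_packet=0.8, label="benign"):
    """Rows of (second, packets seen, remaining battery in mJ, label)."""
    battery = 1000.0                            # illustrative energy budget
    rows = []
    for t in range(seconds):
        # ~Binomial(2*pkt_rate, 0.5) arrivals, mean pkt_rate per second.
        pkts = sum(1 for _ in range(int(pkt_rate * 2)) if random.random() < 0.5)
        battery -= pkts * mj_per_packet         # toy power-drain model
        rows.append((t, pkts, round(battery, 1), label))
    return rows

random.seed(0)
dataset = node_trace(60, pkt_rate=5) + node_trace(60, pkt_rate=80, label="dos")
for row in dataset[:3] + dataset[-3:]:
    print(row)
```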
11:00 | SPEAKER: Jiaming Jiang ABSTRACT. Attribute-based access control (ABAC) is a general access control (AC) model that subsumes numerous earlier AC models. Its increasing popularity stems from the intuitive generic structure of granting permissions based on domain-dependent attributes of users, subjects, objects, and other entities in the system. Multiple formal and informal languages have been developed to express policies in terms of such attributes. The utility of ABAC policy languages is potentially undermined without a properly formalized underlying model. The high-level structure of a majority of ABAC models consists of sets and sets of sets, expressions that demand that the reader unpack multiple levels of sets and tokens to determine what things mean. The resulting reduced readability potentially endangers correct expression and reduces maintainability and validation. These problems are compounded in models that employ nonuniform representations of actions and their governing policies. In this paper, we address these problems by recasting the high-level structure of ABAC models in a logical formalism that treats all types of actions uniformly. Our formalism uses a simple variant of description logics to model the high-level structure, and function-free first-order logic with equality to represent and reason about the policies. Using description logics for model formalization, including hierarchies of types of entities and attributes, promises improved usability over existing ABAC models in specifying the relationships between, and requirements on, domain-dependent attributes. Our formal model provides improved flexibility in supporting a variety of requirements depending on the domain. Specifically, we discuss how to modify the model if time plays a role in authorizing a requested action, if different policies could arrive at conflicting decisions, and if default and exception rules apply.
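As a hint of the flavour (our own illustrative example, not the paper's syntax), a single ABAC policy in function-free first-order logic with equality, with attributes modelled as binary predicates, might read:

```latex
% A user may read an object iff they belong to the owning department
% and some clearance level of the user matches the object's classification.
\forall u\,\forall o\;\Bigl(\mathit{mayRead}(u,o) \leftrightarrow
  \exists d\,\bigl(\mathit{dept}(u,d)\wedge\mathit{owner}(o,d)\bigr)
  \wedge
  \exists l\,\bigl(\mathit{clearance}(u,l)\wedge\mathit{classification}(o,l)\bigr)\Bigr)
```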
11:30 | Proving physical proximity using symbolic models SPEAKER: Alexandre Debant ABSTRACT. For many modern applications, such as contactless payment and keyless systems, ensuring physical proximity is a security goal of paramount importance. Formal methods have proved their usefulness for analysing standard security protocols, but existing results and tools do not apply to protocols such as distance bounding, which aims to ensure physical proximity between two entities. This is due in particular to the fact that existing models do not faithfully represent the locations of the participants, nor the fact that transmitting a message takes time. In this paper, we propose several reduction results: when looking for an attack, it is sufficient to consider a simple scenario involving at most four participants located at specific positions. An interesting consequence of these reduction results is that they allow one to reuse ProVerif, an automated tool developed for analysing standard security protocols. As an application, we analyse several distance-bounding protocols, as well as a contactless payment protocol.
12:00 | SPEAKER: Tajana Ban Kirigin ABSTRACT. Distance-bounding (DB) protocols were proposed to thwart relay attacks on proximity-based access control systems. In a DB protocol, the verifier computes an upper bound on the distance to the prover by measuring the time needed for a signal to travel to the prover and back. DB protocols are, however, vulnerable to distance fraud, in which a dishonest prover is able to manipulate the distance bound computed by an honest verifier. Despite their conceptual simplicity, devising a formal characterization of DB protocols and distance fraud attacks that is amenable to automated formal analysis is non-trivial, primarily because of their real-time and probabilistic nature. In this work, we present a framework, based on rewriting logic, for formally analyzing different forms of distance fraud, including recently identified timing attacks. We introduce a generic, real-time and probabilistic model of DB protocols and use it to (mechanically) verify false-acceptance and false-rejection probabilities under various settings and attacker models through statistical model checking with Maude and PVeStA. Using this framework, we first accurately confirm known results and then define and quantitatively evaluate new guessing-ahead attack strategies that would otherwise be difficult to analyze manually.
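For readers unfamiliar with the timing mechanism: if the verifier measures a round-trip time Δt for the rapid challenge-response exchange, assumes a processing delay t_p at the prover, and signals travel at speed c, the inferred bound is

```latex
d \;\le\; \frac{c\,(\Delta t - t_p)}{2}
```

Distance fraud shaves time off Δt or t_p, for instance by answering before the challenge has been fully received, which is the intuition behind the guessing-ahead strategies evaluated here.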
14:00 | ABSTRACT. Cryptographic constant-time programming is an established coding discipline used in cryptography to secure programs against timing attacks. Most, if not all, cryptography libraries try to adhere to this coding style. The C programming language is often regarded as a portable assembly language and is hence used by a great number of cryptography libraries. However, what the hardware executes is actual assembly, not C. One may thus wonder whether security properties are preserved through compilation, as even formally verified compilers only ensure preservation of observable behaviours. In this paper we present how to derive a natural framework for proving preservation of cryptographic constant-time security from simulation-based proofs of compiler correctness. We also give insights into how this could be adapted to CompCert.
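The discipline itself is easy to illustrate in a toy leakage model (our own sketch, not the paper's formal framework): take the observable behaviour to be the sequence of comparisons a program performs; a program is constant-time when secrets never influence that sequence.

```python
# Leaky vs. constant-time equality check; `trace` records what a timing
# attacker could observe (here: which comparisons happen).

def leaky_eq(x, y, trace):
    for i, (a, b) in enumerate(zip(x, y)):
        trace.append(("cmp", i))
        if a != b:          # early exit: running time reveals the first mismatch
            return False
    return True

def ct_eq(x, y, trace):
    acc = 0
    for i, (a, b) in enumerate(zip(x, y)):
        trace.append(("cmp", i))
        acc |= a ^ b        # no secret-dependent branch; always scans everything
    return acc == 0

t1, t2 = [], []
leaky_eq(b"aaaa", b"baaa", t1)   # mismatch at index 0
leaky_eq(b"aaab", b"aaaa", t2)   # mismatch at index 3
print(t1 == t2)                  # False: observations depend on the secret

t3, t4 = [], []
ct_eq(b"aaaa", b"baaa", t3)
ct_eq(b"aaab", b"aaaa", t4)
print(t3 == t4)                  # True: identical observations
```

The paper's concern is that compilation can change which comparisons and memory accesses actually occur, so such source-level guarantees do not automatically survive to assembly.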
14:30 | SPEAKER: Daniel Dougherty ABSTRACT. A cryptographic protocol can be deployed in a variety of environments, but existing methods of protocol analysis focus only on the protocol, without being sensitive to assumptions about these environments. We present LPA, a tool which analyzes protocols in context. LPA uses two programs, cooperating with each other: CPSA, a well-known system for protocol analysis, and Razor, a model-finder based on SMT technology. Our analysis follows the enrich-by-need paradigm, in which models of protocol execution are generated and examined. The choice of which models to generate is important, and we develop a careful motivation and evaluation of LPA's strategy of building minimal models. In fact, "minimality" can be defined with respect to either of two preorders, namely the homomorphism preorder and the embedding preorder (i.e. the preorder of injective homomorphisms); we discuss the merits of each. Our main technical contributions are algorithms for building homomorphism-minimal models and for generating a set-of-support for the models of a theory, in each case by scripting interactions with an SMT solver.
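The loop of scripting a solver toward ever smaller models can be pictured with a toy propositional stand-in (our own sketch using the z3-solver Python package; LPA's actual algorithms target homomorphism-minimality over first-order structures and drive CPSA and Razor, not this code):

```python
from z3 import Bools, If, Not, Or, Solver, Sum, is_true, sat

a, b, c = Bools("a b c")
theory = [Or(a, b), Or(Not(b), c)]   # a stand-in for a protocol theory

s = Solver()
s.add(theory)
best = None
while s.check() == sat:
    m = s.model()
    best = [x for x in (a, b, c) if is_true(m.eval(x, model_completion=True))]
    # Demand a model with strictly fewer true atoms than the last one found.
    s.add(Sum([If(x, 1, 0) for x in (a, b, c)]) < len(best))
print("true atoms in a minimal model:", [str(x) for x in best])
```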
15:00 | ABSTRACT. We present a new direction for the formal specification of cryptographic schemes using types. In this approach, we specify a cryptographic protocol using the tools of homotopy type theory. Homotopy type theory adds the notion of higher inductive types and the univalence axiom to Martin-Löf's intensional type theory. A higher inductive type allows us to introduce constructors for paths and higher-dimensional paths in addition to points. The paths are then identified with equivalences in the universe through univalence. A higher inductive type can act as a front-end mapped to a concrete cryptographic implementation in the universe. By having a higher inductive type front-end, we can encode domain-specific laws of the cryptographic implementation as higher-dimensional paths. Due to univalence and functoriality, the path structure is preserved by the mapping and realized by equivalences in the universe. Using this model we can achieve various guarantees on the correctness of the cryptographic implementation.
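For readers new to higher inductive types, the circle is the textbook example (standard HoTT material, not specific to this paper): a point constructor together with a path constructor identifying that point with itself.

```latex
% The circle as a higher inductive type: one point, one loop.
\mathsf{data}\ S^1 : \mathcal{U}\ \mathsf{where}
  \quad \mathsf{base} : S^1
  \quad \mathsf{loop} : \mathsf{base} =_{S^1} \mathsf{base}
```

A cryptographic front-end type would, in the same way, add paths encoding the domain-specific laws the implementation must satisfy.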