FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
FCS ON THURSDAY, AUGUST 11TH


08:30-09:00 Coffee & Refreshments
09:15-10:30 Session 123B: Keynote Presentation
Location: Ullmann 310
09:15
Computer-aided Verification and Automated Synthesis of Cryptographic Protocols

ABSTRACT. In this talk we summarize our recent efforts in developing efficient high-assurance implementations of advanced cryptographic protocols such as secure multi-party computation (MPC) and zero-knowledge (ZK) proof systems. Such high-assurance implementations are automatically extracted from formally verified protocol specifications in EasyCrypt. We show that such automatically extracted implementations do not suffer from a significant performance overhead (as one might initially expect), even though they do not incorporate natural performance-boosting optimizations such as parallelism.

10:30-11:00 Coffee Break
11:00-11:45 Session 125C: Blockchain and Real-world Security
Location: Ullmann 310
11:00
A Preview to HoRStify: Sound Security Analysis of Smart Contracts
PRESENTER: Sebastian Holler

ABSTRACT. The cryptocurrency Ethereum is the most widely used execution platform for smart contracts. Smart contracts are distributed applications, which govern financial assets and, hence, can implement advanced financial instruments, such as decentralized exchanges or autonomous organizations (DAOs). Their financial nature makes smart contracts an attractive attack target, as demonstrated by numerous exploits on popular contracts resulting in financial damage of millions of dollars. This omnipresent attack hazard motivates the need for sound static analysis tools, which assist smart contract developers in eliminating contract vulnerabilities prior to deployment. Vulnerability assessment that is sound and insightful for EVM contracts is a formidable challenge because contracts execute low-level bytecode in a largely unknown and potentially hostile execution environment. So far, there exists no provably sound automated analyzer that allows for the verification of security properties based on program dependencies, even though prevalent attack classes fall into this category. In this work, we present HoRStify, the first automated analyzer for dependency properties of Ethereum smart contracts based on sound static analysis. HoRStify grounds its soundness proof on a formal proof framework for static program slicing that we instantiate to the semantics of EVM bytecode. We demonstrate that HoRStify is flexible enough to soundly verify the absence of famous attack classes such as timestamp dependency and, at the same time, performant enough to analyze real-world smart contracts.
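The static program slicing that underpins the soundness proof can be illustrated with a toy example. The sketch below is my own minimal backward slice over straight-line assignments; the miniature IR and all names are invented for illustration and are far simpler than EVM bytecode:

```python
def backward_slice(program, criterion):
    """Backward slice of a straight-line program.

    program: list of (target_var, set_of_used_vars) assignments, in order.
    criterion: the variable whose dependencies we want.
    Returns the sorted indices of statements the criterion may depend on.
    """
    needed = {criterion}   # variables whose definitions we still need
    sliced = []
    # walk backwards: a statement is in the slice iff it defines a needed var
    for i in range(len(program) - 1, -1, -1):
        target, uses = program[i]
        if target in needed:
            sliced.append(i)
            needed |= uses  # its operands become needed in turn
    return sorted(sliced)
```

For example, slicing `[("a", {"x"}), ("b", {"y"}), ("c", {"a"})]` on `c` keeps statements 0 and 2 and discards the irrelevant assignment to `b`; a dependency property like timestamp independence would then amount to the timestamp never appearing in the slice of a sensitive output.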

11:15
Automatic Fair Exchanges

ABSTRACT. In a decentralized environment, exchanging resources requires users to bargain until an agreement is found. Moreover, human agreements involve a combination of collaborative and selfish behavior and often induce circularity, complicating the evaluation of exchange requests. We introduce MuAC, a policy language that allows users to state in isolation under which conditions they are open to grant their resources and what they require in return. In MuAC, exchange requests are evaluated automatically with the guarantee that the only exchanges that will take place are those that mutually satisfy users’ conditions. Moreover, MuAC can be used as an enforcement mechanism to prevent users from cheating. As a proof of concept, we implement a blockchain smart contract that allows users to exchange their non-fungible tokens.
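The circular exchanges mentioned above can be made concrete with a toy resolver. This is my own minimal sketch, not MuAC: the `gives`/`wants` policy encoding and the function name are illustrative assumptions, with one resource per user:

```python
def find_exchange_cycle(policies):
    """Find a cycle of users whose exchange conditions mutually satisfy each other.

    policies[user] = {"gives": resource, "wants": resource}
    Returns the list of users in the cycle, or None if no cycle exists.
    """
    owner = {p["gives"]: user for user, p in policies.items()}
    for start in policies:
        chain, seen = [start], {start}
        while True:
            wanted = policies[chain[-1]]["wants"]
            nxt = owner.get(wanted)
            if nxt is None or (nxt in seen and nxt != start):
                break  # dead end, or a sub-cycle not closing at `start`
            if nxt == start:
                return chain  # everyone in the chain gets what they asked for
            chain.append(nxt)
            seen.add(nxt)
    return None
```

The point of the circularity is visible here: alice wants bob's resource, bob wants carol's, and carol wants alice's, so no pairwise trade exists, yet the three-way exchange satisfies every policy at once.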

11:30
Towards Unlinkable Smartcard-based Payments: an extended abstract
PRESENTER: Semen Yurkov

ABSTRACT. The most prevalent smart-card-based payment method, EMV, currently offers no privacy to its users. Transaction details and the card number are sent in clear text, enabling cardholder profiling and tracking. As public awareness of privacy issues grows and legislative initiatives such as the GDPR, which aim to regulate unwanted data collection, emerge, we investigate the possibility of making payments anonymous towards eavesdroppers and terminals without compromising the intended security guarantees of EMV. In this extended abstract, we offer an enhanced payment protocol in which no information allowing the identification of the card is transferred to the terminal.

12:00-12:30 Session 126A: Secure Hardware
Location: Ullmann 310
12:00
Unveiling Security through Obscurity Approach of Intel TDX Remote Attestation

ABSTRACT. Intel Trust Domain Extensions (TDX) is the next-generation confidential computing offering of Intel. One of the most critical processes of Intel TDX is the remote attestation mechanism. Since remote attestation bootstraps trust in remote applications, any vulnerability in the attestation mechanism can impact the security of an application. Hence, we investigate the use of formal methods to ensure the correctness of the attestation mechanisms. The symbolic security analysis of the remote attestation protocol in Intel TDX reveals a number of subtle inconsistencies in the specification of Intel TDX that could potentially lead to design and implementation errors as well as attacks. These inconsistencies have been reported to Intel, and Intel is in the process of updating the specifications. We also explain how formal specification and verification using ProVerif could help avoid these flaws and attacks.

12:15
pi_RA: A pi-calculus for verifying protocols that use remote attestation
PRESENTER: Emiel Lanckriet

ABSTRACT. Remote attestation (RA) allows authentication of software components on untrusted systems by relying on a root of trust. Over the last few years, the use of devices supporting RA has been rising. Researchers have developed several formal models to reason about RA; however, these models tend to stay fairly close to the implementations found in hardware. In this paper we devise a model, called pi_RA, that supports RA at a high level of abstraction by treating it as a cryptographic primitive in a variant of the applied pi-calculus.

To demonstrate the use of pi_RA, we use it to formalise and analyse the security of MAGE, an SGX-based framework that allows mutual attestation of multiple enclaves. This analysis is done by proving the security of a compiler that incorporates MAGE to resolve problems with mutual attestation, translating from a language with named actors, pi_Actor, to pi_RA.
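To give a flavour of what treating attestation as an abstract primitive (rather than modelling the hardware) might look like, here is a toy symbolic model. This is my own sketch, not pi_RA: the key name, report format, and function names are illustrative assumptions, with a MAC standing in for the hardware root of trust:

```python
import hmac
import hashlib

# Illustrative shared root-of-trust key; in real RA this secret never
# leaves the trusted hardware.
ROOT_KEY = b"root-of-trust-key"

def attest(measurement: bytes, payload: bytes) -> tuple:
    """The trusted component binds a code measurement to a message."""
    tag = hmac.new(ROOT_KEY, measurement + payload, hashlib.sha256).digest()
    return (measurement, payload, tag)

def verify(report: tuple, expected_measurement: bytes) -> bool:
    """A verifier accepts only untampered reports from the expected code."""
    measurement, payload, tag = report
    good = hmac.new(ROOT_KEY, measurement + payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, good) and measurement == expected_measurement
```

The abstraction captures exactly the two guarantees a calculus-level primitive needs: a genuine report verifies, and neither the measurement nor the payload can be swapped without invalidating it.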

12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-14:45 Session 127C: Analysis for Security and Privacy
Location: Ullmann 310
14:00
VeNuS: Neural Network Robustness Specifications via Verifier-guided Optimization
PRESENTER: Anan Kabaha

ABSTRACT. Analyzing the robustness of neural networks is crucial for trusting them. The vast majority of existing work focuses on networks' robustness to epsilon-ball neighborhoods, but these cannot capture complex robustness properties. In this work, we propose VeNuS, a general approach and a system for computing maximally non-uniform robust neighborhoods that optimize a target function (e.g., neighborhood size). The key idea is to let a verifier guide our search by providing the directions in which a neighborhood can expand while remaining robust. We show two approaches to compute these directions. The first approach treats the verifier as a black-box and is thus applicable to any verifier. The second approach extends an existing verifier with the ability to compute the gradient of its analysis results. We evaluate VeNuS on various models and show it computes neighborhoods with 10^407x more distinct images and 1.28x larger average diameters than the epsilon-ball ones. Compared to another non-uniform robustness analyzer, VeNuS computes neighborhoods with 10^314x more distinct images and 2.49x larger average diameters. We further show that the neighborhoods that VeNuS computes enable one to understand robustness properties of networks which cannot be obtained using the standard epsilon-balls. We believe this is a step towards understanding global robustness.
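The black-box approach described above can be pictured as greedy, verifier-guided growth of a per-dimension box. This is my own minimal illustration, not the VeNuS implementation: `is_robust` is a stand-in for any sound verifier queried as a black box, and the step size and round count are arbitrary:

```python
def expand_neighborhood(center, is_robust, step=0.1, rounds=5):
    """Grow a non-uniform box neighborhood around `center`.

    center: list of floats (a correctly classified input).
    is_robust: black-box verifier; takes a list of (lo, hi) per-dimension
               bounds and returns True iff the box is certified robust.
    Returns the per-dimension bounds of the expanded neighborhood.
    """
    box = [(c, c) for c in center]
    for _ in range(rounds):
        for i, (lo, hi) in enumerate(box):
            # try expanding dimension i downward; keep it only if still robust
            trial = list(box)
            trial[i] = (lo - step, hi)
            if is_robust(trial):
                box = trial
                lo = lo - step
            # try expanding dimension i upward
            trial = list(box)
            trial[i] = (lo, hi + step)
            if is_robust(trial):
                box = trial
    return box
```

Because each dimension grows independently, the result is non-uniform: dimensions the verifier certifies as insensitive end up much wider than an epsilon-ball of the same certified size would allow.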

14:15
Compositional Higher-order Declassification Using Logical Relations
PRESENTER: Jan Menz

ABSTRACT. To ensure that programs do not accidentally leak sensitive data, we need to formally specify desirable information-flow properties of programs, including declassification properties, and prove that programs abide by them. Compositionality is a desirable property of such security specifications. In the higher-order setting, however, compositionality of such definitions is not trivial. For programs involving intensional forms of declassification, i.e., the deliberate release of private information based on the internal state of the program, most past specifications of security in the higher-order setting are not compositional. We argue, with a concrete example, that \emph{logical relations} provide an excellent basis for compositional definitions of declassification-based information security in the higher-order setting, including intensional forms of declassification.

14:30
Robust Indistinguishability
ABSTRACT. We introduce a new privacy notion, \emph{robust indistinguishability} (RI), parameterized by variables $\varepsilon,\alpha$. We show $(\varepsilon,\alpha)$-RI is \emph{inequivalent} to $(\varepsilon',\delta)$-differential privacy (DP) for \emph{every} $(\varepsilon',\delta)$ unless $\alpha = 0$, in which case it is equivalent to pure DP (i.e., $\delta = 0$) for another value of $\varepsilon'$. Therefore, RI relaxes pure DP along a novel axis compared to $(\varepsilon',\delta)$-DP.
More precisely, we formulate $(\varepsilon,\alpha)$-RI for a randomized mechanism $F$ as a game wherein a source algorithm chooses two neighboring datasets $x_0,x_1$ and a set of outcomes $Y$ with cumulative weight at least $\alpha$. The adversary is then given a sample of random variable $F(x_b)$ where $b$ is a random bit, \emph{conditioned on the outcome of $F$ being in $Y$}. We measure the advantage of the adversary as twice the probability it guesses $b$ minus 1. When $\alpha = 1$, this is simply the statistical (aka.~total variation) distance between $F(x_1)$ and $F(x_0)$. We stress that to avoid well-known pitfalls of using statistical distance as a privacy notion, RI conditions on the outcome of the mixture distribution $F(x_b)$. RI captures privacy in ``distinguishability games'' that are often implicitly used in applications of DP.
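In symbols (my own paraphrase of the game just described, not the authors' notation), the adversary's advantage could be written as:

```latex
\[
  \mathrm{Adv}(\mathcal{A}) \;=\; 2\,\Pr\bigl[\mathcal{A}(y) = b \,\big|\, y \in Y\bigr] - 1,
  \qquad b \xleftarrow{\$} \{0,1\}, \quad y \leftarrow F(x_b).
\]
```

For $\alpha = 1$ the set $Y$ covers (essentially) all outcomes, the conditioning is vacuous, and the advantage reduces to the total variation distance between $F(x_0)$ and $F(x_1)$.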
We show foundational results about RI as well as applications. We show an ``optimal'' RI adversary, which is sometimes efficient. Then we show that RI satisfies composition, post-processing, and group privacy laws. We also show that the Laplace mechanism requires smaller noise magnitude under RI for a fixed $\varepsilon'$ with $\alpha = 1$ vs.~$\alpha = 0$. We also show RI bounds the adversary's advantage in various attack modalities in privacy-preserving machine learning. Finally, we analyze database shuffling under RI for i.i.d.~database entries.
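For reference, the Laplace mechanism the noise-magnitude result is about is the textbook epsilon-DP construction; the sketch below is my own (not the authors' code), sampling Laplace noise as a difference of two exponentials:

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Standard epsilon-DP Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_value + laplace_noise(sensitivity / epsilon)
```

The paper's point is about the scale: for the same reported $\varepsilon'$, the RI analysis with $\alpha = 1$ certifies the mechanism with less noise than the pure-DP ($\alpha = 0$) analysis requires.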