LPAR-20: 20TH INTERNATIONAL CONFERENCE ON LOGIC FOR PROGRAMMING, ARTIFICIAL INTELLIGENCE AND REASONING
PROGRAM FOR WEDNESDAY, NOVEMBER 25TH

09:00-10:00 Session 5: Satisfiability: from Quality to Quantities (Invited Presentation)

Invited Presentation

Speaker: Nikolaj Bjørner

09:00
Satisfiability: from Quality to Quantities

ABSTRACT. Satisfiability Modulo Theories (SMT) solvers have in the past decade enabled a number of software engineering tools, thanks to improved theorem proving technologies, support for the domains commonly used in software, and a confluence of advances in symbolic analysis methodologies. These methodologies are diverse, ranging over bug localization, symbolic model checking algorithms, dynamic symbolic execution for uncovering bugs and creating parametric unit tests, certified development using program verification tools, compiler validation, biological modeling, model-based design tools, web sanitizers, and runtime analysis. The synergy with application domains has led to a constant stream of inspiration for improved domain support and algorithmic advances. A simultaneous trend in applications is leading research on SMT solvers into calculating with quantities. We believe this is part of an overall trend toward tools for checking and synthesizing quantitative, including probabilistic, properties.

Using Network Verification as a starting point, we describe how the SMT solver Z3 is used at scale in Microsoft Azure to check network access restrictions and router configurations. Z3 is used in a monitoring system, called SecGuru, that continuously checks configurations as they appear on routers. We learned early on that network operators required a tool that could return a set of models in a compact way. This led us to develop a domain-specific algorithm that works well for access control lists. It enumerates models compactly in fractions of a second. A more ambitious effort is to check reachability properties in large data-centers. Again, our experience was that the domain called for special-purpose data structures and symmetry reduction methods that turn analysis of data-centers with hundreds of routers and a million forwarding rules into very small finite state systems that can be analyzed in fractions of a second.
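The core SecGuru task of comparing access control lists semantically, and reporting the differences as a compact set of ranges rather than individual models, can be sketched as follows. This toy scans a single port field; SecGuru itself works on full packet tuples with bit-vector encodings in Z3, and the rule sets here are made up for illustration.

```python
# Hypothetical mini-ACL: ordered (lo, hi, action) rules over one port
# field; first match wins, default deny.

def evaluate(acl, port):
    for lo, hi, action in acl:
        if lo <= port <= hi:
            return action
    return "deny"

def semantic_diff(acl_a, acl_b, lo=0, hi=65535):
    """Maximal port intervals where the two ACLs disagree."""
    diffs, start = [], None
    for p in range(lo, hi + 1):
        disagree = evaluate(acl_a, p) != evaluate(acl_b, p)
        if disagree and start is None:
            start = p
        elif not disagree and start is not None:
            diffs.append((start, p - 1))
            start = None
    if start is not None:
        diffs.append((start, hi))
    return diffs

old = [(0, 1023, "deny"), (0, 65535, "permit")]
new = [(0, 1023, "deny"), (8080, 8088, "deny"), (0, 65535, "permit")]
print(semantic_diff(old, new))   # [(8080, 8088)]: the newly denied range
```

Reporting one interval per differing range is what makes the output digestible for operators; the real algorithm obtains such ranges symbolically instead of by linear scan.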

Our experience with Network Verification is not unlike other domains as we are reaching a point where qualitative analysis has shown its use, but a larger elephant is lurking in the room: most systems rely on performance guarantees. Hence the need for checking and synthesizing quantitative properties. To support SMT with quantities we have embarked on long term projects on integrating optimization algorithms with Z3 and integrating methods for counting the number of solutions to constraints. In this context we developed a new MaxSAT algorithm that exploits dualities between unsatisfiable cores and correction sets, and we illustrate some uses of the emerging quantitative features in Z3.
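The core/correction-set duality behind such MaxSAT algorithms can be illustrated by brute force on a tiny soft-clause set (the clause names and constraints are invented; the real algorithm works incrementally with an SMT solver rather than by enumeration). Every minimal correction set is a minimal hitting set of the unsatisfiable cores, and the MaxSAT optimum is the number of soft clauses minus the size of the smallest correction set.

```python
from itertools import product, combinations

# Toy soft clauses over two Booleans; each clause is a predicate.
soft = {
    "c1": lambda x, y: x,
    "c2": lambda x, y: not x,
    "c3": lambda x, y: y,
}

def satisfiable(names):
    return any(all(soft[n](x, y) for n in names)
               for x, y in product([False, True], repeat=2))

def minimal_correction_sets():
    """All minimal sets whose removal leaves the rest satisfiable."""
    mcses = []
    for k in range(len(soft) + 1):
        for cs in combinations(soft, k):
            rest = set(soft) - set(cs)
            if satisfiable(rest) and not any(set(m) <= set(cs) for m in mcses):
                mcses.append(cs)
    return mcses

def unsat_cores():
    """All minimal unsatisfiable subsets."""
    cores = []
    for k in range(1, len(soft) + 1):
        for sub in combinations(soft, k):
            if not satisfiable(sub) and not any(set(c) <= set(sub) for c in cores):
                cores.append(sub)
    return cores

print(minimal_correction_sets())  # [('c1',), ('c2',)]
print(unsat_cores())              # [('c1', 'c2')]
```

Here the single core {c1, c2} is hit by each correction set {c1} and {c2}, so two of the three soft clauses can be satisfied simultaneously.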

The work rests on collaboration with a large number of colleagues including Karthick Jayaraman, George Varghese, Nina Narodytska, Nuno Lopes, Andrey Rybalchenko, Leonardo de Moura, Christoph Wintersteiger, Gordon Plotkin.

10:00-10:30 Coffee Break
10:30-12:10 Session 6: Analysis of security and information flow
10:30
Boolean Formulas for the Static Identification of Injection Attacks in Java
SPEAKER: unknown

ABSTRACT. The most dangerous security-related software errors, according to CWE 2011, are those leading to injection attacks: user-provided data results in undesired database access and updates (SQL-injections), dynamic generation of web pages (cross-site scripting-injections), redirection to user-specified web pages (redirect-injections), execution of OS commands (command-injections), class loading of user-specified classes (reflection-injections), and many others. This paper describes a flow- and context-sensitive static analysis that automatically identifies if and where injections of tainted data can occur in a program. The analysis models explicit flows of tainted data. Its notion of taintedness applies also to reference (non-primitive) types dynamically allocated in the heap, and is object-sensitive and field-sensitive. The analysis works by translating the program into Boolean formulas that model all possible flows. We implemented it within the Julia analyzer for Java and Android. Julia found injection security vulnerabilities in the Internet banking service and in the customer relationship management of a large Italian bank.
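The essence of the encoding, one Boolean "tainted" variable per program variable and one implication per explicit flow, solved as a least fixpoint, can be sketched on an invented fragment (Julia's actual analysis is field- and object-sensitive and works over Boolean formulas rather than an explicit worklist):

```python
# Toy explicit-flow taint analysis: taint flows along assignment edges.
flows = [                     # (src, dst): dst's value derives from src
    ("request", "param"),     # param  = request.getParameter(...)
    ("param", "query"),       # query  = "SELECT ... " + param
    ("const", "limit"),       # limit  = "10"
    ("limit", "query2"),      # query2 = "SELECT ... LIMIT " + limit
]
sources = {"request"}
sinks = {"query": "executeQuery", "query2": "executeQuery"}

def least_fixpoint():
    """Smallest taint assignment satisfying T(src) -> T(dst)."""
    tainted = set(sources)
    changed = True
    while changed:
        changed = False
        for src, dst in flows:
            if src in tainted and dst not in tainted:
                tainted.add(dst)
                changed = True
    return tainted

tainted = least_fixpoint()
for var, sink in sinks.items():
    if var in tainted:
        print(f"possible injection: tainted {var!r} reaches {sink}")
```

Only the query built from the request parameter is flagged; the constant-derived query2 is proved clean, which is precisely the precision injection detection needs.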

10:55
Analyzing Internet Routing Security Using Model Checking
SPEAKER: Adi Sosnovich

ABSTRACT. The goal of this work is to enhance Internet security by applying formal analysis of traffic attraction attacks on the BGP routing protocol. BGP is the sole protocol used throughout the Internet for inter-domain routing, hence its importance. In attraction attacks, an attacker sends false routing advertisements to attract extra traffic, in order to increase its revenue from customers or to drop, tamper with, or snoop on packets. Such attacks are most common in inter-domain routing. We use model checking to perform exhaustive search for attraction attacks on BGP. This requires substantial reductions due to scalability issues of the entire Internet topology. Therefore, we propose static methods to identify and automatically reduce Internet fragments of interest, prior to using model checking. We developed a method, called BGP-SA, for BGP Security Analysis, which extracts and reduces fragments from the Internet. In order to apply model checking, we model the BGP protocol and also model an attacker with predefined capabilities. Our specifications allow us to reveal different types of attraction attacks. Using a model checking tool, we identify attacks and show that certain attraction scenarios are impossible on the Internet under the modeled attacker capabilities.
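The shape of the search problem can be conveyed with a tiny path-vector model (topology, node names, and the candidate forgeries below are all invented; the paper's model and reductions are far richer). Each node prefers the shortest AS path to destination "d", breaks ties lexicographically, and rejects paths already containing itself, which is BGP's loop prevention; we then exhaustively try forged advertisements for the attacker.

```python
graph = {
    "d":   ["p1", "p2"],
    "p1":  ["d", "c1"],
    "p2":  ["d", "c2"],
    "c1":  ["p1", "c2", "atk"],
    "c2":  ["p2", "c1", "atk"],
    "atk": ["c1", "c2"],
}

def converge(forged):
    """forged = AS path the attacker advertises, or None for honesty."""
    path = {v: None for v in graph}
    path["d"] = []
    for _ in range(len(graph)):          # enough rounds for this toy
        for v in graph:
            if v == "d":
                continue
            best = path[v]
            for u in graph[v]:
                p = forged if (u == "atk" and forged is not None) else path[u]
                if p is None or v in p:  # unknown route, or a loop
                    continue
                cand = [u] + p
                if best is None or (len(cand), cand) < (len(best), best):
                    best = cand
            path[v] = best
    return path

def victims(forged):
    """Nodes whose chosen path traverses the attacker."""
    path = converge(forged)
    return {v for v, p in path.items()
            if v != "atk" and p is not None and "atk" in p}

candidates = [["d"], ["p1", "d"], ["p2", "d"]]
best = max(candidates, key=lambda f: len(victims(f)))
print(best, sorted(victims(best)))   # claiming adjacency to d attracts c1, c2
```

Even here, the forged one-hop path to "d" diverts both customer nodes while the honest advertisement attracts none, and the attacker's own route to "d" disappears, so the attracted traffic is blackholed.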

11:20
Value Sensitivity and Observable Abstract Values for Information Flow Control
SPEAKER: unknown

ABSTRACT. Much progress has recently been made on information flow control, enabling the enforcement of increasingly rich policies for increasingly expressive programming languages. This has resulted in tools for mainstream programming languages such as JavaScript, Java, Caml, and Ada that enforce versatile security policies. However, a roadblock on the way to wider adoption of these tools has been their limited permissiveness (high false-positive rates). Flow-, context-, and object-sensitive techniques have been suggested to improve the precision of static information flow control, and dynamic monitors have been explored to leverage knowledge about the current run for precision.

This paper explores value sensitivity to boost the permissiveness of information flow control. We show that both dynamic and hybrid information flow mechanisms benefit from value sensitivity. Further, we introduce the concept of observable abstract values to generalize and leverage the power of value sensitivity to richer programming languages. We demonstrate the usefulness of the approach by comparing it to known disciplines for dealing with information flow in dynamic and hybrid settings.
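One way to see what value sensitivity buys (a deliberately crude illustration, not the paper's formal monitor): a purely label-based dynamic monitor marks any value written under a secret-dependent branch as secret, while a value-sensitive check may compare the values actually produced and release the result when it is the same regardless of the secret.

```python
def program(h):
    if h > 0:       # branch on the secret
        out = 1
    else:
        out = 1     # both branches produce the same observable value
    return out

def label_based(h):
    # classic monitor: out was assigned under a secret program counter
    return program(h), "secret"

def value_sensitive(h, other_secret):
    out = program(h)
    # the output coincides for a differing secret, so releasing it
    # reveals nothing about which of the two secrets we hold
    label = "public" if program(other_secret) == out else "secret"
    return out, label

print(label_based(5))            # (1, 'secret'): rejected as sensitive
print(value_sensitive(5, -5))    # (1, 'public'): released
```

Comparing against a single alternative secret is of course much weaker than a sound multi-execution discipline; it is only meant to show where the extra permissiveness comes from.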

11:45
Cobra: A Tool for Solving General Deductive Games

ABSTRACT. We propose a general framework for modelling and solving deductive games, where one player selects a secret code and the other player strives to discover this code using a minimal number of allowed experiments that reveal some partial information about the code. The framework is implemented in the software tool Cobra, and its functionality is demonstrated by producing new results about existing deductive games.
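A minimal instance of such a game, in the spirit of what Cobra solves but not in its input language, is a two-position, three-colour Mastermind variant: the solver greedily picks the experiment whose worst-case answer leaves the fewest codes consistent.

```python
from itertools import product

CODES = list(product(range(3), repeat=2))   # all 9 possible secret codes

def feedback(secret, guess):
    exact = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in range(3))
    return exact, common - exact            # (black pegs, white pegs)

def best_experiment(candidates):
    def worst_case(guess):
        buckets = {}
        for code in candidates:
            fb = feedback(code, guess)
            buckets[fb] = buckets.get(fb, 0) + 1
        return max(buckets.values())        # worst answer for this guess
    return min(CODES, key=worst_case)

def solve(secret):
    candidates, turns = CODES[:], 0
    while len(candidates) > 1:
        guess = best_experiment(candidates)
        fb = feedback(secret, guess)
        candidates = [c for c in candidates if feedback(c, guess) == fb]
        turns += 1
    return candidates[0], turns

print(solve((2, 1)))   # recovers the secret code
```

Since guessing any remaining candidate splits off at least that candidate, the candidate set shrinks every turn and the search terminates; finding strategies that are worst-case optimal, rather than greedy, is where tools like Cobra come in.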

12:00-14:00 Lunch Break
14:00-15:30 Session 7: Program analysis
14:00
Automated Discovery of Simulation Between Programs

ABSTRACT. The paper presents SimAbs, the first fully automated SMT-based approach to synthesize an abstraction of one program (called target) that simulates another program (called source). SimAbs iteratively traverses the search space of existential abstractions of the target and chooses the strongest abstraction among them that simulates the source. Deciding whether a given relation is a simulation relation is reduced to deciding the validity of Forall-Exists formulas. We present a novel algorithm for dealing with such formulas using an incremental SMT solver. In addition to deciding validity, our algorithm extracts witnessing Skolem relations which further drive simulation synthesis in SimAbs. Our evaluation confirms that SimAbs is able to efficiently discover both simulations and abstractions for C programs from the Software Verification Competition.
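What it means for one system to simulate another can be made concrete on finite transition systems, where the greatest simulation is computable by the textbook refinement fixpoint rather than SimAbs's SMT encoding (the two systems below are invented): "tgt simulates src" holds iff every source step from a related state can be matched by a target step leading back into the relation.

```python
src = {("a0", "x"): {"a1"}, ("a1", "y"): {"a0"}}
tgt = {("b0", "x"): {"b1", "b2"}, ("b1", "y"): {"b0"}, ("b2", "y"): {"b0"}}
src_states, tgt_states = {"a0", "a1"}, {"b0", "b1", "b2"}
labels = {"x", "y"}

def matched(s, t, rel):
    """Every src step from s has a matching tgt step from t inside rel."""
    return all(
        any((s2, t2) in rel for t2 in tgt.get((t, lbl), set()))
        for lbl in labels
        for s2 in src.get((s, lbl), set())
    )

def greatest_simulation():
    # start from the full relation and remove unmatched pairs to a fixpoint
    rel = {(s, t) for s in src_states for t in tgt_states}
    while True:
        refined = {(s, t) for (s, t) in rel if matched(s, t, rel)}
        if refined == rel:
            return rel
        rel = refined

sim = greatest_simulation()
print(("a0", "b0") in sim)   # True: b0 can match every move of a0
```

SimAbs must solve the same matching condition symbolically, which is exactly where the Forall-Exists validity queries and their Skolem witnesses arise.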

14:25
Finding Inconsistencies in Programs with Loops
SPEAKER: unknown

ABSTRACT. Inconsistent code is an important class of program abnormalities that appears in real-world code bases and often reveals serious bugs. Existing approaches to inconsistent code detection scale to programs with millions of lines of code, and have led to patches in applications like Tomcat or the Linux kernel. However, the ability of existing tools to detect inconsistencies is limited by gross over-approximation of looping control-flow. We present a novel approach to inconsistent code detection that can reason about programs with loops without compromising precision. To that end, by leveraging recent advances in Horn clause solving, we demonstrate how to encode the problem as a sequence of Horn clause queries, enabling us to detect inconsistencies that were previously unattainable.

14:50
Reasoning About Loops Using Vampire in KeY
SPEAKER: unknown

ABSTRACT. We describe symbol elimination and consequence finding in the first-order theorem prover Vampire for automatic generation of quantified invariants, possibly with quantifier alternations, of loops with arrays. Unlike the previous implementation of symbol elimination in Vampire, our work is not limited to a specific programming language but provides a generic framework by relying on a simple guarded command representation of the input loop. We also improve the loop analysis part in Vampire by generating loop properties more easily handled by the saturation engine of Vampire. Our experiments show that, with our changes, the number of generated invariants is decreased, in some cases, by a factor of 20. We also provide a framework to use our approach to invariant generation in conjunction with pre- and post-conditions of program loops. We use the program specification to find relevant invariants as well as to verify the partial correctness of the loop. As a case study, we demonstrate how symbol elimination in Vampire can be used as an interface for realistic imperative languages, by integrating our tool in the KeY verification system, thus allowing reasoning about loops in Java programs in a fully automated way, without any user guidance.

15:15
Compiling Hilbert's epsilon operator
SPEAKER: Rustan Leino

ABSTRACT. Hilbert's epsilon operator is a binder that picks an arbitrary element from a nonempty set. The operator is typically used in logics and proof engines. This paper contributes a discussion of considerations in supporting this operator in a programming language. More specifically, the paper presents the design choices made around supporting this operator in the verification-aware language Dafny.
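One of the design tensions the paper discusses can be sketched directly: epsilon picks an arbitrary element of a nonempty set, but as a mathematical function it must pick the *same* element whenever it is applied to equal sets, so a compiled version needs a deterministic choice. The sketch below (which only handles finite sets, and is not Dafny's actual compilation of its `:|` construct) uses the minimum under a fixed total order.

```python
def epsilon(s):
    """Deterministic choice from a nonempty finite set."""
    if not s:
        raise ValueError("epsilon is unspecified on the empty set")
    return min(s, key=repr)   # any fixed total order on values works

# extensionality: equal sets yield the same chosen element
assert epsilon({3, 1, 2}) == epsilon({2, 1, 3})
print(epsilon({"b", "a"}))   # 'a'
```

Making the choice depend only on the set, never on how or when it was built, is what lets a verifier treat the operator as a function while the runtime still gets a concrete element.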

15:30-16:00 Coffee Break
16:00-18:00 Session 8: Automated Reasoning
16:00
Compositional Propositional Proofs
SPEAKER: unknown

ABSTRACT. Many hard combinatorial problems can only be tackled by SAT solvers in a massively parallel setting. This reduces the trust one can have in the final result as errors can occur in parallel SAT solvers as well as in methods to partition the original problem. We present a new framework to produce clausal proofs for cube-and-conquer, arguably the most effective parallel SAT solving paradigm for hard combinatorial problems. The framework also provides an elegant approach to parallelize the validation of clausal proofs efficiently, both in terms of run time and memory usage. We demonstrate the usefulness of the framework on some hard combinatorial problems and validate the constructed clausal proofs in parallel.
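Cube-and-conquer in miniature: split on a few variables, solve each cube-strengthened formula independently, and combine the verdicts. The formula and the brute-force "solver" below are toys; in the real setting each UNSAT sub-run emits its own clausal proof, and the framework stitches those together with the cubes into one checkable whole.

```python
from itertools import product

# Clauses are tuples of literals: a positive int i means x_i, -i its negation.
clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]   # unsatisfiable
n_vars = 2

def satisfiable(clauses, fixed):
    """Brute-force check under a partial assignment (the cube)."""
    free = [v for v in range(1, n_vars + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# cube phase: split on x_1; the two cubes cover the whole search space
cubes = [{1: True}, {1: False}]
results = [satisfiable(clauses, cube) for cube in cubes]
print(results)            # [False, False] -> the formula is UNSAT
```

Because the cubes partition the assignments, UNSAT answers for every cube establish UNSAT for the original formula, and each cube's run, and each proof check, can happen on a different core.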

16:25
There Is No Best Beta-Normalization Strategy for Higher-Order Reasoners
SPEAKER: unknown

ABSTRACT. The choice of data structures for the internal representation of terms in logical frameworks and higher-order theorem provers is a crucial low-level factor for their performance. We propose a representation of terms based on a polymorphically typed nameless spine data structure in conjunction with perfect term sharing and explicit substitutions.
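The "perfect term sharing" ingredient can be sketched as hash-consing: structurally equal terms are built exactly once and afterwards compared by identity. (The paper's actual structure is a polymorphically typed nameless spine; this class and its field names are only an illustration of the sharing aspect.)

```python
class Term:
    _pool = {}   # (head, child ids) -> the unique shared node

    def __new__(cls, head, *args):
        key = (head, *(id(a) for a in args))
        if key not in cls._pool:
            t = super().__new__(cls)
            t.head, t.args = head, args
            cls._pool[key] = t
        return cls._pool[key]

f_a = Term("app", Term("f"), Term("a"))
again = Term("app", Term("f"), Term("a"))
print(f_a is again)   # True: one shared node, O(1) equality test
```

Constant-time equality is what makes sharing pay off during proof search, where the same subterms are compared and traversed millions of times.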

In related systems, the choice of a beta-normalization method is usually statically fixed and cannot be adjusted to the input problem at runtime. The predominant strategies are implementation-specific adaptations of leftmost-outermost normalization. We introduce several different beta-normalization strategies and empirically evaluate their performance by measuring reduction steps on about 7000 heterogeneous problems from different (TPTP) domains.

Our study shows that there is no generally best beta-normalization strategy and that for different problem domains, different winning strategies can be identified. The evaluation results suggest a problem-dependent choice of a preferred beta-normalization strategy for higher-order reasoning systems. To that end, we present first preliminary approaches for strategy-selection heuristics.
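That no single strategy dominates can already be seen on closed lambda terms small enough to normalize by hand. The sketch below (plain de Bruijn terms, no sharing, no explicit substitutions, so only a toy version of the measured systems) counts beta steps under leftmost-outermost and leftmost-innermost reduction.

```python
# An int is a de Bruijn index, ("lam", b) an abstraction, ("app", f, a)
# an application.

def shift(t, d, cutoff=0):
    if isinstance(t, int):
        return t + d if t >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    if isinstance(t, int):
        return s if t == j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(body, arg):
    return shift(subst(body, 0, shift(arg, 1)), -1)

def step(t, outermost):
    """One beta step; returns (new_term, reduced?)."""
    if isinstance(t, int):
        return t, False
    if t[0] == "lam":
        b, r = step(t[1], outermost)
        return ("lam", b), r
    f, a = t[1], t[2]
    if outermost and isinstance(f, tuple) and f[0] == "lam":
        return beta(f[1], a), True           # contract the root redex first
    f2, r = step(f, outermost)
    if r:
        return ("app", f2, a), True
    a2, r = step(a, outermost)
    if r:
        return ("app", f, a2), True
    if not outermost and isinstance(f, tuple) and f[0] == "lam":
        return beta(f[1], a), True           # root redex last: innermost
    return t, False

def count_steps(t, outermost):
    n = 0
    while True:
        t, reduced = step(t, outermost)
        if not reduced:
            return t, n
        n += 1

I = ("lam", 0)                       # identity
drop = ("lam", ("lam", 0))           # discards its first argument
dup = ("lam", ("app", 0, 0))         # duplicates its argument
t1 = ("app", drop, ("app", I, I))    # outermost wins: 1 vs 2 steps
t2 = ("app", dup, ("app", I, I))     # innermost wins: 4 vs 3 steps
for t in (t1, t2):
    print(count_steps(t, True)[1], count_steps(t, False)[1])
```

Dropping an argument favours outermost reduction (the argument is never normalized), while duplicating one favours innermost (the argument is normalized once instead of twice), which is the domain-dependence the evaluation quantifies at scale.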

16:50
Automated Benchmarking of Incremental SAT and QBF Solvers

ABSTRACT. Incremental SAT and QBF solving potentially yields improvements when sequences of related formulas are solved. An application program that employs an incremental approach to a problem is usually tailored towards some specific solver and decomposes the problem into several incremental solver calls generated directly within the application. This hinders the independent comparison of different incremental solvers, particularly when the application program is not available. To remedy this situation, we present an approach to automated benchmarking of incremental SAT and QBF solvers. Given a collection of formulas in (Q)DIMACS format generated incrementally by an application program, our approach automatically translates the formulas into instructions to import and solve a formula by an incremental SAT/QBF solver. The result of the translation is a program which replays the incremental solver calls defined by the formulas in the collection. The program can be configured to include any SAT/QBF solver which implements a minimal API similar to IPASIR, which has recently been proposed for the Incremental Library Track of the SAT Race 2015. Our approach allows incremental solvers to be evaluated independently of the application program that was used to generate the formulas. We illustrate our approach on different hardware verification problems for SAT and QBF solvers.
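The kind of call sequence such a replay program issues can be sketched against an IPASIR-style interface. The class below is an invented stand-in with a brute-force solve; real IPASIR is a C API (ipasir_add, ipasir_assume, ipasir_solve), but it shares the conventions used here: 0 terminates a clause, assumptions hold for one solve call only, and solve returns 10 for SAT and 20 for UNSAT.

```python
from itertools import product

class MiniSolver:
    def __init__(self):
        self.clauses, self.assumptions, self._cur = [], [], []
        self.n_vars = 0

    def add(self, lit):                  # literal stream; 0 ends a clause
        if lit == 0:
            self.clauses.append(tuple(self._cur))
            self._cur = []
        else:
            self._cur.append(lit)
            self.n_vars = max(self.n_vars, abs(lit))

    def assume(self, lit):               # holds for the next solve only
        self.assumptions.append(lit)
        self.n_vars = max(self.n_vars, abs(lit))

    def solve(self):
        cls = self.clauses + [(l,) for l in self.assumptions]
        self.assumptions = []            # assumptions are transient
        for bits in product([False, True], repeat=self.n_vars):
            assign = {i + 1: b for i, b in enumerate(bits)}
            if all(any(assign[abs(l)] == (l > 0) for l in c) for c in cls):
                return 10                # SAT
        return 20                        # UNSAT

s = MiniSolver()
for lit in [1, 2, 0, -1, 2, 0]:          # (x1 or x2) and (not x1 or x2)
    s.add(lit)
print(s.solve())                         # 10: satisfiable
s.assume(-2)
print(s.solve())                         # 20: x2 is forced, assumption fails
print(s.solve())                         # 10 again: the assumption was dropped
```

A replay tool only needs to emit this add/assume/solve stream from the incremental (Q)DIMACS trace; any solver behind the same interface can then be benchmarked on it unchanged.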

17:15
FEMaLeCoP: Fairly Efficient Machine Learning Connection Prover
SPEAKER: unknown

ABSTRACT. FEMaLeCoP is a connection tableau theorem prover based on leanCoP which uses an efficient implementation of internal learning-based guidance for extension steps. Despite the fact that exhaustive use of such internal guidance incurs a significant slowdown of the raw inferencing process, FEMaLeCoP trained on related proofs can prove many problems that cannot be solved by leanCoP. In particular, FEMaLeCoP adds 90 (15.7%) more MPTP2078 benchmark problems to the 574 problems that are provable by leanCoP. FEMaLeCoP is thus the first system that practically demonstrates that smart internal learning-based guidance can significantly improve the performance of current automated theorem provers running without such sophisticated guiding methods. This paper describes the system, discusses the technology developed, and evaluates the system.

17:40
Reasoning in the presence of inconsistency through Preferential ALC
SPEAKER: unknown

ABSTRACT. This paper presents an inconsistency-tolerant semantics for the Description Logic ALC, called Preferential ALC (p-ALC). A p-ALC knowledge base comprises defeasible and non-defeasible axioms. The defeasible ABox and TBox axioms are labelled with confidence weights that may reflect an axiom's provenance. Entailment is defined through the notion of preferred interpretations, which minimise the total weight of the inconsistent axioms. We introduce a modified ALC tableau algorithm in which the open branches give rise to the preferred interpretations, and show that it can compute p-ALC entailment by refutation. The modified algorithm is implemented as an incremental answer set program (ASP) that exploits optimisation to capture preferred interpretations of p-ALC.
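A propositional caricature of the semantics (the paper works over ALC interpretations via a tableau plus ASP optimisation, and the penguin axioms below are an invented example): strict axioms must hold, an interpretation is preferred if it minimises the total weight of violated defeasible axioms, and a formula is entailed if it holds in every preferred interpretation.

```python
from itertools import product

atoms = ["bird", "penguin", "flies"]
strict = [
    lambda m: (not m["penguin"]) or m["bird"],        # Penguin implies Bird
    lambda m: m["penguin"],                           # the individual is a penguin
]
defeasible = [
    (1, lambda m: (not m["bird"]) or m["flies"]),     # birds fly, weight 1
    (3, lambda m: (not m["penguin"]) or not m["flies"]),  # penguins don't, weight 3
]

def preferred_models():
    """Models of the strict part with minimal violated defeasible weight."""
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, bits))
        if not all(ax(m) for ax in strict):
            continue
        cost = sum(w for w, ax in defeasible if not ax(m))
        models.append((cost, m))
    best = min(c for c, _ in models)      # assumes the strict part is consistent
    return [m for c, m in models if c == best]

def entails(phi):
    return all(phi(m) for m in preferred_models())

print(entails(lambda m: not m["flies"]))   # True: the heavier axiom wins
```

Violating the weight-1 axiom costs less than violating the weight-3 one, so every preferred interpretation makes the penguin flightless; this is the optimisation the incremental ASP encoding performs over tableau branches.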