09:00-10:00 Session 15B: SAT solver analysis (POS15)
Evaluating CDCL Restart Schemes
SPEAKER: Armin Biere

ABSTRACT. Modern CDCL (conflict-driven clause learning) SAT solvers are used for many practical applications. One of the key ingredients of state-of-the-art CDCL solvers is an efficient restart scheme. The main contribution of this work is an extensive empirical evaluation of various restart strategies. We show that optimal static restart intervals are correlated not only with the satisfiability status of a given instance, but also with the more specific problem class of the given benchmark. We further compare uniform restart intervals with the performance of non-uniform restart schemes, such as Luby restarts. Finally, we revisit the dynamic restart strategy used in Glucose and propose a new variant thereof, which is based on the concept of exponential moving averages. The resulting implementation in Lingeling improves state-of-the-art performance in SAT solving.
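
For reference, the two restart families named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the smoothing factors, the margin constant, and the use of learned-clause LBD as the quality measure are assumed values.

```python
def luby(i):
    """i-th term (1-indexed) of the Luby sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,..."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

class EmaRestart:
    """Dynamic restarts via exponential moving averages: restart when a fast
    EMA of conflict 'quality' (e.g. learned-clause LBD) exceeds a slow EMA by
    some margin. Real solvers additionally warm up / bias the averages."""
    def __init__(self, fast=1 / 32, slow=1 / 4096, margin=1.25):
        self.fast_a, self.slow_a, self.margin = fast, slow, margin
        self.fast = self.slow = 0.0

    def on_conflict(self, lbd):
        self.fast += self.fast_a * (lbd - self.fast)
        self.slow += self.slow_a * (lbd - self.slow)
        return self.fast > self.margin * self.slow  # True => restart now
```

A uniform scheme restarts every fixed number of conflicts; a Luby scheme would restart after `unit * luby(r)` conflicts at the r-th restart, for some base unit.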

Predicting SAT Solver Performance on Heterogeneous Hardware
SPEAKER: unknown

ABSTRACT. In recent years a lot of effort has been expended in determining if SAT solver performance is predictable. However, the work in this area invariably focuses on individual machines, and often on individual solvers. It is unclear whether predictions made on a specific solver and machine are accurate when translated to other solvers and hardware. In this work we consider five state-of-the-art solvers, 26 machines and 143 feature instances selected from the 2011 to 2014 SAT competitions. Using combinations of solvers, machines and instances we present four results: First, we show that UNSAT instances are more predictable than corresponding SAT instances. Second, we show that the number of cores in a machine has more impact on performance than L2 cache size. Third, we show that instances with fewer reused clauses are more CPU bound than those where clause reuse is high. Finally, we make accurate predictions of solution time for each of the instances considered across a diverse set of machines.

09:00-10:00 Session 15C: Invited Talk (QBF 2015)
Dependency Schemes for Quantified Boolean Formulas

ABSTRACT. The nesting of existential and universal quantifiers in Quantified Boolean Formulas (QBFs) causes dependencies among variables that have to be respected by solvers and preprocessing techniques. Given formulas in prenex normal form, standard algorithms implicitly make the most conservative assumption about variable dependencies: variable y depends on variable x whenever x and y are associated with different quantifiers and x precedes y in the quantifier prefix. The resulting set of dependencies is often a coarse overapproximation containing many "spurious" dependencies which lead to unnecessary restrictions that, in turn, inhibit performance. We survey dependency schemes as a means to obtaining more fine-grained overapproximations of a formula's variable dependencies and talk about challenges arising from the integration of dependency schemes into solvers.
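
The conservative prefix-order dependencies described above can be computed directly from the quantifier prefix. A minimal sketch, assuming a simple list-of-blocks encoding of the prefix:

```python
def trivial_dependencies(prefix):
    """prefix: list of (quantifier, [vars]) blocks, outermost first.
    Returns deps[y] = set of variables y is conservatively assumed to depend
    on: every x that precedes y and is bound by the other quantifier."""
    deps, earlier = {}, []  # earlier: (quantifier, var) pairs seen so far
    for quant, block in prefix:
        for y in block:
            deps[y] = {x for qx, x in earlier if qx != quant}
        earlier += [(quant, v) for v in block]
    return deps
```

For the prefix forall x, exists y, forall z this yields deps[y] = {x} and deps[z] = {y}; a dependency scheme refines exactly this overapproximation by pruning entries it can prove spurious.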

10:00-10:30 Coffee Break
10:30-12:30 Session 17A: Hardware and Architecture Design and Analysis (MEMOCODE'15)
Reducing power with activity trigger analysis
SPEAKER: unknown

ABSTRACT. In this paper we propose and implement a methodology for power reduction in digital circuits, closing the gap between conceptual (by designer) and local (by EDA) clock gating. We introduce a new class of coarse grained local clock gating conditions and develop a method for detecting such conditions and formally proving their correctness. The detection of these conditions relies on architecture characterization and statistical analysis of simulation, all done at the RTL. Formal verification is performed on an abstract circuit model. We demonstrate a significant power reduction from 33 to 40% of total power on a clusterized circuit design for video processing.

Implementing Latency-Insensitive Dataflow Blocks
SPEAKER: unknown

ABSTRACT. To simplify the implementation of dataflow systems in hardware, we present a technique for designing latency-insensitive dataflow blocks. We provide buffering with backpressure, resulting in blocks that compose into deep, high-speed pipelines without introducing long combinational paths. Our input and output buffers are easy to assemble into simple unit-rate dataflow blocks, arbiters, and blocks for Kahn networks. We prove the correctness of our buffers, illustrate how they can be used to assemble arbitrary dataflow blocks, discuss pitfalls, and present experimental results that suggest our pipelines can operate at a high clock rate independent of length.

Symbolic Loop Parallelization for Balancing I/O and Memory Accesses on Processor Arrays
SPEAKER: unknown

ABSTRACT. Loop parallelization techniques for massively parallel processor arrays using one-level tiling are often either I/O- or memory-bounded, exceeding the target architecture's capabilities. Furthermore, if the number of available processing elements is only known at runtime, as in adaptive systems, static approaches fail. To solve these problems, we present a hybrid compile/runtime technique to symbolically parallelize loop nests with uniform dependences on multiple levels. At compile time, two novel transformations are performed: (a) symbolic hierarchical tiling followed by (b) symbolic multi-level scheduling. By tuning the size of the tiles on multiple levels, a trade-off between the necessary I/O-bandwidth and memory is possible, which facilitates obeying resource constraints. The resulting schedules are symbolic with respect to the number of tiles; thus, the number of processing elements to map onto does not need to be known at compile time. At runtime, when the number is known, a simple prolog chooses a feasible schedule with respect to I/O and memory constraints that is latency-optimal for the chosen tile size. In this way, our approach dynamically chooses latency-optimal and feasible schedules while avoiding expensive re-compilations.

Process algebra semantics and reachability analysis for micro-architectural models of communication fabrics
SPEAKER: unknown

ABSTRACT. We propose an algorithm for reachability analysis in micro-architectural models of communication fabrics. The main idea of our solution is to group transfers in what we call transfer islands. In an island, all transfers fire at the same time. To justify our abstraction, we give semantics of the initial models using a process algebra. We then prove that a transfer occurs in the transfer islands model if and only if the same transfer occurs in the process algebra semantics. We encode the abstract micro-architectural model together with a given state reachability property in the input format of nuXmv. Reachability is then solved either using BDDs or IC3. Combined with inductive invariant generation techniques, our approach shows promising results.

10:30-12:00 Session 17B: Work in progress presentations (POS15)
Amoeba-inspired Spatiotemporal Dynamics for Physically Implemented Satisfiability Problem Solvers
SPEAKER: Masashi Aono

ABSTRACT. Natural computing is a new discipline that aims to develop nature-inspired computers by using physical dynamics and fluctuations of interacting devices. Inspired by the amoeba, which exhibits spatiotemporal oscillatory dynamics and sophisticated computing capabilities, we extracted the essential spatiotemporal dynamics by which the amoeba solves computationally demanding problems. Our dynamical system model, AmoebaSAT, runs on conventional digital computers and functions as an algorithm for searching for a solution to the satisfiability problem (SAT). Moreover, AmoebaSAT can be implemented using nanoelectronic and nanophotonic devices. In this study, performing Monte Carlo simulations using a digital computer, we show that AmoebaSAT outperforms a randomized version of the GSAT algorithm for randomly generated 10,000-variable 3-SAT instances. Thus, when implemented using such nanodevices, AmoebaSAT will open up a new computing paradigm that might enable high-speed problem solving with low power consumption.
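
For reference, the GSAT baseline mentioned above can be sketched as follows. This is the classic greedy local search with random restarts, not the randomized variant the paper compares against; the flip and try budgets are arbitrary.

```python
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=200, seed=0):
    """GSAT local search: repeatedly flip the variable whose flip satisfies
    the most clauses; restart from a random assignment when the flip budget
    runs out. Clauses use DIMACS-style literals (nonzero ints, 1-indexed)."""
    rng = random.Random(seed)
    def num_sat(a):
        return sum(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
    for _ in range(max_tries):
        a = [rng.random() < 0.5 for _ in range(n_vars)]
        for _ in range(max_flips):
            if num_sat(a) == len(clauses):
                return a  # satisfying assignment found
            best_v, best = 0, -1
            for v in range(n_vars):
                a[v] = not a[v]          # tentatively flip v
                s = num_sat(a)
                a[v] = not a[v]          # undo the flip
                if s > best:
                    best, best_v = s, v
            a[best_v] = not a[best_v]    # commit the greediest flip
    return None  # no model found within the budget
```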

Proposal and Application of Search Similarity Index for SAT solver
SPEAKER: Yoichiro Iida

ABSTRACT. It is said that both intensive and diversified search are important for an effective SAT solver. So far, however, the balance between intensification and diversification has been maintained only heuristically, through individual solver features. We conjecture that quantitatively balancing intensive and diversified search leads to a more effective SAT solver, and we propose the Search Similarity Index (SSI) to measure this balance by comparing pairs of search directions. Using SSI, we can evaluate methods whose effectiveness is presumed to come from changing the search space. Furthermore, by using SSI to decrease search duplication in a portfolio parallel SAT solver, we confirmed that the effectiveness of parallelization roughly doubles.

Model Counting with Arbitrary Alphabets

ABSTRACT. An approach to model counting based on regular expressions is explored. Model counting is treated as a string counting problem, where the strings encode variable assignments, and is solved by generating a regular expression that defines the assignments and calculating how many strings satisfy it. Since this approach depends only on properties of regular expressions, any alphabet can be used, not just the {0,1} alphabet of CNF formulas. An algorithm based on inclusion-exclusion is proposed, and the inclusion-exclusion summation is performed in a special order that exploits the structure of the problem to enable memoization. Indeed, for formulas that don't have any non-disjoint clauses, it is often possible to index the clauses so that the algorithm runs in polynomial time. Since the algorithm's performance is largely dependent on the indexing of the formula, this provides a "backdoor" into the problem based on finding efficient indexing heuristics and algorithms, and the increased generality of the string counting problem can allow for more efficient reductions from other problems in #P.
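
The inclusion-exclusion principle underlying the proposed algorithm can be illustrated on the plain {0,1} CNF case. This brute-force sketch enumerates all clause subsets and does none of the paper's ordering, indexing, or memoization:

```python
from itertools import combinations

def count_models(clauses, n_vars):
    """Count models of a CNF formula by inclusion-exclusion over clause
    subsets: #models = sum over S of (-1)^|S| * #assignments falsifying every
    clause in S. A consistent subset S forces |vars(S)| variables, leaving
    2^(n - |vars(S)|) assignments; an inconsistent subset contributes 0.
    Clauses use DIMACS-style literals (nonzero ints, 1-indexed)."""
    total = 0
    for r in range(len(clauses) + 1):
        for S in combinations(clauses, r):
            forced, consistent = {}, True
            for clause in S:
                for lit in clause:
                    v, val = abs(lit), lit < 0  # falsifying lit forces var
                    if forced.setdefault(v, val) != val:
                        consistent = False
                        break
                if not consistent:
                    break
            if consistent:
                total += (-1) ** r * 2 ** (n_vars - len(forced))
    return total
```

For example, the single clause (x1 or x2) over two variables has 3 models: 4 total assignments minus the 1 that falsifies the clause.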

10:30-12:00 Session 17C: Contributed Presentations (QBF 2015)
Feasible Interpolation for QBF Resolution Calculi
SPEAKER: unknown

ABSTRACT. In sharp contrast to classical proof complexity, we are currently short of lower bound techniques for QBF proof systems. We establish the feasible interpolation technique for all resolution-based QBF systems, whether modelling CDCL or expansion-based solving. This both provides the first general lower bound method for QBF calculi and largely extends the scope of classical feasible interpolation. We apply our technique to obtain new exponential lower bounds for all resolution-based QBF systems on a new class of QBF formulas based on the clique problem. Finally, we show how feasible interpolation relates to the recently established lower bound method based on strategy extraction.

Skolem functions computation for CEGAR based QBF solvers
SPEAKER: unknown

ABSTRACT. In this work we propose an approach to extract Skolem functions from CEGAR-based QBF solvers (e.g., RAReQS) for true QBF formulas containing 2 or 3 quantification levels. We also propose optimizations to improve the extracted certificates, and we perform a detailed experimental evaluation.

Reducing Satisfiability and Reachability to DQBF
SPEAKER: unknown

ABSTRACT. In this work, we aim at transforming propositional satisfiability and directed reachability problems into DQBF, thereby providing useful benchmarks that bridge solvers across different complexity classes. Using the generated benchmarks, we perform experiments to evaluate the DQBF solver iDQ.

12:00-14:00 Lunch Break
14:00-15:00 Session 18A: Runtime Verification (MEMOCODE'15)
Automatic and Configurable Instrumentation of C Programs with Temporal Assertion Checkers
SPEAKER: unknown

ABSTRACT. The long-term goal of the work presented here is the automatic instrumentation of C programs with temporal property checkers to perform the runtime verification that these programs behave as expected, both for debugging purposes and for security or safety-oriented monitoring. This paper describes our first results towards this objective. To give requirements engineers or software developers the possibility to express advanced properties, the chosen specification language is the IEEE standard PSL (Property Specification Language). From PSL properties, a tool automatically generates assertion checkers and instruments the program with these verification components together with an observation mechanism that enables their event-driven activation. For maximum flexibility, the current implementation proposes either to decorate the source code or to observe the binary code under execution. An analysis of these solutions is achieved by means of experimental results.

From Signal Temporal Logic to FPGA Monitors
SPEAKER: unknown

ABSTRACT. Due to the heterogeneity and complexity of systems-of-systems (SoS), their simulation is becoming very time consuming, expensive and hence impractical. As a result, design simulation is increasingly being complemented with more efficient design emulation. Runtime monitoring of emulated designs would provide a precious support in the verification activities of such complex systems.

We propose novel algorithms for translating signal temporal logic (STL) assertions to hardware runtime monitors implemented in field programmable gate arrays (FPGAs). In order to accommodate this hardware-specific setting, we restrict ourselves to past and bounded future temporal operators interpreted over discrete time. We evaluate our approach on two examples: the mixed signal bounded stabilization; and the serial peripheral interface (SPI) communication protocol. These case studies demonstrate the suitability of our approach for runtime monitoring of both digital and mixed signal systems.
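
The restriction to past and bounded operators over discrete time is what keeps monitor state finite and hardware-friendly: a bounded-past operator needs only a window of the last few samples (in hardware, a shift register plus a reduce gate). A minimal software sketch of one such operator (the operator choice and window semantics here are illustrative, not taken from the paper):

```python
from collections import deque

class HistoricallyBounded:
    """Discrete-time monitor for 'historically within the last k steps':
    output is True iff the input held at every one of the last k+1 samples
    (or at every sample so far, if fewer have been seen). An FPGA version
    would be a (k+1)-bit shift register feeding an AND-reduce."""
    def __init__(self, k):
        self.window = deque(maxlen=k + 1)  # old samples fall off automatically

    def step(self, p):
        self.window.append(bool(p))
        return all(self.window)
```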

14:00-15:00 Session 18B: Unsat proofs and MaxSAT (POS15)
Applications of MaxSAT in Data Analysis

ABSTRACT. Due to recent advances in maximum satisfiability (MaxSAT) solving techniques, MaxSAT, and especially its weighted partial generalization, are being used for solving an increasing range of Boolean optimization problems. In this paper, we overview various important optimization problems arising from data analysis and machine learning and how they have been recently successfully approached via MaxSAT. Our motivation for this work is three-fold: (i) We aim to highlight important hard real-world optimization problems arising from data analysis and machine learning, representing somewhat atypical applications of SAT-based solver technology. (ii) To address the current lack of heterogeneity in benchmark sets available for evaluating MaxSAT solvers, we provide a benchmark library of MaxSAT instances encoding different data analysis and machine learning problems. (iii) We want to advocate extending current state-of-the-art MaxSAT solvers to natively support real-valued weights for soft clauses. Indeed, in each of the problem domains described, the use of real-valued weights plays an integral role; more generally, we believe that by directly supporting real-valued weights, MaxSAT could be made a more attractive choice as a Boolean optimization paradigm for a wider base of end-users.
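
To make the weighted-partial objective with real-valued weights concrete, here is a brute-force reference evaluator; it is a minimal sketch of the problem definition, not a solver, and the encoding conventions are assumptions:

```python
from itertools import product

def weighted_partial_maxsat(hard, soft, n_vars):
    """Weighted partial MaxSAT by exhaustive search: among assignments that
    satisfy every hard clause, minimize the total (possibly real-valued)
    weight of falsified soft clauses. soft is a list of (weight, clause)
    pairs; clauses use DIMACS-style literals (nonzero ints, 1-indexed)."""
    def sat(clause, a):
        return any(a[abs(l) - 1] == (l > 0) for l in clause)
    best = None
    for a in product([False, True], repeat=n_vars):
        if not all(sat(c, a) for c in hard):
            continue  # hard clauses are mandatory
        cost = sum(w for w, c in soft if not sat(c, a))
        if best is None or cost < best:
            best = cost
    return best  # None if the hard clauses are unsatisfiable
```

With hard clause (x1 or x2) and soft clauses (not x1) of weight 1.5 and (not x2) of weight 1.0, the optimum cost is 1.0, achieved by setting only x2.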

Checking Unsatisfiability Proofs in Parallel
SPEAKER: unknown

ABSTRACT. Contemporary SAT solvers emit unsatisfiability proofs in the DRAT format to increase confidence in their answers. The state-of-the-art proof verifier drat-trim uses backward checking and deletion information to efficiently check unsatisfiability results. Verifying large proofs still takes about as long as solving the formula, and therefore parallelization seems to be a promising approach. However, Heule et al. observed that parallelization of the backward-checking procedure is difficult, since clausal proofs do not include dependency information. In this paper, we present a parallelization approach and a prototypical implementation that scales almost linearly compared to its sequential version.

15:00-15:30 Coffee Break
15:30-17:00 Session 20A: Hybrid Systems Verification (MEMOCODE'15)
From Non-Zenoness Verification to Termination
SPEAKER: unknown

ABSTRACT. We investigate the problem of verifying the absence of zeno executions in a hybrid system. A zeno execution is one in which there are infinitely many discrete transitions in a finite time interval. The presence of zeno executions poses challenges towards implementation and analysis of hybrid control systems. We present a simple transformation of the hybrid system which reduces the non-zenoness verification problem to the termination verification problem, that is, the original system has no zeno executions if and only if the transformed system has no non-terminating executions. This provides both theoretical insights and practical techniques for non-zenoness verification. Further, it also provides techniques for isolating parts of the hybrid system and its initial states which do not exhibit zeno executions. We illustrate the feasibility of our approach by applying it on hybrid system examples.

Verification Condition Generation for Hybrid Systems
SPEAKER: unknown

ABSTRACT. Verification condition generators (VCGs) can reduce overall correctness statements about sequential programs to verification conditions (VCs) that can then be proved independently by automatic theorem provers like SMT solvers. In recent years, SMT solvers have not only become more powerful, solving much bigger problems than before; they can now also handle less restricted logics, covering, for example, the non-linear arithmetic required by some hybrid systems. However, there is so far no VCG procedure that can generate VCs of hybrid programs for these SMT solvers.

We therefore propose in this paper a first VCG procedure for hybrid systems that is based on induction proofs on the strongly connected components (SCCs) of the underlying state transition diagrams. Given the right invariants for a safety property, the VCs can be automatically generated for the considered hybrid system. The validity of the VCs is then independently proved by SMT solvers and implies the correctness of the considered safety property.

Towards Verification of Hybrid Systems in a Foundational Proof Assistant
SPEAKER: unknown

ABSTRACT. Unsafe behavior of hybrid systems can have disastrous consequences, motivating the need for formal verification of the software running on these systems. Foundational verification in a proof assistant such as Coq is a promising technique that can provide extremely strong, foundational, guarantees about software systems. In this paper, we show how to apply this technique to hybrid systems. We define a TLA-inspired formalism in Coq for reasoning about hybrid systems and use it to verify two quadcopter modules: the first limits the quadcopter's velocity and the second limits its altitude. We ran both of these modules on an actual quadcopter, and they worked as intended. We also discuss lessons learned from our experience foundationally verifying hybrid systems.