LOPSTR 19: 29TH INTERNATIONAL SYMPOSIUM ON LOGIC-BASED PROGRAM SYNTHESIS AND TRANSFORMATION
PROGRAM FOR TUESDAY, OCTOBER 8TH

09:00-10:00 Session 2: UTP Keynote
09:00
A Calculus for Concurrent and Sequential Programming

ABSTRACT. A calculus is defined by the abstract syntax of its terms, together with axioms and rules that define its semantics. In a programming calculus, the terms are classified into four sorts: program specifications, program designs, the programs themselves, and their traces (logs of execution behaviour). The axioms are familiar algebraic equations, together with an ordering relation for valid transformation (refinement) of terms. Additionally, logical proof rules permit conditional algebraic calculations. The axioms apply equally to specifications, designs, programs and traces. Each sort includes all terms of the following sort. Refinement permits optimisation of terms within each sort, or derivation of a term by stepwise refinement and decomposition from its preceding sorts. A trace is a constant term that cannot be further refined. It is most frequently generated automatically by running the program on a correct implementation of the calculus. The calculus detects faults in the implementation, distinguishing them clearly from faults due to programming errors.
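
To give a concrete flavour of the algebraic style described above, here is one standard law from the literature on concurrent program algebra (the interchange law of Concurrent Kleene Algebra). It illustrates the genre only and is not a law quoted from the talk:

    % Interchange law (illustrative; the talk's own axiom set may differ).
    % x \leq y reads: every behaviour of x is also a behaviour of y.
    (p \parallel q) \,;\, (r \parallel s) \;\leq\; (p \,;\, r) \parallel (q \,;\, s)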

10:00-10:30 Coffee Break
10:30-12:30 Session 3: Static Analysis
10:30
On fixpoint/iteration/variant induction principles for proving total correctness of programs with denotational semantics

ABSTRACT. We study partial and total correctness proof methods based on generalized fixpoint and iteration induction principles applied to the denotational semantics of first-order functional and iterative programs.
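
For orientation, two classical principles of the kind the paper generalizes are Park's fixpoint induction and Scott's iteration (computational) induction; the standard formulations below are background only, not the paper's generalized versions:

    % Park induction: any pre-fixpoint of a monotone f on a complete
    % lattice bounds the least fixpoint.
    f(x) \sqsubseteq x \;\Longrightarrow\; \mathrm{lfp}\, f \sqsubseteq x

    % Scott induction: for an admissible predicate P (closed under
    % limits of increasing chains) on a cpo with least element \bot:
    P(\bot) \;\wedge\; (\forall x.\; P(x) \Rightarrow P(f(x))) \;\Longrightarrow\; P(\mathrm{lfp}\, f)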

11:00
A General Framework for Static Cost Analysis of Parallel Logic Programs

ABSTRACT. The estimation and control of resource usage is now an important challenge in an increasing number of computing systems. In particular, requirements on timing and energy arise in a wide variety of applications such as the Internet of Things, cloud computing, health, transportation, and robotics. At the same time, parallel computing, with (heterogeneous) multi-core platforms in particular, has become the dominant paradigm in computer architecture. Predicting resource usage on such platforms poses a difficult challenge. Most work on static resource analysis has focused on sequential programs, and relatively little progress has been made on the analysis of parallel programs, or more specifically of parallel logic programs. We propose a novel, general, and flexible framework for setting up cost equations/relations which can be instantiated for performing resource usage analysis of parallel logic programs for a wide range of resources, platforms, and execution models. The analysis estimates both lower and upper bounds on the resource usage of a parallel program (without executing it) as functions of input data sizes. It also infers other meaningful information to better exploit and assess the potential and actual parallelism of a system. We develop a method for solving cost relations involving the max function that arise in the analysis of parallel programs. Finally, we instantiate our general framework for the analysis of logic programs with Independent And-Parallelism, report on an implementation within the CiaoPP system, and provide some experimental results. To our knowledge, this is the first approach to the cost analysis of parallel logic programs.
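
As a generic illustration of where the max function enters such cost relations (a hypothetical recurrence, not an example from the paper): for a predicate whose two recursive calls run under independent and-parallelism, total work adds the costs of the calls, while parallel time takes their maximum:

    % Hypothetical cost relations for a predicate with two independent
    % and-parallel recursive calls on sizes n_1 and n_2 (c = local cost):
    \mathrm{Work}(n) = c + \mathrm{Work}(n_1) + \mathrm{Work}(n_2)
    \mathrm{Time}(n) = c + \max(\mathrm{Time}(n_1),\, \mathrm{Time}(n_2))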

11:30
Incremental Analysis of Logic Programs with Assertions and Open Predicates

ABSTRACT. Generic components represent a further abstraction over the concept of modules, introducing dependencies on other (not necessarily available) components that implement specified interfaces. They have become a key concept in large and complex software applications. Despite its undeniable advantages, generic code is known to be anti-modular. Precise analysis (e.g., for detecting bugs or optimizing code) requires such code to be instantiated with concrete implementations, potentially leading to a prohibitively expensive combinatorial explosion. In this paper we claim that incremental (whole-program) analysis can be very beneficial in this context and can alleviate the anti-modular nature of generic code. We propose a simple Horn-clause encoding of generic programs, using open predicates and assertions, and we introduce a new incremental analysis algorithm that reacts incrementally not only to changes in program clauses, but also to changes in the assertions, upon which large parts of the analysis graph may depend. We also discuss the application of the proposed techniques in a number of practical use cases. In addition, as a realistic case study, we apply the proposed techniques to the analysis of the LPdoc documentation system. We argue that the proposed traits are a convenient and elegant abstraction for modular generic programming, and our preliminary results support the conclusion that the new incrementality features added to the analysis bring promising performance advantages.
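
The following is a schematic sketch of the open-predicate idea in Ciao-style assertion syntax (approximated from the general Ciao assertion language; the paper's actual trait notation may differ). The comparison predicate is left open, so the generic sorting code can be analysed against its assertion alone and re-analysed incrementally when the assertion or an implementation changes:

    % Open predicate: an interface assertion, but no clauses here.
    % Concrete implementations are supplied by other components.
    :- pred cmp(X, Y, Order) => atm(Order).

    % Generic insertion sort, parametric on the open cmp/3.
    isort([], []).
    isort([X|Xs], Sorted) :-
        isort(Xs, Sorted0),
        insert_ord(X, Sorted0, Sorted).

    insert_ord(X, [], [X]).
    insert_ord(X, [Y|Ys], Out) :-
        cmp(X, Y, Order),
        ( Order = (>) ->                  % X comes after Y: keep looking
            Out = [Y|Rest],
            insert_ord(X, Ys, Rest)
        ; Out = [X,Y|Ys]                  % X comes before (or ties) Y
        ).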

12:00
Computing Abstract Distances in Logic Programs

ABSTRACT. Abstract interpretation is a well-established technique for performing static analyses of logic programs. However, choosing the abstract domain, widening, fixpoint, etc. that provides the best precision-cost trade-off remains an open problem. This is in good part because of the challenges involved in measuring and comparing the precision of different analyses. We propose a new approach for measuring such precision, based on defining distances in abstract domains and extending them to distances between whole analyses of a given program, thus allowing precision to be compared across different analyses. We survey and extend existing proposals for distances and metrics in lattices and abstract domains, propose metrics for some common domains used in logic program analysis, and extend those metrics to the space of whole-program analyses. We implement these metrics within the CiaoPP framework and apply them to measure the precision of different analyses on both benchmarks and a realistic program.
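
A simple example of a distance on an abstract domain (a standard construction, not necessarily one of the metrics proposed in the paper): on a finite powerset domain ordered by inclusion, the cardinality of the symmetric difference is a metric:

    % Symmetric-difference metric on a powerset abstract domain:
    d(A, B) = |A \,\triangle\, B| = |(A \setminus B) \cup (B \setminus A)|
    % e.g.  d(\{x, y\}, \{y, z\}) = |\{x, z\}| = 2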

12:30-14:00 Lunch Break
14:00-15:00 Session 4: LOPSTR & PPDP Invited Talk
14:00
10 Years of the Higher-Order Model Checking Project

15:00-15:30 Coffee Break
15:30-16:30 Session 5: Program Synthesis
15:30
Synthesizing Imperative Code from Answer Set Programming Specifications

ABSTRACT. We consider the problem of obtaining an implementation of an algorithm from its specification. We assume that these specifications are written in answer set programming (ASP). ASP is an ideal formalism for writing specifications due to its highly declarative and expressive nature. To obtain an implementation from a specification, we take advantage of the operational semantics of predicate ASP devised by Marple et al. in the s(ASP) system. This operational semantics is used to transform the declarative specification written in ASP into an equivalent, efficient program that uses imperative control features. This work is inspired by our overarching goal of automatically deriving efficient concurrent algorithms from declarative specifications. This paper reports our first step towards that goal, in which we restrict ourselves to simple sequential algorithms. We illustrate our ideas through several examples. Our work opens up a previously unexplored approach to logic-based program synthesis.
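
A hypothetical miniature of the kind of declarative specification involved (not an example from the paper): graph reachability written in ASP syntax. An imperative rendering would replace the declarative fixpoint reading with an explicit worklist loop, which is the style of code the described transformation aims to produce:

    % Hypothetical ASP specification (illustrative only):
    % declarative reachability over a directed graph.
    edge(a, b).  edge(b, c).  edge(c, d).
    initial(a).

    reach(X) :- initial(X).
    reach(Y) :- reach(X), edge(X, Y).

    % An equivalent imperative program would maintain a set of reached
    % nodes plus a worklist, iterating until no new node is added.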

16:00
Logic-Based Synthesis of Fair Voting Rules Using Composable Modules
PRESENTER: Michael Kirsten

ABSTRACT. Voting rules aggregate multiple individual preferences in order to make collective decisions. Commonly, these mechanisms are expected to respect a multitude of different fairness and reliability properties, e.g., to ensure that each voter's ballot accounts for the same proportion of the elected alternatives, or that a voter cannot change the election outcome in her favor by insincerely filling out her ballot. However, no voting rule is fair in all respects, and attempts to trade off such properties against one another often expose inconsistencies, which makes the design of practical and demonstrably fair voting rules non-trivial and error-prone.

In this paper, we present a formal, systematic approach for the flexible synthesis of voting rules from composable core modules, so that the resulting rules respect such properties by design. Formal composition rules guarantee properties of the result from properties of the individual components, which are generic and can be reused across voting rules. We provide a prototypical logic-based implementation, with proofs for a selected set of structures and composition rules, within the theorem prover Isabelle/HOL. The approach can readily be extended to support many voting rules from the literature by enlarging the set of basic modules and composition rules. As an example, we synthesize the well-known voting rule sequential majority comparison (SMC) from simple generic modules, and automatically produce a proof that SMC satisfies the fairness property monotonicity. Monotonicity is a well-known social-choice property that is easily violated by voting rules in practice.
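
For reference, monotonicity can be stated as follows (the standard social-choice formulation, given here for orientation; the paper's Isabelle/HOL formalisation may differ in detail):

    % Monotonicity of a voting rule f: if profile P' is obtained from P
    % only by moving a winner w up on some ballots (all other relative
    % orders unchanged), then w still wins under P'.
    w \in f(P) \;\wedge\; P' \text{ improves } w \text{ relative to } P \;\Longrightarrow\; w \in f(P')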