VSL 2014: VIENNA SUMMER OF LOGIC 2014
PROGRAM FOR SUNDAY, JULY 13TH, 2014

08:45-10:15 Session 22A (Linearity)
Location: FH, Seminarraum 136
08:45
Modalities and Linearity

ABSTRACT. This work shows how to use linear logic (with or without subexponentials) as a logical framework for specifying sequent calculus proof systems as well as concurrent computational systems. In particular, we present simple and decidable conditions guaranteeing that the cut rule and non-atomic initial rules can be eliminated, and we show how to use the so-called subexponentials to deal with more involved logical and computational systems.

09:45
The inhabitation problem for non-idempotent intersection

ABSTRACT. The inhabitation problem for intersection types is known to be undecidable. We study the problem in the case of non-idempotent intersection, in particular for a type assignment system characterizing solvable terms, and we prove decidability through a sound and complete algorithm.

08:45-10:15 Session 22B: Isabelle tutorial for beginners I (Isabelle)
Location: FH, Zeichensaal 1
08:45
Isabelle tutorial for beginners (part 1)
SPEAKER: Tobias Nipkow

ABSTRACT. This is a hands-on tutorial introduction to the proof assistant Isabelle/HOL. Slide presentations alternate with practical exercises: participants should bring their own laptop with Isabelle installed. There will be seasoned Isabelle tutors to help you use the system. The tutorial takes 3.5h and has three sections:

  1. Recursion and induction
    The initial introduction concentrates on defining small functional programs and proving properties about them by induction.
  2. Logic and Automation
    This section introduces more of the logic and the available automation, including the famous Sledgehammer that harnesses external automatic theorem provers.
  3. Structured proofs
    This section introduces Isabelle's structured proof language Isar, which combines readability (via block-structured natural-deduction proofs) with conciseness (via the rich proof automation infrastructure).
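
To give a flavour of the first section's material, here is a minimal sketch in Haskell syntax (Isabelle/HOL's datatype/fun commands look very similar; the names below are illustrative only):

    -- A small functional program of the kind defined in the tutorial:
    data Nat = Zero | Suc Nat

    add :: Nat -> Nat -> Nat
    add Zero    n = n             -- base case
    add (Suc m) n = Suc (add m n) -- recursive case

    -- A property one would then prove by induction on m:
    --   add m Zero = m
    -- Base case: add Zero Zero = Zero.
    -- Step case: add (Suc m) Zero = Suc (add m Zero) = Suc m,
    --            using the induction hypothesis.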

See also Programming and Proving with Isabelle/HOL.

NOTE: Please install this Isabelle release candidate (http://isabelle.in.tum.de/website-Isabelle2014-RC0/) before coming to the tutorial.

08:45-10:15 Session 22C: Isabelle workshop (Isabelle)
Location: FH, Seminarraum 138C
08:45
A formally verified proof of the Central Limit Theorem

ABSTRACT. We describe a formally verified proof of the Central Limit Theorem in the Isabelle proof assistant.

09:15
Primitively (Co)recursive Definitions for Isabelle/HOL
SPEAKER: Lorenz Panny

ABSTRACT. Isabelle/HOL has recently been enriched with a new definitional package for datatypes and codatatypes. The package introduces the specified types and derives auxiliary constants and characteristic theorems, notably (co)recursors and (co)induction principles. We now introduce support for high-level specifications of (co)recursive functions, in the form of three commands: primrec, primcorec, and primcorecursive. The new commands internally reduce the specifications to arguments to the (co)recursor and generate a number of theorems about the function definition, automating an otherwise tedious process.
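
As a rough illustration of what a primitively corecursive specification looks like, here is a hedged sketch in Haskell rather than Isabelle/HOL syntax (the names are illustrative only): in a primitively corecursive definition, every corecursive call occurs directly under a constructor, which is the shape of specification the primcorec command is designed to accept.

    -- Corecursion over a codatatype of infinite streams:
    data Stream a = SCons a (Stream a)

    -- 'from n' produces the infinite stream n, n+1, n+2, ...
    -- The corecursive call is guarded by the constructor SCons.
    from :: Integer -> Stream Integer
    from n = SCons n (from (n + 1))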

09:45
Pattern-based Subterm Selection in Isabelle
SPEAKER: unknown

ABSTRACT. This article presents a pattern-based language designed to select (a set of) subterms of a given term in a concise and robust way. Building on this language, we implement a single-step rewriting tactic in the Isabelle theorem prover, which removes the need for obscure "occurrence numbers" for subterm selection.

The language was inspired by the language of patterns of Gonthier and Tassi, but provides an elegant way of handling bound variables and a modular semantics.

08:45-10:15 Session 22D: Novel Directions in QBF Research (QBF)
Location: FH, Hörsaal 2
08:45
A Unified Proof System for QBF Preprocessing
SPEAKER: Marijn Heule

ABSTRACT. For quantified Boolean formulas (QBFs), preprocessing is essential to solve many real-world formulas. The application of a preprocessor, however, has so far prevented the extraction of proofs for the original formula. Such proofs are required to independently validate the correctness of the preprocessor's rewritings and of the solver's result. For universal expansion in particular, proof checking was not possible until now. In this talk, we present a unified proof system based on three simple and elegant quantified resolution asymmetric tautology (QRAT) rules. In combination with an extended version of universal reduction, they are sufficient to efficiently express all preprocessing techniques used in state-of-the-art preprocessors, including universal expansion. Moreover, these rules give rise to new preprocessing techniques. We equip the preprocessor bloqqer with QRAT proof logging and provide a proof checker for QRAT proofs.
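
For orientation, the universal reduction rule of Q-resolution that the QRAT system extends can be stated in its standard textbook form as follows: from a clause $C \vee u$, where $u$ is a universal literal and every existential literal of $C$ is quantified to the left of $u$, derive $C$.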

09:30
On Instantiation-Based Calculi for QBF

ABSTRACT. Several calculi for quantified Boolean formulas (QBFs) exist, but the relations between them are not yet fully understood. This talk defines a novel calculus, which is resolution-based and enables unification of Q-resolution and the expansion-based calculus ∀Exp+Res. These calculi play an important role in QBF solving. This talk shows simulation results for the new calculus and discusses further possible extensions of the calculus.
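
As a small illustration of the expansion step underlying ∀Exp+Res (standard notation from the literature, not specific to this talk): with prefix $\exists x \forall u \exists y$, expanding the universal $u$ replaces the clause $(x \vee u \vee y)$ by its $u=0$ copy $(x \vee y^{0/u})$, while its $u=1$ copy is satisfied by the literal $u$ and disappears; the annotated variables $y^{0/u}$ and $y^{1/u}$ are fresh existential variables, and propositional resolution then operates on the expanded formula.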

08:45-10:15 Session 22E: HOL Workshop (HOL)
Location: FH, CAD 2
08:45
Welcome
SPEAKER: Organisers
09:00
Using HOL4 to Formalize Physical Systems

ABSTRACT. Recently, HOL4 has been successfully used both in the formalization of pure mathematics (e.g., extended real number theory and fractional calculus) and in the verification of engineering systems (e.g., wireless sensor networks, communication protocols). The aim of this talk is twofold. First, we present the perspective of the non-computer-science community, such as physicists and engineers, and offer some suggestions that could help extend the user community of HOL4 into new fields such as physics, biology, and economics; for example, developing an automatic translation from procedural to declarative style proofs, which are closer to informal proofs (one such example is miz3 for HOL Light [1]). Second, we suggest introducing an archive of formal proofs [2] for HOL4, which is an efficient way to maintain large HOL4 scripts. In this way, HOL4 scripts (particularly those related to applications), which are usually available only on the webpages of research groups or individual researchers, can be centralized.

[1] F. Wiedijk, A Synthesis of the Procedural and Declarative Styles of Interactive Theorem Proving, Logical Methods in Computer Science (8), 2012 [http://arxiv.org/abs/1201.3601]

[2] http://afp.sourceforge.net/

09:30
Modernising HOL's documentation

ABSTRACT. Much of HOL's documentation looks and feels old. I propose that we try to modernise the documentation to make it more appealing to, say, masters students who might like to do projects using HOL. I'm thinking that the documentation could be in some new format, e.g. screencasts, a blog, a new well-linked website, wiki or something even better. I hope there will be a fruitful discussion that leads to real improvements.

08:45-10:15 Session 22F: Induction and Discussion on ISR (IFIP-WG16)
Location: FH, Seminarraum 101C
08:45
Decision Procedures for Proving Inductive Theorems without Induction
SPEAKER: Takahito Aoto

ABSTRACT. Automated inductive reasoning for term rewriting has been extensively studied in the literature. Classes of equations and term rewriting systems (TRSs) with decidable inductive validity have been identified and used to automate inductive reasoning. In this talk, a new approach for deciding inductive theorems of TRSs is described. In particular, we give procedures for deciding the inductive validity of equations in some standard TRSs on natural numbers and lists. Our essential idea is to use validity in the initial algebras of TRSs, instead of validity guaranteed by the existence of inductive proofs. Contrary to previous decidability results, our procedures can decide the inductive validity of arbitrary equations for these TRSs automatically and without any induction reasoning, that is, without imposing any syntactic restrictions on the form of the equations.

09:15
Discussion on the International School on Rewriting (ISR)
ABSTRACT.
  • ISR 2014: Status Report
  • ISR 2015: Progress Report
  • ISR 2016: Proposals and Voting
08:45-09:00 Session 22G: Welcome (CLC)
Location: FH, Dissertantenraum E104
09:00-10:15 Session 23A: WPTE Opening and Invited Talk by Andy Gill on HERMIT (WPTE)
Location: FH, Hörsaal 4
09:00
Opening
09:15
HERMIT: An Equational Reasoning Model to Implementation Rewrite System for Haskell
SPEAKER: Andrew Gill

ABSTRACT. HERMIT is a rewrite system for Haskell. Haskell, a pure functional programming language, is an ideal candidate for performing equational reasoning. Equational reasoning, replacing equals with equals, is a tunneling mechanism between different, but equivalent, programs. The ability to be agile in representation and implementation, but retain equivalence, brings many benefits. Post-hoc optimization is one obvious application of representation agility.

What we want to explore is the mechanization of rewriting inside real Haskell programs, enabling the prototyping of new optimizations, the explicit use of types to direct transformations, and larger data refinement tasks than are currently undertaken. Paper-and-pencil program transformations have been published that improve performance in a principled way; indeed some have turned the act of program transformation into an art form. But there is only so far a sheet of paper and a pencil can take you. There are also source code development environments that provide support for refactoring, such as HaRe. These work at the syntactic level, and Haskell is a large and complex language. What we want is mechanization, for examples that are currently undertaken by hand, and for examples that are challenging to perform using current development environments.

In this talk, we overview HERMIT, the Haskell equational reasoning model to implementation tunnel.

HERMIT operates at the Glasgow Haskell compiler's Core level, deep inside GHC, where type information is easy to obtain, and the language being rewritten is smaller. HERMIT provides three levels of support for transformation and prototyping: a strategic programming base with many typed rewrite primitives, a simple shell that can be used to interactively request rewrites and explore transformation possibilities, and a batch language that can mechanize focused, and optionally program-specific, optimizations. We will demonstrate all three of these levels and show how they cooperate.

The explicit aim of the HERMIT project is to explore the worker/wrapper transformation as a specific way of mechanizing data refinement. HERMIT has been successfully used on small examples, efficient reverse, tupling-fib, and many other examples from the literature. We will show two larger and more interesting examples of program transformation using HERMIT. Specifically, we will show the mechanization of the "making a century" program refinement pearl, originally by Richard Bird, and the exploration of datatype alternatives in Graham Hutton's implementation of John Conway's Game of Life.

09:00-10:15 Session 23B: Opening, Invited Talk, and Contributed Talk (GSB)
Location: FH, Seminarraum 138A
09:00
Welcome
SPEAKER: Organizers
09:05
Semantics and Proof Theory in Team Play: the case of Paraconsistent Logics
SPEAKER: Anna Zamansky

ABSTRACT. A useful modular semantics is an important property of any proof system: it can be used to characterize important syntactic properties of the system, provide insights into its underlying logic, and in many cases induce a decision procedure for it. Recent developments in the field of paraconsistent logics are an interesting case which exemplifies the cross-fertilization between semantic explorations and proof theoretical developments. In this talk I will survey the framework of non-deterministic semantics and its proof-theoretical applications, and demonstrate how non-deterministic semantics pushed forward the development of analytic Gentzen-type calculi for the large family of Logics of Formal Inconsistency.

09:50
On the Construction of Analytic Sequent Calculi for Sub-classical Logics
SPEAKER: Ori Lahav

ABSTRACT. We study the question of when a given set of derivable rules in some basic analytic propositional sequent calculus forms itself an analytic calculus. First, a general syntactic criterion for analyticity in the family of pure sequent calculi is presented. Next, given a basic calculus admitting this criterion, we provide a method to construct weaker pure calculi by collecting simple derivable rules of the basic calculus. The obtained calculi are analytic-by-construction. While the criterion and the method are completely syntactic, our proofs are semantic, based on interpretation of sequent calculi via non-deterministic valuation functions. In particular, this method captures calculi for a wide variety of paraconsistent logics, as well as some extensions of Gurevich and Neeman's primal infon logic.

09:00-10:15 Session 23C: Opening and Invited Talk 1 (UNIF)
Location: FH, Seminarraum 104
09:00
Opening
SPEAKER: Temur Kutsia
09:15
Extensible Symbolic System Analysis
SPEAKER: Jose Meseguer

ABSTRACT. Unification and narrowing are a key ingredient not only to solve equations modulo an equational theory, but also to perform symbolic system analysis. The key idea is that a concurrent system can be naturally specified as a rewrite theory (Sig,E,R), where (Sig,E) is an equational theory specifying the system's states as an algebraic data type, and R specifies the system's concurrent, and often non-deterministic, transitions. One can perform such symbolic analysis by describing sets of states as (the instances of) terms with logical variables, and using narrowing modulo E to symbolically perform transitions. Under reasonable conditions on the rewrite theory, this idea can be applied not only to complete reachability analysis, but also to temporal logic model checking. This approach is quite flexible but has some limitations. Could symbolic system analysis techniques be made more extensible and more widely applicable by simultaneously combining the powers of rewriting, narrowing, SMT solving and model checking? We give a progress report on current work aiming at such a unified symbolic approach.

09:00-10:15 Session 23E: Non-standard Analysis (ACL2)
Location: FH, Hörsaal 7
09:00
Formal Verification of Medina's Sequence of Polynomials for Approximating Arctangent
SPEAKER: unknown

ABSTRACT. The verification of many algorithms for calculating transcendental functions is based on polynomial approximations to these functions, often Taylor series approximations. However, computing and verifying approximations to the arctangent function are very challenging problems, in large part because the Taylor series converges very slowly to arctangent---a 57th-degree polynomial is needed to get three decimal places for arctan(0.95). Medina proposed a series of polynomials that approximate arctangent with far faster convergence---a 7th-degree polynomial is all that is needed to get three decimal places for arctan(0.95). We present in this paper a proof in ACL2(r) of the correctness and convergence rate of this sequence of polynomials. The proof is particularly beautiful, in that it uses many results from real analysis. Some of these necessary results were proven in prior work, but some were proven as part of this effort.
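
For reference, the slowly converging series in question is the standard Taylor expansion (textbook material, not part of the paper's contribution): $\arctan(x) = \sum_{k \geq 0} (-1)^k x^{2k+1}/(2k+1)$ for $|x| \leq 1$; near $x = 1$ its terms shrink only like $x^n/n$, which is why such high-degree truncations are needed to approximate arctan(0.95).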

09:30
Equivalence of the Traditional and Non-Standard Definitions of Concepts from Real Analysis
SPEAKER: unknown

ABSTRACT. ACL2(r) is a variant of ACL2 that supports the irrational real and complex numbers. Its logical foundation is based on internal set theory (IST), an axiomatic formalization of non-standard analysis (NSA). Familiar ideas from analysis, such as continuity, differentiability, and integrability, are defined quite differently in NSA---some would argue the NSA definitions are more intuitive. In previous work, we have adopted the NSA definitions in ACL2(r), and simply taken for granted that these are equivalent to the traditional analysis notions, e.g., to the familiar $\epsilon$-$\delta$ definitions. However, we argue in this paper that there are circumstances when the more traditional definitions are advantageous in the setting of ACL2(r), precisely because the traditional notions are classical, so they are unencumbered by IST limitations on inference rules such as induction or the use of pseudo-lambda terms in functional instantiation. To address this concern, we describe a formal proof in ACL2(r) of the equivalence of the traditional and non-standard definitions of these notions.

09:00-10:15 Session 23F: ImmermanFest Opening and Invited Talk (LCC)
Location: FH, Zeichensaal 3
09:00
ImmermanFest Opening Remarks
09:15
27 and still counting: Iterated product, inductive counting, and the structure of P
SPEAKER: Eric Allender

ABSTRACT. This month marks an important anniversary. It was precisely 27 years ago (July 1987) that Neil Immerman announced a proof that NL = coNL. This shook the complexity-theoretic universe, triggering the collapse of multiple hierarchies and spawning a series of aftershocks that still reverberate. This lecture will survey some of these developments, and will also trace the impact of some work of Immerman and Landau, which brought to the fore the central role played by iterated multiplication (in different algebras) in elucidating the structure of subclasses of P.

09:00-10:15 Session 23G: Invited Talk (DTP)
Location: FH, Seminarraum 325/1
09:00
FRP, LTL and GUIs
SPEAKER: Alan Jeffrey

ABSTRACT. Functional Reactive Programming (FRP) is a style of reactive programming whose semantics is given as time-indexed streams of values. Linear-time Temporal Logic (LTL) is a logic indexed by time, with modalities such as "at all times in the future". In this talk, I will discuss the use of constructive LTL as a type system for FRP, and show that functional reactive programs can serve two purposes: as executable programs, and as proof terms for constructive LTL. This style of programming has been implemented in Agda, compiled to JavaScript, and linked against the DOM API for Graphical User Interfaces (GUIs). The result is a programming environment which combines interactive code with proofs of tautologies in temporal logic.

09:00-10:15 Session 23H: Opening and Invited Talk (FWFM)
Location: FH, Seminarraum 134
09:00
Fun With Formal Methods 2014: Opening Remarks

ABSTRACT. Part of the reason for students' and engineers' poor attitude toward Formal Methods is very simple: FM experts do not care about primary education in this field at the early stages of higher education. In particular, many courses on Formal Semantics start with fearful terms like state machine, logic inference, denotational semantics, etc., without elementary explanations of the basic notions.

09:15
Algebraic description of restricted programming and elements of its logics

ABSTRACT. Algebras describing higher-order programming notions in a maximally abstract manner (general algebraic program structures, GAPS) are introduced and investigated. A criterion for when a given programming language can be enriched by a system of program transformations is stated and proved. Specializations of GAPS for fully invertible and injective programs are investigated. Numerous examples are presented.

09:00-10:20 Session 23I (CLC)
Location: FH, Dissertantenraum E104
09:00
TBC

ABSTRACT. TBA

10:00
Extensional Models of Typed Lambda-mu Calculus
SPEAKER: Koji Nakazawa

ABSTRACT. This paper shows that the stream models introduced by Nakazawa and Katsumata can be adapted to a typed setting. A variant of de'Liguoro's type system for an extension of the Lambda-mu calculus, called $\Lambda\mu_{cons}$, is sound and complete with respect to the stream models. It is proved that any individual stream model with whole function spaces and infinite domains for base types characterizes the extensional equality of $\Lambda\mu_{cons}$. This result corresponds to Friedman's theorem for the simply-typed lambda calculus and the full type hierarchies.

09:15-10:15 Session 24A: Joint DCM / TermGraph Invited Talk (DCM and TermGraph)
Location: FH, Seminarraum 325/2
09:15
Numeral Systems in the Lambda Calculus
SPEAKER: Ian Mackie

ABSTRACT. We investigate numeral systems in the λ-calculus; specifically in the linear λ-calculus, where copying and erasing are not permitted. Our interest is in finding efficient (and where possible, constant-time) representations for successor, predecessor, addition, subtraction and test for zero. We begin by recalling some well-known systems, before going on to the linear case, where we give several systems with different properties. We conclude with a characterisation of linear numeral systems.
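
For background, the best-known numeral system is the Church numerals (standard material, recalled here for orientation): $\overline{n} = \lambda f.\lambda x.f^n x$, with successor $\lambda n.\lambda f.\lambda x.f(nfx)$ and addition $\lambda m.\lambda n.\lambda f.\lambda x.mf(nfx)$. Note that addition uses the bound variable $f$ twice; exactly this kind of copying is unavailable in the linear λ-calculus, which is what makes linear numeral systems a non-trivial design problem.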

09:15-10:15 Session 24B: Parallel SAT (POS)
Location: FH, Hörsaal 3
09:15
Dolius: A Distributed Parallel SAT Solving Framework

ABSTRACT. Over the years, parallel SAT solving has become more and more important. However, most state-of-the-art parallel SAT solvers are portfolio-based: they run the same solver several times with different parameters. In this paper, we propose a tool called Dolius, based mainly on the divide-and-conquer paradigm. In contrast to most current efficient parallel engines, Dolius does not need shared memory, can be distributed, and scales well when a large number of computing units is available. Furthermore, our tool provides an API allowing any SAT solver to be plugged in in a simple way.

09:45
Validating Unsatisfiability Results of Clause Sharing Parallel SAT Solvers

ABSTRACT. As satisfiability (SAT) solver performance has improved, so has solver complexity, which makes it more likely that SAT solvers contain bugs. One important source of increased complexity is clause sharing in parallel SAT solvers. SAT solvers can emit a proof of unsatisfiability to gain confidence that their results are correct. Such proofs must contain deletion information in order to be checked efficiently. Computing deletion information is easy and cheap for parallel solvers without clause sharing, but tricky for parallel solvers with clause sharing.

We present a method to generate unsatisfiability proofs from clause sharing parallel SAT solvers. We show that the overhead of our method is small and that the produced proofs can be validated in a time similar to the solving (CPU) time. Proofs produced by parallel solvers without clause sharing, however, can be checked in a time similar to the solving (wall-clock) time. This raises the question whether our method can be improved so that proofs from parallel solvers with clause sharing can be checked in a time comparable to that needed for proofs from solvers without clause sharing.

09:15-10:15 Session 24C: Contributed Talks: Proof Complexity (PC)
Location: FH, Seminarraum 101B
09:15
Quasipolynomial Size Frege Proofs of Frankl's Theorem on the Trace of Finite Sets

ABSTRACT. We extend results of Bonet, Buss and Pitassi on Bondy's Theorem and of Nozaki, Arai and Arai on Bollobás's Theorem by proving that Frankl's Theorem on the trace of sets has quasipolynomial size Frege proofs. For constant values of the parameter t, we prove that Frankl's Theorem has polynomial size, constant depth proofs from instances of the pigeonhole principle.

09:45
n^O(log log n) size monotone proofs of the weak pigeonhole principle
SPEAKER: Anupam Das

ABSTRACT. We present n^O(log log n)-size proofs in the (tree-like) monotone sequent calculus of the pigeonhole principle with (1 + ε)n pigeons and n holes, for ε = 1/polylog n. This improves on the bound of n^O(log n) inherited from proofs of the usual pigeonhole principle, with n + 1 pigeons and n holes, by Atserias et al., and supports the more general conjecture that the monotone sequent calculus polynomially simulates Hilbert-Frege systems.
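
For context, the pigeonhole principle referred to here is the standard tautology over variables $p_{ij}$ ("pigeon $i$ sits in hole $j$"): if every pigeon occupies some hole, $\bigwedge_i \bigvee_j p_{ij}$, then some hole receives two pigeons, $\bigvee_j \bigvee_{i \neq i'} (p_{ij} \wedge p_{i'j})$. Stated this way the principle is negation-free, which is what makes it expressible in the monotone sequent calculus.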

09:15-10:15 Session 24D (LSB)
Location: FH, Seminarraum 134A
09:15
An algebraic approach for inferring and using symmetries in rule-based models
SPEAKER: Jerome Feret

ABSTRACT. Symmetries arise naturally in rule-based models, and in various forms. Besides automorphisms between site graphs, which are usually built within the semantics, symmetries can take the form of pairs of sites having the same capability of interactions, of some protein variants behaving in exactly the same way, or of some linear, planar, or 3D molecular complexes which can be seen modulo permutations of their axes and/or mirror-image symmetries.

In this talk, we propose a unifying handling of symmetries in Kappa. We follow an algebraic approach, based on the single pushout semantics of Kappa. We model classes of symmetries as finite groups of transformations between site graphs which are compatible with the notion of embedding (that is to say, it is always possible to restrict a transformation that is applied to the co-domain of an embedding to the domain of this embedding), and we provide some assumptions that ensure that symmetries are compatible with pushouts. Then, we characterize when a set of rules is symmetric with respect to a group of transformations and, in such a case, we give sufficient conditions so that this group of transformations induces a forward bisimulation and/or a backward bisimulation over the population semantics.

09:15-10:15 Session 24E: Invited talk (LOLA)
Location: FH, Seminarraum 107
09:15
De haut en bas: relating high-level and low-level abstractions
SPEAKER: Nick Benton

ABSTRACT. We need to reason about low-level programs because they are what actually runs on our machines. As low-level languages tend to be messy and complex, so does proving things about them. But a slightly subtler and more interesting problem is that the things we want to prove about such programs are often phrased in terms of high-level notions, such as being a valid implementation of a particular function, or behaving like something with a particular high-level type. It can be far from obvious how to even translate such specifications into terms that make sense at the low level, let alone verify that programs meet them. I’ll discuss the problem, and ways it can be addressed, drawing on examples from compositional correctness and type soundness of compilers, and the design of logics for low-level programs.

09:25-10:15 Session 25: Opening and invited talk (IWC)
Location: FH, Seminarraum 303
09:25
On the Formalization of Lambda-Calculus Confluence and Residuals
10:15-10:45 Coffee Break
10:45-12:45 Session 26A: Invited Talks (LCC)
Location: FH, Zeichensaal 3
10:45
Some Reflections on Definability and Complexity
11:45
Crane Beach Revisited
10:45-12:45 Session 26B (Linearity)
Location: FH, Seminarraum 136
10:45
Undecidability of Multiplicative Subexponential Logic

ABSTRACT. Subexponential logic is a variant of linear logic with a family of exponential connectives---called subexponentials---that are indexed and arranged in a pre-order. Each subexponential has or lacks associated structural properties of weakening and contraction. We show that classical propositional multiplicative linear logic extended with one unrestricted and two incomparable linear subexponentials can encode the halting problem for two register Minsky machines, and is hence undecidable.

11:15
Superstructural Reversible Logic

ABSTRACT. Linear logic refines conventional logic by being a "resource-conscious" logic. We analyze the notion of resource maintained by linear logic proofs and argue that it only focuses on a "spatial" dimension, omitting another important "temporal" dimension. We generalize (or further refine, using even more structural rules) linear logic to maintain combined spatial and temporal resources. The resulting logic is reversible and thus allows a study of reversible computation from a proof-theoretic perspective.

11:45
A Linear/Producer/Consumer model of Classical Linear Logic

ABSTRACT. This paper defines a new proof- and category-theoretic framework for classical linear logic that separates reasoning into one linear regime and two persistent regimes corresponding to ! and ?. The resulting linear/producer/consumer (LPC) logic puts the three classes of propositions on the same semantic footing, following Benton's linear/non-linear formulation of intuitionistic linear logic. Semantically, LPC corresponds to a system of three categories connected by adjunctions that reflect the linear/producer/consumer structure. The paper's metatheoretic results include admissibility theorems for the cut and duality rules, and a translation of the LPC logic into category theory. The work also presents several concrete instances of the LPC model, including one based on finite vector spaces.

12:15
Cut Elimination in Multifocused Linear Logic
SPEAKER: unknown

ABSTRACT. We study cut elimination for a multifocused variant of linear logic in the sequent calculus. The multifocused normal form of proofs yields problems that do not appear in the standard focused system, related to the constraints in grouping rule instances inside focusing phases. We show that cut elimination can be performed in a sensible way even though the proof requires specific lemmas to deal with multifocusing phases, and discuss the difficulties arising with cut elimination when considering normal forms of proofs in linear logic.

10:45-13:00 Session 26C: Concurrency, Unfolding, Pi-Calculus, Reaction Networks (WPTE)
Location: FH, Hörsaal 4
10:45
Verifying Optimizations for Concurrent Programs

ABSTRACT. While program correctness for compiled languages depends fundamentally on compiler correctness, compiler optimizations are not usually formally verified due to the effort involved, particularly in the presence of concurrency. In this paper, we present a framework for stating and reasoning about compiler optimizations and transformations on programs in the presence of relaxed memory models. The core of the framework is the PTRANS specification language, in which program transformations are expressed as rewrites on control flow graphs with temporal logic side conditions. We demonstrate our technique by verifying the correctness of a redundant store elimination optimization in a simple LLVM-like intermediate language, relying on a theorem that allows us to lift single-thread simulation relations to simulations on multithreaded programs.

11:15
Inverse Unfold Problem and Its Heuristic Solving

ABSTRACT. Unfold/fold transformations have been widely studied in various programming paradigms and are used in program transformation, theorem proving, and so on. This paper shows, by means of an example, that restoring a one-step unfolding is not easy, i.e., a challenging task, since some rules used by the unfolding may be lost. We formalize this problem by regarding one-step program transformation as a relation. Next we discuss some issues in a specific framework, called pure-constructor systems, which constitute a subclass of conditional term rewriting systems. We show that the inverse of $T$ preserves rewrite relations if $T$ preserves rewrite relations and the signature. We propose a heuristic procedure to solve the problem and show examples where it succeeds. We then improve the procedure and show examples for which the improvement takes effect.

11:45
Short break
12:00
Structural Rewriting in the Pi-Calculus
SPEAKER: David Sabel

ABSTRACT. We consider reduction in the synchronous pi-calculus with replication, without sums. Usual definitions of reduction in the pi-calculus use a closure w.r.t. structural congruence of processes. In this paper we operationalize structural congruence by providing a reduction relation for pi-processes which also performs necessary structural conversions explicitly by rewrite rules. As we show, a subset of structural congruence axioms is sufficient. We show that our rewrite strategy is equivalent to the usual strategy including structural congruence w.r.t. the observation of barbs and thus w.r.t. may- and should-testing equivalence in the pi-calculus.
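
For orientation, the structural congruence axioms in question include, in their standard presentation, $P \mid Q \equiv Q \mid P$, $(P \mid Q) \mid R \equiv P \mid (Q \mid R)$, $P \mid 0 \equiv P$, $!P \equiv P \mid !P$, and scope extrusion $((\nu x)P) \mid Q \equiv (\nu x)(P \mid Q)$ provided $x$ is not free in $Q$; the contribution described above is to orient a sufficient subset of such equations as explicit rewrite rules.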

12:30
Attractor Equivalence: An Observational Semantics for Reaction Networks
SPEAKER: unknown

ABSTRACT. We study observational semantics for networks of chemical reactions as used in systems biology. Reaction networks without kinetic information, as we consider them here, can be identified with Petri nets. We present a new observational semantics for reaction networks that we call attractor equivalence. The main idea of attractor equivalence is to observe reachable attractors and the reachability of an attractor divergence in all possible contexts. Attractor equivalence can support powerful simplifications for reaction networks, as we illustrate with the example of the Tet-On system. Alternative semantics based on bisimulations or traces, in contrast, do not support all the needed simplifications.

10:45-12:45 Session 26D: Isabelle tutorial for beginners II (Isabelle)
Location: FH, Zeichensaal 1
10:45
Isabelle tutorial for beginners (part 2)
SPEAKER: Tobias Nipkow

ABSTRACT. This is a hands-on tutorial introduction to the proof assistant Isabelle/HOL. Slide presentations alternate with practical exercises: participants should bring their own laptop with Isabelle installed. There will be seasoned Isabelle tutors to help you use the system. The tutorial takes 3.5h and has three sections:

  1. Recursion and induction
    The initial introduction concentrates on defining small functional programs and proving properties about them by induction.
  2. Logic and Automation
    This section introduces more of the logic and the available automation, including the famous Sledgehammer that harnesses external automatic theorem provers.
  3. Structured proofs
    This section introduces Isabelle's structured proof language Isar, which combines readability (via block-structured natural-deduction proofs) with conciseness (via the rich proof automation infrastructure).

See also Programming and Proving with Isabelle/HOL.

NOTE: Please install this Isabelle release candidate (http://isabelle.in.tum.de/website-Isabelle2014-RC0/) before coming to the tutorial.

10:45-12:45 Session 26E: Isabelle workshop (Isabelle)
Location: FH, Seminarraum 138C
10:45
The CoCon Experiment

ABSTRACT. We report on the experience of designing, formally verifying and using CoCon, a conference management system with verified document confidentiality.

10:55
The CAVA Automata Library
SPEAKER: Peter Lammich

ABSTRACT. We report on the graph and automata library that is used in the fully verified LTL model checker CAVA. Most components of CAVA use some type of graphs or automata.

A common automata library simplifies assembly of the components to a working model checker and reduces redundancy.

The CAVA automata library provides a hierarchy of graph and automata classes, together with some standard algorithms. Its object-oriented design allows for sharing of algorithms, theorems, and implementations between the different types of graphs, and also simplifies adding new graph types. Moreover, it is integrated into the Automatic Refinement Framework, supporting automatic refinement of the abstract automata types to efficient data structures.

Note that the CAVA automata library is work in progress. Currently, it is very specifically tailored towards the requirements of the CAVA model checker. Nevertheless, the formalization techniques allow an extension of the library to a wider scope. Moreover, they are not limited to graph libraries, but apply to class hierarchies in general.

11:20
From Types to Sets in Isabelle/HOL

ABSTRACT. HOL types are naturally interpreted as nonempty sets—this intuition is reflected in the type definition rule for the HOL-based systems (including Isabelle/HOL), where a new type can be defined whenever a nonempty set is exhibited. However, in HOL this definition mechanism cannot be applied inside proof contexts. We analyze some undesired consequences of this limitation and propose a more expressive type-definition rule that addresses it. The new expressive power opens the opportunity for a package that relativizes type-based statements to more flexible set-based variants—to streamline this process, we further implement a rule that transforms the implicit type-class constraints into explicit assumptions. Moreover, our tool is an interesting use case of Lifting and Transfer.

11:45
Towards abstract and executable multivariate polynomials in Isabelle

ABSTRACT. This work-in-progress report envisions a library for multivariate polynomials developed jointly by experts from computer theorem proving (CTP) and computer algebra (CA). The urgency of verified algorithms has been recognised in the field of CA, but the cultural gap to CTP is considerable; CA users expect high usability and efficiency. This work collects the needs of CA experts and reports on the design of a proof-of-concept prototype in Isabelle/HOL. The CA requirements have not yet been fully settled, and the development of the prototype is still at an early stage. The authors hope for lively discussions at the Isabelle Workshop.

12:15
Tutorial: Isabelle/jEdit for seasoned Isabelle users

ABSTRACT. Isabelle/jEdit is the default user interface and Prover IDE (PIDE) for Isabelle. After several years of development of Isabelle/PIDE concepts and implementation of Isabelle/Scala infrastructure, Isabelle/jEdit was first released as a proper application in October 2011. Each subsequent release has made significant progress, so that we can already speak of about five generations of Isabelle Prover IDEs today.

This short tutorial is meant to help seasoned users of Isabelle, who have not yet discovered the full potential of the jEdit text editor and the Isabelle/jEdit Prover IDE, to get up to date. The demonstration will show established functionality of Isabelle2013-2 (December 2013).

10:45-12:15 Session 26F: Computing with Graphs (TermGraph)
Location: FH, Seminarraum 325/2
10:45
Needed Computations Shortcutting Needed Steps
SPEAKER: Sergio Antoy

ABSTRACT. We define a compilation scheme for a constructor-based strongly-sequential graph rewriting system which shortcuts some needed steps. The object code is a constructor-based graph rewriting system as well that is normalizing for the original system when using an innermost strategy. Hence, the object code can be easily implemented by eager functions in a variety of programming languages. Then, we modify our object code in a way that avoids totally or partially the construction of the contracta of some needed steps of a computation. When computing normal forms in this way, both memory consumption and execution time are reduced compared to ordinary rewriting computations in the original system.

11:15
Proving Termination of Unfolding Graph Rewriting for General Safe Recursion
SPEAKER: Naohi Eguchi

ABSTRACT. We present a new termination proof and complexity analysis of unfolding graph rewriting which is a specific kind of infinite graph rewriting expressing the general form of safe recursion. We introduce a termination order over sequences of terms together with an interpretation of term graphs into sequences of terms. Unfolding graph rewrite rules expressing the general safe recursion can be embedded into the termination order by the interpretation, yielding the polynomial runtime complexity.

11:45
Nested Term Graphs

ABSTRACT. We report on work in progress on `nested term graphs' for formalizing higher-order terms (e.g. finite or infinite lambda-terms), including those expressing recursion (e.g. terms in the lambda-calculus with letrec). The idea is to represent the nested scope structure of a higher-order term by a nested structure of term graphs.

Based on a signature that is partitioned into atomic and nested function symbols, we define nested term graphs both intensionally, as tree-like recursive graph specifications that associate nested symbols with usual term graphs, and extensionally, as enriched term graph structures. These definitions induce corresponding notions of bisimulation between nested term graphs. Our main result states that nested term graphs can be implemented faithfully by first-order term graphs.

10:45-12:50 Session 26G: Contributed Talks (GSB)
Location: FH, Seminarraum 138A
10:45
Comparing Sequent Calculi via Hilbert-style Axioms

ABSTRACT. In this talk I will consider the question of how to compare and evaluate the expressive strength of different formats of Gentzen-style systems. More precisely, the general question is which logics can be captured in the different frameworks. Of course some results are clear: e.g. since every standard sequent can be seen as a hypersequent with only one component or as a nested sequent without the nesting, every logic which can be captured in the framework of standard sequent calculus can be captured in the framework of hypersequent or nested sequent calculi as well. Other results are given by explicitly translating one framework into another, see e.g. [Goré and Ramanayake:2012].

While such results certainly establish important connections between the different frameworks, they only provide information on the expressive strength of one Gentzen-style framework relative to another one. Yet we are also interested in the absolute expressive strength and in particular the limits of what can be expressed in the different frameworks in a meaningful way. The qualification "in a meaningful way" here is important, since of course given any set A of formulae, the sequent calculus with ground sequents => a for every a in A will serve to derive exactly the formulae in A, even in a cut-free way. Thus to make this question non-trivial we also need to restrict the format of the rules in the different frameworks.

Comparing the different frameworks and formats of rules finally can be done by identifying characterisations for them in a single common framework. Since we would like to compare extensions of sequent calculus, this should be a framework outside the class of Gentzen-style frameworks, and ideally the characterisations should provide a way of quickly determining which Gentzen-style framework is suitable for a particular logic. For this purpose here we choose the framework of Hilbert-style axiom systems. This gives a high degree of flexibility and allows in particular to capture non-normal modal logics and logics based on other than classical propositional logic. A characterisation of a specific Gentzen-style framework together with a specific format of rules then is given by an ideally purely syntactically defined class of axioms together with translations from rules of the considered format and framework into axioms of this class and vice versa. The prime example for this method is Kracht's result characterising those modal temporal logics which can be captured by structural rules in a display calculus satisfying Belnap's conditions for cut elimination as exactly the logics axiomatisable by primitive axioms [Kracht:1996]. Another example is the characterisation of structural rules in a sequent or hypersequent calculus for substructural logics by certain levels of the substructural hierarchy from [Ciabattoni et al:2008]. Apart from providing a first step for the construction of analytic calculi from Hilbert axioms, this method also yields a way to show limitative results: By showing that a certain logic cannot be axiomatised by axioms of a specific (syntactically given) form we can show that the logic cannot be captured by the framework and format of rules corresponding to this class of axioms.

In this talk I will discuss some recent results in this spirit concerning logical rules of different formats in the frameworks of standard sequent calculi and hypersequent calculi. The logics under consideration are propositional modal logics such as Gödel-Löb logic GL or the logic of 2-transitive Kripke frames, and more generally, extensions of classical or intuitionistic propositional logic with additional, not necessarily unary, connectives.

11:10
Sequent Systems for Classical Modal Logics

ABSTRACT. This paper develops sequent calculi for several classical modal logics. Utilizing a polymodal translation of the standard modal language, we are able to establish a base system for the minimal classical modal logic from which we generate extensions in a modular, and uniform, manner. We also briefly suggest potential applications to epistemic logic.

11:35
Lyndon Interpolation for the Modal Mu-Calculus

ABSTRACT. We prove Lyndon interpolation for the modal mu-calculus by applying so-called circular proofs. One of the proof systems for the modal mu-calculus is based on non-well-founded derivation trees with a global condition which roughly says that in each infinite branch, there must be an outermost greatest fixed point unfolded infinitely often. We analyse regular proof trees (so-called circular proofs) in this proof system and obtain Lyndon interpolation for the modal mu-calculus.

12:00
On cuts and cut-elimination in modal fixed point logics
SPEAKER: Thomas Studer

ABSTRACT. We present recent developments and discuss open questions concerning syntactic cut-elimination for modal fixed point logics.

12:25
Display-type calculi and their cut elimination metatheorem

ABSTRACT. The range of non-classical logics has been rapidly expanding, driven by influences from other fields which have opened up new opportunities for applications. However, the hurdles preventing a standard proof-theoretic development for these logics are due precisely to some of their defining features which make them suitable for applications, such as e.g. their not being closed under uniform substitution, or the fact that (the semantic interpretations of) key connectives are not adjoints.

These difficulties have caused the existing proposals in the literature to be often ad hoc, not easily generalisable, and more generally lacking a smooth proof-theoretic behaviour. In particular, the difficulty in smoothly transferring results from one logic to another is a problem in itself, since these logics typically come in large families (consider for instance the family of dynamic logics), and hence proof-theoretic approaches which uniformly apply to each logic in a given family are in high demand.

In this talk we focus on the core technical aspects of a proof-theoretic methodology and set-up closely linked to Belnap's display logic and Sambin's basic logic. The present set-up, which we refer to as display-type calculi, generalizes display calculi in two aspects: by allowing multi-type languages, and by dropping the full display property. The generalisation to a multi-type environment makes it possible to introduce specific tools enhancing expressivity, which have proved useful e.g. for a smooth proof-theoretic treatment of multi-modal and dynamic logics. The generalisation to a setting in which full display property is not required makes it possible to account for logics which admit connectives which are neither adjoints nor residuals.

One technical aspect which guarantees the cut elimination meta-theorem to go through for display-type calculi, even in the absence of the full display property, concerns the strengthening of the separation property (requiring principal formulas in introduction rules to appear in isolation) to the visibility property. Visibility requires all active formulas in introduction rules to occur in isolation. This property was recognized to be crucial for the cut elimination theorem of basic logic. However, in the present set-up of display-type calculi, visibility is also weakened, in the sense that, in order to account for logics that are not closed under uniform substitution, principal formulas in axioms are not required to occur in isolation.

In the proposed talk, we will illustrate the basic principles of the multi-type environment, and also how the above combination of weakenings and strengthenings of the separation property concurs to guarantee the cut elimination meta-theorem for display-type calculi. Time permitting, we will also discuss some difficulties that still arise in the case of PDL and some ideas towards treating predicative logics.

10:45-13:00 Session 26H: QBF Proofs and Certificates, DQBF (QBF)
Location: FH, Hörsaal 2
10:45
On the Relation between Resolution Calculi for QBFs and First-order Formulas
SPEAKER: Uwe Egly

ABSTRACT. Quantified Boolean formulas (QBFs) generalize propositional formulas by admitting quantifications over propositional variables. QBFs can be viewed as (restricted) formulas of first-order predicate logic, and easy translations from QBFs to first-order formulas exist. We analyze the effect of such a translation on the complexity of resolution proofs. More precisely, we show that there are classes $(\Phi_i)_{i\geq 1}$ of QBFs where any Q-resolution refutation of $\Phi_i$ has length super-polynomial in $i$, but the translated version of $\Phi_i$ possesses a short resolution refutation. Moreover, we discuss how first-order predicate logic can be used to succinctly represent refutations of QBFs obtained in different QBF calculi.

11:15
Resolution Paths and Term Resolution

ABSTRACT. We define Q(D)-term resolution, a family of proof systems for true Quantified Boolean Formulas (QBFs). These systems are generalizations of term resolution that capture the proof system(s) used by DepQBF to generate proofs of true formulas. DepQBF implements a version of the QCDCL algorithm that is parameterized by a so-called dependency scheme, a mapping that associates each formula with a binary relation on its variables that represents potential variable dependencies. Similarly, Q(D)-term resolution is parameterized by a dependency scheme D. We show that Q(Dres)-term resolution is sound, where Dres is the so-called Resolution-path Dependency Scheme. This entails soundness of Q(Dst)-term resolution, where Dst is the Standard Dependency Scheme currently implemented in DepQBF.

11:45
Efficient Extraction of QBF (Counter)models from Long-Distance Resolution Proofs

ABSTRACT. Resolution is a fundamental part of modern search and learning based QBF solvers, and the extraction of Skolem-function models and Herbrand-function countermodels from QBF resolution proofs is a key enabler for various applications. Although long-distance resolution (LQ-resolution) came into existence a decade ago, its superiority to short-distance resolution (Q-resolution) was not recognized until recently. It remains open whether there exists a polynomial time algorithm for model and countermodel extraction from LQ-resolution proofs, although an efficient algorithm does exist for the Q-resolution counterparts. This paper settles this open problem affirmatively by constructing a linear-time extraction procedure. Experimental results show the distinct benefits of the proposed method in extracting high quality certificates.

12:15
Dependency Quantified Boolean Formulas - Challenges, Applications, First Lines of Attack

ABSTRACT. We will give an introduction to Dependency Quantified Boolean Formulas (DQBF), which generalize Quantified Boolean Formulas (QBF). After briefly reviewing the complexity of the problem, we will present the partial equivalence checking problem (PEC), which requires general DQBF formulations for exact solutions. To solve PEC (which checks whether a given partial implementation of a combinational circuit can still be extended to a complete design that is equivalent to a given full specification), we give a linear transformation from PEC to the question whether a DQBF is satisfied. We present first algorithmic solutions to the satisfiability of DQBF. Preliminary experimental results show the feasibility of our approach, as well as the inaccuracy of the QBF approximations that were used for deciding PEC so far.

10:45-13:00 Session 26I: HOL Workshop (HOL)
Location: FH, CAD 2
10:45
Writing proof automation for HOL4

ABSTRACT. The HOL-to-CakeML translator, which is described in Myreen and Owens's ICFP'12 paper, is a nice self-contained piece of proof automation. I propose describing its implementation. If the audience consists mostly of beginners, this talk can be geared towards a tutorial-style introduction to how proof automation for HOL can be implemented. If, on the other hand, the audience consists mostly of HOL experts, then this talk can serve as a discussion of methods by which proof automation is constructed.

11:15
Hack Session 1
SPEAKER: Everyone
10:45-11:45 Session 26J: Regular talks (IWC)
Location: FH, Seminarraum 303
10:45
Certification of Confluence Proofs using CeTA

ABSTRACT. CeTA was originally developed as a tool for certifying termination proofs, which have to be provided as certificates in the CPF format. Its soundness is proven as part of IsaFoR, the Isabelle Formalization of Rewriting. By now, CeTA can also be used for certifying confluence and non-confluence proofs. In this system description, we give a short overview of what kinds of proofs are supported and what information has to be given in the certificates. As we will see, only a small amount of information is required, and so we hope that CSI will not remain the only confluence tool that can produce certificates.

11:15
Confluence and Critical-Pair-Closing Systems
SPEAKER: Nao Hirokawa

ABSTRACT. In this note we introduce critical-pair-closing systems, which are aimed at analyzing confluence of term rewriting systems. Based on this notion, two new confluence criteria are presented: one is a generalization of weak orthogonality based on relative termination, and the other is a partial solution to RTA Open Problem #13.

10:45-11:00 Session 26K: ORE Opening (ORE)
Location: FH, Seminarraum 101A
10:45-13:00 Session 26L (UNIF)
Location: FH, Seminarraum 104
10:45
Unification Modulo Common List Functions
SPEAKER: unknown

ABSTRACT. Reasoning about data types is an important research area with many applications, such as formal program verification. Virtually all commonly used programming languages use lists or arrays, and for certain Logic Programming languages, formal analysis via unification is a natural fit. Lists are also useful in the study of satisfiability modulo theories, to which unification has strong connections. In this paper, we investigate the unification problem modulo theories of lists. The different theories are obtained by considering observer functions of increasing complexity. These observer functions are right cons, reverse, and fold right (reduce). In practice reduce is not a first-order function; we turn it into a first-order function by treating the binary function to be "folded" over the list as an uninterpreted function, i.e., as a constructor.
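
To give the flavour of such a theory (one natural presentation; the paper's exact axiomatization may differ), right cons and reverse over the constructors nil and cons can be specified by:

    rcons(nil, x)        = cons(x, nil)
    rcons(cons(y, l), x) = cons(y, rcons(l, x))
    rev(nil)             = nil
    rev(cons(x, l))      = rcons(rev(l), x)

Unification modulo such a theory then asks, for instance, for substitutions for L solving rev(L) =? cons(a, nil), which here has the solution L = cons(a, nil).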

11:15
Matching with respect to general concept inclusions in the Description Logic EL
SPEAKER: unknown

ABSTRACT. Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was then to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we were able to solve the general case: we can show that matching in EL w.r.t. general TBoxes is NP-complete. We also determine some tractable variants of the matching problem.

11:45
Unification in the normal modal logic Alt1
SPEAKER: unknown

ABSTRACT. In the ordinary modal language, the normal modal logic Alt1 is the least normal logic containing the formula <>x->[]x. It can also be defined as the normal modal logic determined by the class of all deterministic frames. The unification problem in Alt1 is to determine, given a formula F(x1,...,xn), whether there exist formulas G1,...,Gn such that F(G1,...,Gn) is in Alt1. We demonstrate that the unification problem in Alt1 is decidable in polynomial space.

12:15
On Asymmetric Unification and the Combination Problem in Disjoint Theories (Extended Abstract)
SPEAKER: unknown

ABSTRACT. Asymmetric unification is a new paradigm for unification modulo theories that introduces irreducibility constraints on one side of a unification problem. It has important applications in symbolic cryptographic protocol analysis. However many facets of asymmetric unification that are of particular interest, including its behavior under combinations of disjoint theories, remain poorly understood. In this paper we give an overview of a new formulation of the method for unification in the combination of disjoint equational theories developed by Baader and Schulz that gives a new unification method for asymmetric unification in the combination of disjoint theories.

12:35
Hierarchical Combination of Matching Algorithms (Extended Abstract)
SPEAKER: unknown

ABSTRACT. We present a new algorithm for the matching problem in the combination of non-disjoint equational theories. The method is an extension of our recent work on the hierarchical combination of unification algorithms.

10:45-13:00 Session 26M: Analysis (POS)
Location: FH, Hörsaal 3
10:45
Post Mortem Analysis of SAT Solver Proofs
SPEAKER: Laurent Simon

ABSTRACT. Conflict-Driven Clause Learning (CDCL) algorithms are well known from an engineering point of view. Thanks to Minisat, their designs are well understood, and most of their implementations follow the same ideas, with essentially the same components: the same heuristics, fast restarts, the same learning mechanism. However, their efficiency comes with an important drawback: they have become more and more complex systems that are harder and harder to handle. Unfortunately, only a few works focus on understanding them rather than improving them. In most cases, such studies are based on a generate-and-test pattern: an idea is added to an existing solver, and if it improves the solver's efficiency, the idea is published and kept. In this paper, we analyse ``{\em post-mortem}'' the proofs given by one typical CDCL solver, Glucose. The originality of our approach is that we consider it only as a {\em resolution proof} builder; we then analyze some of the proof characteristics on a set of selected UNSAT instances, by shuffling each of them 200 times. We particularly focus on trying to characterize useless and useful clauses in the proof, as well as proof shapes. We also show that, despite their incredible efficiency, roughly 90\% of the time spent in a CDCL solver is useless for the final proof.

11:15
Formula partitioning revisited
SPEAKER: Zoltan Mann

ABSTRACT. Dividing a Boolean formula into smaller independent sub-formulae can be a useful technique for accelerating the solution of Boolean problems, including SAT and #SAT. Nevertheless, and despite promising early results, formula partitioning is hardly used in state-of-the-art solvers. In this paper, we show that this is rooted in a lack of consistency in the usefulness of formula partitioning techniques. In particular, we evaluate two existing and one novel partitioning model, coupled with two existing and two novel partitioning algorithms, on a wide range of benchmark instances. Our results show that there is no one-size-fits-all solution: for different formula types, different partitioning models and algorithms are the most suitable. While these results might seem negative, they help to improve our understanding of formula partitioning; moreover, the findings also give some guidance as to which method to use for what kinds of formulae.

11:45
New CNF Features and Formula Classification

ABSTRACT. In this paper, we try to bridge the gap between using a SAT solver as a black box and still exploiting its configuration space. We propose new features that can be extracted quickly from a CNF formula, compared to the existing features of SATzilla. We then compare these features with the known ones and construct a black-box SAT solver that selects a configuration based on the features, using a classifier built on random decision forests. Experiments on application and hand-crafted benchmarks show that, on the one hand, the constructed classifier improves the performance of the SAT solvers; on the other hand, the set of new features can be computed very quickly and results in a more precise classification.

12:15
Typical-case complexity and the SAT competitions
SPEAKER: Zoltan Mann

ABSTRACT. The aim of this paper is to gather insight into the typical-case complexity of the Boolean Satisfiability (SAT) problem by mining the data from the SAT competitions. Specifically, the statistical properties of the SAT benchmarks and their impact on complexity are investigated, as well as connections between different metrics of complexity. While some of the investigated properties and relationships are "folklore" in the SAT community, this study aims at scientifically showing what is true from the folklore and what is not. The standard approach of the SAT competitions is to use benchmarks to assess the efficiency of the submitted solvers; this talk takes a complementary point of view: using the solvers to assess the complexity of the benchmarks.

10:45-11:45 Session 26N: Invited Talk (Albert Atserias) (PC)
Location: FH, Seminarraum 101B
10:45
Weak Pigeonhole Principles, Circuits for Approximate Counting, and Bounded-Depth Proofs

ABSTRACT. I will start this talk by giving an overview of the status of the weak pigeonhole principle (WPHP) in proof complexity and the connection between this problem and some fundamental questions on the computational complexity of approximate counting. Then I will discuss some recent progress on the main remaining question about WPHP, namely whether the known quasipolynomial-size depth-2 proof is optimal in terms of size. We show that a relativized version of the WPHP that also has quasipolynomial-size depth-2 proofs does indeed require quasipolynomial size to prove in depth-2. To our knowledge, this gives the first example of a natural principle with matching but superpolynomial upper and lower bounds in a natural proof system.

10:45-13:00 Session 26P: Theorem Proving and MOOCs (IFIP-WG16)
Location: FH, Seminarraum 101C
10:45
Rewriting, Proofs and Transition Systems
SPEAKER: Gilles Dowek

ABSTRACT. We give a new proof of the decidability of reachability in alternating pushdown systems, showing that it is a simple consequence of a cut-elimination theorem for some logic.

11:30
Semantically-Guided Goal-Sensitive Theorem Proving

ABSTRACT. SGGS, for Semantically-Guided Goal-Sensitive theorem proving, is a new inference system for first-order logic. It was inspired by the idea of generalizing to first-order logic the model-based style of the Davis-Putnam-Logemann-Loveland (DPLL) procedure for propositional logic. Model-based reasoning is a key feature of SAT solvers, SMT solvers, and model-constructing decision procedures for specific theories, and a crucial ingredient to their practical success. However, model-based reasoning in first-order logic is challenging, because the logic is only semi-decidable, the search space is infinite, and model representation is harder than in the propositional case. SGGS meets the challenge by realizing a seemingly rare combination of properties: it is model-based a la DPLL; semantically guided by an initial interpretation, to avoid blind guessing in an infinite search space; proof confluent, to avoid backtracking, which may be cumbersome for first-order problems; goal-sensitive, which is important when there are many axioms or a large knowledge base; and it uses unification to avoid enumeration of ground terms, which is inefficient, especially for rich signatures. In terms of operations, SGGS combines instance generation, resolution, and constraints, in a model-centric approach: it uses sequences of constrained clauses to represent models, instance generation to extend the model, and resolution and other inferences to repair it. This talk advertises SGGS to the rewriting community, presenting the main ideas in the method; a main direction for future work is the extension to first-order logic with equality, which requires rewrite-based reasoning. A manuscript including all aspects of SGGS, the technical details, the proofs of refutational completeness and goal-sensitivity, a comparison with other work (e.g., resolution with set of support, the disconnection calculus, the model evolution calculus, the Inst-Gen method) and more references, is available.

12:00
Impacts of the Digital Revolution on two Main Activities of Scientists: Teaching and Publishing
12:30
Discussion on Massive Open Online Courses and Overlay Journals
10:45-13:00 Session 26Q (LSB)
Location: FH, Seminarraum 134A
10:45
Using answer set programming to integrate large-scale heterogeneous information about the response of a biological system
SPEAKER: Anne Siegel

ABSTRACT. The response of a biomolecular system is usually investigated on the basis of heterogeneous and incomplete data. More precisely, to control a biological system, we may have at hand only partial knowledge about molecular interactions, together with observations of a subset of molecules. The issue we wish to address is how we can take advantage of this multi-scale body of information. In this talk, we will explain how answer set programming can be used both to model several data-integration problems in a uniform language and to solve difficult combinatorial problems related to heterogeneity and data incompleteness. Altogether, these modelling and solving technologies allow identifying robust controllers of biological systems. We will illustrate this with issues concerning metabolic networks and logical signaling networks.

11:30
Attractor equivalence: An observational semantics for reaction networks

ABSTRACT. We study observational semantics for networks of chemical reactions as used in systems biology. Reaction networks without kinetic information, as we consider them, can be identified with Petri nets. We present a new observational semantics for reaction networks that we call attractor equivalence. The main idea of attractor equivalence is to observe reachable attractors and the reachability of an attractor divergence in all possible contexts. Attractor equivalence can support powerful simplifications of reaction networks, as we illustrate with the example of the TetOn system. Alternative semantics based on bisimulations or traces, in contrast, do not support all the needed simplifications.

12:15
A Logical Framework for Systems Biology

ABSTRACT. We propose a novel approach for the formal verification of biological systems based on the use of a modal linear logic. 

We show how such a logic can be used, with worlds as instants of time, as a unified framework to encode both biological systems and temporal properties of their dynamic behaviour.

To illustrate our methodology, we consider a model of the P53/Mdm2 DNA-damage repair mechanism. We prove several properties that are important for such a model to satisfy and that serve to illustrate the promise of our approach. We formalize the proofs of these properties in the Coq Proof Assistant, with the help of a Lambda Prolog prover for partial automation of the proofs.

10:45-13:00 Session 26R: Verification (ACL2)
Location: FH, Hörsaal 7
10:45
Modeling Algorithms in SystemC and ACL2
SPEAKER: John O'Leary

ABSTRACT. We describe the formal language MASC, based on a subset of SystemC and intended for modeling algorithms to be implemented in hardware. By means of a special-purpose parser, an algorithm coded in SystemC is converted to a MASC model for the purpose of documentation, which in turn is translated to ACL2 for formal verification. The parser also generates a SystemC variant that is suitable as input to a high-level synthesis tool. As an illustration of this methodology, we describe a proof of correctness of a simple 32-bit radix-4 multiplier.

11:15
Development of a Translator from LLVM to ACL2
SPEAKER: unknown

ABSTRACT. In our current work, a library of formally verified software components is to be created and assembled, using the Low-Level Virtual Machine (LLVM) intermediate form, into subsystems whose top-level assurance relies on the assurance of the individual components. We have thus undertaken a project to build a translator from LLVM to the applicative subset of Common Lisp accepted by the ACL2 theorem prover. Our translator produces executable ACL2 formal models, allowing us both to prove theorems about the translated models and to validate those models by testing. The resulting models can be translated and certified without user intervention, even for code with loops, thanks to the use of the \texttt{def::ung} macro, which allows us to defer the question of termination. Initial measurements of concrete execution for translated LLVM functions indicate that performance is nearly 2.4 million LLVM instructions per second on a typical laptop computer. In this paper we give an overview of the translation process and illustrate the translator's capabilities by way of a concrete example, including both a functional correctness theorem and a validation test for that example.

11:45
An ACL2 Mechanization of an Axiomatic Framework for Weak Memory

ABSTRACT. Proving the correctness of programs written for multiple processors is a challenging problem, due in no small part to the weaker memory guarantees afforded by most modern architectures. In particular, the existence of store buffers means that the programmer can no longer assume that writes to different locations become visible to all processors in the same order. However, all practical architectures do provide a collection of weaker guarantees about memory consistency across processors, which enable the programmer to write provably correct programs.

12:15
Using ACL2 to Verify Loop Pipelining in Behavioral Synthesis
SPEAKER: unknown

ABSTRACT. Behavioral synthesis involves compiling an Electronic System-Level (ESL) design into its Register-Transfer Level (RTL) implementation. Loop pipelining is one of the most critical and complex transformations employed in behavioral synthesis. Certifying the loop pipelining algorithm is challenging because there is a huge semantic gap between the input sequential design and the output pipelined implementation, making it infeasible to verify their equivalence with automated sequential equivalence checking techniques. We discuss our ongoing effort using ACL2 to certify the loop pipelining transformation. The completion of the proof is work in progress. However, some of the insights developed so far may already be of value to the ACL2 community. In particular, we discuss the key invariant we formalized, which is very different from that used in most pipeline proofs. We discuss the need for this invariant, its formalization in ACL2, and our envisioned proof using the invariant. We also discuss some trade-offs, challenges, and insights developed in the course of the project.

10:45-13:00 Session 26S: Contributed Talks (DTP)
Location: FH, Seminarraum 325/1
10:45
Terminal semantics for codata types in intensional Martin-Löf type theory
SPEAKER: unknown

ABSTRACT. In this work, we study the notions of relative comonad and comodule over a relative comonad, and use these notions to give a terminal coalgebra semantics for the coinductive type families of streams and of infinite triangular matrices, respectively, in intensional Martin-Löf type theory. Our results are mechanized in the proof assistant Coq.
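
As a rough Haskell analogue of the stream case (the paper itself works in intensional Martin-Löf type theory and Coq; this sketch ignores the intensionality issues the paper addresses), streams form a terminal coalgebra, and the unfold is the unique map from any coalgebra into them:

    -- Streams of a's, with head/tail observations forming a coalgebra.
    data Stream a = Cons a (Stream a)

    hd :: Stream a -> a
    hd (Cons x _) = x

    tl :: Stream a -> Stream a
    tl (Cons _ s) = s

    -- Terminality: for any coalgebra c :: s -> (a, s) there is a unique
    -- stream morphism out of it.
    unfold :: (s -> (a, s)) -> s -> Stream a
    unfold c s = let (x, s') = c s in Cons x (unfold c s')

    -- Example: the stream of natural numbers.
    nats :: Stream Integer
    nats = unfold (\n -> (n, n + 1)) 0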

11:15
A Unifying Framework for Primitive Ontological Relations in Dependent Type Theory
SPEAKER: unknown

ABSTRACT. In the development of foundational ontologies, a challenging problem which has not yet received a satisfying solution is the logical co-existence of part-whole relations and subsumption within a single framework. In this paper, we will explain how dependent type theory, and more precisely the Calculus of Inductive Constructions, based on intuitionistic logic, has sufficient power for representing and reasoning coherently with these two relations.

11:45
Domain Specific Languages of Mathematics
SPEAKER: unknown

ABSTRACT. We present a small number of examples of applying a computer-science perspective to the teaching of continuous mathematics, focusing on the domain-specific languages underlying the mathematical areas.

12:15
Relational ornaments
SPEAKER: unknown

ABSTRACT. We propose a relational reformulation of McBride’s ornaments, which leads to new constructions that are not possible with the original formulation.

10:45-11:45 Session 26T (DCM)
Location: FH, Seminarraum 138B
10:45
A Simple Parallel Implementation of Interaction Nets in Haskell
SPEAKER: Wolfram Kahl

ABSTRACT. Due to their ``inherent parallelism'', interaction nets have, since their introduction, been considered an attractive implementation mechanism for functional programming. We show that a simple highly-concurrent implementation in Haskell can achieve promising speed-ups on multiple cores.
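
A toy concurrency skeleton (our sketch, not the paper's implementation): interaction-net reductions at disjoint active pairs never interfere, so independent sub-nets can be reduced by separate Haskell threads:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)
    import Control.Monad (forM)

    -- Reduce a list of independent sub-nets concurrently, one lightweight
    -- thread per sub-net, collecting the results through MVars.
    -- 'evaluate' only forces weak head normal form; a real interaction-net
    -- evaluator would drive each sub-net to full normal form.
    reducePar :: (net -> net) -> [net] -> IO [net]
    reducePar reduce nets = do
      boxes <- forM nets $ \n -> do
        box <- newEmptyMVar
        _ <- forkIO (evaluate (reduce n) >>= putMVar box)
        return box
      mapM takeMVar boxes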

11:15
Some observations for the parallel implementation of interaction nets
SPEAKER: Shinya Sato

ABSTRACT. There have been several studies about the parallel implementation of interaction nets. Here we look at the related question: when is an interaction net system suitable for parallel evaluation? Some systems are sequential, some have the potential to be highly parallel. This first investigation aims to throw some light on this topic, and perhaps help pave the way to wider use of this technology for parallel evaluation. In this short paper we focus on presenting experimental evidence in a number of small case studies.

10:45-12:45 Session 26U: Contributed talks (LOLA)
Location: FH, Seminarraum 107
10:45
Abstract Self Modifying Machines

ABSTRACT. We describe a new framework for self-modifying programs. Our goal is to show how to extract program abstractions focusing on self-modification. On the one hand, we use an abstract machine which makes explicit some typical behavior, such as turning data into executable code and vice versa. Moreover, memory is separated into data (what we can read and write) and code (what we can execute). On the other hand, we add another level of granularity to memory locations, in order to build a more static program abstraction.

11:15
Programming and Verifying with Effect Handling and Iteration
SPEAKER: unknown

ABSTRACT. Effect handling is a novel paradigm in programming that complements the established algebraic approach to programming with effects. Intuitively, algebraic effects are those which distribute over sequential composition and hence can affect the control flow only in a quite conservative fashion. Effect handling, by contrast, allows for expressing more powerful constructions, the most basic example being exception handling. More generally, effect handling can be viewed as a lightweight, well-behaved alternative to control operations like call/cc. We develop novel semantic foundations for effect handling in the presence of iteration, based on cofree extensions of monads. In the larger perspective, we view our current work as a prerequisite for designing a generic, relatively complete Hoare calculus for programs with algebraic effects, iteration and handling.
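
A minimal Haskell instance of the distinction (our sketch, not the authors' formalization): exception throwing as an operation of a free monad, with a handler that, unlike a plain algebraic operation, may discard the rest of the computation:

    -- Syntax trees of computations over an operation signature f.
    data Free f a = Pure a | Op (f (Free f a))

    -- Sequential composition of such computations.
    bind :: Functor f => Free f a -> (a -> Free f b) -> Free f b
    bind (Pure a) k = k a
    bind (Op fa)  k = Op (fmap (`bind` k) fa)

    -- One operation: throwing an exception (it has no continuation).
    data Exn e k = Throw e

    instance Functor (Exn e) where
      fmap _ (Throw e) = Throw e

    throw :: e -> Free (Exn e) a
    throw e = Op (Throw e)

    -- The handler interprets Throw, aborting the remaining computation.
    handle :: Free (Exn e) a -> (e -> a) -> a
    handle (Pure a)       _ = a
    handle (Op (Throw e)) h = h e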

11:45
Tensorial logic with algebraic effects

ABSTRACT. Tensorial logic is a primitive logic of tensor and negation which refines linear logic by relaxing the hypothesis that negation is involutive. One reason for introducing the logic is that its proof-nets coincide with innocent strategies between dialogue games. In this talk, we explain how to extend tensorial logic with various notions of algebraic effects. By way of illustration, we establish that the resulting notion of tensorial proof net coincides in the particular case of local stores with the notion of visible strategy formulated by Abramsky and McCusker in their study of Idealized Algol.

12:15
Compiling Effectful Terms to Transducers: Prototype Implementation of Memoryful Geometry of Interaction
SPEAKER: unknown

ABSTRACT. We present a prototype implementation of the memoryful GoI framework of [Hoshino, Muroya and Hasuo, CSL-LICS 2014] that translates lambda terms with algebraic effects to transducers. Those transducers can be thought of as “proof nets with memories” and are constructed in a compositional manner by means of coalgebraic component calculi. The transducers thus obtained can be simulated in our tool, too, helping us to scrutinize the step-by-step interactions that take place in higher-order effectful computation.

10:50-13:00 Session 27 (CLC)
Location: FH, Dissertantenraum E104
10:50
A Translation of Intersection and Union Types for the lambda-mu Calculus

ABSTRACT. We introduce an intersection and union type system for the lambda-mu calculus, which includes a restricted version of the traditional union-elimination rule. Strong normalisation of terms typable by the system is shown via a translation of intersection and union types into intersection and product types, relying on the result of strong normalisation of terms typable by van Bakel, Barbanera and de'Liguoro's system.

11:10
Dualized Type Theory
SPEAKER: unknown

ABSTRACT. We propose a new bi-intuitionistic type theory called Dualized Type Theory (DTT). It is a type theory with perfect intuitionistic duality, and corresponds to a single-sided polarized sequent calculus. We prove DTT strongly normalizing, and prove type preservation. DTT is based on a new propositional bi-intuitionistic logic called Dualized Intuitionistic Logic (DIL) that builds on Pinto and Uustalu's logic L. DIL is a simplification of L which removes several admissible inference rules from L while maintaining consistency and completeness. Furthermore, DIL is defined using a dualized syntax by labeling formulas and logical connectives with polarities thus reducing the number of inference rules needed to define the logic. We give a direct proof of consistency, but prove completeness by reduction to L.

11:30
A proof-theoretic view on scheduling in concurrency

ABSTRACT. This paper elaborates on a new approach to the proof-theoretic study of concurrent interaction, called "proofs as schedules". Observing that proof theory is well suited to the description of confluent systems, while concurrency has non-determinism as a fundamental feature, we develop a correspondence where proofs provide what is needed to make concurrent systems confluent, namely scheduling. In our logical system, processes and schedulers appear explicitly as proofs in different fragments of the proof language, and cut elimination between them corresponds to the execution of a concurrent system. This separation of roles suggests new insights for the denotational semantics of processes and new methods for the translation of pi-calculi into prefix-less formalisms (like solos) as the operational counterpart of translations between proof systems.

12:00
A type system for Continuation Calculus
SPEAKER: unknown

ABSTRACT. Continuation Calculus (CC), introduced by Geron and Geuvers [2], is a simple foundational model for functional computation. It is closely related to lambda calculus and term rewriting, but it has no variable binding and no pattern matching. It is Turing complete and evaluation is deterministic. Notions like “call-by-value” and “call-by-name” computation are available by choosing appropriate function definitions: e.g. there is a call-by-value and a call-by-name addition function. In the present paper we extend CC with types, to be able to define data types in a canonical way, and functions over these data types, defined by iteration. The iteration scheme comes in two flavors: a call-by-value and a call-by-name iteration scheme. Data type definitions follow the so-called “Scott encoding” of data, as opposed to the more familiar “Church encoding”. We give examples of how to define functions over data types and how to reason with these functions.
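
A Haskell rendering of the contrast (ours; CC itself has no variable binding): Scott-encoded naturals expose only case analysis, so recursion is not built in and an iterator must be supplied separately, which is exactly what the paper's call-by-value and call-by-name iteration schemes provide:

    {-# LANGUAGE RankNTypes #-}

    -- Scott encoding: a number is its own case-analysis function,
    -- unlike the Church encoding, which bakes in iteration.
    newtype Nat = Nat (forall r. r -> (Nat -> r) -> r)

    zero :: Nat
    zero = Nat (\z _ -> z)

    suc :: Nat -> Nat
    suc n = Nat (\_ s -> s n)

    -- Iteration defined on top of case analysis.
    iter :: r -> (r -> r) -> Nat -> r
    iter z s (Nat cases) = cases z (\m -> s (iter z s m))

    toInt :: Nat -> Int
    toInt = iter 0 (+ 1)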

12:30
Confluence for classical logic through the distinction between values and computations
SPEAKER: Ralph Matthes

ABSTRACT. We apply an idea originated in the theory of programming languages - monadic meta-language with a distinction between values and computations - in the design of a calculus of cut-elimination for classical logic.

The cut-elimination calculus we obtain encompasses the call-by-name and call-by-value fragments of Curien-Herbelin's lambda-bar-mu-mu-tilde-calculus without losing confluence, and is based on a distinction of "modes" in the proof expressions and "mode" annotations in types. Modes resemble colors and polarities, but are quite different: we give meaning to them in terms of a monadic meta-language where the distinction between values and computations is fully explored. This meta-language is a refinement of the classical monadic language previously introduced by the authors, and is also developed in the paper.

11:00-13:00 Session 28A: Evaluation and Benchmarks (ORE)
Location: FH, Seminarraum 101A
11:00
TROvE: a Graphical Tool to Evaluate OWL Reasoners
SPEAKER: Luca Pulina

ABSTRACT. In this paper we present TROvE (Tool for Rapid OWL Reasoner Evaluation), a tool that aims to offer non-expert users the possibility to evaluate OWL reasoners on several reasoning tasks by means of a simple “push-button” solution.

11:20
Using OpenStreetMap Data to Create Benchmarks for Description Logic Reasoners
SPEAKER: Guohui Xiao

ABSTRACT. Engines for query answering over ontological knowledge bases are becoming increasingly popular and important in areas like the Semantic Web or information integration. They are mostly designed to answer queries over ontologies expressed using various Description Logics and in the presence of large amounts of instance data. This computational task, known as ontology-based query answering (OQA), is an important component in the more general area of ontology-based data access. Unfortunately, it has been acknowledged that judging the performance of current OQA reasoners and their underlying algorithms is difficult due to the lack of publicly available benchmarks that consist of large amounts of real-life instance data. In this paper we describe how benchmarks for OQA systems can be created from the publicly available OpenStreetMap (OSM) geospatial data. To this end, we first develop a formalization of OSM and present a rule-based language for specifying how to extract instance data from OSM data. The declarative nature of the approach allows various variants of a benchmark to be created via small modifications to the rules of the specification. We describe a highly flexible engine to create a benchmark from a given OSM map and a given set of rules, and present some evaluation results.

11:40
A Scalable Benchmark for OBDA Systems: Preliminary Report
SPEAKER: Davide Lanti

ABSTRACT. In ontology-based data access (OBDA), the aim is to provide a high-level conceptual view over potentially very large (relational) data sources by means of a mediating ontology. The ontology is connected to the data sources through a declarative specification given in terms of mappings that relate each (class and property) symbol in the ontology to an (SQL) view over the data. Although prototype OBDA systems providing the ability to answer SPARQL queries over the ontology are available, a significant challenge remains: performance. To properly evaluate OBDA systems, benchmarks tailored towards the requirements in this setting are needed. OWL benchmarks, which have been developed to test the performance of generic SPARQL query engines, however, fail at 1) exhibiting a complex real-world ontology, 2) providing challenging real world queries, 3) providing large amounts of data, and the possibility to test a system over data of increasing size, and 4) capturing important OBDA-specific measures related to the rewriting-based query answering approach in OBDA. In this work, we propose a novel benchmark for OBDA systems based on a real world use-case adopted in the EU project Optique. We validate our benchmark on the system Ontop, showing that it is more adequate than previous benchmarks not tailored for OBDA.

12:00
Evaluating OWL 2 Reasoners in the Context Of Checking Entity-Relationship Diagrams During Software Development

ABSTRACT. This paper evaluates the performance of the OWL 2 reasoners HermiT, FaCT++ and TReasoner in the context of an ontological decision support system for designing entity-relationship diagrams (ERDs) during software development. First, I describe the developed ontology, which is the knowledge base of the developed application for designing databases. In the first set of experiments I compare how the classification and realization times of the DBOM ontology vary as the ABox is extended with ERD-element individuals. In the second set of experiments the consistency-checking time of the DBOM ontology is estimated, again extending the ABox with ERD-element individuals.

12:20
Just: a Tool for Computing Justifications w.r.t. ELH Ontologies
SPEAKER: Michel Ludwig

ABSTRACT. We introduce the tool Just for computing justifications for general concept inclusions w.r.t. ontologies formulated in the description logic EL extended with role inclusions. The computation of justifications in Just is based on saturating the input axioms under all possible inferences w.r.t. a consequence-based calculus. We give an overview of the architecture of the tool and conclude with an experimental evaluation of its performance when applied to several practical ontologies.

12:40
Android Went Semantic: Time for Evaluation

ABSTRACT. Applications for mobile devices could often show a more intelligent behaviour by using a semantic reasoner to discover new knowledge. Unfortunately, using Description Logic reasoners on Android devices is not trivial. In this paper we continue our previous work on investigating the use of semantic reasoners on mobile devices. In particular, we port some new OWL~2 EL reasoners to Android and analyze the results of some experiments measuring the performance of several OWL~2 DL and OWL~2 EL reasoners on Android smartphones and tablets.

11:00-13:00 Session 28B: Technical session (FWFM)
Location: FH, Seminarraum 134
11:00
Chekofv: Crowd-sourced Formal Verification
SPEAKER: unknown

ABSTRACT. Over the past year, we have been developing Chekofv, a system for crowd-sourced formal verification. Chekofv starts with an attempt to verify a given C program using the source code analysis platform Frama-C. Every time the analysis loses precision while analyzing looping control-flow, Chekofv tries to obtain an invariant to regain precision using crowd-sourcing. To that end, Chekofv translates the problem of finding loop invariants into a puzzle and presents it to players of Xylem, a game that is being developed as part of the Chekofv system. In this paper, we report on the design and implementation of the Chekofv system, the challenges and merits of gamification of the invariant-detection problem, and the problems and obstacles that we have encountered so far.

11:30
Using Esoteric Language for Teaching Formal Semantics

ABSTRACT. The talk presents an approach to the undergraduate teaching of the formal semantics of programs. It explains what formal semantics is by developing operational, denotational and axiomatic semantics for an ``esoteric'' language. We hope that the approach will be interesting to the Program Theory and Software Engineering communities, and that it will help education to bridge the cultural gap between Formal Methods theory and Software Engineering practice.

12:00
Introducing Formal Verification with Lego

ABSTRACT. “In the end, you are a mathematician, not a computer scientist” or “Have we not already discovered everything in computer science?”. Which theoretical computer scientist has not heard a similar sentence when trying to explain their research to a layperson? The promotion of theoretical computer science, and of formal methods in particular, is mainly hindered by the high level of abstraction commonly used in the field, and probably further complicated by the public's lack of education on the subject.

We present in this paper an educational experiment intended to explain the importance and aim of formal methods, along with the challenges they present. It uses Lego robots as an anchor to the real world, introducing fun into the presentation. Primarily targeting high-school students, it forms a solid and adaptable basis to reach various audiences.

12:30
Explaining the decompositionality of MSO using applications to combinatorics
SPEAKER: Tomer Kotek

ABSTRACT. Courcelle's celebrated theorem forms a strong theoretical basis for the development of formal methods. This extended abstract argues that a good intuition for the ideas underlying Courcelle's theorem can be achieved by considering the applications of these ideas to combinatorics. Technical notions regarding computation are eliminated and replaced with simple notions of recurrence relations.

12:00-13:00 Session 29A: Regular talks (IWC)
Location: FH, Seminarraum 303
12:00
Critical Pairs in Network Rewriting

ABSTRACT. This extended abstract briefly introduces rewriting of networks (directed acyclic graphs with the extra structure needed to serve as expressions for PROducts and Permutations categories) and describes the critical-pair aspects of this theory.

12:30
On Proving Confluence of Conditional Term Rewriting Systems via the Computationally Equivalent Transformation
SPEAKER: Naoki Nishida

ABSTRACT. This paper improves the existing criterion for proving confluence of a normal conditional term rewriting system (CTRS) via the Serbanuta-Rosu transformation, a computationally equivalent transformation of CTRSs into unconditional term rewriting systems (TRSs), showing that a weakly left-linear normal CTRS is confluent if the transformed TRS is confluent. Then, we discuss the usefulness of an optimization of the Serbanuta-Rosu transformation that has been informally proposed in the literature.

12:00-13:00 Session 29B: Contributed Talks: Resolution (PC)
Location: FH, Seminarraum 101B
12:00
Total Space in Resolution

ABSTRACT. We show Ω(n^2) lower bounds on the total space used in resolution refutations of random k-CNFs over n variables, and of the graph pigeonhole principle and the bit pigeonhole principle for n holes. This answers the long-standing open problem of whether there are families of k-CNF formulas of size O(n) requiring total space Ω(n^2) in resolution, and gives the first truly quadratic lower bounds on total space. The results follow from a more general theorem showing that, for formulas satisfying certain conditions, in every resolution refutation there is a memory configuration containing many clauses of large width.

12:30
From Small Space to Small Width in Resolution
SPEAKER: Mladen Miksa

ABSTRACT. In 2003, Atserias and Dalmau established that the space complexity of formulas in resolution is an upper bound on the width needed to refute them. Their proof is beautiful but somewhat mysterious in that it relies on tools from finite model theory. We give an alternative proof that works by simple syntactic manipulations of resolution refutations. As a by-product, we develop a "black-box" technique for proving space lower bounds.

12:00-13:00 Session 29C (DCM)
Location: FH, Seminarraum 138B
12:00
Quantitative semantics for higher-order probabilistic and quantum computation
ABSTRACT.
Various models of Linear Logic have been defined in categories of vector spaces (more generally, modules) and linear functions. These models provide semantics for quite expressive functional programming languages, where programs using their inputs exactly once correspond to linear functions and programs using their inputs an arbitrary number of times (like recursive programs) correspond to power series.

Quantitative semantics are particularly suitable for interpreting non-deterministic computations, such as probabilistic and quantum algorithms. The addition between vectors expresses a kind of superposition of atomic states, and the scalars give a quantitative estimation of this superposition.

In this talk I will introduce the main ideas of quantitative semantics and present the most recent results achieved in the framework of higher-order probabilistic and quantum computation.
13:00-14:30 Lunch Break
14:00-16:00 Session 30: Invited Talks (LCC)
Location: FH, Zeichensaal 3
14:00
On the power of fixed-point logic with counting
SPEAKER: Anuj Dawar

ABSTRACT. In 1982, Neil Immerman proposed an extension of fixed-point logic by means of counting quantifiers (which we denote FPC) as a logic that might express all polynomial-time properties of unordered structures. It was eventually proved (by Cai, Fürer and Immerman) that there are polynomial-time graph properties that are not expressible in FPC. Nonetheless, FPC is a powerful and natural fragment of the complexity class P. In this talk, I will demonstrate this point by reviewing three recent results on the expressive power of FPC.

 

15:00
Reasoning about transitive closure in Immerman's style
SPEAKER: Mooly Sagiv
14:30-16:00 Session 31A (Linearity)
Location: FH, Seminarraum 136
14:30
Continuations, closed reduction and process networks
SPEAKER: Ian Mackie

ABSTRACT. In the early 1990s it was shown that the linear lambda-calculus (the usual lambda-calculus restricted so that copying and discarding are not allowed) is an internal language for symmetric monoidal closed categories. With the introduction of a natural numbers object, represented in the internal language by numbers and an iterator, it was shown, using a result of Robert Paré and Leopoldo Román, that all primitive recursive functions could be represented.

Around the same time, interaction nets were introduced as a simple graph rewriting system that generalises the proof nets of linear logic. Rules have a linearity constraint and, as a consequence, explicit copying and discarding are required. One area of application of interaction nets is the representation of the lambda-calculus, where many different systems have been developed.

Kahn process networks (1970s) are a model of computation based on a collection of sequential, deterministic processes that communicate through unbounded channels. They are well suited for modelling stream-based computations, but are in no way restricted to this application.

In this talk we connect the above by giving an interaction net encoding of process networks. We give just one part of this story and focus on the linear lambda-calculus with iterators. As part of the compilation, we require an understanding of closed reduction, and this in turn will make use of continuations.

15:30
Type Classes for Lightweight Substructural Types
SPEAKER: Edward Gan

ABSTRACT. Linear and substructural types are powerful tools, but adding them to standard functional programming languages often means introducing extra annotations and typing machinery. We propose a lightweight substructural type system design that recasts the structural rules of weakening and contraction as type classes; we demonstrate this design in a prototype language, Clamp.

Clamp supports polymorphic substructural types as well as an expressive system of mutable references. At the same time, it adds little additional overhead to a standard Damas–Hindley–Milner type system enriched with type classes. We have established type safety for the core model and implemented a type checker for Clamp in Haskell.
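
The flavour of the design, in a plain Haskell sketch (class and function names are ours, not Clamp's; in Haskell this is only a convention, whereas Clamp's type system enforces it): the structural rules become class constraints, so only types in the classes may be discarded or duplicated:

    -- Types whose values may be discarded (weakening) ...
    class Drop a
    -- ... or duplicated (contraction).
    class Dup a

    instance Drop Int
    instance Dup  Int
    -- a linear resource type would simply have neither instance

    discard :: Drop a => a -> ()
    discard _ = ()

    dup :: Dup a => a -> (a, a)
    dup x = (x, x)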

14:30-16:00 Session 31B: Conditional Rewriting, Polymorphic Calculi (WPTE)
Location: FH, Hörsaal 4
14:30
On Proving Soundness of the Computationally Equivalent Transformation for Normal Conditional Term Rewriting Systems by Using Unravelings
SPEAKER: Naoki Nishida

ABSTRACT. In this paper, we show that the SR transformation, a computationally equivalent transformation proposed by Serbanuta and Rosu, is sound for weakly left-linear normal conditional term rewriting systems (CTRSs). Here, soundness for a CTRS means that reduction of the transformed unconditional term rewriting system (TRS) creates no undesired reduction for the CTRS. We first show that every reduction sequence of the transformed TRS starting with a term corresponding to the one considered on the CTRS is simulated by the reduction of the TRS obtained by the simultaneous unraveling. Then, we use the fact that the unraveling is sound for weakly left-linear normal CTRSs.
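
For orientation, the standard simultaneous unraveling on a one-rule normal CTRS (a textbook example, not taken from the paper): the conditional rule

    f(x) -> a <= x ->* b

is transformed into the unconditional rules

    f(x) -> U(x, x)        U(b, x) -> a

where the fresh symbol U carries both the condition being evaluated and the variable bindings of the left-hand side.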

15:00
Contextual Equivalences in Call-by-Need and Call-By-Name Polymorphically Typed Calculi (Preliminary Report)

ABSTRACT. This paper presents a call-by-need polymorphically typed lambda-calculus with letrec, case, constructors and seq. The typing of the calculus is modelled in a system-F style. Contextual equivalence is used as semantics of expressions. We also define a call-by-name variant without letrec. We adapt several tools and criteria for recognizing correct program transformations to polymorphic typing, in particular an inductive applicative simulation.

15:30
Verifying the Correctness of Tupling Transformations based on Conditional Rewriting
SPEAKER: Yuki Chiba

ABSTRACT. Chiba et al.\ (2010) proposed a framework of program transformation by templates based on term rewriting. Their framework can deal with tupling, which improves the efficiency of programs. The outputs of their framework, however, may not always be more efficient than the inputs. In this paper, we propose a technique to show the correctness of tupling based on conditional term rewriting. We give an extended equational logic in order to add conditional rules.
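
The classic instance of the transformation (illustrative only, not from the paper), here in Haskell: tupling turns the tree-recursive Fibonacci into a linear one by returning a pair of neighbouring values:

    -- Naive definition: exponentially many recomputed calls.
    fibSlow :: Integer -> Integer
    fibSlow 0 = 0
    fibSlow 1 = 1
    fibSlow n = fibSlow (n - 1) + fibSlow (n - 2)

    -- Tupled definition: each call returns (fib n, fib (n + 1)),
    -- so the two recursive calls collapse into one.
    fibPair :: Integer -> (Integer, Integer)
    fibPair 0 = (0, 1)
    fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

    fib :: Integer -> Integer
    fib = fst . fibPair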

14:30-16:00 Session 31C: Isabelle workshop (Isabelle)
Location: FH, Seminarraum 138C
14:30
Proving Gödel’s Incompleteness Theorems

ABSTRACT. Gödel’s two incompleteness theorems have been proved in Isabelle. The nominal package was used to handle variable binding in conjunction with de Bruijn indexes. This is the first mechanical proof of the second incompleteness theorem.

15:00
Isabelle/jEdit NEWS

ABSTRACT. This system demo will give an overview of NEWS for Isabelle/jEdit in the forthcoming Isabelle2014 release, which is anticipated for summer 2014. Both the prover and the integration with the underlying jEdit editor have been refined in various ways. Here is an incomplete list of notable items: improved completion mechanism, integrated spell-checker, hyperlink navigation within the editor, URL and browser support, support for auxiliary files, and official SML'97 IDE support.

See also Isabelle2014-RC0, which is the same pre-release snapshot that will be used in the Isabelle tutorial.

15:30
Programming TLS in Isabelle/HOL

ABSTRACT. Isabelle/HOL is not just a theorem prover; it has become a functional programming language. Algebraic datatypes and (pure) recursive functions are defined with various packages and compiled to executable code with the code generator. In this work, we explore whether and how this programming language is suitable for developing applications which are stateful, interact with the environment, and use external libraries. To that end, we have implemented a prototype of the TLS network protocol as a case study. We present a model of interaction in HOL and its compilation, and discuss the challenges that the theorem prover Isabelle/HOL poses for application development.

14:30-15:30 Session 31D: Joint IWC / TermGraph Invited Talk (IWC and TermGraph)
Location: FH, Seminarraum 325/2
14:30
An Introduction to Higher-Dimensional Rewriting Theory
SPEAKER: Samuel Mimram
14:30-16:05 Session 31E: Invited Talk and Contributed Talks (GSB)
Location: FH, Seminarraum 138A
14:30
Cyclic proof for quantitative logics
SPEAKER: Alex Simpson

ABSTRACT. I will talk about ongoing work with Matteo Mio (University of Cambridge) to build useful proof systems for quantitative fixed-point logics. Our approach adapts cyclic-proof-based sequent calculi to the quantitative setting. Several technical issues arise, not all of which have been resolved. The overall aim of the programme is to develop machinery for reasoning about probabilistic concurrent systems. But the talk will focus on proof-theoretic issues, in a purely logical setting.

15:15
Modular Systems for Intuitionistic Modal Logics in Nested Sequents
SPEAKER: Sonia Marin

ABSTRACT. In this talk we show for each of the modal axioms d, t, b, 4, and 5 an equivalent set of inference rules in a nested sequent system, such that, when added to the basic system for the intuitionistic modal logic IK, the resulting system admits cut elimination. The result is similar to the one presented at "Gentzen Systems and Beyond 2011" about classical modal logics. We use a combination of structural and logical rules to achieve modularity.

15:40
An Intuitionistically Based Description Logic

ABSTRACT. This article presents iALC, an intuitionistic version of the classical description logic ALC, based on the framework for constructive modal logics presented by Simpson (1994) and related to description languages, via hybrid logics, by de Paiva (2006). This article corrects and extends the presentation of iALC appearing in de Paiva et al (2010). It points out the difference between iALC and the intuitionistic hybrid logic presented in de Paiva (2006). Completeness and soundness proofs are provided. A sequent calculus for iALC is presented, and its computational complexity is discussed. It is worth mentioning that iALC is used to formalize legal knowledge and, in fact, was specifically designed with this goal in mind.

 

14:30-16:00 Session 31F: QBF Applications (1): Incremental QBF Solving (QBF)
Location: FH, Hörsaal 2
14:30
Incremental QBF Reasoning for the Verification of Partial Designs
SPEAKER: Paolo Marin

ABSTRACT. Incremental solving methods made SAT attractive for industrial use as core technology of numerous electronic design automation tools, e.g. for verification tasks like Bounded Model Checking (BMC). On the other hand, when dealing with partial designs or unknown specifications, SAT formulas are not expressive enough, whereas a description via quantified Boolean formulas (QBF) is more adequate. 

Persuaded by the success of incremental SAT, we explore various ways to tackle QBF problems incrementally and thereby make this technology available. We developed the first incremental QBF solver, based on the state-of-the-art QBF solver QuBE, which can now reuse some information from previous iterations, thereby pruning the search space.

However, the need for a QBF preprocessing phase, which in general cannot be paired with incremental solving because of the non-predictable elimination of variables in future incremental steps, posed the question of incremental QBF preprocessing. We study an approach for retaining the QBF formula being preprocessed while extending its clauses and prefix incrementally. This procedure yields effectively preprocessed QBF formulas, but it may incur long runtimes. We therefore consider various heuristics to dynamically skip incremental preprocessing when its overhead is no longer compensated by the reduced solving time.

We experimentally analyze how the different methods positively affect BMC for partial designs (i.e., designs containing so-called blackboxes, which represent unknown parts), considering both the effectiveness of each single procedure and how a generic QBF solver can profit from it. The goal of this problem is to disprove realizability, that is, to prove that an unsafe state is reachable no matter how the blackboxes are implemented. On a domain of partial-design benchmarks, significant performance gains over non-incremental BMC can be achieved by engaging incremental QBF methods.

15:00
Reactive Synthesis using (Incremental) QBF Solving

ABSTRACT. Reactive synthesis algorithms construct a correct reactive system fully automatically from a given formal specification. Besides the construction of full systems, such synthesis algorithms are also used for program sketching and automatic program repair. This talk will explain how reactive synthesis can be realized efficiently with QBF solving, and how recently proposed approaches for incremental QBF solving can be exploited. We present first experimental results and compare our QBF-based solution to SAT- and BDD-based implementations.

15:30
Conformant Planning as a Case Study of Incremental QBF Solving

ABSTRACT. We consider planning with uncertainty in the initial state as a case study of incremental quantified Boolean formula (QBF) solving. We report on experiments with a workflow to incrementally encode a planning instance into a sequence of QBFs. To solve this sequence of incrementally constructed QBFs, we use our general-purpose incremental QBF solver DepQBF. Since the generated QBFs have many clauses and variables in common, our approach avoids redundancy both in the encoding phase and in the solving phase. Experimental results show that incremental QBF solving outperforms non-incremental QBF solving. Our results are the first empirical study of incremental QBF solving in the context of planning. Based on our case study, we argue that incremental QBF solving also has the potential to improve QBF-based workflows in other application domains.

14:30-16:00 Session 31G: HOL Workshop (HOL)
Location: FH, CAD 2
14:30
HOL4 Hidden Features
SPEAKER: Anthony Fox

ABSTRACT. I propose briefly talking about the following HOL4 “hidden features”.

  1. The function HolKernel.syntax_fns. This function simplifies the task of writing *Syntax files, resulting in more compact code and reducing the likelihood of making mistakes. I’d like to encourage HOL4 developers to make use of this function. I’d also like to seek feedback on whether or not it is a good idea to update old *Syntax files.
  2. The function EmitTeX.datatype_thm_to_string. This is useful for anyone using EmitML; however, it is very easy to overlook because it is located under EmitTeX.
  3. There are various useful library functions located under the examples directory, such as l3-machine-code/common/utilsLib and machine-code/hoare-triple/helperLib. For example, there are the functions utilsLib.add_datatypes, utilsLib.HYP_CONV_RULE and helperLib.quote_to_strings. This last function has a variant elsewhere, namely l3-machine-code/lib/assemblerLib.quote_to_strings. It would be good to discuss the issue of finding better homes for some of these functions.
  4. The theories numposrep, ASCIInumbers and bitstring. The bitstring theory is relatively new and I propose discussing how it is related to these other two theories, as well as to the words theory. I may also discuss some tools in bitstringLib.
15:00
New Styles of Proof
SPEAKER: Ramana Kumar

ABSTRACT. I propose to talk in general about the tensions between different approaches to constructing proofs: maintainability, construction speed, replay speed, readability, etc. and also to highlight a few relatively unknown tactics, packages, and ways of using them: lcsymtacs, match_rename etc., quant heuristics (maybe), writing conversions, using MATCH_MP. I hope to both share and learn tips and tricks for writing proofs effectively.

For another kind of hidden feature, I propose to highlight the issue tracker and other resources that are being used, and could be used more effectively, to improve HOL development. Along these lines, I also propose regular developer workshops.

(Additionally, if there is time, I would propose some longer-term feature requests including some that already appear in the issue tracker, and others such as partial functions, BNF datatypes, logic programming in proof automation, and mechanisms for locale-style development.)

15:30
Hack Session 2
SPEAKER: Everyone
14:30-15:30 Session 31H: Ontologies (ORE)
Location: FH, Seminarraum 101A
14:30
Exploring Reasoning with the DMOP Ontology
SPEAKER: C. Maria Keet

ABSTRACT. We describe the Data Mining OPtimization Ontology (DMOP), which was developed to support informed decision-making at various choice points of the knowledge discovery (KD) process. DMOP contains in-depth descriptions of DM tasks, data, algorithms, hypotheses, and workflows. Its development raised a number of non-trivial modeling problems, the solution to which demanded maximal exploitation of OWL 2 representational potential. The choices made led to v5.4 of the DMOP ontology. We report some evaluations on processing DMOP with a standard reasoner by considering different DMOP features.

14:50
An update on Genomic CDS, a complex ontology for pharmacogenomics and clinical decision support

ABSTRACT. Genetic data can be used to optimize drug treatment based on the genetic profiles of individual patients, thereby reducing adverse drug events and improving the efficacy of pharmacotherapy. The Genomic Clinical Decision Support (Genomic CDS) ontology utilizes Web Ontology Language 2 (OWL 2) reasoning for this task. The ontology serves a clear-cut medical use case that requires challenging OWL 2 DL reasoning. We present an update of the Genomic CDS ontology which covers a significantly larger number of clinical decision support rules and where inconsistencies present in previous versions of the ontology have been removed.

15:10
A Family History Knowledge Base in OWL 2

ABSTRACT. This paper presents a challenging family history knowledge-base (FHKB) authored in OWL 2 DL. Originally, the FHKB was designed to act as a tool for education, especially about OWL 2’s features and the use of automated reasoners. As a result, the FHKB has been constructed to maximise use of inference. For individuals representing people, only genealogical assertions on parentage and sparse assertions of siblinghood are given explicitly. All other genealogical inferences are driven by a rich property hierarchy, property characteristics and subproperty chains. A rich collection of entailments are generated, but reasoners struggle to handle a version with all of Robert’s known relatives.

14:30-16:00 Session 31I: Invited Talk 2 and Regular Talk (UNIF)
Location: FH, Seminarraum 104
14:30
On the Limits of Second-Order Unification
SPEAKER: Jordi Levy

ABSTRACT. Second-Order Unification is a problem that naturally arises when applying automated deduction techniques with variables denoting predicates. The problem is undecidable, but a considerable effort has been made to find decidable fragments and to understand the deep reasons for its complexity. Two variants of the problem, Bounded Second-Order Unification and Linear Second-Order Unification --where the use of bound variables in the instantiations is restricted--, have been extensively studied in the last two decades. In this paper we summarize some decidability/undecidability/complexity results, trying to focus on those that could be most interesting for a wider audience and that involve fewer technical details.
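
A tiny instance for orientation (ours, not from the paper): the problem F(a) =? f(a,a), with F a unary second-order variable and a a constant, has the four unifiers F -> \x. f(x,x), \x. f(x,a), \x. f(a,x) and \x. f(a,a); the linear variant admits only the two in which the bound variable x is used exactly once, and the bounded variant limits how often x may occur.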

15:30
From Admissibility to a New Hierarchy of Unification Types
SPEAKER: unknown

ABSTRACT. Motivated by the study of admissible rules, a new hierarchy of ``exact'' unification types is introduced where a unifier is more general than another unifier if all identities unified by the first are unified by the second. A Ghilardi-style algebraic interpretation of this hierarchy is presented that uses exact algebras rather than projective algebras. Examples of equational classes distinguishing the two hierarchies are also provided.

14:30-15:30 Session 31J: Invited talk (POS)
Location: FH, Hörsaal 3
14:30
Lingeling Essentials, A Tutorial on Design and Implementation Aspects of the SAT Solver Lingeling
SPEAKER: Armin Biere

ABSTRACT. One of the design principles of the state-of-the-art SAT solver Lingeling is to use data structures that are as compact as possible. These reduce memory usage, increase cache efficiency and thus improve run-time, particularly when using multiple solver instances on multi-core machines, as in our parallel portfolio solver Plingeling and our cube-and-conquer solver Treengeling. The scheduler of a dozen inprocessing algorithms is an important aspect of Lingeling as well. In this talk we explain these design and implementation aspects of Lingeling and discuss new directions in solver design.

14:30-15:30 Session 31K: Invited Talk (Iddo Tzameret) (PC)
Location: FH, Seminarraum 101B
14:30
Are matrix identities hard instances for strong proof systems?
SPEAKER: Iddo Tzameret

ABSTRACT. I will first talk about the complexity of generating the identities of matrix rings, and demonstrate an unconditional lower bound on the minimal number of generators needed to generate a matrix identity, where the generators are substitution instances of elements from any given finite basis of the matrix identities.
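
For concreteness, a classical example of a matrix identity (not taken from the talk): over any commutative ring, the 2x2 matrices satisfy the Hall identity

    [[x, y]^2, z] = 0,

since tr[x,y] = 0, so by Cayley-Hamilton [x,y]^2 = -det([x,y]) * I, which is central.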

Based on the above, I will argue that matrix identities may be good hard candidates for strong proof systems, and I will discuss an initial study of this approach under different settings and assumptions. Under certain conditions, this approach can potentially lead to exponential-size lower bounds on arithmetic proofs, which are proofs of polynomial identities operating with arithmetic circuits and whose axioms are the polynomial-ring axioms (these proofs serve as an algebraic analogue of the Extended Frege propositional proof system). I will also briefly discuss the applicability of our approach to strong propositional proof systems.

Based on a joint work with Fu Li.

14:30-16:00 Session 31L: Normalization and Music (IFIP-WG16)
Location: FH, Seminarraum 101C
14:30
Basic Normalization
SPEAKER: Nao Hirokawa

ABSTRACT. The Normalization Theorem (O'Donnell 1977) states that for every left-normal orthogonal rewrite system, the leftmost outermost strategy reduces any term to its normal form if one exists. Although the theorem ensures the normalization property of important systems like Combinatory Logic, the left-normality condition rules out many functional programs. We revisit the problem to seek a solution for normalization of the leftmost outermost strategy.
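
A standard example of the phenomenon (not from the abstract): in the orthogonal system with rules first(x, y) -> x and loop -> loop, leftmost outermost rewriting normalizes the term first(a, loop) to a in one step, whereas any strategy that first normalizes the second argument runs forever; left-normality is what lets the leftmost outermost strategy avoid such traps in general.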

15:15
Rhythm Tree Rewriting

ABSTRACT. In traditional Western music notation, the durations of notes are expressed hierarchically by recursive subdivisions. This naturally leads to a tree representation of melodies that is widespread in systems for computer-assisted music authoring. We will see how term rewriting techniques can be applied to computer music problems, in particular to the problem of rhythm quantization: the transcription of a sequence of dates (real-valued time points) into a music score. Besides the question of rhythm representation, an interesting problem in this context is the design and concise description of large rewrite rule sets and decision problems for these descriptions.

14:30-16:00 Session 31M (LSB)
Location: FH, Seminarraum 134A
14:30
TBA
15:15
Model Checking the Evolution of Gene Regulatory Networks
ABSTRACT. Evolutionary biologists use a weighted model of gene regulatory networks (GRNs) to define and simulate the robustness of biological systems against genetic perturbations. By replacing simulation with model checking we achieve higher degrees of assurance, precision, and scalability. We first present an algorithm for computing the robustness of a GRN with respect to viability properties expressed in linear temporal logic. We then employ symbolic bounded model checking to compute the space of GRNs that satisfy a given property, which amounts to synthesizing a set of linear constraints on symbolic weights. Finally, we compute the robustness of the satisfying space against perturbations. We apply our method to a number of gene regulatory networks and compare its performance and precision against simulation-based approaches.

14:30-16:00 Session 31N: Keynote Magnus Myreen (ACL2)
Location: FH, Hörsaal 7
14:30
Machine-code verification: experience of tackling medium-sized case studies using 'decompilation into logic'
SPEAKER: Magnus Myreen

ABSTRACT. All software executes in the form of machine code. Ideally, software verification should reach down to precise statements about the concrete machine code that runs.

During my PhD, I developed infrastructure in the HOL4 theorem prover for verification of programs at the level of machine code. The infrastructure includes a programming logic and a proof-producing decompiler and compiler.

In the subsequent five years, this infrastructure has been used in two medium-sized case studies: the construction of a verified implementation of a read-eval-print loop for Lisp, which can run Jared Davis' Milawa prover; and the verification of the accuracy of the GCC compiler's output for the C implementation of the seL4 microkernel. These case studies resulted in HOL4 theorems about thousands of lines of concrete x86-64 and ARM machine code, respectively.

In this talk, I will explain the existing infrastructure for machine-code verification; describe how the infrastructure had to adapt to the case studies; and reflect on future research directions.

This talk describes work done in collaboration with Anthony Fox (ARM ISA specification), Jared Davis (Milawa theorem prover), Thomas Sewell and Gerwin Klein (seL4 microkernel).

14:30-16:00 Session 31P: Contributed Talks (DTP)
Location: FH, Seminarraum 325/1
14:30
An Embedded Hardware Description Language using Dependent Types
SPEAKER: unknown

ABSTRACT. We present an Embedded Domain-Specific Language (EDSL) for hardware description using the dependently-typed language Agda as host. Circuits modeled in this EDSL can be simulated, exported to VHDL netlists, and have properties proven about their functional correctness.

15:00
Type-Directed Editing for Dependently-Typed Programs.
SPEAKER: unknown

ABSTRACT. Dependent types allow a programmer to state sophisticated properties of programs. Recent demonstrations suggest that types should be leveraged during programming and editing to direct editing actions and assist the programmer with constructing type-correct programs in the first instance.

Toward this end, we are exploring type-directed editing. Type-directed editing proposes to offer programmers a rich set of high-level programming actions which both respect type-correctness of programs and leverage typing information to assist the programmer. Our goal is to give a semantics for editing in which every editing action results in a type-correct program, and to implement these semantics with a well-engineered and friendly user interface. We hypothesize that by removing the search for a typechecking program in the space of text strings from the programming process, we can increase the productivity of programmers.

15:30
Tool Demonstration: An IDE for Programming and Proving in Idris
SPEAKER: unknown

ABSTRACT. Dependently typed programming languages have a rich type system, which enables developers to combine proofs with programs, sometimes even eroding the boundary between the activities of proving and programming. This introduces new challenges for integrated development environments that must support this boundary. Instead of reimplementing large parts of a compiler, such as a type checker and a totality checker, we extended the Idris compiler with a structured, machine-readable protocol for interacting with other programs. Furthermore, we developed an IDE for Idris in Emacs that uses this structured input and output mode and attempts to combine features from proof assistant interfaces with those of rich programming environments such as SLIME. The Idris extension turned out to be generally useful, and has been used in applications such as an IRC bot, an interactive website for trying out Idris, and support for other editors.

14:30-15:30 Session 31Q: Technical session (FWFM)
Location: FH, Seminarraum 134
14:30
Towards the Application of Formal Methods in Process Engineering

ABSTRACT. Process engineering deals with the design, operation and optimization of different physical processes such as chemical reactions, biological mechanisms and petroleum processing. Recent technological advancements and short time-to-market in almost every industry bring new challenges, and hence new complexity, to process engineering systems. One of the main techniques to analyze such models is to represent the underlying systems using signal-flow graphs, which provide a systematic procedure to evaluate system performance in the form of transfer functions. Traditionally, the analysis of signal-flow-graph based models has been carried out by paper-and-pencil proofs and numerical techniques. In this paper, we present an overview of using theorem proving to formally analyze process engineering models. The main motivation of this work is twofold: first, the application of formal methods in a new domain to improve the analysis accuracy; second, an effort to reduce the gap between formal methods and different engineering communities such as mechanical, chemical and engineering management.
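
As background for the transfer-function remark above (standard signal-flow-graph theory, not a contribution of this paper), Mason's gain formula gives the transfer function of a signal-flow graph as

\[ T \;=\; \frac{1}{\Delta}\sum_{k} P_k\,\Delta_k, \qquad \Delta \;=\; 1 - \sum_i L_i + \sum_{i,j\ \text{non-touching}} L_i L_j - \cdots, \]

where the $P_k$ are the forward-path gains, the $L_i$ are the loop gains, and $\Delta_k$ is $\Delta$ restricted to the part of the graph not touching path $k$.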

15:00
An Example Demonstrating the Requirement of Fully Formal Verification Method

ABSTRACT. Concurrency and synchronization are important behaviors of hardware and software systems. Finding defects late in the system development process is no longer acceptable in industrial applications because of the high capital investment in modern complex systems. This paper demonstrates system verification by two approaches: model checking and proof checking. Verification of a concurrently running architecture is demonstrated in both approaches to exhibit the need for a fully formal verification method in order to obtain better hardware and software quality. To support this, we describe how the proof checker is used for complete verification of a system for inputs of any size and complexity.

14:30-16:00 Session 31R (DCM)
Location: FH, Seminarraum 138B
14:30
Cellular Automata are Generic
SPEAKER: unknown

ABSTRACT. We show that any effective algorithm over any arbitrary domain can be simulated by a particular kind of dynamic cellular automaton.

15:00
Quantum Circuits for the Unitary Permutation Problem
SPEAKER: unknown

ABSTRACT. We consider the Unitary Permutation problem which consists, given n unitary gates U_1,…,U_n and a permutation σ of {1,…,n}, in applying the unitary gates in the order specified by σ, i.e. in performing U_σ(n)…U_σ(1).

This problem has been introduced and investigated by Colnaghi et al., where two models of computation are considered. The first is the (standard) model of query complexity: the complexity measure is the number of calls to any of the unitary gates U_i in a quantum circuit which solves the problem. The second model provides quantum switches and treats unitary transformations as inputs of second order. In that case the complexity measure is the number of quantum switches. In this seminal paper it has been shown that the problem can be solved within n^2 calls in the query model and n(n-1)/2 quantum switches in the new model.

We refine these results by proving that $n\log_2(n)+\Theta(n)$ quantum switches are necessary and sufficient to solve this problem, whereas $n^2-2n+4$ calls are sufficient to solve it in the standard quantum circuit model. We prove, with an additional assumption on the family of gates used in the circuits, that $n^2-o(n^{7/4+r})$ queries are required, for any $r>0$. The upper and lower bounds for the standard quantum circuit model are established by pointing out connections with the permutation-as-substring problem introduced by Karp.
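
To make the task concrete, here is a toy sketch (ours, not from the paper) of the naive procedure whose cost the complexity measures count: applying the gates in the order given by sigma, one call per gate.

    # Toy illustration of the Unitary Permutation task for n = 3 (not from
    # the paper): compute U_sigma(n) ... U_sigma(1), i.e. apply U_sigma(1) first.
    import numpy as np

    U = {1: np.array([[0, 1], [1, 0]], dtype=complex),                # Pauli X
         2: np.array([[1, 0], [0, -1]], dtype=complex),               # Pauli Z
         3: np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)}  # Hadamard
    sigma = {1: 2, 2: 3, 3: 1}           # a permutation of {1, 2, 3}

    circuit = np.eye(2, dtype=complex)
    for k in sorted(sigma):              # k = 1, ..., n
        circuit = U[sigma[k]] @ circuit  # left-multiply: U_sigma(k) acts next
    # circuit now equals U_1 @ U_3 @ U_2; the paper's question is how few
    # gate calls (or quantum switches) suffice when sigma is an input.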

15:30
Propositional Logics Complexity and the sub-formula Property

ABSTRACT. In 1979 Richard Statman proved, using proof theory, that the purely implicational fragment of Intuitionistic Logic (\mil) is PSPACE-complete. He showed a polynomially bounded translation from full Intuitionistic Propositional Logic into its implicational fragment. By the PSPACE-completeness of S4, proved by Ladner, and the G\"{o}del translation from S4 into Intuitionistic Logic, the PSPACE-completeness of \mil~follows. The sub-formula principle for a deductive system for a logic \Log{L} states that whenever $\{\gamma_1,\ldots,\gamma_k\}\vdash_{\Log{L}}\alpha$ there is a proof in which each formula occurrence is either a sub-formula of $\alpha$ or of some $\gamma_i$. In this work we extend Statman's result and show that any propositional (possibly modal) structural logic satisfying a particular statement of the sub-formula principle is PSPACE-complete. As a consequence, EXPTIME-complete propositional logics, such as PDL and the common-knowledge epistemic logic with at least 2 agents, satisfy this particular sub-formula principle if and only if PSPACE=EXPTIME.

14:30-16:00 Session 31S (CLC)
Location: FH, Dissertantenraum E104
14:30
Infinitary Classical Logic: Recursive Equations and Interactive Semantics

ABSTRACT. In this paper, we present an interactive semantics for derivations in an infinitary extension of classical logic. The formulas of our language are possibly infinitary trees labeled by propositional variables and logical connectives. We show that in our setting every recursive formula equation has a unique solution. As for derivations, we use an infinitary variant of Tait-calculus to derive sequents.

The interactive semantics for derivations that we introduce in this article is presented as a debate (interaction tree) between a test T (derivation candidate, Proponent) and an environment ¬S (negation of a sequent, Opponent). We show a completeness theorem for derivations, that we call interactive completeness theorem: the interaction between T (test) and ¬S (environment) does not produce errors (i.e., Proponent wins) just in case T comes from a syntactical derivation of S.

15:00
Separable Sequent Calculus for First-order Classical Logic (Work in Progress)

ABSTRACT. This paper presents Separable Sequent Calculus as an extension of Gentzen's first-order classical sequent calculus with Herbrand-style structural rules (subformula contraction and weakening). Every proof can be transformed into separated form, in which all logical rules precede all structural rules, a result in the spirit of Gentzen's midsequent theorem or sharpened Hauptsatz (where all propositional rules precede all quantifier rules), and of Herbrand's theorem.

Every separated proof has a direct graph-theoretic semantics as a *combinatorial proof*. Since a standard Gentzen proof is a special case of a separable proof, every Gentzen proof has a combinatorial proof semantics via its separated form. The semantics identifies sequent proofs which differ by rule transpositions (such as pushing a contraction below a weakening). Since combinatorial proofs can be verified in polynomial time (conjectured linear time), they constitute a syntax in their own right, arguably a canonical abstraction of sequent calculus modulo inessential rule transposition.

15:20
Cut-Elimination in Schematic Proofs and Herbrand Sequents
SPEAKER: unknown

ABSTRACT. In a recent paper [2], a procedure was developed extending the first-order CERES method [1] so that it can handle cut-elimination in a schematic first-order calculus. The goal of this work was to circumvent the problems reductive cut-elimination methods face when the LK calculus is extended by an induction rule. The schematic calculus can be considered a replacement for certain types of induction. In this work, we used the schematic CERES method to analyse a proof formalized in a schematic sequent calculus. The statement being proved is a simple mathematical statement about total functions with a finite range. The goal of proof analysis using the first-order CERES method [1] has been to produce an ACNF (Atomic Cut Normal Form) as the final output of cut-elimination. However, due to the complexity of the schematic method, the value and usefulness of an ACNF quickly vanishes: it is not easily parsable by humans. The Herbrand sequent corresponding to an ACNF turned out to be a valuable, compact and informative structure, which may be considered the essence of a cut-free proof in first-order logic [3]. We provide a method for extracting a schematic Herbrand sequent from the formalized proof and hint at how, in future work, the procedure can be generalized to handle a class of proofs via a suitable schematic language and calculus, rather than just a particular instance.

15:40
Stratified Nested Linear Logic

ABSTRACT. Implicit computational complexity (ICC) is the field aiming to characterize complexity classes by syntactic restrictions on models of computation. For example, Light Linear Logic (LLL) is a linear logic subsystem characterizing Ptime. Finding more expressive characterizations of Ptime is a big challenge in ICC. This led, for instance, to L^4 and MS, which are orthogonal extensions of LLL.

Here, we define a linear logic subsystem, Stratified Nested Linear Logic (SNLL), which extends LLL, L^4 and MS. This system is not just the union of those systems, as it can type lambda-terms that are typed by none of them.

15:00-16:00 Session 32: Invited talk (LOLA)
Location: FH, Seminarraum 107
15:00
On Machines, Virtual Memory and Successful System Verification

ABSTRACT. Formal verification of programs and systems lacks a concept of objective and absolute correctness. One may only claim correctness with respect to a model. How detailed can we make this model? At which point does our desire for completeness infringe upon project tractability? For operating system verification in particular, overly naïve models result in poor performance and ineffective use of hardware features. On the other hand, overly complex models result in human resource exhaustion and a graveyard of ambitious projects that burnt out in the race for realism.

In this talk, I will discuss these tradeoffs based upon my long experience with the L4.verified project, a successful 20 person-year effort to verify the functional correctness of the seL4 microkernel. I will also discuss my own efforts in searching for an elegant semantics for reasoning about virtual memory based on separation logic.

15:30-16:00 Session 33B: Regular talk (IWC)
Location: FH, Seminarraum 303
15:30
Confluence of linear rewriting and homology of algebras

ABSTRACT. We introduce the notion of higher dimensional linear rewriting system for presentations of algebras generalizing the notion of noncommutative Gröbner bases. We show how to use this notion to compute homological invariants of associative algebras, related to confluence properties of presentations of these algebras. Our method constitutes a new application of confluence in algebra.

15:30-16:00 Session 33C: Formalization (POS)
Location: FH, Hörsaal 3
15:30
Generic CDCL -- A Formalization of Modern Propositional Satisfiability Solvers

ABSTRACT. Modern propositional satisfiability (or SAT) solvers are very powerful due to recent developments in the underlying data structures, the heuristics used to guide the search, the deduction techniques to infer knowledge, and the formula simplification techniques that are used during pre- and inprocessing. However, when all these techniques are put together, the soundness of the combined algorithm is not guaranteed any more, and understanding the complex dependencies becomes non-trivial. In this paper we present a small set of rules that allows one to model modern SAT solvers in terms of a state transition system. With these rules all techniques applied in modern SAT solvers can be modeled adequately. Furthermore, we show that this set of rules is sound, complete and confluent. Finally, we compare the proposed transition system to related systems, and show how widely used solving techniques can be modeled.

15:30-16:00 Session 33E: Contributed Talk: Complexity of SAT algorithms (PC)
Location: FH, Seminarraum 101B
15:30
Lower bounds for splittings by linear combinations

ABSTRACT. A typical DPLL algorithm for the Boolean satisfiability problem splits the input problem into two by assigning the two possible values to a variable; then it simplifies the two resulting formulas. In the talk we consider an extension of the DPLL paradigm. Our algorithms can split by an arbitrary linear combination of variables modulo two. These algorithms quickly solve formulas that explicitly encode linear systems modulo two, which were used for proving exponential lower bounds for conventional DPLL algorithms.

We prove exponential lower bounds on the running time of DPLL with splitting by linear combinations on 2-fold Tseitin formulas and on formulas that encode the pigeonhole principle.

Raz and Tzameret introduced a system R(lin) that operates with disjunctions of linear equalities with integer coefficients. We consider an extension of the resolution proof system that operates with disjunctions of linear equalities over GF(2); we call this system Res-Lin. Res-Lin can be p-simulated in R(lin) but currently we do not know any superpolynomial lower bounds in R(lin). Tree-like proofs in Res-Lin are equivalent to the behavior of our algorithms on unsatisfiable instances. We prove that Res-Lin is implication complete and also prove that Res-Lin is polynomially equivalent to its semantic version.
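
As a toy illustration of the splitting rule (ours, not the authors' algorithm), consider the case where the formula itself is a system of XOR constraints, as in the linear-system benchmarks mentioned above: a split on a linear form f recurses on F with f = 0 and on F with f = 1, and each branch can be closed by Gaussian elimination over GF(2).

    # Toy sketch (illustrative only): splitting by a GF(2) linear combination.
    # A constraint is (frozenset_of_variables, parity), meaning the XOR of the
    # variables equals the parity bit.

    def gauss_sat(equations):
        """Decide consistency of a GF(2) linear system by elimination."""
        reduced = []                        # rows (pivot, vars, parity)
        for vs, p in equations:
            vs = set(vs)
            for pivot, rvs, rp in reduced:  # reduce against earlier pivots
                if pivot in vs:
                    vs ^= rvs               # symmetric difference = GF(2) sum
                    p ^= rp
            if vs:
                reduced.append((min(vs), vs, p))
            elif p == 1:
                return False                # derived the contradiction 0 = 1
        return True

    def split_on_linear(equations, f):
        """The splitting step: try f = 0, then f = 1."""
        return any(gauss_sat(equations | {(f, rhs)}) for rhs in (0, 1))

    # The odd cycle x1+x2 = x2+x3 = x1+x3 = 1 (mod 2) is unsatisfiable:
    F = {(frozenset({1, 2}), 1), (frozenset({2, 3}), 1), (frozenset({1, 3}), 1)}
    print(split_on_linear(F, frozenset({1, 2, 3})))    # False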

15:30-16:00 Session 33F: Proof Nets (TermGraph)
Location: FH, Seminarraum 325/2
15:30
Injectivity of Relational Semantics for (Connected) MELL Proof-Nets via Taylor Expansion

ABSTRACT. We show that: (1) the Taylor expansion of a cut-free MELL proof-structure R with atomic axioms is the (most informative part of the) relational semantics of R; (2) every (connected) MELL proof-net is uniquely determined by the element of order 2 of its Taylor expansion; (3) the relational semantics is injective for (connected) MELL proof-nets.

16:00-16:30Coffee Break
16:30-17:45 Session 34A: Invited Talk and Closing (LCC)
Location: FH, Zeichensaal 3
16:30
The Variable Hierarchy on Ordered Graphs
17:30
Closing Remarks
SPEAKER: Neil Immerman
16:30-18:30 Session 34B (Linearity)
Location: FH, Seminarraum 136
16:30
Ludics without Designs I: Triads

ABSTRACT. In this paper, we introduce the concept of triad. Using this notion, we study, revisit, discover and rediscover some basic properties of ludics from a very general point of view.

17:00
Wave-Style Token Machines and Quantum Lambda Calculi
SPEAKER: unknown

ABSTRACT. Particle-style token machines are a way to interpret proofs and programs written following the principles of linear logic. In this paper, we show that token machines also make sense when the programs at hand are those of a simple quantum lambda-calculus. This, however, requires generalizing the concept of a token machine to one in which more than one particle travels around the term at the same time. The presence of multiple tokens is intimately related to entanglement and allows us to give a simple operational semantics to the calculus, coherently with the principles of quantum computation.

17:30
Geometry of Resource Interaction – A Minimalist Approach
SPEAKER: Marco Solieri

ABSTRACT. The Resource λ-calculus (RC) is a variation of the λ-calculus where arguments can be superposed and must be linearly used. Hence it is a model for non-deterministic and linear programming languages, and the target language of the Taylor expansion of λ-terms.

In a strictly typed restriction of RC, we study the notion of path persistency and we define a Geometry of Interaction inspired by the dynamic algebra. We show that the GoI is invariant under reduction and characterises persistency by means of path regularity. Our construction is also complexity aware, in the sense that it is able to count the non-zero addends in the normal form of a resource term.

18:00
A new point of view on the Taylor expansion of proof-nets and uniformity

ABSTRACT. We introduce (in the multiplicative and exponential fragment of linear logic) the notion of proto-differential net. We then define a coherence relation on the set of proto-differential nets and prove the analogue of the result proven for differential lambda-terms, which does not hold for differential nets: a set of proto-differential nets is a subset of the proto-Taylor expansion of some proof-structure if and only if it is a clique.

16:30-18:00 Session 34C: Conditional Term Rewriting, Complexity and WPTE Closing (WPTE)
Location: FH, Hörsaal 4
16:30
Notes on Structure-Preserving Transformations of Conditional Term Rewrite Systems
SPEAKER: Karl Gmeiner

ABSTRACT. Transforming conditional term rewrite systems (CTRSs) into unconditional systems (TRSs) is a common approach to analyzing properties of CTRSs via the simpler framework of unconditional rewriting. In the past many different transformations have been introduced for this purpose. One class of transformations, the so-called unravelings, has been analyzed extensively.

In this paper we provide an overview of another class of transformations that we call structure-preserving transformations. In these transformations the structure of the conditional rules, in particular their left-hand sides, is preserved, in contrast to unravelings. We provide an overview of transformations of this type and define a new transformation that improves upon previous approaches.

17:00
A Complexity Preserving Transformation from Jinja Bytecode to Rewrite Systems
SPEAKER: unknown

ABSTRACT. We revisit known transformations from object-oriented bytecode programs to rewrite systems from the viewpoint of runtime complexity. Suitably generalising the constructions proposed in the literature, we define an alternative representation of Jinja bytecode (JBC) executions as computation graphs from which we obtain a representation of JBC executions as constrained rewrite systems. We show that the transformation is complexity preserving. We restrict to non-recursive methods and make use of heap shape pre-analyses.

17:30
Closing
16:30-18:00 Session 34D: Isabelle workshop (Isabelle)
Location: FH, Seminarraum 138C
16:30
Using Isabelle/HOL to Develop and Maintain Separation Invariants for an Operating System (Extended Abstract)

ABSTRACT. We describe work on an Isabelle/HOL model for the specification of a separation kernel done within the EURO-MILS (http://www.euromils.eu/) project. We chose extensible records to specify the state. Using an example theory specifying a group of "event" API calls, we show how lemmas on local state are used to obtain proof obligations for a global separation property.

17:00
A Simpl Shortest Path Checker Verification

ABSTRACT. Using current verification tools to verify complex algorithms in reasonable time is challenging. Certifying algorithms compute not only an output but also a witness certifying that the output is correct. A checker for a certifying algorithm is a simple program that decides whether the witness is correct for a particular input and output. Verification of checkers is feasible and leads to trustworthy computations. 

In previous work, we verified checkers from the algorithmic library LEDA using the interactive theorem prover Isabelle/HOL as a backend to the automatic code verifier VCC. More recently, we showed that verification can be carried out completely within Isabelle/HOL, and compared this to the previous approach, concluding that the more recent approach is more trustworthy with comparable effort. Here, we use the more recent framework and implement a shortest-path checker for graphs with nonnegative edge weights in the imperative pseudo-code language Simpl. Moreover, we formally verify the checker using Isabelle/HOL.
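
For illustration only, here is a plain-Python analogue of such a checker (ours, not the verified Simpl code), simplified to strictly positive edge weights; the witness is the map d of claimed distances:

    # Toy certifying checker for single-source shortest paths (illustrative;
    # the paper treats nonnegative weights and a verified Simpl program).
    # graph: {u: [(v, w), ...]} with w > 0; d[v] = float("inf") if unreachable.
    def check_shortest_paths(graph, s, d):
        inf = float("inf")
        if d[s] != 0 or any(d[v] < 0 for v in graph):
            return False
        for u in graph:
            for v, w in graph[u]:
                if d[u] + w < d[v]:          # triangle inequality violated
                    return False
        # every finite-labelled node except s needs a justifying incoming edge;
        # with positive weights these edges trace a path back to the source
        for v in graph:
            if v != s and d[v] != inf:
                if not any(d[u] + w == d[v]
                           for u in graph for (t, w) in graph[u] if t == v):
                    return False
        return True

    g = {"a": [("b", 1)], "b": [("c", 1)], "c": []}
    print(check_shortest_paths(g, "a", {"a": 0, "b": 1, "c": 2}))   # True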

17:20
Towards Structured Proofs for Program Verification (Ongoing Work)

ABSTRACT. The first step in program verification is often a call to a verification condition generator (VCG) which decomposes a program into a set of logical formulas, the verification conditions. These verification conditions must then be discharged manually.

The process of constructing the verification conditions loses much of the structure of the original program. In this article, we extend the VCG to attach annotations to the verification conditions and hence preserve the program structure. We implemented this approach in the Isabelle theorem prover. This allows us to replace unstructured proof scripts by structured Isar proofs.

17:40
Amortized Complexity Verified
SPEAKER: Tobias Nipkow

ABSTRACT. A framework for the analysis of the amortized complexity of (functional) data structures is formalized in Isabelle/HOL and applied to two non-trivial examples: skew heaps and splay trees. In the same spirit we show that the move-to-front algorithm for the list update problem is 2-competitive.
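
For reference, the potential-function method underlying such amortized analyses (standard textbook material, not specific to the formalization): given a potential $\Phi$ on data-structure states, the amortized cost of the $i$-th operation with actual cost $t_i$ is

\[ a_i \;=\; t_i + \Phi(D_i) - \Phi(D_{i-1}), \qquad\text{so}\qquad \sum_{i=1}^{n} t_i \;=\; \sum_{i=1}^{n} a_i + \Phi(D_0) - \Phi(D_n) \;\le\; \sum_{i=1}^{n} a_i \quad\text{whenever } \Phi(D_n) \ge \Phi(D_0), \]

and bounds on the amortized costs therefore bound the total running time.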

16:30-17:30 Session 34E: Implementations (TermGraph)
Location: FH, Seminarraum 325/2
16:30
An Implementation Model for Interaction Nets
SPEAKER: Shinya Sato

ABSTRACT. To study implementations and optimisations of interaction net systems we propose a calculus that allows us to reason about nets, a concrete data structure that is in close correspondence with the calculus, and a low-level language to create and manipulate this data structure. Together, these let us describe the compilation process for interaction nets, reason about the behaviours of the implementation, and study its efficiency and properties.

17:00
Fixed Point Theory for Consistent Rewriting in Logic of Action with Reverse

ABSTRACT. This paper studies an action structure in which actions are derived recursively by rewriting; actions are abstract, as in process algebra, and each action has a reverse action. A representation method for action denotations with action defaults is given by fixed point theory, where the denotations account for the coexistence of actions and their reverses. As a second method, the reverse action is regarded as a multiplicative inverse in a semiring, which can be formed by algebraic composition of action structures to express, primarily, sequences of actions together with the action denotations. In both approaches, consistency is respected by classifying each action and its reverse into different sorts.

16:30-18:10 Session 34F: Contributed Talks (GSB)
Location: FH, Seminarraum 138A
16:30
Link formulas in implication fragments of substructural logics

ABSTRACT. Two well-known implication fragments of substructural logics are BCI (fragment of logic without contraction and weakening) and BCK (fragment of logic without contraction). The technique of link formulas was developed by the present author first for BCI, as a tool to solve an open problem. Then, a modification of the technique turned out to work for BCK and delivered a proof of its structural incompleteness. The technique can be used to obtain relatively easy proofs of admissibility of certain rules in BCK and BCI, yielding structural incompleteness proofs of several logics in the vicinity.

I will present the technique in a uniform way, applicable to BCI and BCK, as well as to the implication fragments of the relevant logic R and the intuitionistic logic. Some applications (old and new) will be presented to illustrate the technique.

16:55
A proof theoretic approach to standard completeness
SPEAKER: Paolo Baldi

ABSTRACT. We show extensions of the proof theoretic approach to standard completeness, based on the elimination of the density rule in suitable hypersequent calculi.

17:20
On Affine Logic and Łukasiewicz Logic
SPEAKER: Rob Arthan

ABSTRACT. We report on a study of intuitionistic fragments of Łukasiewicz logic. We present, in a structured human readable form, syntactic proofs of a number of important formulas originally found with the assistance of the Prover9 automated theorem prover. The identities include homomorphism properties that we put to work in an analysis of the double negation translations of Kolmogorov, Gödel, Gentzen and Glivenko.

17:45
Simpler Proof Net Quantifiers: Unification Nets (Work in Progress)

ABSTRACT. Girard formulated an approach to proof net quantifiers over a series of three papers. Since the approach uses explicit witnesses for existential quantifiers, it suffers from two problems: limited proof identification and non-local cut elimination. This paper introduces a more abstract approach, unification nets, which solves these problems by leaving existential witnesses implicit: a unification net is merely an axiom linking.

16:30-18:00 Session 34G: QBF Applications (2), The QBF Gallery 2014 (QBF)
Location: FH, Hörsaal 2
16:30
Synthesis of distributed systems using QBF

ABSTRACT. We revisit the interactive consistency problem introduced by Pease, Shostak and Lamport. We first show that their algorithm does not achieve interactive consistency if faults are transient, even if faults are non-malicious. We then present an algorithm that achieves interactive consistency in the presence of non-malicious, asymmetric and transient faults, but only under an additional guaranteed delayed ack assumption.

We discovered our algorithm using an automated synthesis technique that is based on bounded model checking and QBF solving. Our synthesis technique is general and simple, and it is a promising approach for synthesizing distributed algorithms. We also illustrate the use of our synthesis approach on a self-stabilizing mutual exclusion algorithm introduced by Dijkstra.

17:00
The QBF Gallery 2014
SPEAKER: unknown
16:30-18:30 Session 34H (HOL)
Location: FH, CAD 2
16:30
Discussion Session
SPEAKER: Everyone

ABSTRACT. Interspersed with our hack sessions on implementing and discussing bugs and new features for HOL4, we will have time for discussion of e.g. future directions for HOL workshops, cool applications and theorem proving ideas, impromptu talks, Q&A, etc.

17:00
Hack Session 3
SPEAKER: Everyone
16:30-18:15 Session 34I: Regular talks, confluence competition (CoCo) and closing (IWC)
Location: FH, Seminarraum 303
16:30
Normalization Equivalence of Rewrite Systems

ABSTRACT. Métivier (1983) proved that every confluent and terminating rewrite system can be transformed into an equivalent canonical rewrite system. He also proved that equivalent canonical rewrite systems which are compatible with the same reduction order are unique up to variable renaming. In this note we present simple and formalized proofs of these results. The latter result is generalized to the uniqueness of normalization equivalent reduced rewrite systems.

17:00
Non-E-overlapping and weakly shallow TRSs are confluent (Extended abstract)

ABSTRACT. This paper briefly sketches that non-E-overlapping and weakly shallow TRSs are confluent, by extending the reduction-graph technique of our previous work (M. Sakai and M. Ogawa. Weakly-non-overlapping non-collapsing shallow term rewriting systems are confluent. Information Processing Letters vol. 110, 2010) with constructor expansion. A term is weakly shallow if each defined function symbol appears either at the root or in the ground subterms, and a TRS is weakly shallow if both sides of its rules are weakly shallow. The non-E-overlapping property is undecidable for weakly shallow TRSs; a decidable sufficient condition is the strongly non-overlapping condition. A Turing machine can be simulated by a weakly shallow TRS (p. 27 in J.W. Klop, Term Rewriting Systems, 1993); thus the word problem is undecidable, in contrast to shallow TRSs.

17:30
Confluence Competition
SPEAKER: Harald Zankl

ABSTRACT. The 3rd Confluence Competition (CoCo 2014) runs live. The competition is part of the FLoC'14 Olympic Games.

16:30-17:10 Session 34J: Reasoners (ORE)
Location: FH, Seminarraum 101A
16:30
Mini-ME 2.0: powering the Semantic Web of Things

ABSTRACT. This paper presents an updated version of Mini-ME, a mobile reasoner for the Semantic Web of Things. Building upon its existing strengths, i.e. computational efficiency and support for non-standard inference services, novel features have been added. In particular, the Concept Covering reasoning task for request answering via service/resource composition has been included among the supported inferences, Protégé plugins have been released, and support for the OWLlink protocol is now available. As a proof of concept, two use cases are presented, both in the mobile and ubiquitous computing field: a wireless semantic sensor network and a mobile semantic augmented reality scenario.

16:50
Incremental and Persistent Reasoning in FaCT++

ABSTRACT. Reasoning in complex DLs is well known to be expensive. However, in numerous application scenarios, the ontology in use is either never modified at all (e.g., in query answering), or the amount of updates is negligible in comparison with the whole ontology (e.g., minor manual edits, addition of a few individuals). In order to efficiently support these scenarios, FaCT++ implements the following two techniques: persistent and incremental reasoning. In persistent reasoning mode, after classification, the reasoner saves its internal state, including computed information (e.g., concept taxonomy) on a persistent medium; the next use of the ontology will not require classification to be performed from scratch, but just reading an input file. In incremental reasoning mode, the reasoner is notified of a change and identifies a (usually small) portion of its internal state that is affected by the change. This is the only part that requires recomputation. This approach can lead to greater overall efficiency, when compared with having to reload and reclassify the whole ontology.

16:30-18:00 Session 34K (UNIF)
Location: FH, Seminarraum 104
16:30
Constraint Manipulation in SGGS
SPEAKER: unknown

ABSTRACT. SGGS (Semantically-Guided Goal-Sensitive theorem proving) is a clausal theorem-proving method, with a seemingly rare combination of properties: it is first order, DPLL-style model based, semantically guided, goal sensitive, and proof confluent. SGGS works with constrained clauses, and uses a sequence of constrained clauses to represent a tentative model of the given set of clauses. A basic building block in SGGS inferences is splitting, which partitions a clause into clauses that have the same set of ground instances. Splitting introduces constraints and their manipulation, which is the subject of this paper.

17:00
Two-sided unification is NP-complete
SPEAKER: unknown

ABSTRACT. It is generally accepted that to unify a pair of substitutions $\theta_1$ and $\theta_2$ means to find a pair of substitutions $\eta'$ and $\eta''$ such that the compositions $\theta_1\eta'$ and $\theta_2\eta''$ are the same. Unification is thus the problem of solving linear equations of the form $\theta_1X=\theta_2Y$ in the semigroup of substitutions. But some other linear equations on substitutions may also be viewed as less common variants of the unification problem. In this paper we introduce two-sided unification as the process of bringing a given substitution $\theta_1$ to another given substitution $\theta_2$ from both sides, by giving a solution to an equation $X\theta_1Y=\theta_2$. Two-sided unification finds applications in software refactoring as a means for extracting instances of library subroutines from arbitrary pieces of program code. In this paper we study the complexity of two-sided unification and show that this problem is NP-complete by reducing the bounded tiling problem to it.
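
A toy instance for illustration (ours, not from the paper): take $\theta_1 = \{x \mapsto f(y)\}$ and $\theta_2 = \{z \mapsto f(g(a))\}$, comparing both sides on the variable $z$. Then $X = \{z \mapsto x\}$ and $Y = \{y \mapsto g(a)\}$ solve $X\theta_1Y = \theta_2$: applying $X$, then $\theta_1$, then $Y$ sends $z \mapsto x \mapsto f(y) \mapsto f(g(a))$, as required.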

17:30
Nominal Anti-Unification
SPEAKER: unknown

ABSTRACT. We study nominal anti-unification, which is concerned with computing least general generalizations for given terms-in-context. In general, the problem does not have a least general solution, but if the set of atoms permitted in generalizations is finite, then there exists a least general generalization which is unique modulo variable renaming and alpha-equivalence. We present an algorithm that computes it. The algorithm relies on a subalgorithm that constructively decides equivariance between two terms-in-context.

16:30-18:00 Session 34L: Applications and closing (POS)
Location: FH, Hörsaal 3
16:30
iDQ: Instantiation-Based DQBF Solving

ABSTRACT. Dependency Quantified Boolean Formulas (DQBF) are obtained by adding Henkin quantifiers to Boolean formulas and have seen growing interest in the last years. Since deciding DQBF is NEXPTIME-complete, efficient ways of solving it would have many practical applications. Still, there is only little work on solving this kind of formula in practice. In this paper, we present an instantiation-based technique to solve DQBF efficiently. Apart from providing a theoretical foundation, we also propose a concrete implementation of our algorithm. Finally, we give a detailed experimental analysis evaluating our prototype iDQ on several DQBF as well as QBF benchmarks.

17:00
Instance-based Selection of CSP Solvers using Short Training
SPEAKER: unknown

ABSTRACT. Many different approaches for solving Constraint Satisfaction Problems (CSP) (and related Constraint Optimization Problems (COP)) exist. However, no single solver performs well on all classes of problems, so many portfolio approaches have been developed that select a suitable solver based on simple syntactic features of the input CSP instance. Unlike other existing portfolio approaches for CSP, our methodology is based on training with very short timeouts, significantly reducing overall training time. Still, thorough evaluation has been performed on large publicly available corpora, and our portfolio method gives good results: it improves upon the efficiency of the single state-of-the-art tools used in the comparison and is comparable to classical methods that use long timeouts during the training phase.

17:30
Incremental SAT-based Method with Native Boolean Cardinality Handling for the Hamiltonian Cycle Problem
SPEAKER: Takehide Soh

ABSTRACT. The Hamiltonian cycle problem (HCP) is the problem of finding a spanning cycle in a given graph. HCP is NP-complete and has long been known as an important problem due to its close relation to the travelling salesman problem (TSP), which can be seen as its optimization variant: finding a minimum-cost cycle.

In this paper, we propose an incremental SAT-based method for solving HCP. The growth in the number of clauses often prevents SAT-based methods from being scalable. Our method reduces the number of encoded clauses by relaxing constraints and by using native handling of cardinality constraints, realized by a special propagator in Sat4j.

Experimental evaluations are carried out on four benchmark sets, and we compare our incremental SAT-based method with an existing eager SAT-based method and with specialized methods.
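
The following Python skeleton shows the generic incremental pattern (ours, schematic: the paper's relaxation and the Sat4j cardinality propagator are not reproduced; solve() stands for any incremental SAT solver returning a set of true variables, or None if unsatisfiable):

    # Lazy subcycle elimination for HCP. Variable x[i, j] is true iff the
    # cycle uses edge i -> j; degree constraints force a disjoint cycle cover,
    # and blocking clauses incrementally forbid non-spanning cycles.
    def hamiltonian_cycle(nodes, edges, solve):
        x = {(i, j): k + 1 for k, (i, j) in enumerate(edges)}
        clauses = []
        for v in nodes:
            out = [x[i, j] for (i, j) in edges if i == v]
            inc = [x[i, j] for (i, j) in edges if j == v]
            clauses.append(out)  # at least one outgoing edge, and at most one:
            clauses += [[-a, -b] for a in out for b in out if a < b]
            clauses.append(inc)  # likewise for incoming edges
            clauses += [[-a, -b] for a in inc for b in inc if a < b]
        while True:
            model = solve(clauses)
            if model is None:
                return None                  # no Hamiltonian cycle exists
            succ = {i: j for (i, j), var in x.items() if var in model}
            start = next(iter(nodes))        # follow the cycle through `start`
            cycle, v = [start], succ[start]
            while v != start:
                cycle.append(v)
                v = succ[v]
            if len(cycle) == len(nodes):
                return cycle                 # spanning cycle found
            # block this subcycle and re-solve, keeping learnt clauses
            clauses.append([-x[v, succ[v]] for v in cycle])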

16:30-17:30 Session 34M: Contributed Talks: Proof Complexity (PC)
Location: FH, Seminarraum 101B
16:30
Narrow Proofs May Be Maximally Long

ABSTRACT. We prove that there are 3-CNF formulas over n variables that can be refuted in resolution in width w but require resolution proofs of size $n^{\Omega(w)}$. This shows that the simple counting argument that any formula refutable in width w must have a proof of size $n^{O(w)}$ is essentially tight. Moreover, our lower bounds can be generalized to polynomial calculus resolution (PCR) and Sherali-Adams, implying that the corresponding size upper bounds in terms of degree and rank are tight as well. Our results do not extend all the way to Lasserre, however - the formulas we study have Lasserre proofs of constant rank and size polynomial in both n and w.
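
For context, the counting argument referred to above can be reconstructed as follows (standard reasoning, ours): the number of distinct clauses of width at most $w$ over $n$ variables is

\[ \sum_{i=0}^{w} \binom{n}{i}\,2^{i} \;\le\; (w+1)\,(2n)^{w} \;=\; n^{O(w)}, \]

so a width-$w$ refutation never needs to repeat a clause and hence has size at most $n^{O(w)}$.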

17:00
Parity games and propositional proofs
SPEAKER: Neil Thapen

ABSTRACT. Parity games are a class of two player games played by moving a token around a finite graph. They have important applications in automata theory, logic and verification. The main computational problem for such games is to decide which player has a winning strategy. This problem is reducible to a search problem in the intersection of the classes PLS and PPAD, but is not known to be in P, despite intensive research work.

A propositional proof system is weakly automatizable if there is a polynomial time algorithm which separates satisfiable formulas from formulas which have a short refutation in the system, with respect to a given length bound. We show that if resolution is weakly automatizable, then the decision problem for parity games is in P.

We also give simple proofs that the same holds for depth-1 propositional calculus (where resolution has depth 0) with respect to the related classes of mean payoff games and simple stochastic games.

We define a new type of combinatorial game and prove that resolution is weakly automatizable if and only if one can separate, by a set decidable in polynomial time, the games in which the first player has a positional winning strategy from the games in which the second player has a positional winning strategy.

Our main technique is to show that a suitable weak bounded arithmetic theory proves that both players in a game cannot simultaneously have a winning strategy, and then to translate this proof into propositional form.

16:30-18:00 Session 34N: Discussion on RTA, Open Problems, and Business Meeting (IFIP-WG16)
Location: FH, Seminarraum 101C
16:30
Discussion on the Future of RTA (+ TLCA)
SPEAKER: Georg Moser
17:00
Discussion on the Open Problems List in Rewriting
17:30
Business Meeting
SPEAKER: Jürgen Giesl
16:30-17:30 Session 34P (LSB)
Location: FH, Seminarraum 134A
16:30
The role of domain specific languages in simulation studies - Analyzing the impact of membrane dynamics on the Wnt signaling pathway

ABSTRACT. In a simulation study aimed at developing a model that answers questions of interest about a target system, phases of model development and refinement are interwoven with phases of experimentation. During this process, a variety of experiments are typically executed, including parameter scans, optimization, or sensitivity analysis. Domain-specific languages, their expressiveness and succinctness, play a central role, not only to describe and revise a model, but also to specify and generate experiments, and thus to facilitate their reproduction. We will elucidate this role with the external domain-specific language ML-Rules (a rule-based multi-level modeling language for biochemical systems), SESSL (an internal domain-specific language for specifying simulation experiments), and a case study analyzing the impact of membrane dynamics on the Wnt signaling pathway.

16:30-18:00 Session 34R: Contributed Talks (DTP)
Location: FH, Seminarraum 325/1
16:30
Scrapping Your Dependently-Typed Boilerplate is Hard

ABSTRACT. More complex programs that model programming languages or specific domain information end up with large nested data structures, and as a consequence changing even simple properties requires a full traversal of the structure. The Haskell community has developed various techniques that try to reduce this boilerplate by providing generic frameworks for querying and traversal, such as Scrap Your Boilerplate (SYB) [1] and Uniplate [2]. Since many of the modern dependently-typed languages such as Agda [3] or Idris [4] are inspired by Haskell, it would be reasonable to expect that such frameworks could be provided in these languages. However, as I will show in my presentation, there are some unresolved issues that prevent these approaches from being used as-is.

17:00
On a style of presentation of type systems

ABSTRACT. We outline a way of formulating and explaining type systems/functional programming languages that we believe could be both logically well-founded and didactic. We propose to discuss the details for both simple and dependent type systems.

17:30
Sequential decision problems and a computational theory of avoidability
SPEAKER: unknown

ABSTRACT. Please see the uploaded PDF file.

16:30-18:00 Session 34S (DCM)
Location: FH, Seminarraum 138B
16:30
Probabilistic Types and Function Overloading

ABSTRACT. We present a theory of types where, besides the function type, we introduce a new type constructor for probabilistic choice. This constructor allows for the selection of a particular type among a finite set of options according to a given probability. This theory induces a type assignment system for a probabilistic $\lambda$-calculus which we show to be sound with respect to a probabilistic term reduction. We also illustrate the use of this type system in order to express a new form of ad hoc polymorphism, which allows for a probabilistic selection of the code of a polymorphic function to be executed.

17:00
Differential privacy at work: Verification of approximate probabilistic programs and models for choosing epsilon
SPEAKER: unknown

ABSTRACT. This submission surveys some ongoing work on the application of differential privacy—a strong probabilistic privacy guarantee that has recently been under intensive study. We will discuss ideas introduced in previous works, with the overall theme of putting differential privacy to practical use. We have three main goals. First, we wish to explain the concept of differential privacy, and give an idea of why it is useful. Next, from a verification perspective, differential privacy can be seen as an approximate notion of equivalence between probabilistic programs. So, our second goal is to direct attention towards problems related to the probabilistic programming model, which has so far been less explored. Finally, verifying the differential privacy property is only half the challenge—as we will see, the property is parameterized by a numeric value ε, which controls the strength of the guarantee. Hence, a program might be successfully verified to be differentially private, but for an ε that gives a uselessly weak guarantee. Thus, our final, more open-ended goal is to investigate which values of ε give practically useful guarantees, and what this means for verification.
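
For reference, the guarantee under discussion (the standard definition of Dwork et al., reconstructed here): a randomized mechanism $M$ is $\varepsilon$-differentially private if for all datasets $D, D'$ differing in a single record and all sets $S$ of outputs,

\[ \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S], \]

so a smaller $\varepsilon$ yields a stronger guarantee, typically at the price of noisier answers.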

17:30
Interactive Particle Systems and Random Walks on Hypergraphs
SPEAKER: unknown

ABSTRACT. We study hypergraph analogues of interacting particle systems and random walks, notably generalizations of coalescing and annihilating random walks. Their definition is motivated by the problem of analyzing the expected running time of a local search procedure for the $k$-XOR SAT problem, as well as a certain constrained triad dynamics in the theory of social balance.

16:30-17:30 Session 34T: Contributed talks (LOLA)
Location: FH, Seminarraum 107
16:30
A Cross-Language Framework for Verifying Compiler Optimizations

ABSTRACT. Most compiler correctness efforts, whether based on validation or once-and-for-all verification, are tightly tied to the particular language(s) under consideration. Proof techniques may be replicated for other targets, and parts of the compiler chain may be shared for new input or output languages, but the extent to which common elements can be generalized across multiple targets has not been fully explored. In this paper, we lay out a general approach to specifying and verifying optimizations and transformations on low-level intermediate languages. By generalizing across elements such as concurrent memory models and single-thread operational semantics, we can build a library of facts that can be reused in verifying optimizations for dramatically different target languages, such as stack-machine and register-machine languages.

17:00
Call-by-Value in a Basic Logic for Interaction

ABSTRACT. In game semantics and related approaches to programming language semantics, programs are modelled by interaction dialogues. Such models have recently been used by a number of authors for the design of compilation methods, in particular for applications where resource control is important. The work in this area has focused on call-by-name languages. In this talk I will consider the compilation of call-by-value into a first-order target language by means of an interpretation in an interactive model. The main result is that Plotkin’s standard call-by-value CPS-translation and its soundness proof can be refined to target this intermediate language. This refined CPS-translation amounts to a direct compilation of the source language into a first-order language.
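
For reference, the translation in question (the standard Plotkin call-by-value CPS clauses, reconstructed from the literature):

\[
\begin{aligned}
\overline{x} &= \lambda k.\,k\,x, \\
\overline{\lambda x.M} &= \lambda k.\,k\,(\lambda x.\,\overline{M}), \\
\overline{M\,N} &= \lambda k.\,\overline{M}\,(\lambda m.\,\overline{N}\,(\lambda n.\,m\,n\,k)).
\end{aligned}
\]

The refinement described in the talk targets a first-order intermediate language instead of the untyped lambda-calculus.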

16:30-18:00 Session 34U (CLC)
Location: FH, Dissertantenraum E104
16:30
A sheaf model of the algebraic closure
SPEAKER: unknown

ABSTRACT. In constructive algebra one cannot in general decide the irreducibility of a polynomial over a field K. This poses some problems for showing the existence of the algebraic closure of K. We give a possible constructive interpretation of the existence of the algebraic closure of a field of characteristic 0 by building, in a constructive metatheory, a suitable site model in which such an algebraic closure exists. One can then extract computational content from this model. We give examples of computations based on this model.

17:00
A "Game Semantical" Intuitionistic Realizability Validating Markov's Principle
SPEAKER: unknown

ABSTRACT. We propose a very simple modification of Kreisel's modified realizability in order to computationally realize Markov's Principle in the context of Heyting Arithmetic in all finite types. Intuitively, realizers correspond to arbitrary strategies in Hintikka-Tarski games, while in Kreisel's realizability they can only represent winning strategies. Our definition, however, does not employ game semantical concepts directly and remains in the style of functional interpretations. As term calculus, we employ a purely functional language: Gödel's System T enriched with some syntactic sugar. Our work differs from the Dialectica interpretation in two ways: first, the realizability notion is in the style of Kreisel's and thus less technically involved; second, the term assignment to proofs follows the standard Curry-Howard correspondence for Heyting Arithmetic and thus is considerably simpler.

Note: talk extracted from the paper

http://perso.ens-lyon.fr/federico.aschieri/Types2013AschieriZorzi.pdf

to appear in the Post-Proceedings of TYPES 2013

17:20
Relative Computability and Uniform Continuity of Non-Extensional (aka Multivalued) 'Functions'
SPEAKER: Arno Pauly

ABSTRACT. In this short submission we report on (and advertise :-) our recent work that appeared in Journal of Logic & Analysis vol.5:7 (2013) 1–39:

A computable real function is necessarily continuous; and this remains true for computations relative to any oracle. Conversely, by the Weierstrass Approximation Theorem, every continuous f:[0;1]->R is computable relative to some oracle. In their search for a similar topological characterization of relatively computable MULTIvalued functions f:[0;1]=>R (also known as multi-functions or relations), Brattka and Hertling (1994) considered two notions: weak continuity (which is weaker than relative computability) and strong continuity (which is stronger than relative computability). Observing that UNIFORM continuity plays a crucial role in the Weierstrass Theorem, we propose and compare several notions of uniform continuity for relations. Here, due to the additional quantification over values y in f(x), new ways arise of (linearly) ordering quantifiers -- yet none turns out to be satisfactory. We are thus led to a concept of uniform continuity based on the Henkin quantifier, and prove it necessary for relative computability of compact real relations. In fact, iterating this condition yields a strict hierarchy of notions, each necessary -- and the omega-th level also sufficient -- for relative computability. A refined, quantitative analysis exhibits a similar topological characterization of relative polynomial-time computability.

17:40
Negative Translations and Heyting Algebra Expansions
SPEAKER: Tadeusz Litak

ABSTRACT. I discuss the behaviour of generalizations of the standard negative translations - Kolmogorov, Kuroda, Goedel-Gentzen, Glivenko - in extensions of intuitionistic logic with additional propositional connectives and additional axioms. The main focus is on extensions with a single, unary, normal "box-like" modality. The question, in other words, is: for which axioms does the negative translation faithfully embed the resulting classical modal logic into its intuitionistic counterpart? As it turns out, even the Kolmogorov translation can go wrong with rather natural modal axioms. Nevertheless, there are several sufficient syntactic and semantic criteria which ensure that most of these translations play well with most standard normal modal logics.
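
For reference, the propositional clauses of one of the translations named above (the Goedel-Gentzen negative translation, in its standard form):

\[
\begin{aligned}
p^{N} &= \lnot\lnot p \quad (p \text{ atomic}), &\qquad (A \wedge B)^{N} &= A^{N} \wedge B^{N}, \\
(A \vee B)^{N} &= \lnot(\lnot A^{N} \wedge \lnot B^{N}), &\qquad (A \rightarrow B)^{N} &= A^{N} \rightarrow B^{N},
\end{aligned}
\]

and classically $\vdash A$ iff intuitionistically $\vdash A^{N}$; the question above is when such faithfulness persists in the presence of a modality and extra axioms.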

17:10-17:15 Session 35: ORE closing (ORE)
Location: FH, Seminarraum 101A
18:00-19:00 Session 36 (UNIF)
Location: FH, Seminarraum 104
18:00
A Categorical Perspective on Pattern Unification (Extended Abstract)
SPEAKER: unknown

ABSTRACT. In 1991 Miller described a subset of the higher-order unification problem for the Simply Typed Lambda Calculus which admits most general unifiers, called the pattern fragment. This subset has been extended to more complex type theories and it is still used as the basis of modern unification algorithms in applications like proof search and type inference. Our contribution is a new presentation of the original unification algorithm that focuses on the abstract properties of the operations involved, using category theory as a structuring principle. These properties characterize a class of languages for which the algorithm can be reused.
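
A standard toy example of the fragment (ours, not from the paper): with $x$ and $y$ bound and $F$ a metavariable, the equation $F\,x\,y = g\,y\,x$ is a pattern, since $F$ is applied to distinct bound variables, and it has the most general unifier $F \mapsto \lambda x.\lambda y.\,g\,y\,x$. By contrast, $F\,x\,x = x$ lies outside the fragment and admits the two incomparable unifiers $F \mapsto \lambda a.\lambda b.\,a$ and $F \mapsto \lambda a.\lambda b.\,b$.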

18:30
Towards a better-behaved unification algorithm for Coq
SPEAKER: unknown

ABSTRACT. The unification algorithm is at the heart of a proof assistant like Coq. In particular, it is a key component of the refiner (the algorithm that fills in implicit terms and missing type annotations) and of the application of lemmas. Despite being so important, however, the current unification algorithm of Coq is not properly documented, and it implements certain heuristics that make unification unpredictable and hard to understand.

In this work we discuss some of the problems with the current algorithm and present a new one, built from scratch, which aims at solving these issues. The new algorithm is properly documented, putting us on better ground for a formally verified and optimized algorithm.