VSL 2014: VIENNA SUMMER OF LOGIC 2014
PROGRAM FOR FRIDAY, JULY 18TH, 2014

08:45-10:15 Session 86A: New and advanced Coq features (Coq)
Location: MB, Hörsaal 14A
08:45
Presentation of Three Neat Tricks in Coq 8.5
SPEAKER: Jason Gross

ABSTRACT. Coq 8.5 has a number of new features. It has more powerful universe polymorphism support. It allows tactics to be run at interpretation time to construct terms. The ability to switch from Gallina to Ltac at arbitrary locations nicely complements the constr: notation, which permits the switch from Ltac to Gallina in tactics, and opens up many new possibilities. I propose to present three tricks involving these new features: tactics in terms allow the construction of tactics that recurse under binders; tactics in terms together with typeclasses allow overloading notations based on the type of their arguments; and there is a way to talk about universe levels explicitly, helped along by tactics in terms.
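
A rough Lean 4 analogue may help readers picture the "tactics in terms" feature (an illustrative sketch only, not Coq syntax; Coq 8.5 spells the term-level escape ltac:(...)):

    -- A tactic block `by ...` in term position: the tactic runs while the
    -- surrounding definition is elaborated and must produce a term of type Nat.
    def four : Nat := 2 + (by exact 2)

    -- The definition unfolds computationally, so reflexivity closes this goal.
    example : four = 4 := by rfl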

09:15
Towards a better-behaved unification algorithm for Coq
SPEAKER: Beta Ziliani

ABSTRACT. The unification algorithm is at the heart of a proof assistant like Coq. In particular, it is a key component of the refiner (the algorithm that fills in implicit terms and missing type annotations) and of the application of lemmas. Despite being so important, however, the current unification algorithm of Coq is not properly documented, and it implements certain heuristics that make unification hard to understand and unpredictable. In this talk we present a new unification algorithm, built from scratch, which focuses on being simple, easy to understand, sound, and configurable to adapt to different needs.
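
To make the role of unification concrete, here is a minimal first-order unification sketch in Haskell; it is a toy for illustration only, since Coq's unifier additionally deals with higher-order terms, universes, existential variables and conversion:

    import Control.Monad (foldM)
    import qualified Data.Map as M

    data Term = Var String | App String [Term] deriving (Eq, Show)
    type Subst = M.Map String Term

    -- Resolve a variable through the substitution built so far.
    walk :: Subst -> Term -> Term
    walk s t@(Var v) = maybe t (walk s) (M.lookup v s)
    walk _ t         = t

    -- Occurs check: binding x to a term containing x would create an
    -- infinite term, so unification must fail there.
    occurs :: Subst -> String -> Term -> Bool
    occurs s x t = case walk s t of
      Var y    -> x == y
      App _ ts -> any (occurs s x) ts

    bind :: Subst -> String -> Term -> Maybe Subst
    bind s x t
      | occurs s x t = Nothing
      | otherwise    = Just (M.insert x t s)

    unify :: Subst -> Term -> Term -> Maybe Subst
    unify s a b = case (walk s a, walk s b) of
      (Var x, Var y) | x == y -> Just s
      (Var x, t)              -> bind s x t
      (t, Var x)              -> bind s x t
      (App f as, App g bs)
        | f == g && length as == length bs
                    -> foldM (\s' (u, v) -> unify s' u v) s (zip as bs)
      _             -> Nothing

    -- unify M.empty (App "f" [Var "x"]) (App "f" [App "c" []])
    --   == Just (fromList [("x", App "c" [])])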

09:45
Proof-relevant rewriting strategies in Coq

ABSTRACT. We introduce various enhancements of the generalized rewriting system of Coq. First, we show how the framework can be lifted to proof-relevant relations using the newly introduced universe polymorphic definitions in Coq. Second, we introduce rewriting strategies as a monadic combinator library on top of this framework, resembling the LogicT monad for proof search (i.e., with backtracking, well-behaved choice and composition). These new features combine to provide a general tool for fine-tuned automated rewriting, applicable not only to propositional relations but also to general computational type-valued relations. Finally, we will also present an idea for handling dependent rewriting, e.g. the ability to rewrite in the domain of a dependent product and get corresponding transportation obligations.
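
For readers who have not met it, LogicT is Haskell's monad (transformer) for backtracking search with well-behaved choice, and the strategy combinators described above are organized along the same lines. A tiny Haskell taste of the style (illustrative only, not the authors' Coq library):

    import Control.Monad (guard, msum)
    import Control.Monad.Logic (Logic, observeMany)

    choose :: [a] -> Logic a
    choose = msum . map pure

    -- Backtracking search: pairs of digits summing to 10.
    solutions :: Logic (Int, Int)
    solutions = do
      x <- choose [1 .. 9]
      y <- choose [1 .. 9]
      guard (x + y == 10)   -- failure backtracks to the next pending choice
      pure (x, y)

    main :: IO ()
    main = print (observeMany 3 solutions)   -- [(1,9),(2,8),(3,7)]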

08:45-10:15 Session 86B: Tutorial: How do we get inductive invariants? (CAV)
Location: FH, Hörsaal 6
08:45
How do we get inductive invariants? (Part A)

ABSTRACT. Verifying the correctness of loop-free programs (or of general programs, up to bounded depth) is difficult: the state space explodes exponentially as the depth increases. Yet, the difficulty increases as we allow unboundedly many execution steps; proof approaches then generally rely on finding inductive invariants (properties shown to hold initially, then to remain true by induction).

Abstract interpretation attempts to find inductive invariants within a given domain, e.g. conjunctions of linear inequalities. The classical approach iterates a transformer until the property becomes inductive. In general, this approach may not terminate; thus termination is often enforced with a “widening” operator, which attempts to generalize the iterates into an inductive property. Unfortunately, widening operators are brittle, with non-monotonic behaviors (supplying more information about a system may result in worse analysis outcomes!). Therefore, other approaches have been developed (policy iteration,…), which avoid this pitfall.
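
As a toy illustration of iteration with widening (our own sketch, not the tutorial's material), consider the loop "x := 0; while * do x := x + 1" analyzed in an interval domain, in Haskell:

    -- Interval domain: Nothing encodes an infinite bound (-oo or +oo).
    data Itv = Itv (Maybe Integer) (Maybe Integer) deriving (Eq, Show)

    join :: Itv -> Itv -> Itv
    join (Itv a b) (Itv c d) = Itv (min <$> a <*> c) (max <$> b <*> d)

    -- widen old new: keep a bound only if the new iterate did not move it;
    -- otherwise give it up (jump to infinity) to force convergence.
    widen :: Itv -> Itv -> Itv
    widen (Itv a b) (Itv c d) = Itv (stable (>=) c a) (stable (<=) d b)
      where
        stable _  _        Nothing  = Nothing
        stable _  Nothing  (Just _) = Nothing
        stable ok (Just n) (Just o) = if ok n o then Just o else Nothing

    -- Abstract transformer: loop entry joined with the effect of the body.
    step :: Itv -> Itv
    step (Itv l h) = join (Itv (Just 0) (Just 0)) (Itv ((+ 1) <$> l) ((+ 1) <$> h))

    lfp :: Itv -> Itv
    lfp i = let i' = widen i (step i) in if i' == i then i else lfp i'

    main :: IO ()
    main = print (lfp (Itv (Just 0) (Just 0)))
    -- Itv (Just 0) Nothing: the inductive invariant 0 <= x, found in two steps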

Finally, we shall discuss possible combinations of abstract interpretation and SMT-solving.

08:45-10:15 Session 86C: Summer School on Computational Logic: Probabilistic Logic Programming I (ICLP)
Location: Seminarraum 1/3 OPG
08:45
Summer School Opening
SPEAKER: Gopal Gupta
08:50
Probabilistic Logic Programming I
08:45-10:15 Session 86E: Fairness and Liveness (WST)
Location: FH, Seminarraum 101B
08:45
Fairness for Infinite-State Systems
SPEAKER: Heidy Khlaaf

ABSTRACT. In this paper we introduce the first known tool for symbolically proving fair-CTL properties of (infinite-state) integer programs. Our solution is based on a reduction to existing techniques for fairness-free CTL model checking. The key idea is to use prophecy variables in the reduction for the purpose of symbolically partitioning fair from unfair executions.

09:15
Reducing Deadlock and Livelock Freedom to Termination

ABSTRACT. In this paper we introduce a general method for proving deadlock and livelock freedom of concurrent programs with shared memory. Our goal in this work is to support programs which use locks stored in mutable data structures. The key to our technique is the observation that dependencies between locks can be abstracted using recursion and non-determinism in a sequential logic program such that termination of the abstraction implies deadlock and livelock freedom of the original program.

09:45
Specifying and verifying liveness properties of QLOCK in CafeOBJ
SPEAKER: unknown

ABSTRACT. We provide an innovative development of algebraic specifications and proof scores in CafeOBJ for QLOCK's safety and liveness properties. The particular interest of the development is two-fold. Firstly, it extends base specifications in order-sorted and rewriting logics to a meta-level, which requires behavioral logic, thus using the three logics together to achieve the proofs. Secondly, we use a search predicate and covering state patterns that allow us to prove the validity of a property over all possible one-step transitions, by which safety and liveness properties at the base and meta levels can be proven.

08:45-10:15 Session 86F: Hypervisors and dynamic data structures (VSTTE)
Location: FH, Zeichensaal 3
08:45
Store Buffer Reduction with MMUs
SPEAKER: Geng Chen

ABSTRACT. A fundamental problem in concurrent system design is to identify flexible programming disciplines under which weak memory models provide sequential consistency. For x86-TSO, a suitable reduction theorem for threads that communicate only through shared memory was given by Cohen and Schirmer [ITP2010]. However, this theorem cannot handle programs that edit their own page tables (e.g. memory managers, hypervisors, and some device drivers). The problem lies in the interaction between a program thread and the hardware MMU that provides its address translation: the MMU cannot be treated as a separate thread (since it implicitly communicates with the program thread), nor as part of the program thread itself (since MMU reads do not snoop the store buffer of the program thread). We generalize the Cohen-Schirmer reduction theorem to handle programs that edit their page tables. The added conditions prevent the MMU of a thread from walking page table entries owned by other threads.

09:10
Separation Kernel Verification: the XtratuM Case Study
SPEAKER: unknown

ABSTRACT. The separation kernel concept was developed as an architecture to simplify formal kernel security verification, and is the basis for many implementations of integrated modular avionics in the aerospace domain. This paper reports on a feasibility study conducted for the European Space Agency, to explore the resources required to formally verify the correctness of such a kernel, given a reference specification and an implementation of the same. The study was part of an activity called Methods and Tools for On-Board Software Engineering (MTOBSE) which produced a natural language Reference Specification for a Time-Space Partitioning (TSP) kernel, describing partition functional properties such as health monitoring, inter-partition communication, partition control, resource access, and separation security properties, such as the security policy and authorisation control. An abstract security model and the reference specification were both formalised using Isabelle/HOL. The C sources of the open-source XtratuM kernel were obtained, and an Isabelle/HOL model of the code was semi-automatically produced. Refinement relations were written manually and some proofs were explored. We describe some of the details of what has been modelled and report on the current state of this work. We also make a comparison between our verification explorations and the circumstances of NICTA's successful verification of the seL4 kernel.

09:35
Separation algebras for C verification in Coq

ABSTRACT. Separation algebras are a well-known abstraction to capture common structure of both permissions and memories in programming languages, and form the basis of models of separation logic. As part of the development of a formal version of an operational and axiomatic semantics of the C11 standard, we present a variant of separation algebras that is well suited for C verification.

Our variant of separation algebras has been fully formalized using the Coq proof assistant, together with a library of concrete implementations. These instances are used to build a complex permission model, and a memory model that captures the strict aliasing restrictions of C.

10:00
Automatically verified implementation of data structures based on AVL trees (SHORT)

ABSTRACT. I propose verified implementations of several data structures, including random-access lists and ordered maps. They are derived from a common parametric implementation of self-balancing binary trees in the style of Adelson-Velskii and Landis trees. The development of the specifications, implementations and proofs is carried out using the Why3 environment. The originality of this approach is the genericity of the specifications and code combined with a high level of proof automation.
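
For orientation, the balancing core common to such structures looks as follows in plain (unverified) Haskell; the point of the talk is that the analogous Why3 code carries machine-checked specifications and largely automated proofs:

    data Tree a = Leaf | Node Int (Tree a) a (Tree a)   -- height cached at the node

    height :: Tree a -> Int
    height Leaf           = 0
    height (Node h _ _ _) = h

    node :: Tree a -> a -> Tree a -> Tree a
    node l x r = Node (1 + max (height l) (height r)) l x r

    -- Rebuild a node, rotating when the subtree heights differ by two.
    balance :: Tree a -> a -> Tree a -> Tree a
    balance l x r
      | height l > height r + 1 = case l of
          Node _ ll lx lr
            | height ll >= height lr -> node ll lx (node lr x r)
            | Node _ m c m' <- lr    -> node (node ll lx m) c (node m' x r)
          _ -> node l x r
      | height r > height l + 1 = case r of
          Node _ rl rx rr
            | height rr >= height rl -> node (node l x rl) rx rr
            | Node _ m c m' <- rl    -> node (node l x m) c (node m' rx rr)
          _ -> node l x r
      | otherwise = node l x r

    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = node Leaf x Leaf
    insert x t@(Node _ l y r)
      | x < y     = balance (insert x l) y r
      | x > y     = balance l y (insert x r)
      | otherwise = t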

08:45-10:15 Session 86G: Opening and Invited Talk (LaSh and QUANTIFY)
Location: MB, Aufbaulabor
08:45
LaSh and QUANTIFY Openings
SPEAKER: Lash
09:00
Instantiation-based reasoning, EPR encodings and all that

ABSTRACT. Instantiation-based reasoning aims to combine the expressivity of first-order logic with the efficiency of propositional and SMT reasoning. Most instantiation-based methods are complete for first-order logic and are especially efficient for the effectively propositional fragment (EPR). It turns out that many problems such as model checking, bit-vector reasoning, finite model finding and QBF solving can be succinctly encoded into the EPR fragment. In this talk I will discuss recent developments in instantiation-based reasoning, EPR-based encodings and related challenges.
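
For reference, EPR (the Bernays-Schönfinkel class) consists of the function-free first-order sentences in which no existential quantifier is in the scope of a universal one, so the Herbrand universe is finite:

    \exists x_1 \dots \exists x_m \, \forall y_1 \dots \forall y_n \; \varphi
    % \varphi quantifier-free, built from predicates, constants and variables only

Satisfiability of such sentences is decidable (NEXPTIME-complete), which is what makes EPR encodings attractive targets.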


08:45-10:15 Session 86H: Tutorial and a contributed talk (CSPSAT)
Location: MB, Zeichensaal 15
08:45
Tutorial (tentative): TBA
09:45
SAT Compilation for Constraints over Finite Structured Domains
SPEAKER: Alexander Bau

ABSTRACT. Due to the availability of powerful SAT solvers, propositional encoding is a successful technique for solving constraint systems over finite domains. As these domains are often flat and unstructured, we aim to extend this concept by enriching the underlying domain with user-defined product types and sum types. The co4 language is a syntactic subset of Haskell and allows constraint systems over such enriched domains to be specified using algebraic data types, pattern matching, higher-order functions and polymorphism. This paper gives an overview of the transformation into propositional logic and illustrates examples and use cases for co4.
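
To give a flavour of propositional encodings of structured domains (a generic one-hot sketch in Haskell, not co4's actual translation): a value of a three-constructor algebraic data type can be represented by three Boolean variables of which exactly one is true.

    import Data.List (tails)

    type Clause = [Int]   -- DIMACS convention: n for a variable, -n for its negation

    -- Exactly one of the given variables is true.
    exactlyOne :: [Int] -> [Clause]
    exactlyOne vs = vs : [[-a, -b] | (a : bs) <- tails vs, b <- bs]

    -- Two values of a three-constructor type (one-hot variables 1..3 and 4..6),
    -- constrained to be built with different constructors.
    differ :: [Clause]
    differ = exactlyOne [1, 2, 3]
          ++ exactlyOne [4, 5, 6]
          ++ [[-c, -(c + 3)] | c <- [1, 2, 3]]   -- not both pick constructor c

    main :: IO ()
    main = mapM_ print differ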

08:45-10:15 Session 86I: Contributed Talks (LATD)
Location: MB, Festsaal
08:45
Relatively filtral quasivarieties
SPEAKER: James Raftery

ABSTRACT. Improving a recent result of the author, it is shown here that if a quasivariety K algebraizes a finitary sentential logic L, then L has a classical inconsistency lemma iff K is relatively filtral and its nontrivial members have only nontrivial subalgebras. In the process, the theory of uniform congruence schemes, ideal varieties and filtrality is extended from varieties to quasivarieties, and it is proved that a quasivariety is relatively filtral iff it is relatively semisimple and has equationally definable principal relative congruences.

09:15
On logics of varieties and logics of semilattices

ABSTRACT. This contribution addresses a general problem of abstract algebraic logic and its instantiation for the variety of semilattices, which is the underlying structure of a host of non-classical logics, in particular fuzzy logics. The general problem is how to associate a logic with a given variety. Since the general theory of abstract algebraic logic provides three procedures to associate a class of algebras with a given logic, our original problem gives rise to three different questions. We show that one of them cannot be answered affirmatively in general, while the other two always can. We determine and study the weakest solution. This leads to a natural definition of the notion of "the logic of a variety". We obtain some results on the poset of all logics of a variety generated by a primal algebra. In the second part we study the poset of the logics of the variety of semilattices, showing, among other results, that it has a minimum, has two maximal elements, and is atomless.

09:45
Congruential deductive systems associated with equationally orderable varieties
SPEAKER: Ramon Jansana

ABSTRACT. We introduce the notion of the deductive system of the order of an equationally orderable quasivariety and we prove that for equationally orderable varieties this deductive system is congruential (fully self-extensional). We compare the result with previous results on deductive systems with the congruence property (self-extensional) and the deduction-detachment property or the property of conjunction.

08:45-10:15 Session 86J: Contributed Talks (LATD)
Location: MB, Hörsaal 15
08:45
Semantic and syntactic characterisation of some extensions of the class of MV-algebras

ABSTRACT. We will consider MV-algebras as systems (A, 0, 1, +, ·, -) where A is a nonempty set of elements, 0 and 1 are distinct constant elements of A, + and · are binary operations on the elements of A, and - is a unary operation on elements of A. The class of all MV-algebras will be denoted by MV. It is known that the set Id(MV) of all identities fulfilled in the class MV determines a variety (i.e., a nonempty class of algebras closed under subalgebras, homomorphic images and direct products) MV.

Let Id(\tau) denote the set of all identities of type \tau. For a set S ⊆ Id(\tau) we denote by Cn(S) the deductive closure of S, i.e. Cn(S) is the smallest subset of Id(\tau) containing S such that:
1. x = x ∈ Cn(S) for every variable x;
2. p = q ∈ Cn(S) ⇒ q = p ∈ Cn(S);
3. p = q, q = r ∈ Cn(S) ⇒ p = r ∈ Cn(S);
4. Cn(S) is closed under replacement;
5. Cn(S) is closed under substitution.
If S = Cn(S) then S is called an equational theory.

We will choose from the set Id(MV) a subset E. If the set E is a proper subset of the set Id(MV) and it is an equational theory, then its corresponding variety MV_E is bigger than MV with respect to inclusion. A natural question arises: describe the lattice of subvarieties of some bigger variety containing MV. A partial answer to this problem is presented in the paper. The study of subvarieties of the variety MV_E is a partial solution of the most general problem in this area: for a fixed language, find all equational theories contained between the theory generated by the single identity x = x and the full theory determined by the single identity x = y. Logicians inquire about lattices of intermediate logics (for example, between intuitionistic logic and classical logic); algebraists search for lattices of subvarieties.

In our case the set E is related to the special structure of terms occurring in the identities. We consider a given type of algebras \tau : F → N, where F is a set of fundamental operation symbols and N is the set of non-negative integers. Let \Pi_F be the set of all partitions of F and let P ∈ \Pi_F. For any terms p and q of type \tau, the identity p = q is P-compatible iff it is of the form x = x, or both p and q are not variables and the outermost operation symbols in p and q belong to the same block of the partition P. The notion of a P-compatible identity was introduced by J. Płonka [7]; it generalizes the externally compatible identities introduced by W. Chromik [2] and the normal identities defined independently by J. Płonka [6] and I. I. Mel'nik [5]. An identity p = q of type \tau is externally compatible if it is of the form x = x or the outermost fundamental operation symbols in p and q are identical (in other words, a P-compatible identity is externally compatible if P consists of singletons only). An identity p = q of type \tau is normal if it is of the form x = x or neither p nor q is a variable (i.e., a P-compatible identity is normal if P = {F}).

For a variety V we will use the following notation:
• P(V) - the set of all P-compatible identities satisfied in V,
• Ex(V) - the set of all externally compatible identities satisfied in V,
• N(V) - the set of all normal identities satisfied in V,
• Id(V) - the set of all identities satisfied in V.
The following inclusions are obvious: Ex(V) ⊆ P(V) ⊆ N(V) ⊆ Id(V). One can prove that P(V) is an equational theory. It is known that every equational theory corresponds to a variety of algebras. Let
• V_P denote the variety defined by P(V),
• V_{Ex} denote the variety defined by Ex(V),
• V_N denote the variety defined by N(V).
It is a well-known fact that the lattice of all equational theories extending the theory Id(V) is dually isomorphic to the lattice of all subvarieties of the variety V. Thus, for any partition P we have: V ⊆ V_N ⊆ V_P ⊆ V_{Ex}.

Subvarieties of the variety MV have been studied by R. Grigolia, Y. Komori, A. Di Nola and A. Lettieri. Di Nola and Lettieri gave equational bases for all subvarieties of MV [3]; Komori determined the lattice of subvarieties of the variety MV [4]. We ask about the lattice L(MV_{Ex}) of subvarieties of the variety defined by the set Ex(MV). The full description of the lattice L(MV_{Ex}) is complicated and falls outside the scope of the talk. Of course, every subvariety of the class MV is also a proper subvariety of the variety determined by the set Ex(MV). Besides describing some chosen elements of the lattice L(MV_{Ex}), we will find all subdirectly irreducible algebras from the classes in the lattice L(MV_{Ex}) and give a syntactic and semantic characterization of the class of algebras defined by P-compatible identities of MV-algebras. We use standard definitions from [1].

References
[1] Burris, S., and H. P. Sankappanavar, A Course in Universal Algebra, Springer-Verlag, New York, 1981.
[2] Chromik, W., `Externally compatible identities of algebras', Demonstratio Mathematica 23 (1990), 345--355.
[3] Di Nola, A., and A. Lettieri, `Equational characterization of all varieties of MV-algebras', Journal of Algebra 221 (1999), 463--474.
[4] Komori, Y., `Super Łukasiewicz propositional logics', Nagoya Mathematical Journal 84 (1981), 119--133.
[5] Mel'nik, I. I., `Nilpotent shifts of varieties', Mat. Zametki 14 (5) (1973) (in Russian); English translation: Math. Notes 14 (1973), 962--966.
[6] Płonka, J., `On the subdirect product of some equational classes of algebras', Math. Nachr. 63 (1974), 303--305.
[7] Płonka, J., `P-compatible identities and their applications to classical algebras', Math. Slovaca 40 (1) (1990), 21--30.
[8] Tarski, A., `Equational logic and equational theories of algebras', in: H. A. Schmidt, K. Schütte, H.-J. Thiele (eds.), Contributions to Mathematical Logic, North-Holland, Amsterdam, pp. 275--288.

09:15
The space of directions of a polyhedron

ABSTRACT. We study the Stone-Priestley dual space SpecSubP of the lattice of subpolyhedra of a compact polyhedron P, with motivations coming from geometry, topology, ordered algebra, and non-classical logic. From the perspective of algebraic logic, our contribution is a geometric investigation of lattices of prime theories in Łukasiewicz logic, possibly extended with real constants.

The main result we announce here is that SpecSubP has a concrete description in terms of a non-Hausdorff completion of the space P which holds great geometric interest.

If time allows, we discuss selected consequences of our main result, including compactness of the subspace of minimal primes of SpecSubP, and the fundamental property of SubP of being a co-Heyting algebra.

09:45
Interpreting Łukasiewicz logic into Intuitionistic logic

ABSTRACT. Fixing countable sets of propositional variables X and Y, we write Form(X) for the set of formulæ of Łukasiewicz (infinite-valued propositional) logic L, and Form(Y) for the set of formulæ of Intuitionistic (propositional) logic Int.

We prove that there exists a deductively closed theory Θ in Int, and a function T : Form(X) ⟶ Form(Y) satisfying T(⊤)=⊥, such that, for each α,β∈Form(X), the following holds:
     β ⊢ α in L     if, and only if,     Θ∪{T(α)} ⊢ T(β) in Int.
As a corollary, a formula α∈Form(X) is provable in L if, and only if, ¬T(α) is provable in Int, modulo the theory Θ.

In future work we plan to investigate the properties of T and Θ. Some obvious questions to be addressed include axiomatisability of Θ, and computability of T.

08:45-09:45 Session 86L: Invited Speaker (ASA and FCS-FCC)
Location: MB, Hörsaal 7
08:45
Cryptosense: Formal Analysis of Security APIs from Research to Spin-Off
SPEAKER: Graham Steel

ABSTRACT. In this talk I'll describe how our research project into adapting formal analysis techniques for cryptographic protocols to security APIs turned into an industry collaboration and finally a spin-off company, Cryptosense, which was created in September 2013. Though the company is still in its early stages, we've already learned a lot about the journey from academic results to commercial product. I'll talk about what we're doing now, what we're developing for the future, and what it's like to transition from full-time researcher to start-up CEO.

08:45-10:15 Session 86M: Invited Talk and Contributed Talk (DL and NMR)
Location: EI, EI 7
08:45
Invited Talk: Fragments of Logic, Language, and Computation

ABSTRACT. Amsterdam-style logicians view modal logic as a fragment of classical logic, and description logicians view their own formalisms in much the same way. Moreover, first-order logic itself can be viewed as a modest fragment of the higher-order logics of Frege and Russell, a fragment with useful model-theoretic properties. All in all, the fine structure of logic is a key topic in contemporary research, as the intensive study of (say) the 2-variable and various guarded fragments attests.

In this talk I want to consider the role of logical fragments in applications. I will focus on applications in natural language, as this is an area rich in non-monotonic and defeasible inference. Moreover, as my perspective is that of computational (rather than theoretical) linguistics, I am interested in efficient solutions to computational tasks - that is, in fragments of computation. Drawing on a running example involving applications of description logic and classical planning to a dialogue system, I will discuss the role of computation in providing 'pragmatic glue' that lets us work with small, well-explored logical fragments, while simultaneously providing the dynamics required to model various forms of non-monotonicity.

09:45
On the Non-Monotonic Description Logic ALC+Tmin

ABSTRACT. In the last 20 years many proposals have been made to incorporate non-monotonic reasoning into description logics, ranging from approaches based on default logic and circumscription to those based on preferential semantics. In particular, the non-monotonic description logic ALC+Tmin uses a combination of the preferential semantics with minimization of a certain kind of concepts, which represent atypical instances of a class of elements. One of its drawbacks is that it suffers from the problem known as the blocking of property inheritance, which can be seen as a weakness from an inferential point of view. In this paper we propose an extension of ALC+Tmin, namely ALC+T+min, with the purpose of solving the mentioned problem. In addition, we show the close connection that exists between ALC+T+min and concept-circumscribed knowledge bases. Finally, we study the complexity of deciding the classical reasoning tasks in ALC+T+min.

08:45-09:00 Session 86N: Workshop Opening (STAST)
Location: MB, Seminarraum 212/232
08:45-10:15 Session 86P: Invited speaker (A. Rybalchenko) (VPT)
Location: FH, Dissertantenraum E104
08:45
Towards Automated Proving of Relational Properties of Probabilistic Programs (Invited Talk)
SPEAKER: unknown

ABSTRACT. Some security properties go beyond what is expressible in terms of an individual execution of a single program. In particular, many security policies in cryptography can be naturally phrased as relational properties of two open probabilistic programs. Writing and verifying proofs of such properties is an error-prone task that calls for automation and tool support. One of the main difficulties in automating these proofs lies in finding adequate relational invariants for loops and specifications for program holes. In this talk we show how to automate proofs of relational properties of open probabilistic programs by adapting techniques for the automatic generation of universally quantified invariants in a non-relational setting.

09:00-10:15 Session 87B: Invited Talk (CHR and CICLOPS-WLPE)
Location: FH, Hörsaal 7
09:00
Coherent Logic and Applications
SPEAKER: Marc Bezem

ABSTRACT. Coherent logic (CL) is a fragment of first-order logic that is more expressive than resolution logic in that it allows a restricted form of existential quantification. Consequently, many theories can be formulated directly in CL, for example, without skolemization. CL has a simple proof theory yielding readable proofs. Thus it strikes a balance between expressivity and efficiency of automated reasoning. In the talk we give an overview of CL and its applications in automated reasoning, proof assistants, model finding, and constraint handling.

09:00-10:00 Session 87C (AISS)
Location: FH, Seminarraum 134
09:00
Decision Problems for Linear Recurrence Sequences
SPEAKER: Joel Ouaknine

ABSTRACT. Linear recurrence sequences (LRS), such as the Fibonacci numbers, permeate vast areas of mathematics and computer science. In this talk, we consider three natural decision problems for LRS, namely the Skolem Problem (does a given LRS have a zero?), the Positivity Problem (are all terms of a given LRS positive?), and the Ultimate Positivity Problem (are all but finitely many terms of a given LRS positive?). Such problems (and assorted variants) have applications in a wide array of scientific areas, such as theoretical biology (analysis of L-systems, population dynamics), economics (stability of supply-and-demand equilibria in cyclical markets, multiplier-accelerator models), software verification (termination of linear programs), probabilistic model checking (reachability and approximation in Markov chains, stochastic logics), quantum computing (threshold problems for quantum automata), discrete linear dynamical systems (reachability and invariance problems), as well as combinatorics, statistical physics, formal languages, etc.
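
As a small illustration of what the Skolem Problem asks, here is a brute-force search for a zero term of one particular LRS, up to a fixed bound, in Haskell (deciding the existence of a zero over all indices is exactly the hard question):

    -- u(0) = 1, u(1) = -1, u(n+2) = u(n+1) + u(n)
    lrs :: [Integer]
    lrs = go 1 (-1) where go a b = a : go b (a + b)

    -- Index of the first zero among the first n terms, if any.
    firstZero :: Int -> Maybe Int
    firstZero n = lookup 0 (zip (take n lrs) [0 ..])

    main :: IO ()
    main = print (firstZero 100)   -- Just 2, since u(2) = -1 + 1 = 0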

09:00-18:00 Session 87D: FLoC Olympic Games Big Screen: OWL Reasoner Evaluation (ORE 2014) (joint session of 10 meetings)
Location: EI, Foyer
09:00
FLoC Olympic Games Big Screen: OWL Reasoner Evaluation (ORE 2014)
SPEAKER: unknown

ABSTRACT. OWL is a logic-based ontology language standard designed to promote interoperability, particularly in the context of the (Semantic) Web. The standard has encouraged the development of numerous OWL reasoning systems, and such systems are already key components of many applications. The 2nd OWL Reasoner Evaluation Competition (ORE 2014) compares and evaluates 11 OWL reasoning systems on a data set containing a wide range of ontologies from the web, obtained through a standard web crawl and the Google Custom Search API, and a snapshot from the well-known BioPortal repository. Some user-submitted ontologies spice up the corpus with particularly relevant and difficult test cases. The ontologies in the test set are binned by profile to enable the participation of reasoners specialised for the EL and DL profiles of OWL. The evaluation is performed in three different categories that cover the important tasks of ontology classification, consistency checking, and ontology realisation (i.e., the computation of entailed types of individuals).

09:00-18:00 Session 87E: FLoC Olympic Games Big Screen: Satisfiability Modulo Theories solver competition (SMT-COMP 2014) (joint session of 10 meetings)
Location: FH, 2nd floor
09:00
FLoC Olympic Games Big Screen: Satisfiability Modulo Theories solver competition (SMT-COMP 2014)
SPEAKER: unknown

ABSTRACT. The 9th SMT competition seeks to spur interest in SMT solvers and to advance their usefulness and capability in application-centric contexts. This year's competition has 20 solvers from 14 participating teams (a record number), with problems drawn from 147K benchmarks in 34 different logics; problems are expressed in a common, standardized format (SMTLIBv2). There are tracks measuring sequential and parallel performance and emulating batch and interactive application environments. The benchmarks are increasingly supplied by application environments that use SMT solvers to discharge constraint problems and verification conditions that arise in software verification, scheduling, component placement and other domains. The competition is run over a period of a month using the StarExec computation cluster. This year there is also a separate, experimental sibling competition on Separation Logic solvers and problems, also using the concrete syntax of SMTLIBv2 benchmarks.

09:00-10:15 Session 87F: Invited Tutorials (SAT- and SMT-Solvers) (APPA)
Location: FH, Seminarraum 325/1
09:00
Satisfiability Solvers
SPEAKER: Marijn Heule

ABSTRACT. Satisfiability (SAT) solvers have become powerful tools to solve a wide range of applications. In case SAT problems are satisfiable, it is easy to validate a witness. However, if SAT problems have no solutions, a proof of unsatisfiability is required to validate that result. Apart from validation, proofs of unsatisfiability are useful in several applications, such as interpolation and extraction of minimal unsatisfiable sets, and in tools that use SAT solvers, such as theorem provers. We discuss the two main approaches to unsatisfiability proofs: resolution proofs and clausal proofs. Resolution proofs are used for several applications, but they are hard to produce. Clausal proofs are easy to emit, but more costly to check. We compare both approaches, present how to produce and validate proofs, and provide some challenges regarding unsatisfiability proofs.
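
The heart of clausal-proof checking is the reverse unit propagation (RUP) test: a lemma is accepted if asserting the negation of each of its literals drives unit propagation into a conflict. A deliberately naive Haskell sketch (our illustration, ignoring deletion information and the stronger RAT check used by DRAT):

    type Lit    = Int   -- DIMACS convention: n and -n are complementary literals
    type Clause = [Lit]

    -- Unit propagation from a list of literals assumed true.
    -- Nothing signals a conflict; Just returns the propagation fixpoint.
    propagate :: [Clause] -> [Lit] -> Maybe [Lit]
    propagate cs asg
      | any null reduced                    = Nothing
      | null new                            = Just asg
      | any (\l -> negate l `elem` new) new = Nothing
      | otherwise                           = propagate cs (new ++ asg)
      where
        active  = [c | c <- cs, not (any (`elem` asg) c)]        -- unsatisfied clauses
        reduced = [filter (\l -> negate l `notElem` asg) c | c <- active]
        new     = [l | [l] <- reduced, l `notElem` asg]          -- fresh unit literals

    -- Reverse unit propagation: does the lemma follow by propagation alone?
    rup :: [Clause] -> Clause -> Bool
    rup cs c = propagate cs (map negate c) == Nothing

    -- A clausal proof is a list of lemmas ending in the empty clause; each
    -- lemma must be RUP with respect to the clauses accumulated so far.
    checkProof :: [Clause] -> [Clause] -> Bool
    checkProof _  [] = False
    checkProof cs (l : ls)
      | not (rup cs l) = False
      | null l         = True
      | otherwise      = checkProof (l : cs) ls

    main :: IO ()
    main = print (checkProof [[1,2],[-1,2],[1,-2],[-1,-2]] [[2],[]])   -- True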

09:30
Proofs in Satisfiability Modulo Theories
SPEAKER: unknown

ABSTRACT. We discuss history, techniques, and challenges in producing and checking proofs from SMT solvers.

09:00-10:15 Session 87G: Invited Talk by Frank Wolter (iPRA)
Location: FH, Seminarraum 101A
09:00
Interpolation in Description Logic: A Survey
SPEAKER: Frank Wolter

ABSTRACT. We discuss the role of (variants of) Craig interpolation in description logic research. We start by considering Craig interpolation itself and how it is related to the structure of ontologies and the rewritability of queries in ontology-based data access. We then briefly discuss the role of parallel interpolation (introduced more recently by Parikh et al.) for decomposing ontologies. Finally, we consider uniform interpolation - an extension of Craig interpolation in which the interpolant is independent of the formula on the right-hand side. Uniform interpolation has been the focus of significant research activity in Description Logic, with many theoretical results and first implemented systems.

I will not assume any background knowledge about Description Logics but will introduce the relevant notions as required.

09:00-10:15 Session 87H: Invited talk: Aarne Ranta (NLCS and NLSR)
Location: FH, CAD 2
09:00
Machine Translation: Green, Yellow, and Red (abstract)
SPEAKER: Aarne Ranta

ABSTRACT. The mainstream approach in machine translation is to build systems that are able to translate everything, but without any guarantees of quality. An alternative to this is systems that aim at precision but have limited coverage. Combining wide coverage with high precision is considered unrealistic. Most wide-coverage systems are based on statistics, whereas precision-oriented domain-specific systems are typically based on grammars, which guarantee translation quality by some kind of formal semantics.

This talk introduces a technique that combines wide coverage with high precision, by embedding a high-precision semantic grammar inside a wide-coverage syntactic grammar, which in turn is backed up by a chunking grammar. The system can thus reach good quality whenever the input matches the semantics; but if it doesn't, the user will still get a rough translation. The levels of confidence can be indicated by using colours, whence the title of the talk.

The talk will explain the main ideas in this technique, based on GF (Grammatical Framework) and also inspired by statistical methods (probabilistic grammars) and the Apertium system (chunk-based translation), boosted by freely available dictionaries (WordNet, Wiktionary), and built by a community of over 50 active developers. The current system covers 11 languages and is available both as a web service and as an Android application.

Demo system: http://www.grammaticalframework.org/demos/translation.html

09:00-10:15 Session 87I: Keynote Talk (STAST)
Location: MB, Seminarraum 212/232
09:00
Tackling the Awareness-Behaviour Divide in Security: (step 1) Understand the User

ABSTRACT. Various factors influence users' behaviour and interactions with technology. This means security has a socio-technical element that continues to present a challenge in research and in attempts to improve security behaviour. Users may not be the enemy, but their (un)intentional (mis)use of technology is certainly part of the problem in security. To solve this problem, we must do more than simply pay lip service to the need to address the human element; we need to systematically explore the environmental, social and personal influencers of behaviour within the context of cybersecurity. Those who seek to ensure cybersecurity must learn to utilise such influencers as efficiently as those who seek to exploit them. Awareness training is touted as the solution; awareness may be necessary, but it is seldom sufficient. Psychological research and organisational reports suggest that increased user awareness alone is insufficient when it comes to changing actual behaviour. This may make users' behaviours seem irrational, but they are understandable if you appreciate the cognitive biases people are prone to and the heuristics they use when the time, effort and knowledge required to follow a "rational" decision-making process outweigh the benefits perceived by the user. This talk provides a short overview of the issues worthy of exploration in security research and suggests several strategies for tackling the security awareness-behaviour divide.

09:00-10:15 Session 87J (QED)
Location: FH, Hörsaal 4
09:00
QED: a grand unified theory?
SPEAKER: John Harrison
09:35
How to prove the odd order from the four color theorem
09:15-10:15 Session 88A: Invited Talk by Christel Baier (CSL-LICS)
Location: FH, Hörsaal 1
09:15
Trade-Off Analysis Meets Probabilistic Model Checking

ABSTRACT. Probabilistic model checking (PMC) is a well-established and powerful method for the automated quantitative analysis of parallel distributed systems. Classical PMC approaches focus on computing probabilities and expectations in Markovian models annotated with numerical values for costs and utility, such as energy and performance. Usually, the utility gained and the costs invested are dependent, and a trade-off analysis is of utmost interest. In this paper, we provide an overview of various kinds of nonstandard multi-objective formalisms that enable specifying and reasoning about the trade-off between costs and utility. In particular, we present the concepts of quantiles, conditional probabilities and expectations, as well as objectives on the ratio between accumulated costs and utility. Such multi-objective properties have drawn very little attention in the context of PMC and hence there is hardly any tool support in state-of-the-art model checkers. Furthermore, we broaden our results towards combined quantile queries, computing conditional probabilities whose conditions are expressed as formulas in probabilistic computation tree logic, and the computation of ratios that can be expected in the long run.

09:15-10:15 Session 88B: Invited talk (ITRS)
Location: MB, Zeichensaal 14
09:15
Intersection Types, Game Semantics and Higher-Order Model Checking
SPEAKER: Luke Ong

ABSTRACT. Higher-order model checking is the model checking of infinite trees generated by simply-typed lambda calculus with recursion. Although the original decidability proof was based on game semantics, intersection types have played a prominent role in the subsequent development of algorithms for higher-order model checking. In this talk, I will introduce the approach to higher-order model checking based on intersection types, present connections with game semantics, and discuss recent applications of intersection types and game semantics to problems in higher-order computation.

09:15-10:15 Session 88C: Invited talk (LC)
Location: MB, Prechtlsaal
09:15
Synthesis for monadic logic over the reals
SPEAKER: Mark Reynolds

ABSTRACT. We say that a first-order monadic logic of order (FOMLO) sentence is satisfiable over the reals if there is some valuation for the monadic predicates which makes the formula true. Burgess and Gurevich showed that satisfiability for this logic is decidable. They built on pioneering work by Lauchli and Leonard who, in showing a similar result for linear orders in general, had presented some basic operations for the compositional building of monadic linear structures.

We look at some recent work on using these basic operations to give a synthesis result. That is, we present an algorithm which, given a FOMLO sentence that is satisfiable over the reals, outputs a specific finite description of a model.

09:15-10:15 Session 88D: Invited Talk by Hoon Hong (PAS)
Location: FH, Seminarraum 138A
09:15
Test-Input Generation using Computational Real Algebraic Geometry
SPEAKER: Hoon Hong
09:15-10:15 Session 88E: Invited talk (EC2)
Location: FH, Seminarraum 101C
09:15
Does Wait-Freedom Matter?
SPEAKER: Kapil Vaswani

ABSTRACT. Algorithms that guarantee system-wide progress and throughput have seen significant research interest over the last few decades. System-wide progress becomes even more critical in the context of distributed storage systems, where process failures and network partitions can introduce arbitrary delays. In this talk, we consider the problem of designing wait-free distributed data structures. Recent research shows that certain data structures can indeed be designed to guarantee wait-freedom and consistency in the presence of node failures. We can even characterize conditions under which a wait-free, strongly consistent implementation of a data structure is not possible. On the other hand, we are now beginning to understand why distributed algorithms which do not guarantee wait-freedom (such as Paxos) appear to be wait-free in practice. This leads to the following open question: does system-wide progress really matter?

09:15-10:15 Session 88F: Invited Talk (SMT)
Location: FH, Seminarraum 104
09:15
Automating the verification of floating-point algorithms
09:15-10:15 Session 88G: Invited Talk by Ulrich Berger (joint work with Tie Hou): Program Extraction in Church's Simple Theory of Types (PSC)
Location: FH, CAD 1
09:15
Program extraction in Church's simple theory of types (invited talk)
SPEAKER: unknown

ABSTRACT. In 1940 Church introduced a system of higher-order logic formulated as a simply typed lambda-calculus with base types i for individuals and o for propositions. In this system logical connectives and quantifiers appear as constants. One of the attractions of this system is the fact that it is on the one hand rather expressive, allowing the formalization of large parts of mathematics, and on the other hand very simple and convenient to handle formally. A variant of Church's theory is the logical basis for the system Isabelle/HOL, and it also plays a role in the development of Martin-Löf Type Theory and the latest developments in Homotopy Type Theory.

Church's theory was originally formulated with classical logic and a choice operator. If one uses intuitionistic logic instead, drops choice, and adds monotone (co)inductive definitions, one obtains a system that, though impredicative, has a clear computational meaning. One can make this precise by a realizability interpretation combined with a theorem for witness extraction. I will present this system, demonstrate its usefulness by means of examples, and discuss its status from a constructive and a classical point of view.

09:15-10:15 Session 88H: Invited talk (PCCR)
Location: FH, Hörsaal 3
09:15
CSPs and fixed-parameter tractability
SPEAKER: Dániel Marx

ABSTRACT. We survey fixed-parameter tractability results appearing in the context of satisfiability and constraint satisfaction. The focus of the talk is on explaining the different type of questions that can be asked and briefly summarizing the known results, without going into technical details.

09:15-10:15 Session 88I: Constraints (RCRA)
Location: MB, Hörsaal 12
09:15
A compression algorithm to improve generalized hypertree decomposition-based solving approaches for large extensional non-binary CSPs
SPEAKER: unknown

ABSTRACT. Many approaches exploiting generalized hypertree decomposition have been proposed in the literature for solving extensional constraint satisfaction problems over finite domains. The most recent one, called FC-GHD, combines the merits of an enumerative search algorithm, which is memory efficient, with those of hypertree decomposition, which is time efficient. Although FC-GHD has proved efficient, some large instances remain unsolved because of memory exhaustion. This article proposes to enhance the FC-GHD method with the notion of compressed table constraints to deal with memory explosion problems. This leads to a new compressed enumerative algorithm, called CFC-GHD, which is powerful when processing generalized hypertree decompositions. This paper describes the algorithm and presents the results obtained with the new approach. The experiments that were carried out validate the method from a practical viewpoint.

09:45
Dispensable Instantiations in Constraint Satisfaction Problems
SPEAKER: unknown

ABSTRACT. 'Simplifying' problems, by removing values or combinations of values, has been a primary approach to combinatorial complexity in constraint satisfaction problems. This paper provides a unifying framework for much of this previous work, and in the process presents new opportunities. It extends a framework for CSP properties provided by Bordeaux, Cadoli, and Mancini. New forms of substitutability and subproblem extraction are introduced. An algorithm for one form of substitutability, neighbourhood replaceability (called "removeability" by Bordeaux et al.), is presented along with preliminary experimental results.

09:15-10:15 Session 88J: Invited Talk: Matching Logic (HOPA)
Location: FH, Seminarraum 138C
09:15
TBA
SPEAKER: Grigore Rosu

ABSTRACT. TBA

09:45-10:25 Session 89A: Technical Session (ASA)
Location: MB, Zeichensaal 13
09:45
Nuances of the JavaCard API on the cryptographic smart cards - JCAlgTest project
SPEAKER: Petr Svenda

ABSTRACT. The main goal of this paper is to describe the JCAlgTest project, which is designed for automatic testing of the degree of support for algorithms listed in the JavaCard API specification by a given smart card. We discuss the results and trends drawn from a database of 28 different cryptographic smart cards. Additional information about particular algorithms or protocols obtained via timing and power analysis is also described and (partially) available in the JCAlgTest database.

09:45-10:15 Session 89B: Technical Session (FCS-FCC)
Chair:
Location: MB, Hörsaal 7
09:45
A Hybrid Analysis for Security Protocols with State
SPEAKER: unknown

ABSTRACT. Cryptographic protocols rely on message-passing to coordinate activity among principals. Each principal maintains local state in individual local sessions only as needed to complete that session. However, in some protocols a principal also uses state to coordinate its different local sessions. Sometimes the non-local, mutable state is used as a means, for example with smart cards or Trusted Platform Modules. Sometimes it is the purpose of running the protocol, for example in commercial transactions.

Many richly developed tools and techniques, based on well-understood foundations, are available for design and analysis of pure message-passing protocols. But the presence of cross-session state poses difficulties for these techniques.

In this paper we provide a framework for modeling stateful protocols. We define a hybrid analysis method. It leverages theorem-proving -- in this instance, the PVS prover -- for reasoning about computations over state. It combines that with an "enrich-by-need" approach -- embodied by CPSA -- that focuses on the message-passing part. As a case study we give a full analysis of the Envelope Protocol, due to Mark Ryan.

10:15-10:45 Coffee Break
10:45-13:00 Session 90AA: Programming Logics (CSL-LICS)
Location: FH, Hörsaal 1
10:45
Subclasses of Presburger Arithmetic and the Weak EXP Hierarchy

ABSTRACT. It is shown that for any fixed $i>0$, the $\Sigma_{i+1}$-fragment of Presburger arithmetic, \emph{i.e.}, its restriction to $i+1$ quantifier alternations beginning with an existential quantifier, is complete for $\Sigma^{\text{EXP}}_{i}$, the $i$-th level of the weak EXP hierarchy. This result completes the computational complexity landscape for Presburger arithmetic, a line of research which dates back to the seminal work by Fischer \& Rabin in 1974. The second part of the paper applies some techniques developed in the first part in order to establish bounds on sets of naturals definable in the $\Sigma_1$-fragment of Presburger arithmetic: given a $\Sigma_1$-formula $\Phi(x)$, it is shown that the set of solutions is an ultimately periodic set whose period can be doubly-exponentially bounded from below and above.

11:15
On the Discriminating Power of Passivation and Higher-Order Interaction

ABSTRACT. This paper studies the discriminating power offered by higher-order concurrent languages, and contrasts this power with those offered by higher-order sequential languages (à la lambda-calculus) and by first-order concurrent languages (à la CCS). The concurrent higher-order languages that we focus on are Higher-Order pi-calculus (HOpi), which supports higher-order communication, and an extension of HOpi with passivation, a simple higher-order construct that allows one to obtain location-dependent process behaviours. The comparison is carried out by providing embeddings of first-order processes into the various languages, and then examining the resulting contextual equivalences induced on such processes. As first-order processes we consider both Labeled Transition Systems (LTSs) and Reactive Probabilistic Labeled Transition Systems (RPLTSs). The hierarchy of discriminating powers so obtained for RPLTSs is finer than that for LTSs; for instance, in the latter case, the additional discriminating power offered by passivation in concurrency is captured, in sequential languages, by the difference between the call-by-name and call-by-value evaluation strategies of an extended typed lambda-calculus.

11:45
Asymptotic behaviour in temporal logic

ABSTRACT. We study the "approximability" of unbounded temporal operators by time-bounded operators, as some time bounds tend to infinity. More specifically, for formulas in the fragments PLTL$_\Diamond$ and PLTL$_\Box$ of the Parametric Linear Temporal Logic of Alur et al., we provide algorithms for computing the limit entropy as all parameters tend to ∞. As a consequence, we can decide whether the limit entropy of a formula in one of the two fragments coincides with that of its time-unbounded transformation, obtained by replacing each occurrence of a time-bounded operator by its time-unbounded version. The algorithms proceed by translating the two fragments of PLTL into two classes of discrete-time timed automata and analysing their strongly-connected components.

12:15
Weight Monitoring with Linear Temporal Logic: Complexity and Decidability

ABSTRACT. Many important performance and reliability measures can be formalized as the accumulated values of weight functions. In this paper, we introduce an extension of linear time logic including past (LTL) with new operators that impose constraints on the accumulated weight along path fragments. The fragments are characterized by regular conditions formalized by deterministic finite automata (monitor DFA). This new logic covers properties expressible by several recently proposed formalisms. We study the model-checking problem for weighted transition systems, Markov chains and Markov decision processes with rational weights. While the general problem is undecidable, we provide algorithms and sharp complexity bounds for several sublogics that arise by restricting the monitoring DFA.

10:45-13:00 Session 90AB: Proof Theory (CSL-LICS)
Location: FH, Hörsaal 5
10:45
On Context Semantics for Interaction Nets

ABSTRACT. Context semantics is a tool inspired by Girard's geometry of interaction. It has had many applications, from the study of optimal reduction to proofs of complexity bounds. However, context semantics has so far been defined only for the lambda-calculus and linear logic. In order to study other languages, in particular languages with more primitives (built-in arithmetic, pattern matching, …), we define a context semantics for a broader framework: interaction nets, a well-behaved class of graph rewriting systems. Two applications are explored. First, we define a notion of weight, based on context semantics paths, which bounds the length of reductions of nets; this could be used to prove strong complexity bounds for interaction net systems. Then, we define a denotational semantics for a large class of interaction net systems.

11:15
The Geometry of Synchronization
SPEAKER: unknown

ABSTRACT. We graft synchronization onto the most concrete model of Girard's Geometry of Interaction, namely token machines. This is realized by introducing proof-nets for SMLL, an extension of multiplicative linear logic with a specific construct modeling synchronization points, and a multi-token abstract machine model for it. Interestingly, a correctness criterion ensures the absence of deadlock, thereby linking logical and operational properties in a novel way.

11:45
A new correctness criterion for MLL proof nets

ABSTRACT. In Girard's original presentation, proof structures of Linear Logic are hypergraphs whose hyperedges are labeled by logical rules and whose vertices represent the connections between these logical rules. Presentations of proof structures based on interaction nets have the same kind of graphical flavour. Other presentations of proof structures use terms instead of graphs or hypergraphs. The atomic ingredients of these terms are variables related by axiom links. However, the correctness criteria developed so far are adapted to the graphical presentations of proof structures and not to their term-based presentations. We propose a new correctness criterion for constant-free Multiplicative Linear Logic with Mix which applies to a coherence space structure that a term-based proof structure induces on the set of its variables in a straightforward way.

12:15
Non-Elementary Complexities for Branching VASS, MELL, and Extensions
SPEAKER: Ranko Lazić

ABSTRACT. We study the complexity of reachability problems on branching extensions of vector addition systems, which allows us to derive new non-elementary complexity bounds for fragments and variants of propositional linear logic. We show that provability in the multiplicative exponential fragment is non-elementary already in the affine case, and match this lower bound for the full propositional affine linear logic, proving its Tower-completeness. We also show that provability in propositional contractive linear logic is Ackermann-complete.

10:45-12:45 Session 90AC: Invited talk & useful tools (Coq)
Location: MB, Hörsaal 14A
10:45
What is Homotopy Type Theory?
SPEAKER: Daniel Licata

ABSTRACT. The Coq proof assistant is based on Martin-Löf's constructive type theory, as realized in the Calculus of Inductive Constructions. Some recent mathematical models have provided a new perspective on constructive type theory, answering some fundamental questions, and suggesting certain new principles that can be added, such as higher inductive types and Voevodsky's univalence axiom. In this talk, I will give a gentle introduction to homotopy type theory, and explain what impact it may have on developing proofs and programs in Coq. Some questions that will be answered include: How is the logic of "homotopy propositions" different from the logic of Prop in CiC? What are univalence and higher inductive types, and what can they be used for? What changes have already been made to Coq to support homotopy type theory, and what might you expect in the future?
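
For orientation, the univalence axiom mentioned above can be stated in one line: for types A and B in a universe U, the canonical map taking a proof of equality to an equivalence is itself an equivalence,

    (A =_{\mathcal{U}} B) \simeq (A \simeq B).

In particular, equivalent types are interchangeable: any construction or proof transports across an equivalence.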

11:45
QuickChick: Property-Based Testing for Coq
SPEAKER: unknown

ABSTRACT. Co-designing software or hardware systems and their formal proofs is an appealing idea, with the expectation that the rigor enforced by formal methods will percolate through the whole design. In practice, however, carrying out formal proofs while designing even a relatively simple system can be an exercise in frustration, with a great deal of time spent attempting to prove things about broken definitions, and countless iterations for discovering the correct lemmas and strengthening inductive invariants. We believe that property-based testing (PBT) can dramatically decrease the number of failed proof attempts and reduce the overall cost of producing formally verified systems. Despite the existence of experimental tools, Coq is still lagging behind proof assistants like Isabelle, which provides several mature PBT tools. We aim to improve the PBT support in Coq, while also investigating several innovations we could add into the mix, such as polarized mutation testing and a language-based approach to custom generation. We are also exploring whether PBT could bring more confidence to the implementation of Coq itself.
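
For readers who have not used PBT: in Haskell's QuickCheck, which QuickChick transfers to Coq, one states an executable property and tests it on random inputs, getting counterexamples instead of failed proof attempts. A one-property Haskell example (illustrative, not QuickChick code):

    import Test.QuickCheck

    insertSorted :: Int -> [Int] -> [Int]
    insertSorted x []       = [x]
    insertSorted x (y : ys)
      | x <= y    = x : y : ys
      | otherwise = y : insertSorted x ys

    sorted :: [Int] -> Bool
    sorted xs = and (zipWith (<=) xs (drop 1 xs))

    -- Checked on random inputs; failures come with shrunk counterexamples.
    prop_insertPreservesSorted :: Int -> [Int] -> Property
    prop_insertPreservesSorted x xs = sorted xs ==> sorted (insertSorted x xs)

    main :: IO ()
    main = quickCheck prop_insertPreservesSorted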

12:15
Proof-Pattern Search in Coq/SSReflect
SPEAKER: unknown

ABSTRACT. ML4PG is an extension of the Proof General interface of Coq, allowing the user to invoke machine-learning algorithms and find proof similarities in Coq/SSReflect libraries. In this talk, we will show the recent ML4PG features in action, using examples from the standard SSReflect library and HoTT library. We will compare ML4PG with traditional Coq searching tools and dependency graphs.

10:45-12:15 Session 90AD: Tutorial: How do we get inductive invariants? (CAV)
Location: FH, Hörsaal 6
10:45
How do we get inductive invariants? (Part B)

ABSTRACT. Verifying the correctness of loop-free programs (or of general programs, up to bounded depth) is difficult: the state space explodes exponentially as the depth increases. Yet, the difficulty increases as we allow unboundedly many execution steps; proof approaches then generally rely on finding inductive invariants (properties shown to hold initially, then to remain true by induction).

Abstract interpretation attempts to find inductive invariants within a given domain, e.g. conjunctions of linear inequalities. The classical approach iterates a transformer until the property becomes inductive. In general, this approach may not terminate; thus termination is often enforced with a “widening” operator, which attempts to generalize the iterates into an inductive property. Unfortunately, widening operators are brittle, with non-monotonic behaviors (supplying more information about a system may result in worse analysis outcomes!). Therefore, other approaches have been developed (policy iteration,…), which avoid this pitfall.

Finally, we shall discuss possible combinations of abstract interpretation and SMT-solving.

10:45-11:45 Session 90AG (ITRS)
Location: MB, Zeichensaal 14
10:45
A Finite Model Property for Intersection Types

ABSTRACT. We show that the relational theory of intersection types known as BCD has the finite model property; that is, BCD is complete for its finite models. Our proof uses rewriting techniques which have as an immediate by-product the polynomial-time decidability of the subtype preorder ≤ (although this also follows from the so-called beta-soundness of BCD).

11:15
On Isomorphism of "Functional" Intersection and Union Types

ABSTRACT. Type isomorphism is useful for retrieving library components, since a function in a library can have a type different from, but isomorphic to, the one expected by the user. Moreover, type isomorphism gives for free the coercion required to include the function in the user program with the correct type. In the presence of intersection and union types, type isomorphism is not a congruence and cannot be characterised in an equational way. A characterisation can still be given, though it is quite complicated by the interference between functional and non-functional types. The present paper addresses this problem by interpreting each atomic type as the set of functions mapping any argument into the interpretation of the type itself. This is done taking inspiration from the initial projection of Scott's inverse-limit lambda-model. In particular, an isomorphism-preserving reduction between types is defined. The main results of this paper are, on the one hand, the fact that two types with the same normal form are isomorphic, and on the other, an interesting condition assuring isomorphism between types.

10:45-13:00 Session 90AH: SAT Encodings and Semantic Labelling (WST)
Location: FH, Seminarraum 101B
10:45
Automating Elementary Interpretations
SPEAKER: Harald Zankl

ABSTRACT. We report on an implementation of elementary interpretations for automatic termination proofs.

11:15
A Satisfiability Encoding of Dependency Pair Techniques for Maximal Completion
SPEAKER: Haruhiko Sato

ABSTRACT. We present a general approach to encode termination in the dependency pair framework as a satisfiability problem, and include encodings of dependency graph and reduction pair processors. We use our encodings to increase the power of the completion tool Maxcomp.

11:45
Automated SAT Encoding for Termination Proofs with Semantic Labelling and Unlabelling
SPEAKER: Alexander Bau

ABSTRACT. We SAT-encode constraints for termination orders based on semantic labelling, argument filters, and recursive path orders, within the dependency pairs framework.

We specify constraints in a high-level Haskell-like language and translate them to SAT fully automatically with the CO4 compiler. That way, constraints can be combined easily.

This allows us to write a single constraint stating: find a model, and a sequence of ordering constraints for the labelled system, such that at least one original rule can be removed completely.

As an application, we produce a termination proof, verified by CeTA, for TRS/Aprove/JFP_Ex31.

12:15
A Solution to Endrullis-08 and Similar Problems
SPEAKER: Alfons Geser

ABSTRACT. We prove the termination of the string rewriting system Endrullis-08, the termination of infinitely many related systems, and even the termination of their union, an infinite string rewriting system.

12:45
Discussion
SPEAKER: Everyone
10:45-13:00 Session 90AI: EPR, QBF, and QCSP (QUANTIFY)
Location: FH, Seminarraum 136
10:45
Quantifier handling in calculi for quantified Boolean formulas
SPEAKER: Uwe Egly

ABSTRACT. Quantified Boolean formulas (QBFs) generalize propositional formulas by admitting quantifications over propositional variables. In this talk, I will discuss the handling of quantifiers in different calculi for QBFs. The considered calculi are different versions of Q-resolution and sequent systems with different quantifier rules. Since QBFs can be considered as a restriction of first-order predicate logic where formulas are interpreted only over a two-element domain, I will also consider translations of QBFs to first-order formulas together with first-order resolution. I will compare the different calculi with respect to polynomial simulation and exponentially separate the ones which instantiate the formula from the ones where no instantiation is possible.

11:30
Encoding Reachability with Quantification

ABSTRACT. We report on techniques for translating bounded propositional reachability problems into Quantified Boolean Formulae (QBF). The techniques exploit the binary-tree structure of the QBF quantification to produce encodings logarithmic in the size of the instance and thus exponentially smaller than the corresponding SAT encoding with the same bound. We briefly describe our experimental results, and discuss the challenging structure that is implicit in these encodings.
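
The classic construction behind such logarithmic encodings is iterative squaring, in which one copy of the reachability formula is shared between the two halves of a path via universal quantification; a textbook formulation (not necessarily the authors' exact encoding) is:

    R_0(x, y) := T(x, y) \lor x = y
    R_k(s, t) := \exists m\, \forall b\, \exists x\, \exists y.\;
                 \big((b \to x = s \land y = m) \land (\lnot b \to x = m \land y = t)\big)
                 \land R_{k-1}(x, y)

Here $R_k$ expresses reachability within $2^k$ steps of the transition relation $T$, yet the formula has size $O(k \cdot |T|)$ rather than the $O(2^k \cdot |T|)$ of the unrolled SAT encoding with the same bound.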

12:00
EPR Encodings of Bit-Vector Problems Even With Quantifiers

ABSTRACT. Bit-precise reasoning is essential in many applications of Satisfiability Modulo Theories (SMT). Quantifier-free resp. quantified bit-vector formulas (QF_BV resp. BV) play an important role in hardware resp. software verification. In recent years, efficient approaches for solving such formulas have been developed. Most QF_BV solvers rely on bit-blasting, while BV solvers rely on quantifier instantiation. In [Kovasznai, Froehlich, & Biere, SMT 2012], we investigated the worst-case complexity of those logics and argued that bit-blasting resp. quantifier instantiation is exponential resp. double-exponential in general. We also introduced the criterion of "bit-width boundedness" for bit-vector problems, a property that seems to hold for most practical problems, and proved that it reduces the worst-case complexity by an exponential.
In this talk, we give a survey of [Kovasznai, Froehlich, & Biere, CADE 2013], proposing a polynomial translation from QF_BV to Effectively Propositional Logic (EPR) and presenting our tool BV2EPR together with experimental data. Note that while our translation guarantees polynomiality for any QF_BV formula, it is too robust for bit-width bounded QF_BV problems and therefore cannot be considered very practical. In contrast, a translation from bit-width bounded BV problems to EPR might be of practical relevance. Therefore, in the rest of the talk, we present ideas on how to extend our translation further to BV formulas.

12:30
Beyond Q-Resolution and Prenex Form: A Proof System for Quantified Constraint Satisfaction
SPEAKER: Hubie Chen

ABSTRACT. We consider the quantified constraint satisfaction problem (QCSP) which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context; we will discuss how, in a sense made precise, it generalizes Q-resolution and gives a clear proof of the soundness of Q-resolution.

As an application, we give an algebraic, Ehrenfeucht-Fraïssé-style characterization of the condition of not having bounded-width proofs (for a QCSP instance) and show that the resulting theory can be used to obtain tractability results for the QCSP.

This talk is based on the article available at http://arxiv.org/abs/1403.0222.

10:45-12:45 Session 90AJ (AISS)
Location: FH, Seminarraum 134
10:45
Parameterized Model Checking of Rendezvous Systems
SPEAKER: unknown

ABSTRACT. Distributed systems are often parameterized by the number of participating processes, and thus the parameterized model checking problem is equivalent to model checking an infinite-state system. A standard technique for solving the parameterized model checking problem is to reduce it to the classic model checking problem for finitely many finite-state systems. This work considers some of the theoretical power and limitations of this technique. We focus on concurrent systems in which processes communicate via pairwise rendezvous, as well as the special cases of disjunctive guards and token passing; specifications are expressed in indexed temporal logic without the next operator; and the underlying network topologies are generated by suitable Monadic Second Order Logic formulas and graph operations. First, we settle the exact computational complexity of the parameterized model checking problem for some of our concurrent systems, and establish new decidability results for others. Second, we consider the cases in which model checking the parameterized system can be reduced to model checking some fixed number of processes; this number is known as a cutoff. We provide many cases for which such cutoffs can be computed, establish lower bounds on their size, and identify cases where no cutoff exists. Third, we consider cases for which the parameterized system is equivalent to a single finite-state system (more precisely a Büchi word automaton), and establish tight bounds on the sizes of such automata.

11:15
Model Checking Communicating Finite-State Machines using Local Temporal Logics
SPEAKER: Roy Mennicke

ABSTRACT. We investigate the complexity of the model checking problem for local temporal logics and communicating finite-state machines (CFMs). The input is a number π of processes, a temporal formula F, a CFM C, and a bound β on the maximal number of pending messages allowed. The question is whether all message sequence charts (MSCs) which are accepted by C, and for which there exists an execution with β-bounded channels, fulfill F.

We show that this problem can be solved in space polynomial in C, β, and F, and exponential in π, for temporal logics consisting of the usual modalities used in the context of concurrent systems.

We also state that if the modalities of the underlying temporal logic can be defined using formulas from the n-th level of the monadic quantifier alternation hierarchy, the uniform model checking problem (where the temporal logic is part of the input) is hard for (n+1)-fold exponential space and can be solved in (n+3)-fold exponential space. If we fix the temporal logic and the number of processes, we obtain matching upper and lower bounds.

11:45
Unary Pushdown Automata and Straight-Line Programs

ABSTRACT. We consider decision problems for deterministic pushdown automata over the unary alphabet (udpda, for short). Udpda are a simple computation model that accept exactly the unary regular languages, but can be exponentially more succinct than finite-state automata.

We complete the complexity landscape for udpda by showing that emptiness (and thus universality) is P-hard, equivalence and compressed membership problems are P-complete, and inclusion is coNP-complete. Our upper bounds are based on a translation theorem between udpda and straight-line programs over the binary alphabet (SLPs). We show that the characteristic sequence of any udpda can be represented as a pair of SLPs---one for the prefix, one for the lasso---that have size linear in the size of the udpda and can be computed in polynomial time. Hence, decision problems on udpda are reduced to decision problems on SLPs. Conversely, any SLP can be converted in logarithmic space into a udpda, and this forms the basis for our lower bound proofs. We show coNP-hardness of the ordered matching problem for SLPs, from which we derive coNP-hardness for inclusion. In addition, we complete the complexity landscape for unary nondeterministic pushdown automata by showing that the universality problem is Pi2P-hard, using a new class of integer expressions.

Our techniques have applications beyond udpda. We show that our results imply Pi2P-completeness for a natural fragment of Presburger arithmetic and coNP lower bounds for compressed matching problems with one-character wildcards.
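
To give a feel for why SLPs capture succinctness, here is a hypothetical sketch (in Python, not tied to the paper's constructions) of the standard trick of computing the length of the derived word in time linear in the SLP, even though the word itself may be exponentially long:

    def slp_length(rules, start):
        # rules: nonterminal -> pair of symbols ('0'/'1' are terminals).
        # Computes the length of the derived word without expanding it.
        memo = {}
        def length(sym):
            if sym in ('0', '1'):
                return 1
            if sym not in memo:
                memo[sym] = sum(length(s) for s in rules[sym])
            return memo[sym]
        return length(start)

    # 30 rules derive a word of length 2**30:
    rules = {'X0': ('0', '1')}
    for i in range(1, 30):
        rules['X%d' % i] = ('X%d' % (i - 1), 'X%d' % (i - 1))
    print(slp_length(rules, 'X29'))   # 1073741824 == 2**30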

12:15
Of stacks (of stacks (...) with blind counters) with blind counters

ABSTRACT. Recent work on automata with abstract storage revealed a class of storage mechanisms that proves quite expressive and amenable to various kinds of algorithmic analysis. The storage mechanisms in this class are obtained by building stacks and adding blind counters.

The former means constructing a new mechanism that stores a stack whose entries are configurations of an old mechanism. One can then manipulate the topmost entry, pop it if empty, or start a new one on top. Adding a blind counter to an old mechanism yields a new mechanism in which the old one and a blind counter can be used simultaneously. We call the resulting model a "stacked counter automaton".

This talk presents results on expressivity, Parikh images, membership problems, and the computability of downward closures.

10:45-11:45 Session 90AK: Invited Tutorials (First- and Higher-order Automated Theorem Provers) (APPA)
Location: FH, Seminarraum 325/1
10:45
First-Order Automated Theorem Provers
11:15
Higher-Order Automated Theorem Provers
10:45-12:45 Session 90AL: Symbolic Computation (PAS)
Location: FH, Seminarraum 138A
10:45
Synthesis of Algorithms on Sets Represented as Monotone Lists

ABSTRACT. We choose to represent sets as sorted lists without duplications, which we call monotone lists. Without loss of generality, we assume that the elements of the sets are from an ordered domain, and we take increasingly sorted lists. This approach leads to more efficient algorithms than the ones for operating on sets represented as non-sorted lists. We use two functions: the function R, which applied to a set returns its representation as a monotone list, and the function S, which applied to a (monotone) list returns its corresponding set. These two functions are bijective and inverse to each other, and they represent the isomorphism between the domain of finite sets (with the usual signature) and the domain of monotone lists with the appropriate signature induced by this isomorphism. The synthesis problem consists in producing the algorithms implementing this signature.

Each synthesis starts as an inductive constructive proof which applies specific strategies and inference rules. These strategies and inference rules are based on the corresponding properties of sets (as monotone lists). We synthesize (by extracting from proofs) five algorithms: two predicates, Member and Inclusion, and three functions, Union, Intersection, and Difference.

In parallel with the process of algorithm synthesis we explore the theory of monotone lists.
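
A sketch, in Python rather than the paper's synthesis setting, of the kind of algorithms being synthesized: on monotone lists, Union and Intersection become single linear merges instead of the quadratic scans needed for unsorted lists. The function names mirror the abstract; the code is illustrative only.

    def union(xs, ys):
        # Union of two strictly increasing lists, again strictly increasing.
        out, i, j = [], 0, 0
        while i < len(xs) and j < len(ys):
            if xs[i] < ys[j]:
                out.append(xs[i]); i += 1
            elif ys[j] < xs[i]:
                out.append(ys[j]); j += 1
            else:                          # equal element: keep one copy
                out.append(xs[i]); i += 1; j += 1
        return out + xs[i:] + ys[j:]

    def intersection(xs, ys):
        # Same merge scheme; emit only the common elements.
        out, i, j = [], 0, 0
        while i < len(xs) and j < len(ys):
            if xs[i] < ys[j]:
                i += 1
            elif ys[j] < xs[i]:
                j += 1
            else:
                out.append(xs[i]); i += 1; j += 1
        return out

    print(union([1, 3, 5], [2, 3, 6]))         # [1, 2, 3, 5, 6]
    print(intersection([1, 3, 5], [2, 3, 6]))  # [3]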

11:15
New predicate transformer for symbolic modeling

ABSTRACT. The paper presents a new predicate transformer for symbolic modeling. A predicate transformer is a symbolic function that is used for computing transitions of a system whose states are represented by means of first-order formulas. The systems under consideration are compositions of environments and agents inserted into these environments (insertion models). They are specified by means of basic protocols, which represent formal requirements on systems. The previous version of the predicate transformer allowed only existential quantifiers in formulas. The new one allows both quantifiers, with some restrictions on formulas. The use of the universal quantifier is illustrated on the example of the MESI protocol.

11:45
VTOS: A Finite State Machine Model of OS and Formalized Verification of Its Microkernel
SPEAKER: unknown

ABSTRACT. An operating system is a huge piece of software with complicated functionality. The correctness of even a single module cannot be established through software testing alone. The seL4 project has proved the correctness of each function of a microkernel operating system; however, the correctness of each single function of an operating system does not by itself establish its integrity and other security properties. In this paper we propose a finite-state-machine model of an OS (OSMFSM). We design our VTOS (Verified Trusted Operating System), a microkernel operating system aimed at being formally verified and trusted, according to this model. We define the states of the OSMFSM through the global objects in VTOS, and the transition function of the OSMFSM as a piecewise-defined function through the functions in VTOS. By establishing a domain for VTOS with the help of the OSMFSM, together with another domain isomorphic to the former in Isabelle/HOL, we formally prove the correctness of some functions in the microkernel of VTOS. With the help of the invariance properties of the FSM, we propose a method for describing and proving the integrity of an OS.

12:15
Symbolic Problem Solving With Bidirectional Executable Representations
SPEAKER: Jakob Praher

ABSTRACT. Programs are typically written by humans and executed by computers in order to solve problems over a specific domain. A program - a sequence of instructions executable by a computer - represents the encoded algorithm for solving the problem. The algorithm and data structures are created by the mental understanding of the problem given. This representation also addresses details such as run time efficiency, understandability, or maintainability.

Conversely, when a programmer tries to understand a program, he/she decodes it back into an abstract mental model. From this model the programmer can start to improve or change the program, which is again encoded and represented as a modified program.

In this paper we show a problem-solving process that starts with finding, enhancing or inventing a logical theory over first-order predicate logic. This theory represents the concepts of the problem domain, relations between the concepts, and operations. A theory is a set of formulae. Operations on terms of the theory correspond to functionality of the solution. These operations can be characterized by partial functions from given input to expected output. Requirements formally express the functionality of the operations and are commonly defined using input/output conditions. If an operation's input satisfies the input condition, then the operation's output (if obtainable) satisfies the output condition.

The process is incremental. It is important to capture the high-level requirements. For representing the solution as a machine-executable program, the theory should be sufficiently refined so that the requirements fit the problem and capture important properties of the execution environment. Detailed conditions make it possible to formally evaluate different design approaches. A design approach refers to the abstractions and methods used for framing the algorithms. As an example, in the design of a hash table it is important to capture the concept of collisions in the table abstractly, in order to compare different strategies for handling collisions.

10:45-13:00 Session 90AM: Certification (VSTTE)
Location: FH, Zeichensaal 3
10:45
Up and Out: Scaling Formal Analysis Using Model-Based Development and Architecture Modeling

ABSTRACT. Complex systems are hierarchically constructed in layers, and systems are often constructed middle-out rather than top-down; compatibility with existing systems and availability of specific components influence high-level requirements. This approach leads to an interplay between requirements and design: design choices made at higher levels of abstraction levy requirements on system components at lower levels of abstraction, and design choices restrict the set of possible requirements and acceptable deployment environments. A consequence is that the designation of a "design choice" or "requirement" depends largely on one's vantage point within the hierarchy of system components. To support iterative design, requirements and architectural design should be more closely aligned: requirements models must account for hierarchical system construction, and architectural design notations must better support specification of requirements for system components.

In this presentation, I describe tools supporting iterative development of architecture and verification based on software models.  We represent the hierarchical composition of the system in the Architecture Analysis & Design Language (AADL), and use an extension to the AADL language to describe requirements at different levels of abstraction for compositional verification.  To describe and verify component-level behavior, we use Simulink and Stateflow and multiple analysis tools.

11:45
A logical analysis of framing for specifications with pure method calls
SPEAKER: unknown

ABSTRACT. For specifying and reasoning about object-based programs it is often attractive for contracts to be expressed using calls to pure methods. It is useful for pure methods to have contracts, including read effects to support local reasoning based on frame conditions. This leads to puzzles such as the use of a pure method in its own contract. These ideas have been explored in connection with verification tools based on axiomatic semantics, guided by the need to avoid logical inconsistency, and focusing on encodings that cater for first-order automated provers. This paper adds pure methods and read effects to region logic, a first-order program logic that features frame-based local reasoning and a proof rule for linking of clients with modules to achieve end-to-end correctness by modular reasoning. Soundness is proved with respect to a conventional operational semantics and using the extensional (i.e., relational) interpretation of read effects. The developments in this paper (a) could potentially guide implementations of linking as used in modular verifiers and (b) should serve as a basis for studying observationally pure methods and encapsulation in the setting of relational region logic.

12:10
A certifying frontend for (sub)polyhedral abstract domains

ABSTRACT. Convex polyhedra provide a relational abstraction of numerical properties for static analysis of programs by abstract interpretation. We describe a lightweight certification of polyhedral abstract domains using the Coq proof assistant. Our approach consists in delegating most computations to an untrusted backend and checking its outputs with a certified frontend. The backend is free to implement relaxations of domain operators (i.e. a subpolyhedral abstract domain) in order to trade some precision for more efficiency, but must produce hints about the soundness of its results. Previously published experimental results show that the certification overhead with a full-precision backend is small and that the resulting certified abstract domain has comparable performance to non-certifying state-of-the-art implementations.

12:35
Certification of Nontermination Proofs using Strategies and Nonlooping Derivations
SPEAKER: Sarah Winkler

ABSTRACT. The development of sophisticated termination criteria for term rewrite systems has led to powerful, complex tools that automatically obtain (non)termination proofs. While many techniques to establish termination have already been formalized, making it possible to certify such proofs, this is not the case for nontermination. In particular, the proof checker CeTA was so far limited to (innermost) loops. In this paper we present an Isabelle/HOL formalization of an extended repertoire of nontermination techniques. Firstly, we formalized techniques for nonlooping nontermination. Secondly, the available strategies include (an extended version of) forbidden patterns, which in particular covers outermost and context-sensitive rewriting. Finally, a mechanism to support partial nontermination proofs further extends the applicability of our proof checker.

10:45-13:00 Session 90AN: LaSh morning 2: Contributed Talks (LaSh)
Location: MB, Aufbaulabor
10:45
Meta-level Representations in the IDP Knowledge Base System: Bootstrapping Inference Engine Development

ABSTRACT. Declarative systems aim at solving tasks by running inference engines on a specification, freeing their users from having to specify how a task should be tackled. In order to provide such functionality, declarative systems themselves apply complex reasoning techniques, and, as a consequence, the development of such systems can be laborious work. In this paper, we demonstrate that the declarative approach can be applied to develop such systems, by tackling the tasks solved inside a declarative system declaratively. In many cases this requires a meta-level representation of those specifications. Furthermore, using the language of the system itself for the meta-level representation opens the door to bootstrapping: an inference engine can be implemented using the very inference it performs.

One such declarative system is the IDP knowledge base system, based on the language FO(.)IDP, a rich extension of first-order logic. In this paper, we discuss how FO(.)IDP can support meta-level representations in general and which language constructs make those representations even more natural. Afterwards, we show how meta-FO(.)IDP can be applied to bootstrap its model expansion inference engine. We discuss the advantages of this approach: the resulting program is easier to understand, easier to maintain and more flexible.

11:15
Logical Machinery of Heuristics (Preliminary Report)

ABSTRACT. This paper is a preliminary report on a new declarative language that allows the programmer to specify heuristics (problem-specific inference methods) about the problem that is being solved. The heuristics that are defined in our language control and/or change the behavior of the underlying solver. In this way, we are able to attain problem-specific solvers that benefit from both the state-of-the-art general inference methods available in general-purpose solvers, and the problem-specific reasoning methods that are applicable only to this specific problem.

11:45
Modeling High School Timetabling as Partial Weighted maxSAT

ABSTRACT. High School Timetabling (HSTT) is a well-known and widespread problem. The problem consists of coordinating resources (e.g. teachers, rooms), time slots and events (e.g. lectures) with respect to various constraints. Unfortunately, HSTT is hard to solve, and just finding a feasible solution for simple variants of HSTT has been proven to be NP-complete. In addition, timetabling requirements vary from country to country, and because of this many variations of HSTT exist. Recently, researchers have proposed a general HSTT problem formulation in an attempt to standardize the problem from different countries and school systems.

In this paper, we provide for the first time a detailed modeling of the general HSTT as SAT, in which all constraints are treated as hard constraints. In order to take soft constraints into account, we extend this model to Partial Weighted maxSAT. In addition, we present experimental results and compare with other approaches, using both artificial and real-world instances, all of which were taken from the Third International Timetabling Competition 2011 benchmark repository. Our approach gives competitive results and in some cases outperforms the winning algorithm from the competition.
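
A hedged illustration of the hard/soft split in Partial Weighted maxSAT, using the standard DIMACS WCNF convention in which hard clauses carry a "top" weight exceeding the sum of all soft weights (a generic Python sketch, not the paper's encoding):

    def write_wcnf(nvars, hard, soft, path):
        # hard: clauses (lists of DIMACS literals); soft: (weight, clause) pairs.
        top = sum(w for w, _ in soft) + 1        # exceeds any sum of soft weights
        with open(path, "w") as f:
            f.write("p wcnf %d %d %d\n" % (nvars, len(hard) + len(soft), top))
            for cl in hard:                      # hard clause = weight 'top'
                f.write("%d %s 0\n" % (top, " ".join(map(str, cl))))
            for w, cl in soft:                   # soft clause = its own weight
                f.write("%d %s 0\n" % (w, " ".join(map(str, cl))))

    # Toy instance: x1 is mandatory; prefer x2 (weight 3) over not-x2 (weight 1).
    write_wcnf(2, hard=[[1]], soft=[(3, [2]), (1, [-2])], path="toy.wcnf")

Any optimal solution must satisfy every top-weight clause (the timetabling feasibility constraints) and then minimizes the total weight of violated soft clauses (the timetabling preferences).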

12:15
Learning Optimal Bounded Treewidth Bayesian Networks via Maximum Satisfiability

ABSTRACT. Bayesian network structure learning is the well-known computationally hard problem of finding a directed acyclic graph structure that optimally describes given data. A learned structure can then be used for probabilistic inference. While exact inference in Bayesian networks is in general NP-hard, it is tractable in networks with low treewidth. This provides good motivation for developing algorithms for the NP-hard problem of learning optimal bounded treewidth Bayesian networks (BTW-BNSL). In this work, we develop a novel score-based approach to BTW-BNSL, based on casting BTW-BNSL as weighted partial maximum satisfiability. We demonstrate empirically that the approach scales notably better than a recent exact dynamic programming algorithm for BTW-BNSL.

Submitted to LaSh’14 for presentation only.

Conference version of this paper appears as Jeremias Berg, Matti Järvisalo, and Brandon Malone. Learning Optimal Bounded Treewidth Bayesian Networks via Maximum Satisfiability. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS 2014), volume 33 of JMLR Workshop and Conference Proceedings, pages 86-95. JMLR, 2014.

12:45
Depart for Workshop Lunch
SPEAKER: Lash

ABSTRACT. The LaSh workshop lunch is included in the registration fee. Contact the workshop organizers if you are not registered but wish to join the lunch.

10:45-12:45 Session 90AP: Invited talk and two contributed talks (CSPSAT)
Location: MB, Zeichensaal 15
10:45
Invited talk: MaxSat and SoftCSPs

ABSTRACT. The formalisms for MaxSat and SoftCSPs are strongly related, much like SAT and CP. Both address the problem of constrained optimization. The 800-kg gorilla of this area is Mixed Integer Programming (MIP). Of these three formalisms, the technology for solving MIPs is the most developed, although MaxSat and SoftCSP solvers have also made significant progress. All three formalisms have focused on exploiting quite distinct algorithmic methods, and empirical evidence indicates that each of them can dominate the others on particular problems. A promising research direction is to explore the hybridization of these formalisms, and in this talk I will discuss some of the work that has been done, and that is yet to be done, in exploiting their disparate strengths.

11:45
Some New Tractable Classes of CSPs and their Relations with Backtracking Algorithms

ABSTRACT. In this paper, we investigate the complexity of algorithms for solving CSPs which are classically implemented in practical solvers, such as Forward Checking or Backtracking with Arc Consistency (RFL or MAC). We introduce a new parameter for measuring their complexity and then derive new complexity bounds. By relating the complexity of CSP algorithms to graph-theoretical parameters, our analysis allows us to define new tractable classes, which can be solved directly by the usual CSP algorithms in polynomial time, without the need to recognize the classes in advance. Thus, our approach yields new tractable classes of CSPs that are naturally exploited by solvers, which suggests new ways to explain, in some cases, the practical efficiency of classical search algorithms.

12:15
BreakIDGlucose: on the importance of row symmetry
SPEAKER: Jo Devriendt

ABSTRACT. Symmetry is a topic studied by both the Satisfiability (SAT) and Constraint Programming (CP) communities. However, few attempts at transferring results between the two communities have been made. This paper makes the link by explaining how symmetries of Constraint Satisfaction Problems (CSPs) transfer to their direct SAT encoding. We point out that many interesting symmetry groups in CP map to piecewise row symmetries of the CSP's SAT encoding, which (using a CP result) can be broken completely. We then show that the standard SAT symmetry approaches do not break these symmetries completely, and we address this issue by designing a new symmetry-breaking preprocessor, implemented as the BreakIDGlucose SAT solver. BreakIDGlucose's first place in the hard combinatorial track of the 2013 SAT competition shows the relevance of this approach.
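
The CP result alluded to is presumably the standard matrix-symmetry fact that full row symmetry is broken completely by a chain of lexicographic ordering constraints over the rows: for a model with interchangeable rows $r_1, \dots, r_n$ one posts

    r_1 \leq_{\mathrm{lex}} r_2 \leq_{\mathrm{lex}} \cdots \leq_{\mathrm{lex}} r_n,

which keeps exactly one (lex-sorted) representative of each equivalence class of assignments under row permutations.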

10:45-12:45 Session 90AQ: Invited Talk (Fitting) and Tutorial (Baader) (LATD)
Location: MB, Festsaal
10:45
The Range of Realization: Which modal logics have explicit counterparts?
11:45
Tutorial: Fuzzy Description Logics (Part 1)
SPEAKER: Franz Baader
10:45-11:45 Session 90AR: Contributed talks to NLSR (NLSR)
Location: FH, CAD 2
10:45
Exposing Predictive Analytics through Natural Language Dialog

ABSTRACT. In industry and academia alike, there is a renewed interest in data processing in general, and in deriving value from widely available data more specifically. The applications developed to this end often have non-experts as their audience, which has led to a lot of research into verbalizing data in natural language. However, the proposed solutions often include predictive technologies that, along with predicted future events or behaviours, also generate additional information such as probabilities and confidence scores. This gives rise to data that goes beyond future facts alone and that has to be communicated to users in order to enable a correct interpretation of the predictions. In this paper we explore the additional natural language needed when applications offer predictive analytics to their users. We demonstrate how to modularly implement the grammars needed to verbalize predictive aspects, independently of (existing) conceptualizations and lexical resources available for the domain of the data.

11:15
Natural Language Access to Data

ABSTRACT. Knowledge and reasoning in a specific subject domain can greatly assist in natural language processing. This is demonstrated in the context of a question answering system for accessing data for business enterprise software.

10:45-13:00 Session 90AS (ParSearchOpt)
Location: MB, Hörsaal 4
10:45
A Portfolio Based Parallel SAT Solver with Multiple Deletion Strategies
SPEAKER: unknown

ABSTRACT. Conflict-based clause learning is known to be an important component of modern SAT solving. Because of the exponential blow-up of the learnt-clause database, maintaining a relevant and polynomially bounded set of learnt clauses is crucial for the efficiency of clause-learning SAT solvers. In this paper, we first compare several criteria for selecting the most relevant learnt clauses against a simple random selection strategy. We then propose new criteria allowing us to select relevant clauses w.r.t. a given search state. Finally, we use such strategies as a means to diversify the search in a portfolio-based parallel solver. An experimental evaluation comparing the classical ManySAT solver with one augmented with multiple deletion strategies shows the interest of such an approach.
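
A minimal sketch of the two poles being compared, relevance-based deletion versus random deletion, in Python; the LBD/activity scoring shown is the Glucose-style criterion commonly used in this line of work, and all field names are illustrative:

    import random

    def reduce_db_lbd(learnts, keep_ratio=0.5):
        # Relevance-based deletion: keep clauses with low LBD ("glue"),
        # break ties by activity, and drop the rest of the database.
        learnts.sort(key=lambda c: (c["lbd"], -c["activity"]))
        return learnts[: max(1, int(len(learnts) * keep_ratio))]

    def reduce_db_random(learnts, keep_ratio=0.5):
        # The baseline strategy: keep a uniformly random half.
        return random.sample(learnts, max(1, int(len(learnts) * keep_ratio)))

    db = [{"clause": [1, -2],    "lbd": 2, "activity": 5.0},
          {"clause": [3, 4, -5], "lbd": 7, "activity": 0.1},
          {"clause": [-1, 6],    "lbd": 3, "activity": 2.5}]
    print([c["clause"] for c in reduce_db_lbd(db)])   # [[1, -2]]

Running different deletion policies in different threads is one simple way to obtain the search diversification the abstract describes.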

11:15
Towards a Parallel Hierarchical Adaptive Solver Tool
SPEAKER: unknown

ABSTRACT. Constraint satisfaction and combinatorial optimization problems, even when modeled with efficient metaheuristics such as local search, remain computationally very intensive. Solvers stand to benefit significantly from execution on parallel systems, which are increasingly available. The architectural diversity and complexity of the latter mean that these systems pose ever greater challenges to effective use, both from the point of view of the modeling effort and from that of the degree of coverage of the available computing resources. In this article we discuss requirements and design issues for a framework to make efficient use of various parallel architectures.

11:45
On the Parallelization of Randomized Propagation-based Constraint Solving
SPEAKER: unknown

ABSTRACT. We propose to apply a general framework for estimating the parallel performance of Las Vegas algorithms to randomized propagation-based constraint solvers, i.e., when, in the labeling phase, both the variable to instantiate and its value are chosen randomly. Indeed, by analyzing the runtime of the sequential process (which varies from one run to another because of the stochastic nature of the algorithm) and approximating this runtime distribution with statistical methods, the runtime behavior of the parallel process can be predicted by a model based on order statistics. We apply this approach to study the behavior of a randomized version of the Gecode solver on three classical CSP problems, and compare the predicted performance to the results of an actual experiment on parallel hardware with up to 256 cores.
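
A sketch of the order-statistics model under a simplifying assumption: with n independent copies of a randomized solver, the parallel runtime is the minimum of n i.i.d. draws from the sequential runtime distribution, which can be estimated by resampling the observed runtimes (illustrative Python, not the authors' statistical machinery):

    import random

    def predict_parallel_runtime(observed, cores, trials=10000):
        # The parallel runtime is modeled as the first-order statistic (the
        # minimum) of 'cores' i.i.d. draws from the runtime distribution,
        # approximated here by resampling the observed sequential runtimes.
        total = 0.0
        for _ in range(trials):
            total += min(random.choice(observed) for _ in range(cores))
        return total / trials

    runtimes = [random.expovariate(1.0) for _ in range(100)]   # stand-in data
    seq = sum(runtimes) / len(runtimes)
    par = predict_parallel_runtime(runtimes, cores=256)
    print("predicted speedup on 256 cores: %.1f" % (seq / par))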

12:15
A Constraint-based Parallel Local Search for Disjoint Rooted Distance-Constrained Minimum Spanning Tree Problem
SPEAKER: unknown

ABSTRACT. Many network design problems arising in areas as diverse as VLSI circuit design, QoS routing, traffic engineering, and computational sustainability require clients to be connected to a facility under path-length constraints and budget limits. These problems can be modelled as the Rooted Distance-Constrained Minimum Spanning-Tree Problem (RDCMST), which is NP-hard. Such networks are vulnerable to failures, so it is often important to ensure that all clients are connected to two or more facilities via edge-disjoint paths. We call this problem the Disjoint RDCMST (DRDCMST). We present a constraint-based parallel local search algorithm for solving the DRDCMST. The traditional way of extending a sequential algorithm to run in parallel is either to perform portfolio-based search in parallel or to perform parallel neighbourhood search. We instead exploit the semantics of the constraints of the problem to perform multiple moves in parallel, by ensuring that they are mutually independent. The ideas presented in this paper are general and can be adapted to any other problem. The effectiveness of our approach is demonstrated by experimenting with a set of problem instances taken from real-world passive optical network deployments in Ireland and the UK. Results show that performing moves in parallel can significantly reduce the time required by our local-search approach.

10:45-12:45 Session 90AT: Technical Session (ASA)
Location: MB, Zeichensaal 13
10:45
Banking Security Wish List: Notes from the Edge
SPEAKER: George French
11:25
Well-typed generic smart-fuzzing for APIs

ABSTRACT. We demonstrate how a tiny domain-specific language for the description of APIs makes it possible to perform black-box testing of APIs. Given an API, our testing framework behaves like regular client code and combines functions exported by the API to build elements of the data-types that the API exports. Once the API has been systematically exercised, the user can verify that some properties hold at the API level. In the submitted abstract, we use the running example of data-structure APIs, but the very same ideas apply to the smart-fuzzing of security APIs.

12:05
Caetus tool
SPEAKER: Josef Hertl

ABSTRACT. The goal of the paper is to develop software for automated analysis of PKCS#11 tokens, with a focus on the security of stored cryptographic keys. The paper first outlines the PKCS#11 standard and the Cryptoki API specified by the standard, including the various components of the API. It then describes the mechanics of cryptographic key protection in Cryptoki, analyses the individual functions relevant to key protection, and defines a first set of tests. The paper then analyses the previously published attacks against keys stored on the device and outlines a second set of tests focused on discovering attack vulnerabilities. Afterwards, the paper examines various ways to improve the security of Cryptoki by altering the standard; the discussion of the proposed restrictions takes into consideration their impact on the practical usability of the tokens. In order to further evaluate the security of the tokens against possible attacks, the paper also utilizes model-checker software: the results of the previously defined tests are used to compose a model that is sent to a model checker for further analysis.

Since the developed software is written in Java, the paper also examines the Java PKCS#11 wrapper and discusses the advantages and disadvantages of using it instead of C for the implementation. The most obvious advantage is that the software can be easily deployed on other operating systems, which is particularly useful for testing tokens that only work on Windows. The disadvantage is that the wrapper does not implement all the Cryptoki functions, but the paper shows that these limitations either do not matter or can be worked around.

The paper also presents the developed tool, called Caetus. The tool first connects to the token, discovers the basic capabilities of the device, and based on these capabilities performs in-depth testing of the Cryptoki functions. The first part of the tests focuses on adherence to the PKCS#11 standard, which means the discovered issues do not necessarily represent security risks. The other set of tests attempts to discover vulnerability of the token to the previously published attacks and other trivial attacks. Finally, the tool can use the discovered capabilities to compose a model specification which is then sent to the model checker. The tool is currently optimized for the analysis of software tokens. The last part of the paper consists of tables with the results of token testing and a discussion of the results.

10:45-12:00 Session 90AU: Contributed Talks (DL and NMR)
Location: EI, EI 7
10:45
Towards Practical Deletion Repair of Inconsistent DL-programs
SPEAKER: unknown

ABSTRACT. Nonmonotonic Description Logic (DL-) programs couple nonmonotonic logic programs with DL-ontologies through queries in a loose way, which may lead to inconsistency, i.e., lack of an answer set. Recently defined repair answer sets remedy this, but a straightforward computation method lacks practicality. We present a novel evaluation algorithm for deletion repair answer sets based on support sets, which reduces evaluation of DL-Lite_A ontology queries to constraint matching. This leads to significant performance gains, a step towards practical inconsistency management.

11:10
An Argumentation System for Reasoning with Conflict-minimal Paraconsistent ALC
SPEAKER: unknown

ABSTRACT. The semantic web is an open and distributed environment in which it is hard to guarantee consistency of knowledge and information. Under the standard two-valued semantics, everything is entailed if knowledge and information are inconsistent. The semantics of the paraconsistent logic LP offers a solution. However, if the available knowledge and information are consistent, the set of conclusions entailed under the three-valued semantics of the paraconsistent logic LP is smaller than the set of conclusions entailed under the two-valued semantics. Preferring conflict-minimal three-valued interpretations eliminates this difference.

Preferring conflict-minimal interpretations introduces non-monotonicity. To handle the non-monotonicity, this paper proposes an assumption-based argumentation system. Assumptions needed to close branches of a semantic tableau form the arguments. Stable extensions of the set of derived arguments correspond to conflict-minimal interpretations, and conclusions entailed by all conflict-minimal interpretations are supported by arguments in all stable extensions.

11:35
Querying Inconsistent Description Logic Knowledge Bases under Preferred Repair Semantics
SPEAKER: unknown

ABSTRACT. Recently several inconsistency-tolerant semantics have been introduced for querying inconsistent description logic knowledge bases. Most of these semantics rely on the notion of a repair, defined as an inclusion-maximal subset of the facts (ABox) which is consistent with the ontology (TBox). In this paper, we study variants of two popular inconsistency-tolerant semantics obtained by replacing classical repairs by various types of preferred repair. We analyze the complexity of query answering under the resulting semantics, focusing on the lightweight logic DL-Lite_R. Unsurprisingly, query answering is intractable in all cases, but we nonetheless identify one notion of preferred repair, based upon priority levels, whose data complexity is "only" coNP-complete. This leads us to propose an approach combining incomplete tractable methods with calls to a SAT solver. An experimental evaluation of the approach shows good scalability on realistic cases.

10:45-12:15 Session 90AV: Technical session (EC2)
Location: FH, Seminarraum 101C
10:45
Wait-Freedom Made Practical
SPEAKER: Erez Petrank

ABSTRACT. With the proliferation of multicore systems, the design of concurrent algorithms and concurrent data structures to support them has become critical. Wait-free data structures provide a very basic and natural progress guarantee, assuring that each thread always makes progress when given enough CPU cycles. However, wait-free algorithms were considered difficult to design and too costly to be used in practice. Only the weaker lock-free guarantee, which allows almost all threads to starve, was achievable in practice. In a series of recent papers, we have shown that this pessimistic belief was false, and that wait-freedom is actually achievable efficiently in practice.

11:45
A new reduction for event-driven distributed programs
SPEAKER: unknown

ABSTRACT. We consider the problem of provably verifying that an asynchronous message-passing system satisfies its local assertions. We present a novel reduction scheme for asynchronous event-driven programs that finds almost-synchronous invariants: invariants consisting of global states where message buffers are close to empty. The reduction finds almost-synchronous invariants and simultaneously argues that they cover all local states. We show that asynchronous programs often have almost-synchronous invariants and that we can exploit this to build natural proofs that they are correct. We implement our reduction strategy, which is sound and complete, and show that it is more effective in proving programs correct as well as more efficient in finding bugs in several programs, compared to current search strategies, which almost always diverge. The high point of our experiments is that our technique can prove the Windows Phone USB driver written in P correct for the receptiveness property, which was hitherto not provable using state-of-the-art model checkers.

10:45-13:00 Session 90AW: Arithmetic (SMT)
Location: FH, Seminarraum 104
10:45
Leveraging Linear and Mixed Integer Programming for SMT
SPEAKER: Tim King

ABSTRACT. Solvers for Satisfiability Modulo Theories (SMT) combine the ability of fast Boolean satisfiability solvers to find solutions for complex propositional formulas with specialized theory solvers. Theory solvers for linear real and integer arithmetic reason about systems of simultaneous inequalities. These solvers either find a feasible solution or prove that no such solution exists. Linear programming (LP) solvers come from the tradition of optimization, and are designed to find feasible solutions that are optimal with respect to some optimization function. Typical LP solvers are designed to solve large systems quickly using floating point arithmetic. Because floating point arithmetic is inexact, rounding errors can lead to incorrect results, making these solvers inappropriate for direct use in theorem proving. Previous efforts to leverage such solvers in the context of SMT have concluded that in addition to being potentially unsound, such solvers are too heavyweight to compete in the context of SMT. In this paper, we describe a technique for integrating LP solvers that dramatically improves the performance of SMT solvers without compromising correctness. These techniques have been implemented using the SMT solver CVC4 and the LP solver GLPK. Experiments show that this implementation outperforms other state-of-the-art SMT solvers on the QF_LIA and QF_LRA SMT-LIB benchmarks.
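
The soundness issue, and the shape of its remedy, can be seen in miniature: accept a fast floating-point answer only after an exact rational re-check. This is an illustrative Python sketch (the CVC4/GLPK integration described in the paper is considerably more involved):

    from fractions import Fraction

    def certify_feasible(A, b, x, max_den=10**9):
        # Snap the solver's floating-point answer to nearby simple rationals,
        # then check A x <= b exactly; only this exact check may be trusted.
        xq = [Fraction(xi).limit_denominator(max_den) for xi in x]
        return all(
            sum(Fraction(aij) * xj for aij, xj in zip(row, xq)) <= Fraction(bi)
            for row, bi in zip(A, b))

    # Constraints x + y <= 2 and x - y <= 0, with a slightly-off float answer:
    print(certify_feasible([[1, 1], [1, -1]], [2, 0], [1.0000000001, 1.0]))  # True

If the exact check fails, the floating-point result is simply discarded, so rounding errors can never leak into the theorem prover.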

11:15
raSAT: SMT for Polynomial Inequality
SPEAKER: unknown

ABSTRACT. This paper presents an iterative approximation refinement, called the raSAT loop, which solves systems of polynomial inequalities over the real numbers. The approximation scheme consists of interval arithmetic (over-approximation, aiming to decide UNSAT) and testing (under-approximation, aiming to decide SAT). If both fail to decide, input intervals are refined by decomposition. The raSAT loop is implemented in the SMT solver raSAT, with MiniSat 2.2 as a backend SAT solver. We discuss three strategy design choices: dependency, to set priority among atomic polynomial constraints; sensitivity, to set priority among variables; and UNSAT cores, for reducing learned clauses and incremental UNSAT detection. Preliminary experimental observations from a comparison with Z3 4.3, dReal, and iSAT3 are also discussed.
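
A toy rendition of the loop's three ingredients, interval over-approximation, point testing, and box decomposition, for the single constraint x*y - 1 > 0 (illustrative Python; raSAT itself handles general polynomial systems and uses outward rounding):

    def iadd(a, b): return (a[0] + b[0], a[1] + b[1])

    def imul(a, b):
        ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(ps), max(ps))

    def solve(box, depth=0):
        # Decide x*y - 1 > 0 on box = ((xlo, xhi), (ylo, yhi)).
        (xl, xh), (yl, yh) = box
        lo, hi = iadd(imul((xl, xh), (yl, yh)), (-1.0, -1.0))
        if hi <= 0:
            return "UNSAT"                       # over-approximation refutes box
        mx, my = (xl + xh) / 2.0, (yl + yh) / 2.0
        if mx * my - 1 > 0:
            return "SAT"                         # testing found a witness
        if depth >= 8:
            return "UNKNOWN"
        results = []
        for sub in (((xl, mx), (yl, my)), ((xl, mx), (my, yh)),
                    ((mx, xh), (yl, my)), ((mx, xh), (my, yh))):
            r = solve(sub, depth + 1)            # refine by decomposition
            if r == "SAT":
                return "SAT"
            results.append(r)
        return "UNSAT" if all(r == "UNSAT" for r in results) else "UNKNOWN"

    print(solve(((0.0, 2.0), (0.0, 2.0))))       # SAT (e.g. x = y = 1.5)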

11:45
Better Answers to Real Questions
SPEAKER: unknown

ABSTRACT. We consider existential problems over the reals. Extended quantifier elimination generalizes the concept of regular quantifier elimination by providing in addition answers, which are descriptions of possible assignments for the quantified variables. Implementations of extended quantifier elimination via virtual substitution have been successfully applied to various problems in science and engineering. So far, the answers produced by these implementations included infinitesimal and infinite numbers, which are hard to interpret in practice. We introduce here a post-processing procedure to convert, for fixed parameters, all answers into standard real numbers. The relevance of our procedure is demonstrated by applications of our implementation to various examples from the literature, where it significantly improves the quality of the results.

12:15
Towards Conflict-Driven Learning for Virtual Substitution
SPEAKER: unknown

ABSTRACT. We consider SMT-solving for linear real arithmetic. Inspired by related work for the Fourier--Motzkin method, we combine virtual substitution with learning strategies. For the first time, we present virtual substitution---including our learning strategies---as a formal calculus. We prove soundness and completeness for that calculus. Some standard linear programming benchmarks computed with an experimental implementation of our calculus show that the integration of learning techniques into virtual substitution gives rise to considerable speedups. Our implementation is open-source and freely available.

10:45-12:15 Session 90AX: Contributed Talks: Extracting Parsers and Ancestral Logic (PSC)
Location: FH, CAD 1
10:45
Extracting Monadic Parsers from Proofs
SPEAKER: Alison Jones

ABSTRACT. This talk outlines a proof-theoretic approach to developing correct and terminating monadic parsers. Using a modified realisability interpretation, we extract provably correct and terminating programs from formal proofs. If the proof system is proven to be correct, then any extracted program is guaranteed to be so. By extracting parsers, we can ensure that they are correct, complete and terminating for any input. The work is ongoing, and is being carried out in the interactive proof system Minlog.
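
A minimal sketch of the monadic-parser idea itself, in Python rather than Minlog's extracted terms: parsers in the list-of-successes style form a monad under unit and bind, and a balanced-parentheses recognizer falls out of three combinators. All names are illustrative.

    # A parser maps an input string to a list of (result, remaining input)
    # pairs; the empty list signals failure.
    def unit(x):
        return lambda s: [(x, s)]                 # monadic return

    def bind(p, f):
        return lambda s: [r for (x, rest) in p(s) for r in f(x)(rest)]

    def alt(p, q):
        return lambda s: p(s) + q(s)              # nondeterministic choice

    def char(c):
        return lambda s: [(c, s[1:])] if s[:1] == c else []

    def parens(s):
        # Grammar: parens ::= '(' parens ')' parens | empty
        p = alt(bind(char('('), lambda _:
                bind(parens,    lambda _:
                bind(char(')'), lambda _: parens))),
                unit(None))
        return p(s)

    def accepts(s):
        # Accept iff some parse consumes the entire input.
        return any(rest == "" for (_, rest) in parens(s))

    print(accepts("(()())"), accepts("(()"))      # True False

In the proof-theoretic setting the point is that such a parser is not written but extracted, so correctness and termination come for free from the proof.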

11:30
Intuitionistic Ancestral Logic
SPEAKER: Liron Cohen
ABSTRACT. We define pure intuitionistic Ancestral Logic (iAL), extending pure intuitionistic First-Order Logic (iFOL). This logic is a dependently typed abstract programming language with computational functionality beyond iFOL, given by its realizer for transitive closure, TC. We derive this operator from the natural type-theoretic definition of TC using intersection.
We show that provable formulas in iAL are uniformly realizable; thus iAL is sound with respect to constructive type theory. We further show that iAL subsumes Kleene algebras with tests and thus serves as a natural programming logic for proving properties of program schemes. We also extract schemes from proofs that iAL specifications are solvable.
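
The computational content promised by realizability here is easy to picture: a realizer of TC(R)(x, y) is essentially an explicit R-path from x to y. A hypothetical Python sketch of witness search for a finite relation (not iAL's extracted programs):

    from collections import deque

    def tc_witness(R, x, y):
        # Search for a realizer of TC(R)(x, y): an explicit nonempty R-path
        # from x to y, found by breadth-first search over the finite relation R.
        parent, frontier = {x: None}, deque([x])
        while frontier:
            u = frontier.popleft()
            for (a, b) in R:
                if a == u and b not in parent:
                    parent[b] = u
                    if b == y:
                        path, v = [], y
                        while v is not None:
                            path.append(v)
                            v = parent[v]
                        return list(reversed(path))   # the path is the realizer
                    frontier.append(b)
        return None                                   # no path: TC(R)(x, y) fails

    print(tc_witness({(1, 2), (2, 3), (3, 4)}, 1, 4))  # [1, 2, 3, 4]
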
10:45-13:00 Session 90AY: iPrA Regular Talks (iPRA)
Location: FH, Seminarraum 101A
10:45
Resolution Based Uniform Interpolation for Expressive Description Logics
SPEAKER: unknown

ABSTRACT. Ontologies represented using description logics model domains of interest in terms of concepts and binary relations, and are used in a range of areas including medical science, bio-informatics, semantic web or artificial intelligence. Often, ontologies are large and complex and cover 10,000s of concepts. Uniform interpolants are restricted views of ontologies, that only use a specified set of symbols, while preserving all entailments that can be expressed using these symbols. Uniform interpolation can be used for analysing hidden relations in an ontology, removing confidential concepts from an ontology, computing logical differences between ontologies or extracting specialised ontologies for ontology reuse and has many more applications.

We follow a resolution-based approach to make the computation of uniform interpolants of larger ontologies feasible. Uniform interpolants cannot always be represented finitely in the language of the input ontology; for this situation we offer three solutions: extending the signature with additional concepts, approximating the uniform interpolant, or using fixpoint operators.

11:15
Practical CNF Interpolants Via BDDs
SPEAKER: unknown

ABSTRACT. Craig interpolation has been recently shown to be useful in a wide variety of problem domains. One use is in strategy extraction for two player games, as described in our accompanying submission. However, interpolation is not without its drawbacks. It is well-known that an interpolant may be very large and highly redundant. Subsequent use of the interpolant requires that it is transformed to CNF or DNF, which will further increase its size.

We present a new approach to handling both the size of interpolants and transformation to clausal representation. Our approach relies on the observation that in many real-world applications, interpolants are defined over a relatively small set of variables. Additionally, in most cases there likely exists a compact representation of the interpolant in CNF. For instance, in our application to games an interpolant represents a set of winning states that is likely to have a simple structure.

11:45
Interpolation and Domain Independence applied to Databases

ABSTRACT. We study a general framework for query rewriting in the presence of a domain-independent first-order logic theory (a knowledge base) over a signature including database and non-database predicates, based on Craig's interpolant and Beth's definability theorem. In our framework, queries are (possibly open) domain-independent first-order formulas over the extended signature. The database predicates are meant to be closed, i.e., their extensions are the same in every model, since they represent a classical finite relational database, whereas the non-database predicates in the ontology are interpreted classically, i.e., their extensions may vary across different models. It is important to notice that all the conceptual modelling languages devised for the design of information and database systems (such as Entity-Relationship schemas, UML class diagrams, Object-Role Modelling ORM diagrams, etc.) are domain independent: they can be formalised as domain-independent first-order theories.

Given a domain-independent knowledge base and a query implicitly definable from the database signature, the framework provides precise semantic conditions to decide the existence of a domain-independent first-order logically equivalent reformulation of the query (called an exact rewriting) in terms of the database signature, and, if so, it provides an effective approach to construct the reformulation based on interpolation using standard theorem-proving techniques (e.g., tableaux). We are interested in domain-independent reformulations of queries because their range-restricted syntax is needed to reduce the original query answering problem to a relational algebra evaluation over the original database; that is, it is effectively executable as an SQL query directly over the database.

Due to the results on the applicability of Beth's theorem and Craig's interpolant, we prove the completeness of our framework in the case of domain-independent ontologies and queries expressed in any fragment of first-order logic enjoying finitely controllable determinacy, a stronger property than the finite model property of the logic. If the employed logic does not enjoy finitely controllable determinacy, our approach becomes sound but incomplete, yet still effectively implementable using standard theorem-proving techniques. In order to constructively compute the exact domain-independent query reformulation, we use the tableau-based method to compute the Craig interpolant from a validity proof of the implication (KB and Q) --> (KB --> Q).

Since description logic knowledge bases are not necessarily domain independent, we have syntactically characterised the domain-independent fragment of several very expressive fragments of description logics by enforcing a sort of guarded negation, a very reasonable restriction from the point of view of conceptual modelling. These fragments also enjoy finitely controllable determinacy. We have applied this framework not only to query answering under constraints, but also to data exchange and to view update.

12:15
Tree-based Modular SMT Solving
SPEAKER: unknown

ABSTRACT. Modern Satisfiability Modulo Theories (SMT) solvers are highly efficient and can generate a resolution proof in case of unsatisfiability. Some applications, such as synthesis of Boolean controllers, compute multiple coordinated interpolants from a single refutation proof. In order to do so, the proof is required to have two properties: it must be colorable and local-first. The latter means that a resolution over a literal that occurs in just one partition has to have both premises derived from that partition. Off-the-shelf SMT solvers do not necessarily produce proofs that have these properties. In particular, proofs are usually not local-first. Hofferek et al. [1] suggest performing proof transformations to obtain a colorable and local-first proof, but these transformations can be computationally expensive. Our goal is to introduce a new method to directly compute a local-first, colorable resolution proof for an unsatisfiable SMT formula. This proof can then be used directly for n-interpolation. Our approach uses a tree-based structure of SMT solvers, where every node in the tree is associated with a formula (possibly empty initially) and a (possibly empty) set of literals. The semantics of the modular SMT problem is the conjunction of the formulas of all nodes. The set of literals associated with a node is computed recursively as follows: every literal that appears in more than one descendant of a parent node is assigned to the parent node. During solving, every node makes decisions about its associated literals only. We start at the root node and communicate the partial assignments to the child nodes until we reach a leaf. Every node has its own solver instance and tries to compute a satisfying assignment w.r.t. the given partial assignments. A blocking clause is added to the parent node if no satisfying assignment can be found. If all child nodes find a satisfying assignment and the conjunction of them is theory-consistent, we either decide more literals, or return to the parent node if we already have a full assignment for all the literals of the current node. In case the conjunction is inconsistent within the theory, we add a blocking clause for the current assignment to the children. In order to obtain these clauses while keeping the modular structure intact, we perform interpolation over the assignments of the children and use the interpolants, which are guaranteed to contain only "global" literals (which can safely be added to the children without breaking the modularity), to learn a blocking clause for every child node. The algorithm terminates either when solely full assignments are communicated to the root node and there is no literal left to decide, or when enough clauses have been learned at the root node to show inconsistency. In the latter case we are able to extract a resolution proof with the same modular structure as the original problem. We are currently working on implementing this approach in the hope that we can use it to generate colorable, local-first proofs for synthesis problems much faster than with post-processing proof transformations.
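
As a rough illustration of the literal-ownership rule above (a hypothetical Python sketch, not the authors' implementation), a literal occurring in more than one child subtree is promoted to the parent, which then decides it alone:

    # Hypothetical sketch of the literal-ownership rule: literals shared by
    # several child subtrees are promoted to the parent node. A complete
    # implementation would also strip promoted literals from deeper
    # descendants; this one-level version only illustrates the rule.
    def assign_literals(node):
        """node = {'literals': set, 'children': list}; returns the set of
        literals occurring anywhere in the subtree rooted at node."""
        child_sets = [assign_literals(c) for c in node['children']]
        seen, shared = set(), set()
        for s in child_sets:
            shared |= seen & s      # appears in at least two children
            seen |= s
        node['literals'] |= shared  # parent takes ownership
        for c in node['children']:
            c['literals'] -= shared
        return seen | node['literals']

    leaf1 = {'literals': {'a', 'b'}, 'children': []}
    leaf2 = {'literals': {'b', 'c'}, 'children': []}
    root = {'literals': set(), 'children': [leaf1, leaf2]}
    assign_literals(root)
    print(root['literals'])  # {'b'} -- shared, so decided at the root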

10:45-11:45 Session 90AZ: Invited Speaker (FCS-FCC)
Location: MB, Hörsaal 7
10:45
Constructive Cryptography and Security Proofs
SPEAKER: Ueli Maurer

ABSTRACT. Constructive cryptography, an alternative paradigm for designing cryptographic protocols and proving their security, is reviewed in this talk and contrasted with other approaches. In constructive cryptography, a cryptographic scheme (e.g. encryption) is seen as constructing a certain resource (e.g. a secure channel) from another resource (e.g. an authenticated channel and a secret key), for a well-defined notion of construction. The construction notion is composable, which makes it possible to design complex protocols in a modular, layered manner. The security proofs of the modules (e.g. encryption, authentication, key agreement, or signatures) directly compose to a security proof for the entire protocol. A treatment in constructive cryptography comes with several advantages, among them:

 - Simplicity and reusability due to the modular design.

 - The semantics of security definitions are clear in the sense that one knows how a scheme satisfying a given definition can be used. This makes it possible to compare security definitions.

 - The treatment is at an abstract level, freed from artefacts like Turing machines, asymptotics, polynomial-time, communication tapes, etc. 

 - Information-theoretic and computational security are different instantiations of the same security statement.

 - It appears more amenable to treatment by formal methods.

10:45-11:45 Session 90BA: Contributed talks (PCCR)
Location: FH, Hörsaal 3
10:45
Connection matrices of finite rank instead of definability in Monadic Second Order Logic

ABSTRACT. Connection matrices for graphs and hypergraphs are a generalization of Hankel matrices for words. They were used by L. Lovasz and A. Schrijver and their collaborators to characterize which graph parameters arise from partition functions. Lovasz also noted that they can be used to make Courcelle's Theorem, which shows that graph properties definable in Monadic Second Order Logic (MSOL) are in FPT on graph classes of bounded tree-width, logic-free: MSOL-definability is replaced by a finiteness condition on the rank of connection matrices, allowing graph parameters with values in a field. In this paper we extend this to graph parameters with values in tropical semirings rather than a field, and to graph classes of bounded clique-width.
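
For background, the connection matrix in question has a standard definition (textbook material, not taken from the abstract): for a graph parameter $f$ and $k \ge 0$, the matrix $C(f,k)$ is indexed by $k$-labelled graphs and records the value of $f$ on their gluing product, $$C(f,k)_{G,H} = f(G \sqcup_k H),$$ where $G \sqcup_k H$ glues $G$ and $H$ along their $k$ labelled vertices. The logic-free condition then asks that $C(f,k)$ have finite rank for every $k$.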

11:15
Backdoors into Two Occurrences
SPEAKER: Jan Johannsen

ABSTRACT. The class CNF(2) of CNF formulas in which every variable has at most two occurrences is a lesser-known tractable case of SAT. Backdoor sets for this class are studied in terms of parameterized complexity. The question whether there exists a CNF(2)-backdoor set of size k is hard for the class W[2], for both weak and strong backdoors, and in both cases it becomes fixed-parameter tractable when restricted to inputs in d-CNF for a fixed d.
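
A minimal sketch of the underlying notions (hypothetical Python, not from the paper): membership in CNF(2) is a simple occurrence count, and a strong CNF(2)-backdoor can be checked naively by trying all 2^k assignments of the candidate set:

    # Clauses are lists of nonzero ints; a positive literal l means variable
    # |l| is true, a negative literal means it is false.
    from collections import Counter
    from itertools import product

    def in_cnf2(clauses):
        occ = Counter(abs(l) for clause in clauses for l in clause)
        return all(n <= 2 for n in occ.values())

    def is_strong_cnf2_backdoor(clauses, b):
        for assign in product([True, False], repeat=len(b)):
            val = dict(zip(b, assign))
            reduced = []
            for clause in clauses:
                lits, satisfied = [], False
                for l in clause:
                    if abs(l) in val:
                        if (l > 0) == val[abs(l)]:
                            satisfied = True
                            break
                    else:
                        lits.append(l)
                if not satisfied:
                    reduced.append(lits)
            if not in_cnf2(reduced):
                return False
        return True

    # x1 occurs three times, so f is not in CNF(2); but every assignment
    # to {x1} leaves a CNF(2) formula, so {x1} is a strong backdoor.
    f = [[1, 2], [-1, 3], [1, -3], [2, -3]]
    print(in_cnf2(f), is_strong_cnf2_backdoor(f, [1]))  # False True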

10:45-13:00 Session 90BB: Security Ceremonies and Policies (STAST)
Location: MB, Seminarraum 212/232
10:45
Decision justifications for wireless network selection
SPEAKER: unknown

ABSTRACT. As wireless networking increases, so do the risks associated with the use of such technology. We examined the role of wireless network display and choice presentation in a study involving 104 undergraduate social science students. One research goal was to examine to what extent different features (such as padlocks) and labels in different colours could be used to ‘nudge’ individuals towards more secure and away from open (unsecured) network options. Another goal was to better understand the basis for decision-making. Based on qualitative as well as quantitative data about participants’ rationales for selecting specific network options, we were able to differentiate groups whose decisions were driven by security concerns, those who made convenience-based decisions, and those whose motives were unclear or undefined. These decision justifications were associated with different network choices. The results suggest that the padlock takes on different functions and meanings for the three groups, which can help to better understand their security-related decision making. We further observed significant effects for the use of colour when nudging participants towards more secure choices. We also wanted to examine the role of individual differences in relation to the choices individuals make. Perceived vulnerability and controllability of risk played a role in the extent to which participants would make more secure vs. unsecure choices, although we obtained no significant group differences when we examined these variables in relation to the different decision justification groups. This indicates that risk perceptions and reasons for decisions may relate differently to the actual behavioural choices individuals make, with perceptions of risk not necessarily relating to the reasons that participants consider when making security decisions.

11:30
Reflecting on the Ability of Enterprise Security Policy to Address Accidental Insider Threat
SPEAKER: unknown

ABSTRACT. An enterprise's information security policy is an exceptionally important control, as it provides the employees of an organisation with details of what is expected of them and what they can expect from the organisation's security teams, as well as informing the culture within that organisation. The threat from accidental insiders is a reality across all enterprises, and can be extremely damaging to the systems, data and reputation of an organisation. Recent industry reports and academic literature underline the fact that the risk of accidental insider compromise is potentially more pressing than that posed by a malicious insider. In this paper we focus on the ability of enterprises' information security policies to mitigate the accidental insider threat. Specifically, we perform an analysis of real-world cases of accidental insider threat to define the key reasons, actions and impacts of these events, captured as a grounded insider threat classification scheme. This scheme is then used to perform a review of a set of organisational security policies to highlight their strengths and weaknesses when considering the prevention of incidents of accidental insider compromise. We present a set of questions that can be used to analyse an existing security policy to help control the risk of the accidental insider threat.

12:15
Modelling User Devices in Security Ceremonies
SPEAKER: unknown

ABSTRACT. User constrained devices such as smart cards are commonly used in human-protocol interaction. Modelling these devices as part of human-protocol interaction is still an open problem. Examining the interaction of these devices as part of security ceremonies offers greater insight. This paper highlights two such cases: modelling extra channels between humans and devices in the ceremony, and modelling possession when the device also acts as an agent in the ceremony. Case studies where such devices are used during authentication ceremonies are presented to demonstrate these use cases.

10:45-11:15 Session 90BC: Constraints (continued) (RCRA)
Location: MB, Hörsaal 12
10:45
Neighbourhood SAC for Preprocessing and Search

ABSTRACT. This paper reports extensions and further analysis of a new form of singleton arc consistency, called neighbourhood SAC (NSAC), where a subproblem adjacent to the variable with a reduced domain (the "focal variable") is made arc consistent.

The first part of the paper presents two important extensions: (1) NSAC is generalized to k-NSAC, where k is the length of the longest path between the focal variable and any variable in the subgraph. In this framework, the original NSAC becomes 1-NSAC; but in addition, we can have 2-NSAC, 3-NSAC, etc., up to the point where there is no difference from full SAC. In this work we only consider the just-named forms. Obviously, there is an associated dominance hierarchy with respect to the level of consistency, with k-NSAC < (k+1)-NSAC until full SAC is reached. (2) NSAC can be extended in a relatively straightforward way to handle n-ary constraints, although there are some subtleties, which are discussed.

The second part presents some studies of hybrid search techniques based on NSAC and SAC, using a variety of problems. Comparisons are made using the domain/degree and the domain/weighted degree variable ordering heuristics. In some cases, higher levels of consistency maintenance outperform MAC by several orders of magnitude, although with weighted degree the best tradeoff is obtained when SAC-based consistency is restricted to preprocessing.
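
For illustration, a rough sketch (hypothetical code, not the authors' solver) of the basic NSAC pruning step: each value of the focal variable is tested by enforcing arc consistency on the subproblem induced by the variable's neighbourhood only:

    # domains: var -> set of values; cons: (xi, xj) -> binary predicate
    # (both directions present for symmetric constraints);
    # neighbours: var -> set of adjacent variables.
    def revise(domains, cons, xi, xj):
        rel, removed = cons[(xi, xj)], False
        for a in list(domains[xi]):
            if not any(rel(a, b) for b in domains[xj]):
                domains[xi].discard(a)
                removed = True
        return removed

    def ac3(domains, cons, variables):
        queue = [(xi, xj) for (xi, xj) in cons
                 if xi in variables and xj in variables]
        while queue:
            xi, xj = queue.pop()
            if revise(domains, cons, xi, xj):
                if not domains[xi]:
                    return False            # domain wipe-out
                queue.extend((xk, xi) for (xk, y) in cons
                             if y == xi and xk in variables and xk != xj)
        return True

    def nsac_prune(domains, cons, neighbours, x):
        for a in list(domains[x]):
            trial = {v: set(d) for v, d in domains.items()}
            trial[x] = {a}                  # tentatively fix the focal variable
            if not ac3(trial, cons, neighbours[x] | {x}):
                domains[x].discard(a)       # a is not neighbourhood-SAC for x
        return domains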

10:45-12:00 Session 90BD: Invited talk (ADDCT)
Location: FH, Zeichensaal 1
10:45
A Superposition-Based Approach to Abductive Reasoning in Equational Clausal Logic

ABSTRACT. (joint work with M. Echenim and S. Tourret)

Abduction can be defined as the process of inferring plausible explanations (or hypotheses) from observed facts (conclusions). This form of reasoning has the potential of playing a central role in system verification, particularly for identifying bugs and providing hints to correct them. We describe an approach to perform abductive reasoning that is based on the superposition calculus. The formulas we consider are sets of first-order clauses with equality, and the abducibles (in other words the hypotheses that are allowed to be inferred) are boolean combinations of equations constructed over a given finite set of ground terms. By duality, abduction can be reduced to a consequence-finding problem. We thus show how the inference rules of the superposition calculus can be adapted to obtain a calculus that is deductively complete for ground clauses built on the considered sets of ground terms, thus guaranteeing that all abducible formulas can be generated. This calculus enjoys the same termination properties as the superposition calculus: in particular, it is terminating on ground extensions of decidable theories of interest in software verification.

The number of implicates of a given equational formula is usually huge. We describe techniques for storing sets of abduced clauses efficiently, and show how the usual trie-based approaches for representing sets of propositional clauses in a compact way can be adapted and extended to represent equational clauses up to equivalence (modulo the axioms of equality). We provide algorithms for performing redundancy pruning in an efficient way on such representations. We identify possible improvements and outline lines of ongoing and future research.

10:45-12:45 Session 90BE: Contributed Talks (CHR and CICLOPS-WLPE)
Location: FH, Hörsaal 7
10:45
Automated Test Case Generation and Model Checking with CHR
SPEAKER: Ralf Gerlich

ABSTRACT. We present an example for application of CHR to automated test data generation and model checking in verification of mission critical software for satellite control.

11:15
From XML Schema to JSON Schema: Translation with CHR
SPEAKER: Falco Nogatz

ABSTRACT. Despite its rising popularity as a data format, especially for web services, the software ecosystem around the JavaScript Object Notation (JSON) is not as widely distributed as that of XML. For both data formats there exist schema languages to specify the structure of instance documents, but there is currently no tooling to translate already existing XML Schema documents into equivalent JSON Schemas.

In this paper we introduce an implementation of a language translator. It takes an XML Schema and creates its equivalent JSON Schema document. Our approach is based on Prolog and CHR. By unfolding the XML Schema document into CHR constraints, it is possible to specify the concrete translation rules in a declarative way.
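
The paper's translator is written in Prolog and CHR; purely for illustration, here is a Python sketch of the flavour of one such translation rule, mapping flat xs:element declarations with simple types onto a JSON Schema fragment:

    # Illustrative only -- not the paper's CHR rules. Maps top-level
    # xs:element declarations with simple types to JSON Schema properties.
    import json
    import xml.etree.ElementTree as ET

    XS = '{http://www.w3.org/2001/XMLSchema}'
    TYPE_MAP = {'xs:string': 'string', 'xs:integer': 'integer',
                'xs:boolean': 'boolean', 'xs:decimal': 'number'}

    def elements_to_json_schema(xsd_text):
        root = ET.fromstring(xsd_text)
        props = {}
        for el in root.iter(XS + 'element'):
            name, xsd_type = el.get('name'), el.get('type', 'xs:string')
            props[name] = {'type': TYPE_MAP.get(xsd_type, 'string')}
        return {'type': 'object', 'properties': props}

    xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="age" type="xs:integer"/>
      <xs:element name="name" type="xs:string"/>
    </xs:schema>"""
    print(json.dumps(elements_to_json_schema(xsd), indent=2))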

11:45
Constraint Handling Rules with Multiset Comprehension Patterns

ABSTRACT. CHR is a declarative, concurrent and committed-choice rule-based constraint programming language. We extend CHR with multiset comprehension patterns, providing the programmer with the ability to write multiset rewriting rules that can match a variable number of constraints in the store. This enables writing more readable, concise and declarative code for algorithms that coordinate large amounts of data or require aggregate operations. We call this extension CHR^cp. We give a high-level abstract semantics of CHR^cp, followed by a lower-level operational semantics. We then show the soundness of this operational semantics with respect to the abstract semantics.

12:15
Discussion
SPEAKER: Everyone

ABSTRACT. We have additional discussion time for all talks before lunch.

10:45-13:00 Session 90BF: Contributed Talks (VPT)
Location: FH, Dissertantenraum E104
10:45
On Unfolding for Programs Using Strings as a Data Type

ABSTRACT. As a rule, program transformation methods based on semantics unfold a semantic tree of a given program. Sometimes this makes it possible to optimize the program or to prove certain of its properties automatically. Unfolding is one of the basic operations; it is a meta-extension of one step of the abstract machine executing the program. This paper is concerned with unfolding for programs based on pattern matching that manipulate strings. The corresponding computation model originates from Markov's normal algorithms and extends this theoretical base. Even though algorithms unfolding programs have been intensively studied for a long time in the context of a variety of programming languages, as far as we know, associative concatenation has stood at the wayside of this stream. We define a class of term rewriting systems manipulating strings and describe an algorithm unfolding the programs from this class. The programming language defined by this class is algorithmically complete. Given a word equation, one of the algorithms suggested in this paper produces a description of the corresponding solution set.

11:30
Branching Processes of Conservative Nested Petri Nets
SPEAKER: Daniil Frumin

ABSTRACT. Nested Petri nets (NP-nets) are an extension of the Petri net formalism following the “nets-within-nets” approach, in which tokens in a marking are themselves Petri nets that have autonomous behavior and synchronize with the system net. The formalism of NP-nets allows for modeling multi-level multi-agent systems with dynamic structure in a natural way. In this paper we define branching processes and unfoldings for conservative NP-nets, i.e. for NP-nets with a persistent set of agents. We show that NP-net unfoldings satisfy the fundamental property of unfoldings, and thus can be used for verification of conservative NP-nets in line with classical unfolding methods.

12:15
Local Driving in Higher-Order Positive Supercompilation via the Omega-theorem
SPEAKER: unknown

ABSTRACT. A program transformation technique should terminate, return efficient output programs and be efficient itself.

For positive supercompilation ensuring termination requires memoisation of expressions, and these are subsequently used to determine when to perform generalization and folding. For a first-order language, every infinite sequence of transformation steps must include function unfolding, so it is sufficient to memoise only those expressions immediately prior to a function unfolding step.

However, for a higher-order language, it is possible for an expression to have an infinite sequence of transformation steps which do not include function unfolding, so memoisation prior to a function unfolding step is not sufficient by itself to ensure termination. But memoising additional expressions is expensive during transformation and may lead to less efficient output programs due to auxiliary functions. This additional memoisation may happen explicitly during transformation or implicitly via a pre-processing transformation as outlined in previous work by the first author.

We introduce a new technique for local driving in higher-order positive supercompilation which obviates the need to memoise expressions other than those at function unfolding steps, thereby improving the efficiency of both the transformation and the generated programs. We exploit the fact, due to the second author in the setting of type-free lambda calculus, that every expression with an infinite sequence of transformation steps not involving function unfolding must have something like the term Omega = (lambda x. x x) (lambda x. x x) embedded within it in a certain sense. The technique has proven useful on a host of examples.
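
The Omega term mentioned above transcribes directly into any untyped higher-order language; purely as an illustration, in Python:

    # (lambda x. x x) (lambda x. x x): reduces forever although no named
    # (top-level) function is ever unfolded -- in Python the endless
    # self-application surfaces as a RecursionError.
    omega = lambda x: x(x)
    try:
        omega(omega)
    except RecursionError:
        print("Omega loops without any function unfolding")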

10:45-13:00 Session 90BG (QED)
Location: FH, Hörsaal 4
10:45
25 years of the Mizar Mathematical Library

ABSTRACT. Throughout the years, the development of the Mizar Mathematical Library was stimulated by large formalization projects, of which the proof of the Jordan Curve Theorem and the encoding of "Compendium of Continuous Lattices" were most influential. In Andrzej Trybulec's (the head of the Mizar project) opinion, expressed during QED II workshop in 1995, "one of the fundamental tasks of the QED effort should focus on reconstruction of the language and technology of mathematics." We will show how Andrzej's dreams of QED are implemented.

11:20
The seL4 microkernel verification
SPEAKER: Gerwin Klein

ABSTRACT. This talk will first give a brief high-level overview of the formal verification of the seL4 microkernel before showing some of its proof techniques in more detail. The aim will be to show examples of libraries and tactics for refinement, invariant, and security proofs for operating systems.

11:55
Towards Formally Verified Theorem Provers - Part I
SPEAKER: Magnus Myreen
12:15
Towards Formally Verified Theorem Provers - Part II
SPEAKER: Ramana Kumar
12:30
TBA
10:45-12:45 Session 90BH: Invited Talk 2 and Submitted Talks: Javascript and Concurrency (HOPA)
Location: FH, Seminarraum 138C
10:45
Program Analysis for JavaScript Web Applications

ABSTRACT. JavaScript supports a powerful mix of object-oriented and functional programming, which provides flexibility for programmers but also makes it difficult to reason about the behavior of programs without actually running them. One of the main challenges for program analysis tools is to handle the complex programming patterns that are found in widely used libraries, such as jQuery, without losing critical precision. Another challenge is the use of dynamic language features, such as 'eval'. This talk presents an overview of the challenges and the techniques used in the TAJS research project, which aims to develop sound and effective program analysis techniques for JavaScript web applications.

11:45
Analyzing JavaScript: The Bad Parts

ABSTRACT. JavaScript is a popular, powerful, and highly dynamic programming language. It is arguably the most widely used and ubiquitous programming language, has a low barrier to entry, and has vast amounts of code in the wild. JavaScript has grown from a language used primarily to add small amounts of dynamism to web pages into one used for large-scale applications both in and out of the browser--including operating systems and compilers. As such, automated program analysis tools for the language are increasingly valuable. Almost all of the research to date targets ECMAScript 3, a standard that was succeeded by the most recent version, 5.1, over four years ago. Much of the research targets well-behaved subsets of JavaScript, eliding the darker corners of the language (the bad parts). In this work, we demonstrate how to statically analyze full, modern JavaScript, focusing on uses of the language's so-called bad parts. In particular, we highlight the analysis of scoping, strict mode, property and object descriptors, getters and setters, and eval.

Speed, precision, and soundness are the basic requirements of any static analysis. To obtain soundness, we began with LambdaS5, a small, functional language developed by Guha et al. that, through desugaring, encompasses the entirety of the ECMAScript 5.1 standard. Its small size and deliberate desugaring make it a tractable intermediate representation for the analysis of JavaScript. Leveraging the semantics of LambdaS5, we build an abstract machine interpreter. Following the abstracting abstract machines recipe developed by Van Horn and Might, a small-step abstract machine is easily and soundly transformed into an abstract interpreter.

Once we have a sound abstract interpreter, speed and precision can be configured from a number of angles. Optimization techniques developed by Johnson et al. make any AAM-style analyzer faster. By parameterizing the analyzer's allocation function we gain another axis of control. A modular design of the analyzer's values makes it possible to plug-and-play different lattices, which is necessary for testing different abstractions of JavaScript's values and non-trivial objects. The objects of the 5.1 specification are no longer simple key-value mappings, but map their string keys to either data or accessor properties, each of which hold their own metadata. Further, data properties can be converted to and from accessor properties. The objects themselves hold metadata that changes the semantics of modifying both their mappings and metadata, causing the same line of code to either update a value, silently have no effect, or raise an exception. In the setting of analysis, a map with abstract spaces for both keys and values presents its own challenges to retain soundness while remaining performant. As an example of a potential lattice, we discuss an abstraction of JavaScript's objects that significantly improves performance at the cost of a loss in precision.

12:15
Saturation of Concurrent Collapsible Pushdown Systems (Extended Abstract)
SPEAKER: Matthew Hague

ABSTRACT. Multi-stack pushdown systems are a well-studied model of concurrent computation using threads with first-order procedure calls. While, in general, reachability is undecidable, there are numerous restrictions on stack behaviour that lead to decidability. To model higher-order procedures calls, a generalisation of pushdown stacks called collapsible pushdown stacks are required. Reachability problems for multi-stack collapsible pushdown systems have been little studied. In a recent FSTTCS article we studied ordered, phase-bounded and scope-bounded multi-stack collapsible pushdown systems using saturation techniques and showed decidability of control state reachability, as well as giving a regular representation of all configurations that can reach a given control state.

11:00-13:00 Session 91: Tutorial and invited talk (LC)
Location: MB, Prechtlsaal
11:00
On local induction schemes

ABSTRACT. First-order Peano arithmetic $PA$ is axiomatized over a finite algebraic base theory by the full induction scheme $$\varphi(0,v) \land \forall x \, (\varphi(x,v) \rightarrow \varphi(x+1,v)) \rightarrow \forall x \, \varphi(x,v),$$ where $\varphi(x,v)$ ranges over all formulas in the language of arithmetic $\{0,1,+,\cdot,<\}$. Fragments of arithmetic are obtained by restricting, in one way or another, the induction scheme axiomatizing $PA$. Classical examples include the theories of $\Sigma_n$ and $\Pi_n$ induction and their parameter-free counterparts.

In this talk we present a new kind of restriction on the induction scheme, giving rise to new subsystems of arithmetic that we collectively call \emph{local induction} theories. Roughly speaking, local induction axioms have the form $$\varphi(0,v) \land \forall x \, (\varphi(x,v) \rightarrow \varphi(x+1,v)) \rightarrow \forall x \in \mathcal{D} \, \varphi(x,v).$$ That is to say, we restrict the $x$'s for which the axiom claims $\varphi(x,v)$ to hold to the elements of a prescribed subclass $\mathcal{D}$ of the universe. Natural choices for $\mathcal{D}$ are the sets of the $\Sigma_n$-definable elements of the universe as well as other related substructures of definable elements.

We will study the basic properties of the local induction theories obtained in this way and derive a number of applications to the study of 'classical' fragments of $PA$. Remarkably, we show that the hierarchy of local reflection principles can be reexpressed in terms of our local induction theories, thus filling a gap in our understanding of the equivalence between reflection and induction in arithmetic.

12:00
Tutorial on Classical Realizability and Forcing 2
11:20-13:00 Session 92: Heuristic, randomised and probabilistic approaches (RCRA)
Location: MB, Hörsaal 12
11:20
Concept Learning by a Monte-Carlo Tree Search of Argumentations
SPEAKER: unknown

ABSTRACT. Monte-Carlo Tree Search (MCTS) is a heuristic for searching in large trees. We apply it to argumentation with regard to concept learning, where MCTS pursues the best argumentation meant to account for a possibly inconsistent set of training examples. We provide experimental results of our approach with the search variant Upper Confidence Bounds for Trees (UCT).
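
For background, the UCT variant mentioned here is conventionally defined by the UCB1 selection rule (standard material, not spelled out in the abstract): from a node with $N$ visits, select the child $i$ maximizing $$\mathrm{UCT}(i) = \frac{w_i}{n_i} + c\,\sqrt{\frac{\ln N}{n_i}},$$ where $w_i$ and $n_i$ are the child's cumulative reward and visit count, and the constant $c > 0$ balances exploitation against exploration.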

11:50
Evaluating Probabilistic Model Checking Tools for Verification of Robot Control Policies
SPEAKER: unknown

ABSTRACT. In the last decade, thanks to its ability to analyze probabilistic models against specifications given in temporal logics, Probabilistic Model Checking (PMC) has become a widely used methodology in several application domains, e.g., communication protocols, planning and scheduling, and robotics. Moreover, the availability of effective PMC tools enables automated analysis in such complex real-world scenarios.

In this paper we evaluate PMC tools to investigate the safety of robots whose control policies are learned by reinforcement, i.e., from feedback signals received upon acting in unknown environments. A new and challenging application domain for PMC tools, introduced in previous contributions of ours, is their use as back-end engines of an automated methodology aimed at verifying and repairing control policies. We present an evaluation of the current state-of-the-art PMC tools and assess their potential on various case studies, including both real and simulated robots accomplishing navigation, manipulation and reaching tasks.

12:20
Self Regulating Mechanisms for Network Immunization
SPEAKER: unknown

ABSTRACT. Many real networks, e.g. the internet or communication systems, can be modeled as scale-free networks. An important property of this type of network is that the diffusion of information to all nodes is very fast. Immunization strategies are therefore necessary for such networks, to prevent a virus that starts from a few nodes from infecting the entire network. In recent years, various immunization strategies have been developed, and they can be divided into two main classes: centralized and distributed strategies. Both of these approaches share the limitation of needing to know the number of nodes of the network that must be immunized. In this paper, we propose a modification of an AOC-based immunization strategy in which the entities self-regulate their population inside the network. In this strategy the entities are deployed in a network with a threshold of coverage of the graph to be achieved. The entities, while collectively searching for the nodes with the highest degree, estimate the coverage of the graph through the information in their local environment and reproduce accordingly. The self-regulating mechanism is evaluated on a set of public benchmark networks, including networks with community-based structures, and experimental results about convergence, coverage efficiency, computational cost and scalability are presented.
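
As a point of reference, a plain centralized degree-targeted loop with a coverage threshold looks as follows (hypothetical Python stand-in; the paper's AOC entities instead estimate coverage locally and self-regulate):

    def immunize(adj, coverage_threshold):
        """adj: dict node -> set of neighbours. Immunize highest-degree
        nodes until the immunized nodes plus their neighbours reach the
        requested fraction of the graph."""
        immunized, covered, n = set(), set(), len(adj)
        while len(covered) / n < coverage_threshold:
            candidates = set(adj) - immunized
            if not candidates:
                break
            v = max(candidates, key=lambda u: len(adj[u]))  # highest degree
            immunized.add(v)
            covered |= {v} | adj[v]
        return immunized

    adj = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1, 5}, 5: {4}}
    print(immunize(adj, 0.8))  # {1}: the hub alone covers 4/5 of the graph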

12:40
An Enhanced Genetic Algorithm of the BLF2G Guillotine Placement Heuristic for the Orthogonal Cutting-Stock Problem

ABSTRACT. The orthogonal cutting-stock problem tries to place a given set of items in a minimum number of identically-sized bins. To solve this problem with the guillotine constraint, we propose to combine the new BLF2G (Bottom Left Fill 2-direction Guillotine) placement heuristic with an advanced genetic algorithm. According to the item order, the BLF2G routine makes a direct placement of items on bins to give a cutting format. The genetic algorithm explores the search space to find the supposedly optimal item order. Other methods try to guide the evolutionary process by introducing greedy heuristics into the initial population to enhance the results. We propose instead to enrich the population with qualified individuals, without disturbing the genetic phase, by introducing a new enhancement to guide the evolutionary process: we monitor the evolution and, when we observe no improvement after some iterations, we inject a qualified individual into the population to avoid premature convergence to a local optimum. We generate a set of order-based individuals to enrich the evolutionary process. Our method is compared with other heuristics and metaheuristics from the literature on existing data sets.
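
A toy sketch of the stagnation-triggered injection (hypothetical and simplified to a generic order-based GA; BLF2G placement and the real fitness function are elided):

    import random

    def evolve(items, fitness, generations=300, patience=15, pop_size=20):
        """Steady-state GA over permutations of `items`; after `patience`
        generations without improvement, a qualified order is injected."""
        pop = [random.sample(items, len(items)) for _ in range(pop_size)]
        best, stagnant = max(pop, key=fitness), 0
        for _ in range(generations):
            parent = max(random.sample(pop, 3), key=fitness)   # tournament
            child = parent[:]
            i, j = random.sample(range(len(items)), 2)         # swap mutation
            child[i], child[j] = child[j], child[i]
            pop[pop.index(min(pop, key=fitness))] = child      # replace worst
            cur = max(pop, key=fitness)
            if fitness(cur) > fitness(best):
                best, stagnant = cur, 0
            else:
                stagnant += 1
            if stagnant >= patience:
                # inject a qualified order (here: decreasing size) to escape
                # the local optimum without restarting the whole population
                pop[random.randrange(pop_size)] = sorted(items, reverse=True)
                stagnant = 0
        return best

    # toy fitness: reward orders that place larger items first
    items = [3, 1, 4, 1, 5, 9, 2, 6]
    print(evolve(items, fitness=lambda o: -sum(i * v for i, v in enumerate(o))))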

11:45-12:00 Session 93A: Break (ITRS)
Location: MB, Zeichensaal 14
11:45-12:45 Session 93B: talks after an invited NLCS/NLSR lecture by Aarne Ranta (NLCS)
Location: FH, CAD 2
11:45
Transfer Semantics for the Clear Parser

ABSTRACT. Transfer semantics uses a rewriting (or transfer) system to map syntactic parses onto semantic representations [4, 5]. This form of semantic mapping was used to build broad coverage semantic text indexes at Powerset, and has been used in various other settings such as question answering for intelligence analysts and medical QA. It has been used for English, German [14] and Japanese [13]. However, in all these cases the transfer semantics has been applied to the output of just one kind of parser: LFG functional structures created by the XLE parser [6]. This paper describes the adaptation of transfer semantics to apply to the output of the Clear dependency parser [2].

12:15
Analyzing and Modelling the Structure of Argument Compounds in Logic: the case of technical texts
SPEAKER: unknown

ABSTRACT. In this contribution, we start from a concrete problem, the identification of argument compounds, and show how their structure can be modelled on a linguistic and logical basis. This investigation is, in a first stage, restricted to arguments found in technical texts. We then propose a type-based syntactic analysis and the premises of a conceptual model that accounts for the logical properties of such compounds. An implementation in Dislog is presented with an indicative evaluation of the results. Dislog runs on TextCoop, a logic-based platform we developed for discourse analysis.

12:00-13:00 Session 94A (ITRS)
Location: MB, Zeichensaal 14
12:00
Uniform Proofs of Normalisation and Approximation for Intersection Types

ABSTRACT. We present intersection type systems in the style of sequent calculus, modifying the systems that Valentini introduced to prove normalisation properties without using the reducibility method. Our systems are more natural than Valentini's and equivalent to the usual natural deduction style systems. We prove the characterisation theorems of strong and weak normalisation through the proposed systems, and, moreover, the approximation theorem, without using the reducibility method. This provides, in a uniform way, proofs of the normalisation and approximation theorems via type systems in sequent calculus style.

12:30
Indexed linear logic and higher-order model checking
SPEAKER: unknown

ABSTRACT. In recent work, Kobayashi observed that the acceptance by an alternating tree automaton A of an infinite tree T generated by a higher-order recursion scheme G may be formulated as the typability of the recursion scheme G in an appropriate intersection type system associated to the automaton A. The purpose of this article is to establish a clean connection between this line of work and Bucciarelli and Ehrhard's indexed linear logic. This is achieved in two steps. First, we recast Kobayashi's result in an equivalent infinitary intersection type system where intersection is not idempotent anymore. Then, we show that the resulting type system is a fragment of an infinitary version of Bucciarelli and Ehrhard's indexed linear logic. While this work is very preliminary and does not integrate key ingredients of higher-order model-checking like priorities, it reveals an interesting and promising connection between higher-order model checking and linear logic.

12:00-13:00 Session 94B: Invited Tutorials (Interactive Theorem Provers and Calculus of Inductive Constructions) (APPA)
Location: FH, Seminarraum 325/1
12:00
Interactive Theorem Provers from the perspective of Isabelle/Isar

ABSTRACT. Interactive Theorem Provers have a long tradition, going back to the 1970s when interaction was introduced as a concept in computing. The main provers in use today can be traced back over 20-30 years of development. As common traits there are usually strong logical systems at the bottom, with many layers of add-on tools around the logical core, and big applications of formalized mathematics or formal methods. There is a general attitude towards flexibility and open-endedness in the combination of logical tools: typical interactive provers use automated provers and dis-provers routinely in their portfolio.

The subsequent exposition of interactive theorem proving (ITP) takes Isabelle/Isar as the focal point to explain concepts of the greater "LCF family'', which includes Coq and various HOL systems. Isabelle itself shares much of the relatively simple logical foundations of HOL, but follows Coq in the ambition to deliver a sophisticated system to end-users without requiring self-assembly of individual parts. Isabelle today is probably the most advanced proof assistant concerning its overall architecture and extra-logical infrastructure.

The "Isar'' aspect of Isabelle refers first to the structured language for human-readable and machine-checkable proof documents, but also to the Isabelle architecture that emerged around the logical framework in the past 10 years. Thus "Isabelle/Isar'' today refers to an advanced proof assistant with extra structural integrity beyond the core logical framework, with native support for parallel proof processing and asynchronous interaction in its Prover IDE (PIDE).

12:30
Calculus of Inductive Constructions
12:00-13:00 Session 94C: Contributed Talks and Poster Announcements (DL)
Location: EI, EI 7
12:00
Temporal OBDA with LTL and DL-Lite
SPEAKER: unknown

ABSTRACT. We investigate various types of query rewriting over ontologies given in the standard temporal logic LTL as well as combinations of LTL with DL-Lite logics. In particular, we consider FO(<)-rewritings that can use the temporal precedence relation, FO(<,+)-rewritings that can also employ the arithmetic predicate PLUS, and rewritings to finite automata with data given on the automaton tape.

12:25
Complexity of Temporal Query Abduction in DL-Lite
SPEAKER: unknown

ABSTRACT. Temporal query abduction is the problem of hypothesizing a minimal set of temporal data which, given some fixed background knowledge, warrants the entailment of the query. This problem formally underlies a variety of forms of explanatory and diagnostic reasoning in the context of time series data, data streams, or otherwise temporally annotated structured information. In this paper, we consider (temporally ordered) data represented in Description Logics from the popular DL-Lite family and Temporal Query Language, based on the combination of LTL with conjunctive queries. In this defined setting, we study the complexity of temporal query abduction, assuming different restrictions on the problem and minimality criteria for abductive solutions. As a result, we draw several revealing demarcation lines between NP-, DP- and PSpace-complete variants of the problem.

12:50
Temporalising EL Concepts with Time Intervals
SPEAKER: unknown

ABSTRACT. We design and investigate a new interval-based temporal description logic, EL-Lambda, which is based on an Annotated Logic, AL, introduced by Kifer, and motivated by life-science applications. We show how a subset of the logic can be captured as the EL fragment of AL, EL-AL, and then go on to show how we can extend this representation to capture further temporal entailments. We show that both EL-AL and EL-Lambda maintain the same tractable complexity bounds for reasoning as EL, and finally provide an example of how the logic can be utilised for the Drosophila developmental ontology.

12:53
Transition Constraints for Temporal Attributes

ABSTRACT. Representing temporal data in conceptual data models and ontologies is required by various application domains, and for it to be useful for precise modelling and for automated reasoning, a language that is expressive enough to capture the required operational semantics of the time-varying information is essential. Temporal modelling languages have little support for temporal attributes, if any, yet attributes are a standard element in widely used conceptual modelling languages such as EER and UML. This hiatus prevents one from utilising a complete temporal conceptual data model and from keeping track of evolving values of data and their interaction with temporal classes. A rich axiomatization of fully temporalised attributes is possible with a minor extension to the already very expressive description logic language DLRUS. We formalize the notion of transition for attributes, and its interaction with the transition of classes. The transitions specified for attributes are extension, evolution, and arbitrary quantitative extension.

12:56
A Stream-Temporal Query Language for Ontology Based Data Access
SPEAKER: unknown

ABSTRACT. The paper contributes to the recent enterprise of temporalizing and streamifying ontology based data access (OBDA) by discussing rewritability and unfoldability aspects for DL-Lite instantiations of the new query-language framework STARQL. In particular, it demonstrates how STARQL queries can be rewritten and unfolded into queries expressed in one of the early relational stream query languages, the continuous query language CQL.

12:00-13:00 Session 94D: Technical Session (FCS-FCC)
Location: MB, Hörsaal 7
12:00
A tool for automating the computationally complete symbolic attacker
SPEAKER: unknown

ABSTRACT. In this paper we describe a tool automating the computationally complete symbolic attacker model introduced by Bana and Comon at POST'12. This model offers soundness guarantees without the usual computational soundness restrictions, such as tagging or key cycles, and makes it possible to find attacks that rely on implementation mistakes. Using our tool, we discovered such a new attack on Andrew's Secure RPC; we hope to find more attacks in the next few months.

12:20
Towards a Coinductive Characterization of Computational Indistinguishability
SPEAKER: unknown

ABSTRACT. Computational indistinguishability (CI in the following) is one of the most central concepts in modern cryptography, and many other definitions (e.g. pseudorandomness, security of cryptoschemes) can be rephrased in terms of CI. We present the results of a study directed towards giving a direct and precise characterization of computational indistinguishability in a higher-order functional language for polynomial time computability, in which tools from implicit computational complexity and coinduction both play a central role.

12:40
Actual Causes of Security Violations
SPEAKER: Divya Sharma

ABSTRACT. Accurate blame assignment for security violations is essential in a wide range of settings. For example, protocols for authentication and key exchange, electronic voting, auctions, and secure multiparty computation (in the semi-honest model) ensure desirable security properties if protocol parties follow their prescribed programs. However, if they deviate from their prescribed programs and a security property is violated, determining which agents should be blamed and appropriately punished is important to deter agents from committing future violations. This work proposes actual causation (i.e., identifying which agents' deviations caused a specific violation) as a useful building block for blame-assignment. The central contribution of this work is formalizing and reasoning about actual causation in decentralized multi-agent systems, in particular, to formalize programs (rather than events) as actual causes of security violations and to deal with non-deterministic systems.

12:00-12:30 Session 94E: Contributed talk (PCCR)
Location: FH, Hörsaal 3
12:00
Discovering Archipelagos of Tractability: Split-Backdoors to Constraint Satisfaction
SPEAKER: Robert Ganian

ABSTRACT. The Constraint Satisfaction Problem (CSP) is a central and generic computational problem which provides a common framework for many theoretical and practical applications. A central line of research is concerned with the identification of classes of instances for which the CSP can be solved in polynomial time; such classes are often called "islands of tractability". A prominent way of defining islands of tractability for the CSP is to restrict the relations that may occur in the constraints to a fixed set, called a constraint language. A constraint language is conservative if it contains all unary relations. Schaefer's famous Dichotomy Theorem (STOC 1978) identifies all islands of tractability in terms of tractable constraint languages over a Boolean domain of values. Since then many extensions and generalizations of this result have been obtained.

Recently, Bulatov (TOCL 2011, JACM 2013) gave a full characterization of all islands of tractability for CSP and its counting version that are defined in terms of a conservative constraint language. Our results build upon these results and show how the identified islands of tractability can be combined into "archipelagos of tractability" (in general, the union of tractable constraint languages is not tractable). In order to put this to work, we need some control over the way the constraints over the various considered tractable languages interact with each other. We capture this by the notion of a split-backdoor, which is a set of variables that, when instantiated, separates the CSP instance into pairwise independent parts, each of which falls into a different island of tractability. Strong backdoors, as introduced by Williams et al. (IJCAI 2003), form the special case where all instantiations of the backdoor variables move the instance into one fixed island.

The main difficulty lies in finding a split-backdoor of given size k; once the split-backdoor is found, we can try all possible instantiations of the backdoor variables and apply the polynomial time algorithms associated with the islands of tractability on the list. Our main result is an algorithm that finds a split-backdoor (with respect to a list of finite conservative constraint languages) in FPT time; this also gives an FPT time algorithm for solving the decision or counting version of CSP for the given instance, assuming the decision or counting version of CSP, respectively, is polynomial-time tractable for the considered constraint languages.

As part of our algorithm for finding the split-backdoors, we first develop an algorithm to compute "split-modulators" in a graph theoretical setting. This has the advantage that we can build upon existing advanced algorithmic techniques. In particular, we show that for any finite collection of finite sets of (possibly disconnected) graphs F, there is an algorithm that, given a graph G and a positive integer k, in FPT time (parameterized by k) either finds a subset X of V(G) of size at most k such that for each connected component C of G-X there is some F_i in F such that C has no graph from F_i as an induced subgraph, or correctly reports that no such set X exists. This result has numerous applications outside CSP, as it can be used to combine known polynomial time algorithms for NP-hard problems restricted to classes of instances characterized by forbidden induced subgraphs.
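
The evaluation phase described above can be sketched as follows (hypothetical Python; `split` and `islands` stand for the component decomposition and the per-island polynomial-time solvers, which are not specified here):

    from itertools import product

    def solve_with_split_backdoor(instance, backdoor, domain, split, islands):
        """`split(instance, assignment)` returns the connected components of
        the instance reduced under the assignment; `islands(component)`
        returns the polynomial-time solver for that component's island."""
        for values in product(domain, repeat=len(backdoor)):
            assignment = dict(zip(backdoor, values))
            components = split(instance, assignment)
            if all(islands(c)(c) for c in components):
                return assignment    # a satisfying backdoor instantiation
        return None                  # no instantiation works: unsatisfiable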

12:00-13:00 Session 94F: Satisfiability Modulo Theories and Applications (ADDCT)
Location: FH, Zeichensaal 1
12:00
Satisfiability Modulo Non-Disjoint Combinations of Theories Connected via Bridging Functions
SPEAKER: unknown

ABSTRACT. The Nelson-Oppen combination method is ubiquitous in Satisfiability Modulo Theories solvers. However, one of its major drawbacks is that it is restricted to disjoint unions of theories. We investigate the problem of extending this combination method to particular non-disjoint unions of theories connected via bridging functions. This problem is of great interest for solving verification problems expressed in a combination of data structures and arithmetic. Our work can be viewed as a synthesis of previous contributions by different authors, based on different techniques and frameworks, such as the one developed by Sofronie-Stokkermans based on locality principles. We investigate here an approach by reduction from non-disjoint to disjoint combination, which is particularly well-suited to combining a wide range of data structures with bridging functions over a target theory. We focus on data structures for which the standard superposition calculus provides an off-the-shelf underlying satisfiability procedure. We also consider the problem of adapting the combination procedure to obtain a satisfiability procedure for a standard interpretation of the data structure.

12:30
Finding Minimum Type Error Sources

ABSTRACT. Automatic type inference is a popular feature of functional programming languages. If a program cannot be typed, the compiler typically reports a single program location in its error message. This location is the point where the type inference failed, but not necessarily the actual source of the error. Other potential error sources are not even considered. Hence, the compiler often misses the true error source, which increases debugging time for the programmer. In this paper, we present a general framework for automatic localization of type errors. Our algorithm finds all minimum error sources, where the exact definition of minimum is given in terms of a compiler-specific ranking criterion. Compilers can use minimum error sources to produce more meaningful error reports, and for automatic error correction. Our approach works by reducing type inference to constraint satisfaction. We then formulate the problem of computing minimum error sources in terms of weighted maximum satisfiability modulo theories (MaxSMT). Ranking criteria are incorporated by assigning weights to typing constraints. The reduction to MaxSMT allows us to build on decision procedures to support rich type systems.
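
A toy version of the reduction (assuming the z3 Python bindings; the "program" and its constraint encoding are invented for this sketch): typing constraints become weighted soft assertions, and the soft assertions violated by an optimal model form a minimum error source:

    # Types are encoded as integers; weights encode the ranking criterion.
    from z3 import Optimize, Int, sat

    INT, BOOL = 0, 1
    tx, ty = Int('type_x'), Int('type_y')

    opt = Optimize()
    opt.add(tx >= 0, tx <= 1, ty >= 0, ty <= 1)   # hard: well-formed types
    # program sketch: y = x + 1; if y then ...   -- the uses of y conflict
    opt.add_soft(tx == INT, weight=1)             # x used in arithmetic
    opt.add_soft(ty == INT, weight=1)             # y = x + 1 makes y an int
    opt.add_soft(ty == BOOL, weight=1)            # y used as a condition

    if opt.check() == sat:
        print(opt.model())  # exactly one soft assertion must give; that
                            # assertion marks the minimum error source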

12:00-13:00 Session 94G: Contributed Talks: Benchmarks (NMR)
Location: EI, EI 9
12:00
Some thoughts about benchmarks for NMR

ABSTRACT. The NMR community would like to build a repository of benchmarks to push forward the design of systems implementing NMR, as has been the case for many other areas in AI. There are a number of lessons which can be learned from the experience of other communities. Here are a few thoughts about the requirements and choices to make before building such a repository.

12:30
Towards a Benchmark of Natural Language Arguments
SPEAKER: unknown

ABSTRACT. The connections between natural language processing and argumentation theory have been growing stronger in recent years, with an increasing number of works going in this direction, in different scenarios and applying heterogeneous techniques. In this paper, we present two datasets we built to cope with the combination of the Textual Entailment framework and bipolar abstract argumentation. In our approach, such datasets are used to automatically identify, through a Textual Entailment system, the relations among arguments (i.e., attack, support); the resulting bipolar argumentation graphs are then analyzed to compute the accepted arguments.

12:15-13:00 Session 95: Discussion (EC2)
Location: FH, Seminarraum 101C
13:00-14:30Lunch Break
14:30-16:00 Session 96AA: Formalizations in Coq (Coq)
Location: MB, Hörsaal 14A
14:30
Formalization of Error-correcting Codes using SSReflect
SPEAKER: unknown

ABSTRACT. By adding redundant information to transmitted data, error-correcting codes (ECCs) make it possible to communicate reliably over noisy channels. Minimizing redundancy and coding/decoding time has driven much research, culminating with Low-Density Parity-Check (LDPC) codes. Hard-disk storage, wifi communications, mobile phones, etc.: most modern devices now rely on ECCs and in particular LDPC codes. Yet, correctness guarantees are only provided by research papers of ever-growing complexity. One solution to improve this situation is to provide a formalization of ECCs. We think that a first difficulty in achieving this goal has been lifted by the SSReflect library, which provides a substantial formalization of linear algebra. Using this library, we have been tackling the formalization of linear ECCs. Our formalization of linear ECCs is already rich enough to formalize most properties of Hamming codes, and we are now in a position to formalize the more difficult LDPC codes.
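
For readers unfamiliar with Hamming codes, here is the classic Hamming(7,4) encode/correct cycle in a few lines (standard textbook construction in Python, unrelated to the paper's Coq development):

    # Encode 4 data bits with 3 parity bits; any single flipped bit is
    # located by the syndrome and corrected.
    import numpy as np

    G = np.array([[1,0,0,0,1,1,0],   # generator matrix: code = data @ G mod 2
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix: H @ code = 0 mod 2
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    data = np.array([1, 0, 1, 1])
    code = data @ G % 2
    code[2] ^= 1                      # simulate a single-bit channel error
    syndrome = H @ code % 2
    if syndrome.any():                # the syndrome points at the flipped bit
        pos = next(i for i in range(7) if (H[:, i] == syndrome).all())
        code[pos] ^= 1
    print((code[:4] == data).all())   # True: the error was corrected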

15:00
Autosubst: Automation for de Bruijn Substitutions
SPEAKER: unknown

ABSTRACT. Formalizing syntactic theories with variable binders is not easy. We present Autosubst, a library for the Coq proof assistant that automates this process. Given an inductive definition of syntactic objects in de Bruijn representation augmented with binding annotations, Autosubst synthesizes the parallel substitution operation and automatically proves the basic lemmas about substitutions. Our core contribution is an automation tactic that solves equations involving terms and substitutions. This makes the usage of substitution lemmas unnecessary. The tactic is based on our current work on a decision procedure for the equational theory of an extension of the sigma-calculus by Abadi et al. The library is completely written in Coq and uses Ltac to synthesize the substitution operation.
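
For comparison, the usual pencil-and-paper de Bruijn operations that such a library mechanizes look like this (a textbook sketch in Python; Autosubst itself synthesizes a parallel substitution operation inside Coq):

    # Terms: ('var', n) | ('app', t1, t2) | ('lam', body)
    def shift(t, d, cutoff=0):
        """Add d to every free variable index in t."""
        tag = t[0]
        if tag == 'var':
            return ('var', t[1] + d if t[1] >= cutoff else t[1])
        if tag == 'app':
            return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))
        return ('lam', shift(t[1], d, cutoff + 1))       # binder: raise cutoff

    def subst(t, j, s):
        """Replace variable j in t by s (capture-avoiding)."""
        tag = t[0]
        if tag == 'var':
            return s if t[1] == j else t
        if tag == 'app':
            return ('app', subst(t[1], j, s), subst(t[2], j, s))
        return ('lam', subst(t[1], j + 1, shift(s, 1)))  # go under the binder

    # beta-reducing (lam. 0 1) applied to variable 9: substitute the shifted
    # argument for index 0, then unshift the result.
    body = ('app', ('var', 0), ('var', 1))
    print(shift(subst(body, 0, shift(('var', 9), 1)), -1))
    # ('app', ('var', 9), ('var', 0))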

15:30
Automating Abstract Logics

ABSTRACT. We present the results of our efforts building compositional proof automation for program verification. We focus our efforts on abstract axiomatizations of different logics in order to make our procedures more generally applicable. Our automation leverages computational reflection to solve both intuitionistic and separation logic entailments. These specialized procedures are parameterized by auxiliary procedures that can reason about domain-specific theories such as arithmetic.

14:30-16:00 Session 96AB: Games and Finite Models (CSL-LICS)
Location: FH, Hörsaal 1
14:30
Regular Combinators for String Transformations
SPEAKER: Rajeev Alur

ABSTRACT. We focus on (partial) functions that map input strings to a monoid such as the set of integers with addition and the set of output strings with concatenation. The notion of regularity for such functions has been defined using two-way finite-state transducers, (one-way) cost register automata, and MSO-definable graph transformations. In this paper, we give an algebraic and machine-independent characterization of this class analogous to the definition of regular languages by regular expressions. When the monoid is commutative, we prove that every regular function can be constructed from constant functions using the combinators of choice, split sum, and iterated sum, which are analogs of union, concatenation, and Kleene-*, respectively, but enforce unique (or unambiguous) parsing. Our main result is for the general case of non-commutative monoids, which is of particular interest for capturing regular string-to-string transformations for document processing. We prove that the following additional combinators suffice for constructing all regular functions: (1) the left-additive versions of split sum and iterated sum, which allow transformations such as string reversal; (2) sum of functions, which allows transformations such as copying of strings; and (3) function composition, or alternatively, a new concept of chained sum, which allows output values from adjacent blocks to mix.

15:00
On Periodically Iterated Morphisms
SPEAKER: unknown

ABSTRACT. We investigate the computational power of periodically iterated morphisms, also known as D0L systems with periodic control, PerD0L systems for short. These systems give rise to a class of one-sided infinite sequences, called PerD0L words. We construct a PerD0L word with exponential subword complexity, thereby answering a question raised by (Lepisto 1993) on the existence of such words. We solve another open problem concerning the decidability of the first-order theories of PerD0L words (Muchnik, Pritykin, Semenov 2009); we show it is already undecidable whether a certain letter occurs in a PerD0L word. This stands in sharp contrast to the situation for D0L words (purely morphic words), which are known to have at most quadratic subword complexity, and for which the monadic theory is decidable.

15:30
Finite-Memory Strategy Synthesis for Robust Multidimensional Mean-Payoff Objectives
SPEAKER: Yaron Velner

ABSTRACT. Two-player games on graphs provide the mathematical foundation for the study of reactive systems. In the quantitative framework, an objective assigns a value to every play, and the goal of player 1 is to minimize the value of the objective. In this framework, there are two relevant synthesis problems to consider: the quantitative analysis problem is to compute the minimal (or infimum) value that player 1 can assure, and the boolean analysis problem asks whether player 1 can assure that the value of the objective is at most r (for a given threshold r). Mean-payoff expression games are played on a multidimensional weighted graph. An atomic mean-payoff expression objective is the mean-payoff value (the long-run average weight) of a certain dimension, and the class of mean-payoff expressions is the closure of atomic mean-payoff expressions under the algebraic operations of max, min, numerical complement and sum. In this work, we study for the first time the strategy synthesis problems for games with robust quantitative objectives, namely, games with mean-payoff expression objectives. While in general optimal strategies for these games require infinite memory, in synthesis we are typically interested in the construction of a finite-state system. Hence, we consider games in which player 1 is restricted to finite-memory strategies, and our main contribution is as follows. We prove that for mean-payoff expressions, the quantitative analysis problem is computable, and the boolean analysis problem is inter-reducible with Hilbert's tenth problem over rationals --- a fundamental long-standing open problem in computer science and mathematics.

14:30-16:00 Session 96AC: Types (CSL-LICS)
Location: FH, Hörsaal 5
14:30
A type theory for productive coprogramming via guarded recursion

ABSTRACT. To ensure consistency and decidability of type checking, proof assistants impose a requirement of productivity on corecursive definitions. In this paper we investigate a type-based alternative to the existing syntactic productivity checks of Coq and Agda, using a combination of guarded recursion and quantification over clocks. This approach was developed by Atkey and McBride in the simply typed setting; here we extend it to a calculus with dependent types. Building on previous work on the topos-of-trees model, we construct a model of the calculus using a family of presheaf toposes, each of which can be seen as a multi-dimensional version of the topos-of-trees. As part of the model construction we must solve the coherence problem for modelling dependent types simultaneously in a whole family of locally cartesian closed categories. We do this by embedding all the categories in a large one and applying a recent approach to the coherence problem due to Streicher and Voevodsky.
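
As a toy illustration of the guarded-recursion idea (not the paper's calculus: Haskell's laziness merely mimics the "later" modality, whereas in the type theory the guard is a typing invariant that guarantees productivity):

    newtype Later a = Later a            -- "available one step from now"
    data Str a = Cons a (Later (Str a))  -- the tail of a stream is guarded

    -- Guarded fixpoint: the recursive occurrence is only usable one step
    -- later, so every unfolding must first produce a Cons.
    gfix :: (Later a -> a) -> a
    gfix f = let x = f (Later x) in x

    ones :: Str Integer
    ones = gfix (\later -> Cons 1 later)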

15:00
Formulae-as-Types for an Involutive Negation

ABSTRACT. Negation is not involutive in the λC calculus because it does not distinguish captured stacks from continuations. We show that there is a formulae-as-types correspondence between the involutive negation in proof theory, and a notion of high-level access to the stacks studied by Felleisen and Clements.
We introduce polarised, untyped calculi, compatible with extensionality, for both classical sequent calculus and classical natural deduction, with connectives for an involutive negation. The involution is due to the ℓ delimited control operator that we introduce, which allows us to implement the idea that captured stacks, unlike continuations, can be inspected. Delimiting control also gives a constructive interpretation to falsity. We describe the isomorphism between A and ¬¬A, and thus between ¬∀ and ∃¬.

15:30
Eilenberg-MacLane Spaces in Homotopy Type Theory
SPEAKER: unknown

ABSTRACT. Homotopy type theory is an extension of Martin-Löf type theory with principles inspired by category theory and homotopy theory. With these extensions, type theory can be used to construct proofs of homotopy-theoretic theorems, in a way that is very amenable to computer-checked proofs in proof assistants such as Coq and Agda. In this paper, we give a computer-checked construction of Eilenberg-MacLane spaces. For an abelian group $G$, an Eilenberg-MacLane space $K(G,n)$ is a space (type) whose $n^{th}$ homotopy group is $G$, and whose homotopy groups are trivial otherwise. These spaces are a basic tool in algebraic topology; for example, they can be used to build spaces with specified homotopy groups, and to define the notion of cohomology with coefficients in $G$. Their construction in type theory is an illustrative example, which ties together many of the constructions and methods that have been used in homotopy type theory so far.

14:30-16:00 Session 96AD: Tutorial: Hardware Model Checking (CAV)
Location: FH, Hörsaal 6
14:30
Hardware Model Checking (Part A)
SPEAKER: Fabio Somenzi
14:30-15:30 Session 96AF (ITRS)
Location: MB, Zeichensaal 14
14:30
Lucretia — intersection type polymorphism for scripting languages

ABSTRACT. Scripting code may present maintenance problems in the long run. There is, then, a call for methodologies that make it possible to control the properties of programs written in dynamic languages in an automatic fashion. We introduce Lucretia, a core language with an introspection primitive. Lucretia is equipped with a (retrofitted) static type system based on local updates of the types that describe the structure of objects being used. In this way, we deal with one of the most dynamic features of scripting languages, namely the runtime modification of object interfaces. Judgements in our system have a Hoare-like shape: they have a precondition and a postcondition part. Preconditions describe static approximations of the interfaces of visible objects before a certain expression has been executed, and postconditions describe them after its execution. The field update operation complicates the issue of aliasing in the system. We cope with it by introducing intersection types in method signatures.
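
Schematically (in our notation, not necessarily the paper's), such a judgement can be pictured as

\[ \Gamma_{\mathrm{pre}} \vdash e : t \Rightarrow \Gamma_{\mathrm{post}} \]

where $\Gamma_{\mathrm{pre}}$ and $\Gamma_{\mathrm{post}}$ record the static approximations of the interfaces of the visible objects before and after evaluating $e$; a field update then changes the record type of the affected object in $\Gamma_{\mathrm{post}}$.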

15:00
Liquid Intersection Types

ABSTRACT. We present a new type system combining refinement types and the expressiveness of the intersection type discipline. The use of such features makes it possible to derive more precise types than in the original refinement system. We have been able to prove several interesting properties of our system (including subject reduction) and have developed an inference algorithm, which we proved to be sound.

14:30-16:00 Session 96AG: Special session: Philosophy of Mathematics (LC)
Location: MB, Hörsaal 14
14:30
Frege on mathematical progress

ABSTRACT. Progress in mathematics has often involved a good deal of conceptual clarification, including increasingly precise characterizations of concepts (e.g. those of infinity, of continuity, perhaps of ‘set,’ etc.) that were less clearly understood by earlier theorists. But the sometimes-vast difference between the earlier and later concepts that go by the same name raises the possibility that such conceptual refinement really brings with it a whole new subject-matter for the branch of mathematics in question, rather than a clarified understanding of the concepts used by earlier generations. This talk investigates Gottlob Frege’s approach to understanding this kind of conceptual progress, and assesses the plausibility of his view that a given subject-matter can survive essentially unscathed despite fairly radical changes in the surrounding theory.

15:15
On Bernays' Generalization of Cantor's Theorem

ABSTRACT. Cantor's theorem that no one-one function maps a set a onto the power set of a follows from the statement that no function maps a set a onto the power set of a. Paul Bernays showed how to encode assignments of subclasses of a given A to members of A by means of a class of ordered pairs, and he deployed a familiar diagonal argument to show that no assignment of subclasses of A to members of A is onto. It is much less clear how to make sense of an assignment of members of A to subclasses of A, but when we do, we are in a position to show that no such assignment is one-one. Unfortunately, familiar arguments for this claim fail to provide an explicit characterization of two different subclasses of A which are assigned one and the same member of A by a given assignment. George Boolos later showed how to specify explicit counterexamples to the claim that a function from the power set of a set a into the set a is one-one. Similar constructions turn out to be available in the case of classes, but they are sensitive to the presence of impredicative class comprehension. We explore some ramifications of this observation for traditional philosophical puzzles raised by the likes of Russell's paradox of propositions in Appendix B of "The Principles of Mathematics" and Kaplan's paradox.

14:30-16:00 Session 96AH: Special session: Set Theory (LC)
Location: MB, Seminarraum 225
14:30
Large cardinals, forcing axioms, and the theory of subsets of \omega_1

ABSTRACT. The goal of this research is to rule out "natural" independence phenomena in Set Theory by maximizing one's theory in terms of large cardinals and forcing axioms. Using large cardinals in ZFC, by the results of Woodin, we have a clear understanding of the theory of the second-order structure (\omega, P(\omega), \in) and of what it should be.

In this talk, we try to extend this understanding to the theory of the structure (\omega_1, P(\omega_1), \in) by using large cardinals, forcing axioms, and some hypotheses from inner model theory in ZFC. This is joint work with Matteo Viale.

15:15
Locally definable well-orders

ABSTRACT. A classical theorem of Mansfield shows that there exists a well-ordering of the set \omega^\omega of all functions from \omega to \omega that is definable over the collection H(\omega_1) of all hereditarily countable sets by a \Pi_1-formula without parameters if and only if every such function is contained in Gödel's constructible universe L. In particular, the existence of such a well-ordering implies that the continuum hypothesis holds. We consider the question of whether this implication generalizes to higher cardinalities: does the existence of a well-ordering of the set \omega_1^{\omega_1} of all functions from \omega_1 to \omega_1 that is definable over H(\omega_2) by a \Pi_1-formula without parameters imply that the GCH holds at \omega_1? This is joint work with Peter Holy (Bristol).

14:30-16:00 Session 96AI: Term Rewriting Revisited (WST)
Location: FH, Seminarraum 101B
14:30
On the derivational complexity of Kachinuki orderings

ABSTRACT. For string rewriting systems compatible with the standard Kachinuki ordering, it is known that their derivational complexity is primitive recursively bounded. However, when the definition of Kachinuki orderings allows a possibly different lexicographic status for different letters, the standard upper-bound proof method by monotone interpretations fails, and primitive recursive complexity bounds are, to the best of our knowledge, an open problem. In this talk, such an upper bound result is shown by examining worst-case rewriting strategies.

15:00
Kurth's Criterion H Revisited
SPEAKER: Alfons Geser

ABSTRACT. Criterion H is one of the criteria for termination of string rewriting that Winfried Kurth implemented in his thesis. Criterion H can be slightly improved. The improved version can be translated into a termination proof by Semantic Labelling.

15:30
Another Proof for the Recursive Path Ordering

ABSTRACT. Yet another proof of well-foundedness of the recursive path ordering is provided.

14:30-16:00 Session 96AJ: QBF and First-Order Logic (QUANTIFY)
Location: FH, Seminarraum 136
14:30
Algorithmic Paradigms for Solving QBF

ABSTRACT. The past 15 years have seen tremendous improvement in the effectiveness of solvers for Quantified Boolean Formulas (QBF). This talk will discuss some of the main algorithmic approaches that have been employed in constructing such solvers. We will aim to identify certain features of these approaches that are effective and others that are not so effective. Some of the theoretical questions that arise from our empirical experience with these solvers will be identified in an attempt to spur more synergy between theory and practice for solving QBF.

15:30
Projective Quantifier Elimination for Equational Constraint Solving
SPEAKER: Paqui Lucio

ABSTRACT. We deal with (general) equational constraints, that is, first-order formulas with equality as their only predicate symbol, over a (finite or infinite) language L of function symbols. As semantic domain, we consider the algebra of finite terms over L. Solving such a constraint means finding all the solutions for (i.e. assignments to) its free variables. We present the Projective Quantifier Elimination (PQE) technique for solving equational constraints (in particular, deciding their satisfiability), which performs an algebraic-style transformation of expressions instead of handling first-order formulas. PQE is formulated on the basis of three algebraic operations on expressions: complement, intersection and projection. We aim to contribute not only a new style of quantifier elimination for constraint solving, but also a more efficient method for equational constraint solving. PQE avoids unnecessary applications of the costly Explosion Rule (ER) that are performed by traditional solving methods.

14:30-16:00 Session 96AK (AISS)
Location: FH, Seminarraum 134
14:30
Playing with Automata and Trees
SPEAKER: Olivier Serre

ABSTRACT. Roughly speaking, a finite automaton on infinite trees is a finite-memory machine that takes as input an infinite node-labelled binary tree and processes it in a top-down fashion as follows. It starts at the root of the tree in its initial state, and picks (possibly nondeterministically) two successor states, one per son, according to the current control state, the letter at the current node and the transition relation. Then the computation proceeds in parallel from both sons, and so on. Hence, a run of the automaton on an input tree is a labelling of this tree by control states of the automaton that satisfies the local constraints imposed by the transition relation. A branch in a run is accepting if the omega-word obtained by reading the states along the branch satisfies some acceptance condition (typically an omega-regular condition such as a Büchi or a parity condition). A run is accepting if *all* branches are accepting. Finally, a tree is accepted by the automaton if *there exists* an accepting run over this tree. Hence, there are three natural levers on which one can act to define alternative families of tree automata / tree languages.
- The first lever is local with respect to the run: the condition required for a branch to be accepting.
- The second lever is global with respect to the run: the condition required for a run to be accepting.
- The third lever has to do with the set of runs: the condition required for a tree to be accepted (with respect to the set of runs over it).
For the usual definition of tree automata the three levers are: parity condition on branches / all branches accepting for a run / there exists an accepting run. In this talk I will mainly focus on the second and third levers and propose variants of them. More precisely (time permitting):
- second lever: count the number of accepting/rejecting branches (cardinality constraint), topologically measure the largeness of the set of accepting/rejecting branches, or measure (à la Lebesgue) the set of accepting/rejecting branches;
- third lever (possibly combined with the second): alternating and probabilistic models.
I will explain how decidability results can be obtained by extending the well-known equivalence between games and tree automata (which I will recall first!). This will lead us, in several situations, to leave the usual perfect-information turn-based two-player framework: in particular we will have to deal with stochastic aspects and/or imperfect information.
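
The basic objects of the talk can be pinned down in a few lines of illustrative Haskell (the names are ours):

    -- Infinite node-labelled binary trees, and nondeterministic top-down
    -- automata over them.
    data Tree a = Node a (Tree a) (Tree a)

    data TreeAut q a = TreeAut
      { initial :: q
      , delta   :: q -> a -> [(q, q)]  -- allowed (left son, right son) state pairs
      }

    -- The local constraint a run must satisfy at every node: in state q at
    -- a node labelled l, the chosen pair of successor states is allowed.
    okStep :: Eq q => TreeAut q a -> q -> a -> (q, q) -> Bool
    okStep aut q l qs = qs `elem` delta aut q l

Acceptance itself is a property of infinite runs (the three levers above), so it is a mathematical definition layered on top of this data rather than something okStep could compute.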

15:30
Quantitative Automatic Structures
SPEAKER: Martin Lang

ABSTRACT. see pdf

14:30-16:00 Session 96AL: Invited Talk (CHR and CICLOPS-WLPE)
Location: FH, Hörsaal 7
14:30
Symbolic Evaluation Graphs and Term Rewriting - A General Methodology for Analyzing Logic Programs
SPEAKER: Jürgen Giesl

ABSTRACT. There exist many powerful techniques to analyze termination and complexity of term rewrite systems (TRSs). Our goal is to use these techniques for the analysis of other programming languages as well. For instance, approaches to prove termination of definite logic programs by a transformation to TRSs have been studied for decades. However, a challenge is to handle languages with more complex evaluation strategies (such as Prolog, where predicates like the cut influence the control flow).

We present a general methodology for the analysis of such programs. Here, the logic program is first transformed into a symbolic evaluation graph which represents all possible evaluations in a finite way. Afterwards, different analyses can be performed on these graphs. In particular, one can generate TRSs from such graphs and apply existing tools for termination or complexity analysis of TRSs to infer information on the termination or complexity of the original logic program.

15:30
Discussion
SPEAKER: Everyone

ABSTRACT. Discussion on invited talk about "Symbolic Evaluation Graphs and Term Rewriting - A General Methodology for Analyzing Logic Programs".

14:30-16:00 Session 96AM: Invited Tutorials (Foundational Proof Certificates, Deduction Modulo and Proof Analysis) (APPA)
Location: FH, Seminarraum 325/1
14:30
Foundational Proof Certificates
SPEAKER: Dale Miller
15:00
Deduction Modulo
SPEAKER: Gilles Dowek
15:30
Mathematical Proof Analysis
14:30-16:00 Session 96AN: Invited Talk by Michael Rusinowitch (PAS)
Location: FH, Seminarraum 138A
14:30
Automated verification of security protocols and services

ABSTRACT. Cryptographic protocols such as IKE, SET, TLS and Kerberos have been developed to secure electronic transactions. However, the design of such protocols often appears to be problematic even assuming that the cryptographic primitives are perfect, i.e. even assuming we cannot decrypt a message without the proper key. An intruder may intercept messages, analyse them, modify them with few resources, and then carry out malevolent actions. This may lead to a variety of attacks such as the well-known man-in-the-middle attacks. In this setting, known as the Dolev-Yao model, protocol analysis can be automated by designing suitable symbolic constraint-solving techniques. Messages are represented by terms and intruder actions are simulated by deduction rules. It is also possible to somewhat relax the perfect-encryption hypothesis by taking into account some algebraic properties of operators. Secrecy and authentication properties, as expected of correct protocol specifications, are verified by showing that no sequence of intruder actions leads to a failure state.
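
A minimal Haskell sketch of the symbolic view described above, with messages as terms and intruder analysis as saturation under deduction rules; the datatype and rules are a bare-bones illustration (composition rules for building pairs and encryptions are omitted):

    import Data.List (nub)

    data Msg = Atom String | Pair Msg Msg | Enc Msg Msg
      deriving (Eq, Show)

    -- One analysis round: project known pairs, decrypt when the key is known.
    step :: [Msg] -> [Msg]
    step ms = nub $ ms ++ [ a | Pair a _ <- ms ]
                       ++ [ b | Pair _ b <- ms ]
                       ++ [ m | Enc m k <- ms, k `elem` ms ]

    -- Saturate the intruder knowledge; this terminates because analysis
    -- only produces subterms of the initial knowledge.
    analyse :: [Msg] -> [Msg]
    analyse = go . nub
      where go ms = let ms' = step ms
                    in if length ms' == length ms then ms else go ms'

For example, analyse [Enc (Atom "secret") (Atom "k"), Atom "k"] contains Atom "secret", while without the key the secret stays out of reach.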

We present some applications of the same security protocol analysis techniques to service oriented computing such as the synthesis of a secure composed service and its validation. In this original framework intruder actions are replaced by mediator actions and the failure states are replaced by accepting states for both client and services.

15:30
Building explicit induction schemas for cyclic induction reasoning

ABSTRACT. In the setting of classical first-order logic with inductive predicates, two kinds of sequent-based induction reasoning are distinguished: cyclic and structural. Proving their equivalence is of great theoretical and practical interest for the automated reasoning community. It has been shown previously how to transform structural proofs developed with the LKID system into cyclic proofs using the CLKID$^\omega$ system. However, the inverse transformation was only conjectured.

We provide a simple procedure that performs the inverse transformation for an extension of LKID with explicit induction rules derived from the structural analysis of CLKID$^{\omega}$ proofs, then establish the equivalence of the two systems. This result is further refined for an extension of LKID with Noetherian induction rules. Based on it, we propose a solution for validating cyclic reasoning by (higher-order) sequent-based systems integrating the Noetherian induction principle, like Coq.

14:30-16:00 Session 96AP: Real Time and Security (VSTTE)
Location: FH, Zeichensaal 3
14:30
Model Checking Parameterized Timed Systems

ABSTRACT. In this work we extend Emerson and Kahlon's cutoff theorems for process skeletons with conjunctive guards to Parameterized Networks of Timed Automata, i.e. systems obtained by an unbounded number of Timed Automata instantiated from a finite set (U_1, ..., U_n) of Timed Automata templates. In this way we aim at giving a first tool for universally verifying software systems where an unknown number of software components (i.e. processes) interact with temporal constraints. It is often the case, indeed, that distributed algorithms show a heterogeneous nature, combining dynamic aspects with real-time aspects. In the paper we also show how to model check a protocol that uses special variables storing identifiers of the participating processes (i.e. PIDs) in Timed Automata with conjunctive guards. This is non-trivial, since solutions to the parameterized verification problem often rely on the processes being symmetric, i.e. indistinguishable. On the other hand, many popular distributed algorithms make use of PIDs and thus cannot directly apply those solutions.

14:55
Timed Refinements for Verification of Real-Time Object Code Programs

ABSTRACT. We introduce a refinement-based notion of correctness for verification of interrupt-driven real-time object code programs, called timed refinements. The notion of timed refinements is targeted at verification of low-level object code against high-level specification models. For timed refinements, both the object code (implementation) and the specification are encoded as timed transition systems. Hence, timed refinements can be construed as a notion of equivalence between two timed transition systems that allows for stuttering between the implementation and specification, and also allows for the use of refinement maps. Stuttering is the phenomenon where multiple but finitely many transitions of the implementation can match a single transition of the specification. Refinement maps allow low-level implementations to be verified against high-level specification models. We also present a procedure for checking timed refinements. The proposed techniques are demonstrated with the verification of object code programs of six case studies from electric motor control applications.

15:20
Formal Modeling and Verification of CloudProxy
SPEAKER: Wei Yang Tan

ABSTRACT. Services running in the cloud face threats from several parties, including malicious clients, administrators, and external attackers. CloudProxy is a recently-proposed framework for secure deployment of cloud applications. In this work, we present the first formal model of CloudProxy, including a formal specification of desired security properties. We model CloudProxy as a transition system in the UCLID modeling language, using term-level abstraction. Our formal specification includes both safety and non-interference properties. We use induction to prove these properties, employing a back-end SMT-based verification engine. Furthermore, we structure our proof as an "assurance case", showing how we decompose the proof into various lemmas, and listing all assumptions and axioms employed. We also perform some limited model validation to gain assurance that the formal model correctly captures all behaviors of the implementation.

14:30-16:10 Session 96AQ: LaSh afternoon 1: Invited Talks - Lazy Grounding (LaSh)
Location: MB, Aufbaulabor
14:30
Laziness is next to Godliness
SPEAKER: Peter Stuckey

ABSTRACT. Solving combinatorial optimization problems is hard. There are many techniques which we need to combine to successfully solve these problems, including inference, search, relaxation, and decomposition. Any method to solve combinatorial optimization problems must trade off how much time to spend in search and how much time to spend on methods to reduce the search. In this talk I will propose that we should spend the least amount of time possible on methods to reduce search, until there is evidence that it will be useful. I will demonstrate how this "laziness" principle can be highly effective in solving combinatorial optimization problems.

15:30
Lazy Model Expansion: Interleaving Grounding with Search
SPEAKER: Bart Bogaerts

ABSTRACT. Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like Answer Set Programming (ASP), FO(·) and the CSP modeling language Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory.

An important bottleneck is the blowup of the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split in a part to be grounded and a part that can be justified.

The theoretical framework captures existing approaches for tackling the grounding bottleneck such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.


14:30-16:00 Session 96AR: NLCS contributed talks (NLCS)
Location: FH, CAD 2
14:30
A low-level treatment of quantifiers in categorical compositional distributional semantics
SPEAKER: unknown

ABSTRACT. We show how one can formalise quantifiers in the categorical compositional distributional model of meaning. Our model is based on the generalised quantifier theory of Barwise and Cooper. We develop an abstract compact closed semantics and instantiate it in vector spaces and in relations. The former is an example of the distributional corpus-based models of language, and the latter of the truth-theoretic ones.

15:00
Divergence in Dialogues

ABSTRACT. This work is part of a project whose goal is to understand natural language by taking dialogue as a primitive notion. For this aim, the core of our theoretical framework is a logical theory of interaction, namely Ludics. Ludics is a logical theory developed by J.Y. Girard as an achievement of both proof-theoretical and computation-theoretical considerations. Above all, it is a theory of interaction in the sense that interaction is ontologically the primitive element of Ludics. Ludics being a logical theory, our framework is relevant for grasping logical dimensions of language, especially for studying argumentation, but also divergences in dialogue, as we show in this paper. In particular, complementary aspects of argumentation and divergence in dialogues are modelled in a uniform way: from logics and pragmatics to semantics and syntax. Furthermore, we account for several features of dialogues directly at the formal level. Therefore, we may expect a fine-grained representation of various dialogical phenomena. More precisely, we illustrate this last point by describing how different divergences in dialogue may be distinguished in our modelling.

15:30
Modelling implicit dynamic introduction of function symbols in mathematical texts
SPEAKER: Marcos Cramer

ABSTRACT. The specialized language of mathematics has a number of linguistically and logically interesting features. One of them, which to our knowledge has not been systematically studied before, is the implicit dynamic introduction of function symbols, exemplified by constructs of the form "for every x there is an f(x) such that ...". We present an extension of Groenendijk and Stokhof's Dynamic Predicate Logic -- Typed Higher-Order Dynamic Predicate Logic -- which formally models this feature of the language of mathematics. Furthermore, we illustrate how the implicit dynamic introduction of function symbols is treated in the proof checking algorithm of the Naproche system.

14:30-16:00 Session 96AS: Invited Talk (Metcalfe) and Contributed Talk (LATD)
Location: MB, Festsaal
14:30
First-Order Logics and Truth Degrees
15:30
Classification of germinal MV-algebras

ABSTRACT. The aim of this paper is to give a complete classification of germinal MV-algebras. As an application, we will settle the fifth of the eleven problems presented by Mundici in [D. Mundici, Advanced Łukasiewicz calculus and MV-algebras, Trends in Logic, Studia Logica Library, Vol. 35, Springer, Berlin, 2011].

14:30-16:00 Session 96AT: Talk and Panel Discussion (ParSearchOpt)
Location: MB, Hörsaal 4
14:30
General Panel Discussion

ABSTRACT. This panel discussion will be a forum for considering the main issues and future challenges in the domain of parallel search and optimization.

15:30
Towards a complete constraint solver on GPU
SPEAKER: unknown

ABSTRACT. Constraint programming has gained prominence as an effective and declarative paradigm for modeling and solving complex combinatorial problems. In spite of the natural presence of concurrency, there has been relatively limited effort to use novel massively parallel architectures, such as those found in modern Graphics Processing Units (GPUs), to speed up constraint programming algorithms. This paper presents ongoing work on the development of a constraint solver which exploits GPU parallelism. The work builds on two previous results in which constraint propagation and approximated search were parallelized. We summarize these results and discuss the features we plan to develop next.

14:30-15:50 Session 96AU: Technical session (ASA)
Location: MB, Zeichensaal 13
14:30
Application Oriented Low Cost Security Modules
SPEAKER: Dan Cvrcek

ABSTRACT. Building an HSM out of Smartcards

15:10
Security APIs for Device Enrolment
SPEAKER: Mike Bond

ABSTRACT. This talk considers the challenges in designing a Security API for back-end systems which enrol devices into a central network, such as a payments network. Since no prior relationship exists with the device, whatever the enrolment protocol, it will in effect reveal keys in the clear to the endpoint upon completion.

In order to save on database space, key derivation is a common approach in financial organisations, but revelation in the clear can be a problem if device keys are derived rather than randomly generated, since the same process could be repeated to reveal the same key to an unwanted party.

The talk discusses how to harmonise the desire for secure APIs that support enrolment with the need to minimise storage requirements, and finds that parametric API designs (where the API uses a configurable mixture of security mechanisms) can offer a convenient solution -- for better or worse -- when hammering out a design which will satisfy parties with differing interests within a design team. A parametric design for enrolment is presented and contrasted with previously presented parametric designs for data tokenisation.

14:30-15:50 Session 96AV: Contributed Talks and Poster Announcements (DL)
Location: EI, EI 7
14:30
Gödel FL_0 with Greatest Fixed-Point Semantics

ABSTRACT. We study the fuzzy extension of FL_0 with semantics based on the Gödel t-norm. We show that gfp-subsumption w.r.t. a finite set of primitive definitions can be characterized by a relation on weighted automata, and use this result to provide tight complexity bounds for reasoning in this logic.

14:55
Fuzzy DLs over Finite Lattices with Nominals

ABSTRACT. The complexity of reasoning in fuzzy description logics over finite lattices usually does not exceed that of the underlying classical DLs. This has recently been shown for the logics between L-IALC and L-ISCHI using a combination of automata- and tableau-based techniques. In this paper, this approach is modified to deal with nominals and constants in L-ISCHOI. Reasoning w.r.t. general TBoxes is ExpTime-complete, and PSpace-completeness is shown under the restriction to acyclic terminologies in two sublogics. The latter implies two previously unknown complexity results for the classical DLs ALCHO and SO.

15:20
A MILP-based decision procedure for the (Fuzzy) Description Logic ALCB
SPEAKER: unknown

ABSTRACT. To overcome the inability of Description Logics (DLs) to represent vague or imprecise information, several fuzzy extensions have been proposed in the literature. In this context, an important family of reasoning algorithms for fuzzy DLs is based on a combination of tableau algorithms and Operational Research (OR) problems, specifically using Mixed Integer Linear Programming (MILP).

In this paper, we present a MILP-based tableau procedure that allows reasoning within fuzzy ALCB, i.e., ALCB with individual value restrictions. Interestingly, unlike classical tableau procedures, our tableau algorithm is deterministic, in the sense that it defers the inherent non-determinism in ALCB to a MILP solver.

15:23
Complexity sources in FDL
SPEAKER: Marco Cerami

ABSTRACT. In recent years many FDLs based on infinite t-norms have been proved to be undecidable. On the other hand, several FDLs based on finite t-norms have not only been proved to be decidable, but have been proved to belong to the same complexity classes as the corresponding crisp DLs. In the light of such results, a question that naturally arises is whether the finite-valued fuzzy framework is really more complex than the old crisp-valued formalism. The aim of this work is to analyze some of the complexity sources that are not present in the crisp framework. To this end, we will consider FDL languages with low expressivity that allow us to observe how the need for more complex deciding strategies, not required in the crisp framework, arises in many-valued FDLs.

15:26
Gödel Description Logics with General Models

ABSTRACT. In the last few years, the complexity of reasoning in fuzzy description logics has been studied in depth. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics using the Gödel t-norm. It was recently shown that in the logic G-IALC under witnessed model semantics, all standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC. We show that this also holds under general model semantics.

15:29
Certain Answers in a Rough World

ABSTRACT. Rough Description Logics have recently been studied as a means for representing and reasoning with imprecise knowledge. Real-world applications need to exploit reasoning over such knowledge in an efficient way. We describe how the combined approach to query answering can be extended to the rough setting. In particular, we extend both the canonical model and the rewriting procedure such that rough queries over rough EL ontologies can be answered by considering this information alone.

15:32
Bayesian Description Logics

ABSTRACT. We present Bayesian Description Logics (BDLs): an extension of Description Logics with contextual probabilities encoded in a Bayesian network (BN). Classical DL reasoning tasks are extended to consider also the contextual and probabilistic information in BDLs. A complexity analysis of these problems shows that, for propositionally closed DLs, this extension comes without cost, while for tractable DLs the complexity is affected by the cost of reasoning in the BN.

15:35
Reasoning about Belief Uncertainty in DL-Lite N bool
SPEAKER: Ala Djeddai

ABSTRACT. Dealing with uncertainty is a very important issue in description logics (DLs). In this paper, we present PrDL-lite N bool, a new probabilistic extension of DL-Lite N bool (a very expressive member of the DL-Lite family), supporting belief degree intervals on a single axiom or on a set of axioms connected by conjunction (∧) or disjunction (∨) operators. The PrDL-lite N bool semantics is based on DL-Lite N bool features, a new alternative semantics for DL-Lite N bool that has a finite structure and whose number is always finite, unlike classical models. PrDL-lite N bool also supports terminological and assertional probabilistic knowledge, and the reasoning tasks (satisfiability, deciding entailment, computing a tight interval for entailment) are achieved by solving linear optimization problems. A prototype of the approach is implemented using OWLAPI for knowledge base creation, Pellet for reasoning, and LpSolve for solving the linear programs.

15:38
Obfuscation of Semantic Data: Restricting the Spread of Sensitive Information

ABSTRACT. This paper investigates how to restrict the spread of sensitive information. This work is situated in a military context, and provides a tractable method to decide what semantic information to share, with whom, and how. The latter decision is supported by obfuscation, where a related concept or fact may be shared instead of the original. We consider uncertain information, and utilise Subjective Logic as an underlying formalism to further obfuscate concepts, and reason about obfuscated concepts.

15:41
Measuring Conceptual Similarity in Ontologies: How Bad is a Cheap Measure?

ABSTRACT. Several attempts have been made to develop similarity measures for ontologies. Motivated by problems we found in existing measures, we design a new family of measures to address these problems. We carry out two sets of empirical studies. Firstly, we explore how good the new measures are by comparing them to human judgments of similarity. Secondly, we investigate how likely it is to encounter specific task-oriented problems when using a bad similarity measure.

15:44
Mary, What's Like All Cats?
SPEAKER: unknown

ABSTRACT. In Description Logic (DL) knowledge bases (KBs), information is typically captured by crisp concepts. For many applications, querying the KB by crisp query concepts is too restrictive; a controlled way of gradually relaxing a query concept can be achieved by the use of concept similarity. We formalize the task of instance query answering for crisp DL KBs using concepts relaxed by concept similarity measures (CSM). For the DL EL we investigate computation algorithms for this task, their complexity, and the properties required of the employed CSM, both for unfoldable and for general TBoxes.

14:30-16:00 Session 96AX: Contributed Talks: Proof Search and Proof Relevance (PSC)
Location: FH, CAD 1
14:30
Proof-search in natural deduction calculi for IPL

ABSTRACT. We address the problem of proof-search in the natural deduction calculus Ni for intuitionistic propositional logic (IPL). Our aim is to improve the usual naïve proof-search procedure where introduction rules are applied upwards and elimination rules downwards. In particular, we introduce Nbu, a sequent-style variant of Ni, and we show that its derivations can be reduced to $\theta$-NF, a normal form (stronger than the usual one) where applications of $\lor E$ and $\to I$ satisfy additional conditions involving an evaluation relation $\theta$. This limits the application of such rules in proof-search and restricts the search space.

15:15
A joint logic of problems and propositions, a modified BHK-interpretation and proof-relevant topological models of intuitionistic logic

ABSTRACT. In a 1985 commentary to his collected works, Kolmogorov remarked that his 1932 paper ("Zur Deutung der intuitionistischen Logik") "was written in hope that with time, the logic of solution of problems will become a permanent part of [a standard] course of logic. Creation of a unified logical apparatus dealing with objects of two types - propositions and problems - was intended." We describe such a formal system, QHC, which is a conservative extension of both the intuitionistic predicate calculus, QH, and the classical predicate calculus, QC. Moreover:

- The only new connectives ? and ! of QHC induce a Galois connection (i.e., a pair of adjoint functors) between the Lindenbaum algebras of QH and QC, regarded as posets.

- Kolmogorov's double negation translation of propositions into problems extends to a retraction of QHC onto QH.

- Gödel's provability translation of problems into modal propositions extends to a retraction of QHC onto its QC+(?!) fragment, which can be identified with the modal logic QS4.

This leads to a new paradigm that views intuitionistic logic not as an alternative to classical logic that criminalizes some of its principles, but as an extension package that upgrades classical logic without removing it. The purpose of the upgrade is proof-relevance, or "categorification"; thus, if the classical conjunction and disjunction are to be modeled by intersection and union (of sets), the intuitionistic conjunction and disjunction will be modeled by product and disjoint union (of sheaves of sets). More formally, we describe a family of sheaf-valued models of QH, inspired by dependent type theory (not to be confused with the well-known open set-valued sheaf models of QH), which can be regarded as proof-relevant analogues of classical topological models of Tarski et al., and which extend to models of QHC. We also give an interpretation of some of these models in terms of stable solutions of algebraic equations; this can be seen as a proof-relevant counterpart of Novikov's classical interpretation of some Tarski models in terms of weighting of masses.

The new paradigm also suggests a rethink of the Brouwer-Heyting-Kolmogorov interpretation of QH. Traditional ways to understand intuitionistic logic (semantically) have been rooted either in philosophy - with emphasis on the development of knowledge (Brouwer, Heyting, Kripke) - or in computer science - with emphasis on effective operations (Kleene, Markov, Martin-Löf). Our "modified BHK interpretation" is rooted in the usual mathematical ideas of canonicity and stability; it emphasizes simply the order of quantifiers, developing Kolmogorov's original approach. This interpretation is compatible with two complete classes of models of QH: (i) Tarski topological models and (ii) set-theoretic models of Medvedev-Skvortsov and Läuchli; as well as with (iii) our sheaf-valued models.
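
For reference, a Galois connection between two posets $P$ and $Q$ (here the Lindenbaum algebras of QH and QC) is a pair of monotone maps $f : P \to Q$ and $g : Q \to P$ such that

\[ f(p) \le_Q q \iff p \le_P g(q) \]

for all $p \in P$ and $q \in Q$; which of ? and ! plays the left-adjoint role is fixed by the calculus and not prejudged in this summary.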

14:30-16:00 Session 96AY: iPrA Regular Talks (iPRA)
Location: FH, Seminarraum 101A
14:30
Using Interpolation for the Verification of Security Protocols (Extended Abstract)
SPEAKER: Luca Viganò

ABSTRACT. Interpolation has been successfully applied in formal methods for model checking and test-case generation for sequential programs. Security protocols, however, exhibit such idiosyncrasies that make them unsuitable to the direct application of such methods. In this paper, we address this problem and present an interpolation-based method for security protocol verification. Our method starts from a formal protocol specification and combines Craig interpolation, symbolic execution and the standard Dolev-Yao intruder model to search for possible attacks on the protocol. Interpolants are generated as a response to search failure in order to prune possible useless traces and speed up the exploration. We illustrate an implementation of our method and its application to concrete examples.

15:00
Interpolation Strategies

ABSTRACT. We consider some of the issues and trade-offs in computing useful interpolants in practice, including questions of proof search strategy, proof translation vs. local proof and interpolant optimization. These issues are explored empirically using the Duality fixed point engine as a platform and benchmark problems from software model checking.

15:30
Towards Craig Interpolation for String Constraints
SPEAKER: unknown

ABSTRACT. The problem of reasoning automatically about string formulae, containing constraints such as string/word equations, regular language membership, or length restrictions, has increasingly received attention over the past years. Much of the existing work has concentrated on checking satisfiability of string formulae, by means of translation to bit-vector formulae or analysis of finite automata representations, among others. We report on ongoing research on Craig interpolation for string formulae, done in the context of work presented in our CAV'14 paper.

14:30-16:00 Session 96AZ: Technical Session (FCS-FCC)
Location: MB, Hörsaal 7
14:30
Adversary Gain vs. Defender Loss in Quantified Information Flow

ABSTRACT. Metrics for quantifying information leakage assume that an adversary's gain is the defender's loss. We demonstrate, via a class of scenarios, that this assumption does not always hold. We describe how to extend quantification to account for a defender with goals distinct from adversary failure. We implement the extension and experimentally explore the impact on the measured information leakage of the motivating scenario.
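
One standard instance of the metrics alluded to (our choice of example) is min-entropy leakage, defined for a secret $X$ and an observable $Y$ by

\[ \mathcal{L}(X;Y) = H_\infty(X) - H_\infty(X \mid Y), \quad H_\infty(X) = -\log_2 \max_x p(x), \quad H_\infty(X \mid Y) = -\log_2 \sum_y \max_x p(x,y) \]

The quantity measures the increase in the adversary's single-guess success probability; it thus silently equates the adversary's gain with the defender's loss, which is exactly the assumption examined here.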

15:00
Generalizing Permissive-Upgrade in Dynamic Information Flow Analysis
SPEAKER: unknown

ABSTRACT. Preventing implicit information flows by dynamic program analysis requires coarse approximations that result in false positives, because a dynamic monitor sees only the executed trace of the program. One widely deployed method is the no-sensitive-upgrade check, which terminates a program whenever a variable's taint is upgraded (made more sensitive) due to a control dependence. Although sound, this method is impermissive, e.g., it terminates the program even if the upgraded variable is never used subsequently. To counter this, Austin and Flanagan introduced the permissive-upgrade check, which allows a variable upgrade due to control dependence, but marks the variable "partially-leaked". The program is stopped later if it tries to use (case-analyze) the partially-leaked variable. Permissive-upgrade handles the dead-variable assignment problem and remains sound. However, Austin and Flanagan develop permissive-upgrade only for a two-point (low-high) security lattice and indicate a generalization to pointwise products of such lattices. In this paper, we develop a non-trivial and non-obvious generalization of permissive-upgrade to arbitrary lattices. The key difficulty lies in finding a suitable notion of partial leaks that is both sound and permissive and in developing a suitable definition of memory equivalence that allows an inductive proof of soundness.
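
A toy Haskell rendering of the two checks on the two-point lattice (labels and rule shapes are a simplified illustration, not the paper's formal semantics):

    data Label = Low | High | PartialLeak deriving (Eq, Show)

    lub :: Label -> Label -> Label
    lub Low l = l
    lub l Low = l
    lub _ _   = High

    -- Assignment under program-counter label pc to a variable whose current
    -- label is old, of an expression labelled e; returns the variable's new label.
    assignNSU, assignPU :: Label -> Label -> Label -> Either String Label
    assignNSU pc old e
      | pc == High && old == Low = Left "halt: sensitive upgrade"  -- NSU stops here
      | otherwise                = Right (pc `lub` e)
    assignPU pc old e
      | pc == High && old == Low = Right PartialLeak               -- allowed, but marked
      | otherwise                = Right (pc `lub` e)

    -- Permissive-upgrade halts only if a partially-leaked value is later
    -- case-analysed; a run that never uses it is accepted.
    branchOn :: Label -> Either String ()
    branchOn PartialLeak = Left "halt: case analysis of partially-leaked value"
    branchOn _           = Right ()

As the abstract notes, the hard part of the generalization is finding a notion of partial leak that remains sound and permissive once the lattice is no longer two-point.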

15:30
Micro-Policies: Formally Verified Low-Level Tagging Schemes for Safety and Security (Short Paper)

ABSTRACT. Today's computer systems are distressingly insecure. A host of vulnerabilities arise from the violation of known, but in-practice unenforceable, safety and security policies, including high-level programming models and critical invariants of low-level programs. This project is aimed at showing that a rich and valuable set of _micro-policies_ can be efficiently enforced by a new generic hardware-software mechanism and result in more secure and robust computer systems. In particular we aim to come up with a clean formal framework in Coq for expressing, composing, and verifying arbitrary micro-policies, and to instantiate this framework on a diverse set of interesting examples.

14:30-15:30 Session 96BA: Invited talk (PCCR)
Location: FH, Hörsaal 3
14:30
Structural Decomposition Methods: How They Matter
SPEAKER: Georg Gottlob

ABSTRACT. This talk is based on the published paper [Aschinger, Drescher, Gottlob, Jeavons, and Thorstensen, STACS 2011]. We review structural problem decomposition methods, such as tree and path decompositions. It is argued that these notions can be applied in two distinct ways: Either to show that a problem is efficiently solvable when a width parameter is fixed, or to prove that the unrestricted (or some width-parameter free) version of a problem is tractable by using a width-notion as a mathematical tool for directly solving the problem at hand. Examples are given for both cases, and the refined taxonomy below is illustrated. When we speak about the treewidth of a problem instance, we mean the treewidth of some graph associated with the instance. Obviously, for each concrete problem, one has to indicate what this graph is, and, whenever necessary, how it can be obtained from the instance.

Taxonomy:
1. Proving a problem tractable for instances of bounded treewidth
  1.a General tractability results
  1.b Fixed-parameter tractability results
2. Using treewidth as a tool for proving a problem to be generally tractable (on all instances)
  2.a Proving that a problem is tractable both for small and large treewidth
  2.b Proving that all yes-instances of a problem have bounded treewidth and that the problem is tractable for this reason

14:30-16:00 Session 96BB: Security and Human Behaviour (STAST)
Location: MB, Seminarraum 212/232
14:30
The Social Engineering Personality Framework
SPEAKER: unknown

ABSTRACT. We explore ICT security in a socio-technical world and focus in particular on the susceptibility to social engineering attacks. We pursue the question of whether and how personality traits influence this susceptibility. We use Cialdini's principles of influence to categorise social engineering attacks. With a comprehensive literature review we first show how existing research approaches social engineering susceptibility.

Based on this review we construct suggestions for plausible relations between personality traits of the Five-Factor Model (Big 5) and the principles of influence. We propose our – at this stage theory-based – "Social Engineering Personality Framework" (SEPF), which we will evaluate in future empirical research. The characteristics of victims' personality traits in the SEPF will support and guide security researchers and practitioners in developing detection, mitigation, and prevention strategies while dealing with human factors in social engineering attacks.

15:15
Modeling Human Behaviour with Higher Order Logic: Insider Threats

ABSTRACT. In this paper, we approach the problem of modeling the human component in technical systems with a view to the difference between the use of model and theory in sociology and computer science. One aim of this essay is to show that the building of theories and models for sociology can be compared to, and implemented in, Higher Order Logic. We validate this working hypothesis by revisiting Weber's understanding explanation. We focus on constructive realism in the context of logical explanation. We review Higher Order Logic (HOL) as a foundation for computer science and summarize its use of theories, relating it to the sociological process of logical explanation. As a case study on modeling human behaviour, we present the modeling and analysis of insider threats as a Higher Order Logic theory in Isabelle/HOL. We show how each step of the three-step process of sociological explanation can be seen in our modeling of the insider's state, its context within an organisation, and the effects on security as outcomes of a theorem proving analysis.

14:30-16:00 Session 96BC: Dynamic and Modal Logic (ADDCT)
Location: FH, Zeichensaal 1
14:30
Decidability of Iteration-free PDL with Parallel Composition
SPEAKER: Joseph Boudou

ABSTRACT. PRSPDL is a highly undecidable propositional dynamic logic with an operator for parallel composition of programs. This operator has a separation semantics such that a multiplicative conjunction similar to the one found in the logic of Boolean bunched implications is definable. The present work identifies an iteration-free decidable fragment of PRSPDL in which the multiplicative conjunction is still definable. A NEXPTIME complexity upper bound for the fragment is given.

15:00
Axiomatic and Tableau-Based Reasoning for Kt(H,R)

ABSTRACT. We introduce a tense logic, called Kt(H,R), arising from logics for spatial reasoning.  Kt(H,R) is a multi-modal logic with two modalities and their converses defined with respect to a pre-order and a relation stable over this pre-order.  We show that Kt(H,R) is decidable, that it has the effective finite model property, and that reasoning in Kt(H,R) is PSPACE-complete.  Two complete Hilbert-style axiomatisations are given. The main focus of the paper is tableau-based reasoning.  Our aim is to gain insight into the numerous possibilities of defining tableau calculi and their properties.  We present several labelled tableau calculi for Kt(H,R) in which the theory rules range from accommodating correspondence properties closely, to accommodating Hilbert axioms closely.  The calculi provide the basis for decision procedures that have been implemented and tested on modal and intuitionistic problems. Though the focus is on Kt(H,R), the techniques and ideas used in this research are of a general nature and provide a useful methodology for developing practical decision procedures for modal logics, which we will discuss.

The paper is published in "Advances in Modal Logic", Volume 10, 2014.

15:30
On dual tableau-based decision procedures for relational fragments
SPEAKER: unknown

ABSTRACT. We give a brief survey of the main results on dual tableau-based decision procedures for fragments of the logic of binary relations, and we outline our current work on the design of relational decision procedures for logics for order-of-magnitude reasoning. We then briefly hint at some further research plans on other relational decision procedures.

14:30-16:00 Session 96BD: Special Session: Decidability with Numerical Methods (NSV)
Location: FH, Hörsaal 2
14:30
Epsilon-semantics and hybrid automata

ABSTRACT. Hybrid automata are mathematical models designed to describe systems characterized by mixed discrete-continuous behaviours.

Standard semantics defined over hybrid automata implicitly assume perfect knowledge of the observed systems and infinite precision measurements. In the biological context these assumptions are unrealistic and often lead to unfaithful models.

From these observations, we introduce a family of more flexible semantics, the ε-semantics, able to handle noise, partial information, and finite precision.


In this presentation we study the relationship between ε-semantics and standard semantics over first order formulae used to construct logic-based hybrid automata. Under this framework, we present a decidability result concerning the reachability problem. Finally, we consider some biological models and show the qualitative benefits brought by the ε-semantics application.

15:00
Verifying Floating-Point Implementations of Nonlinear Functions
SPEAKER: Sicun Gao

ABSTRACT. We are developing a new approach for debugging and verifying floating-point implementations of nonlinear functions. Our method combines testing, numerical analysis, and delta-complete SMT solving using dReal. We generate point inputs to obtain preliminary coverage, and then generalize them to an automatically chosen neighborhood based on static analysis. In this way we often obtain full coverage of bounded intervals of interest in a reasonable amount of time. Applying these techniques to the current version of the Embedded GNU C library, we have found serious bugs in the implementations of elementary functions, as well as intervals where their correctness is verified.

15:30
Correctness of Parallel Interval Computations

ABSTRACT. Interval computations are a means to guarantee a range enclosing a numerical result computed with finite-precision numbers. With a parallel implementation, the non-determinism of the execution order, which hinders the reproducibility of numerical results from one run to the next, may also affect the correctness of an interval algorithm. In addition, implementations of interval algorithms depend on the respect of the rounding mode by the software stack, from the compiler to the thread manager. The talk illustrates these pitfalls with the parallel implementation of interval matrix multiplications.
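
A minimal sketch (our own illustration) of the pitfall: naive interval multiplication is sound only under directed rounding, and it is exactly this reliance on the rounding mode that the software stack may silently break.

    -- Naive interval product; with round-to-nearest this does NOT guarantee
    -- enclosure.  A faithful version computes `minimum ps` rounding downward
    -- and `maximum ps` rounding upward (directed rounding).
    data Interval = I { lo :: Double, hi :: Double } deriving Show

    imul :: Interval -> Interval -> Interval
    imul (I a b) (I c d) = I (minimum ps) (maximum ps)
      where ps = [a*c, a*d, b*c, b*d]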

14:30-16:00 Session 96BE (QED)
Location: FH, Hörsaal 4
14:30
TBA
15:00
Mixing proofs and computations

ABSTRACT. In order to achieve the goals of QED, it will be necessary to effectively combine calculations and proofs in the same formal and computerized representation. We discuss the obstacles to this combination, and some of the attempts to overcome those obstacles.

15:30
The Naproche system: Proof-checking mathematical texts in controlled natural language
SPEAKER: Marcos Cramer

ABSTRACT. We have developed a controlled natural language (CNL) – i.e. a subset of a natural language defined through a formal grammar – for writing mathematical texts. The Naproche system can check the correctness of mathematical proofs written in this CNL. In this talk we will highlight some interesting logical and linguistic problems that had to be solved in the development of the Naproche CNL and the Naproche system: The system uses a formalism from the linguistic theory of presuppositions in order to work with potentially undefined terms, and can handle dynamic quantification, including the phenomenon of implicit dynamic function introduction, which had previously not been discussed in the literature.

14:30-16:00 Session 96BF: Special session: Perspectives on Induction (CSL-LICS and LC)
Location: MB, Prechtlsaal
14:30
Weak well orders and related inductions

ABSTRACT. It is an interesting program to investigate the relationship between the proof theory of second order arithmetic and more general second order systems (e.g. theories of sets and classes such as von Neumann-Bernays-Gödel set theory and Morse-Kelley set theory). Which proof-theoretic results can be lifted from second order arithmetic to theories of sets and classes, for which is this not the case, and what are the reasons? What is specific of second order number theory and what additional insights can we gain?

One of the crucial questions is how to distinguish between "small" and "large" in the various contexts. In second order arithmetic, the small objects are the natural numbers whereas the large objects are the infinite sets of natural numbers. Hence it seems natural to identify the small objects in sets and classes with sets and the large objects with proper classes.

As long as only comparatively weak systems are concerned, moving up from second order arithmetic to sets and classes seems to be a matter of routine. However, as soon as well orderings enter the picture, the situation becomes interesting. In second order arithmetic, every $\Pi^1_1$ statement is equivalent to the question whether a specific arithmetic relation is well ordered; on the other hand, in set theory a simple elementary formula expresses the well-foundedness of a given relation.

We propose studying the (new) notion of weak well order in sets and classes as the proof-theoretically adequate analogue of well order in second order arithmetic. To support this claim several results about inductions and recursions in connection with weak well orders will be presented. This is joint work with D. Flumini.

15:15
Automating inductive proof
SPEAKER: Alan Bundy

ABSTRACT. We discuss the automation of inductive proof.

14:30-16:00 Session 96BG: Invited talk by Josep Silva (VPT and WWV)
Location: FH, Seminarraum 107
14:30
Automatic Detection of Webpage Candidates for Site-Level Web Template Extraction
SPEAKER: Josep Silva
14:30-16:00 Session 96BH: Contributed Talks: Argumentation 1 (NMR)
Location: EI, EI 9
14:30
Analysis of Dialogical Argumentation via Finite State Machines

ABSTRACT. Dialogical argumentation is an important cognitive activity by which agents exchange arguments and counterarguments as part of some process such as discussion, debate, persuasion and negotiation. Whilst numerous formal systems have been proposed, there is a lack of frameworks for implementing and evaluating these proposals. First-order executable logic has been proposed as a general framework for specifying and analysing dialogical argumentation. In this paper, we investigate how we can implement systems for dialogical argumentation using propositional executable logic. Our approach is to present and evaluate an algorithm that generates a finite state machine reflecting a propositional executable logic specification for a dialogical argumentation together with an initial state. We also consider how the finite state machines can be analysed, with the minimax strategy used as an illustration of the kinds of empirical analysis that can be undertaken.
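
As a toy illustration of the minimax analysis mentioned above (the states, moves, and payoffs below are invented; in the paper the machine is generated from a propositional executable logic specification):

```python
# Sketch only: minimax over a small finite state machine of dialogue states.
from functools import lru_cache

MOVES = {                      # state -> successor states
    "s0": ("s1", "s2"),
    "s1": ("s3",),
    "s2": ("s3", "s4"),
    "s3": (), "s4": (),        # terminal states
}
PAYOFF = {"s3": 1, "s4": -1}   # +1: proponent wins, -1: opponent wins

@lru_cache(maxsize=None)
def minimax(state: str, proponents_turn: bool) -> int:
    if not MOVES[state]:
        return PAYOFF[state]
    values = [minimax(s, not proponents_turn) for s in MOVES[state]]
    return max(values) if proponents_turn else min(values)

print(minimax("s0", True))     # 1: the proponent has a winning strategy
```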

15:00
Abduction in Argumentation: Dialogical Proof Procedures and Instantiation
SPEAKER: unknown

ABSTRACT. We develop a model of abduction in abstract argumentation, where changes to an argumentation framework act as hypotheses to explain the support of an observation. We present dialogical proof theories for the main decision problems (i.e., finding hypotheses that explain skeptical/credulous support) and we show that our model can be instantiated on the basis of abductive logic programs.

15:30
Non-Monotonic Reasoning and Story Comprehension
SPEAKER: unknown

ABSTRACT. This paper develops a Reasoning about Actions and Change framework integrated with Default Reasoning, suitable as a Knowledge Representation and Reasoning framework for Story Comprehension. The proposed framework, which is guided strongly by existing know-how from the Psychology of Reading and Comprehension, is based on the theory of argumentation from AI. It uses argumentation to capture appropriate solutions to the frame, ramification and qualification problems, and to generalizations of these problems required for text comprehension. In this first part of the study, the work concentrates on the central problem of integrating (or elaborating) the explicit information from the narrative in the text with the implicit (in the reader's mind) common-sense world knowledge pertaining to the topic(s) of the story given in the text. We also report on our empirical efforts to gather the background common-sense world knowledge used by humans when reading a story, and to evaluate, through a prototype system, the ability of our approach to capture both the majority and the variability of understanding of a story by the human readers in the experiments.

14:30-16:00 Session 96BI: Abstract Interpretation and Linear Logic (HOPA)
Location: FH, Seminarraum 138C
14:30
Environment Unrolling
SPEAKER: unknown

ABSTRACT. We propose a new way of thinking about abstract interpretation with a method we term environment unrolling. Model checkers for imperative languages often apply loop unrolling to make their state space finite in the presence of loops and recursion. We propose handling environments in a similar fashion by putting a bound on the number of environments to which a given closure can transitively refer. We show how this idea relates to a standard model of abstract interpretation, give a general overview of its soundness proof with respect to a concrete semantics, and show empirical results demonstrating the effectiveness of our approach.

15:00
Strong Function Call
SPEAKER: unknown

ABSTRACT. This work presents an incremental improvement to abstract interpretation of higher order languages, similar to strong update, which we term strong function call. In an abstract interpretation, after a function call, we know which abstract closure was called and can deduce that any other values found at the same abstract address in the abstract store could not possibly exist in the corresponding concrete state. Thus they can be removed from the abstract store without loss of soundness. We provide the intuition behind this analysis along with a general overview of its soundness proof.
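
A schematic sketch of our reading of this idea (not the authors' code): the abstract store maps abstract addresses to sets of abstract values, and the refinement empties out everything except the closure the analysis just committed to.

```python
# Sketch only: weak updates accumulate values; a "strong call" drops the
# other closures at the callee's abstract address, which is sound exactly
# because the abstract semantics branched on that closure being the one
# invoked at this call site.
def weak_update(store: dict, addr: str, value: str) -> None:
    store.setdefault(addr, set()).add(value)

def strong_call(store: dict, addr: str, called: str) -> None:
    store[addr] = {called}

store: dict = {}
weak_update(store, "f", "clo1")
weak_update(store, "f", "clo2")
strong_call(store, "f", "clo1")   # analysis branch where clo1 was called
print(store)                      # {'f': {'clo1'}}
```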

15:30
The modal nature of colors in higher-order model-checking
SPEAKER: unknown

ABSTRACT. A careful investigation of the work by Naoki Kobayashi and Luke Ong reveals that priorities (or colors) behave in essentially the same way as the exponential modality of linear logic. From this follows a purely proof-theoretic reconstruction of their type system, based on an indexed and infinitary variant of tensorial logic. In this talk, we will describe the basic properties of the color modalities, and how to translate a minor variant of Kobayashi and Ong's type system into tensorial logic with colors.

This is a 30-minute talk.

15:30-16:00 Session 97: Contributed talk (PCCR)
Location: FH, Hörsaal 3
15:30
Backdoors into Heterogeneous Classes of SAT and CSP

ABSTRACT. Backdoor sets represent clever reasoning shortcuts through the search space for SAT and CSP. By instantiating the backdoor variables one reduces the given instance to several easy instances that belong to a tractable class. The overall time needed to solve the instance is exponential in the size of the backdoor set; hence it is a challenging problem to find a small backdoor set if one exists, and in recent years this problem has been the subject of intensive research.

In this paper we extend the classical notion of a strong backdoor set by allowing different instantiations of the backdoor variables to result in instances that belong to different base classes; the union of the base classes forms a heterogeneous base class. Backdoor sets into heterogeneous base classes can be much smaller than backdoor sets into homogeneous ones; hence they are much more desirable, but possibly harder to find.

We draw a detailed complexity landscape for the problem of detecting strong backdoor sets into heterogeneous base classes for SAT and CSP. We provide algorithms that establish fixed-parameter tractability under natural parameterizations, and we contrast the tractability results with hardness results that pinpoint the theoretical limits.

Our results apply to the current state-of-the-art of tractable classes of CSP and SAT that are definable by restricting the constraint language.
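
As an illustration of the extended notion (checking a candidate set only; the paper's contribution concerns the complexity of detecting such sets), here is a small Python sketch for the heterogeneous base class formed by Horn and 2-CNF; the encoding of clauses as sets of integer literals is our own choice.

```python
# Sketch only: B is a strong backdoor into Horn-union-2CNF iff every
# instantiation of B reduces the formula into *some* base class.
from itertools import product

def reduce_cnf(clauses, assignment):
    """Remove satisfied clauses and falsified literals."""
    out = []
    for c in clauses:
        if any(l in assignment for l in c):
            continue
        out.append({l for l in c if -l not in assignment})
    return out

def is_horn(clauses):
    return all(sum(l > 0 for l in c) <= 1 for c in clauses)

def is_2cnf(clauses):
    return all(len(c) <= 2 for c in clauses)

def is_strong_heterogeneous_backdoor(clauses, b_vars):
    for signs in product((1, -1), repeat=len(b_vars)):
        assignment = {s * v for s, v in zip(signs, b_vars)}
        reduced = reduce_cnf(clauses, assignment)
        if not (is_horn(reduced) or is_2cnf(reduced)):
            return False
    return True

cnf = [{1, 2, 3}, {-1, 2, 3}, {-2, -3}]
print(is_strong_heterogeneous_backdoor(cnf, [1]))   # True
```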

16:00-16:30Coffee Break
16:30-18:00 Session 99AA: New IDEs & Coq 8.5 update (Coq)
Location: MB, Hörsaal 14A
16:30
Asynchronous interaction for Coq
SPEAKER: Carst Tankink

ABSTRACT. In this talk I will discuss the new jEdit editor integration for Coq, that supports asynchronous interaction with the proof assistant.

17:00
Coqoon: towards a modern IDE for Coq

ABSTRACT. Users of modern IDEs expect sophisticated features like automatic dependency resolution, background recompilation, content-aware search features (such as "show me where this is defined"), automatic indentation, syntax highlighting and code completion. Coqoon, which is built on top of the Eclipse platform, provides such an IDE for Coq.

17:30
Update on Coq 8.5
16:30-18:00 Session 99AB: Tutorial: Hardware Model Checking (CAV)
Location: FH, Hörsaal 6
16:30
Hardware Model Checking (Part B)
SPEAKER: Fabio Somenzi
16:30-17:00 Session 99AC: Test-of-Time Award Presentations and Lectures (CSL-LICS)
Location: FH, Hörsaal 1
16:30
Citations for the Test-of-Time Award from 1994
SPEAKER: Dexter Kozen
16:30-18:35 Session 99AD: Summer School on Computational Logic: Answer Set Programming II (ICLP)
Location: Seminarraum 1/3 OPG
16:30
Answer Set Programming II
SPEAKER: Martin Gebser
18:30
Summer School Closing
SPEAKER: Gopal Gupta
16:30-17:30 Session 99AE (ITRS)
Location: MB, Zeichensaal 14
16:30
Semantic Types for Classes and Mixins
SPEAKER: unknown

ABSTRACT. We consider a formalization of mixin composition and class linearization yielding a class in mixin-based inheritance. We provide an interpretation of the mixin calculus into a lambda-calculus extended with records and a record merge operator. After extending BCD intersection type assignment to this calculus, showing that types are preserved by subject expansion and reduction, we naturally interpret mixin terms as the sets of types that can be deduced for their translations. It turns out that the class obtained from a composition of mixins and the composition itself have the same logical meaning.

17:00
Delegation-based Mixin Composition Synthesis

ABSTRACT. In the object-oriented world, existing inheritance concepts have been extended with mixins to assist modular software design. In practice, however, most languages have only limited mixin support. We use staged computation, as proposed by Davies and Pfenning, together with a specification following an interface-based design pattern, to introduce mixins to Java 8. We present a synthesis-based approach for mixin composition guided by user-annotated semantics translated to intersection types. Synthesis results in meaningful Java code utilizing delegation to augment existing classes with mixin features.

16:30-18:00 Session 99AF: Special session: Philosophy of Mathematics (LC)
Location: MB, Hörsaal 14
16:30
Restrictiveness relative to notions of interpretation

ABSTRACT. In [4], Maddy gives a semi-formal account of restrictiveness by defining a corresponding formal notion based on a class of interpretations. In [2] and [3], Maddy's notion of restrictiveness was discussed and the theory ZF + `Every uncountable cardinal is singular' was presented as a potential witness to the restrictiveness of ZFC. More recently, Hamkins has given more examples and pointed out some structural issues with Maddy's definition [1]. We look at Maddy's definitions from the point of view of an abstract interpretation relation. We consider various candidates for this interpretation relation, including one that is close to Maddy's original notion, but fixes the issues raised in [1]. Our work brings to light additional structural issues that we also discuss.

17:15
Reflection, Trust, Entitlement
SPEAKER: Leon Horsten

ABSTRACT. It has been argued by Feferman and others that when we accept a mathematical theory, we implicitly commit ourselves to reflection principles for this theory. When we reflect on this implicit commitment, we come to explicitly believe certain reflection principles. In my talk I will discuss our epistemic warrant for this resulting explicit belief in reflection principles.

16:30-18:00 Session 99AG: Special session: Model Theory (LC)
Location: MB, Seminarraum 225
16:30
Definable valuation rings
17:15
The Fitting subgroup of a supersimple group
16:30-18:00 Session 99AH: Discussion and Business Meeting (WST)
Location: FH, Seminarraum 101B
16:30
Discussion and Business Meeting
SPEAKER: Everyone
16:30-18:00 Session 99AI: Satisfiability Modulo Theories (QUANTIFY)
Location: FH, Seminarraum 136
16:30
Efficiently Solving Quantified Bit-Vectors

ABSTRACT. There is great interest in the use of symbolic reasoning for synthesising loop invariants, ranking functions, or whole program fragments and hardware circuits. Solvers for the quantifier-free fragment of bit-vector logic exist and often rely on SAT solvers for efficiency. However, many techniques require quantifiers in bit-vector formulas to avoid an exponential blow-up during construction and QBF solvers are often inefficient. I will present an approach based on a set of effective word-level simplifications that are traditionally employed in automated theorem proving, heuristic quantifier instantiation methods used in SMT solvers, and model finding techniques based on function templates. I will also go into why we got interested in this problem in the first place and where we would like to take it from here.
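
For a flavour of the problem class, here is a tiny example using the z3py API (Z3 implements techniques in this spirit; the particular formula is our own invention, and this is not the talk's implementation):

```python
# Sketch only: synthesize a bit-mask constant c such that x & c computes
# x mod 16 for every 32-bit x -- a quantified bit-vector query.
from z3 import BitVec, BitVecVal, ForAll, Solver, URem, sat

x = BitVec('x', 32)
c = BitVec('c', 32)

s = Solver()
s.add(ForAll([x], (x & c) == URem(x, BitVecVal(16, 32))))
if s.check() == sat:
    print(s.model()[c])   # expected: 15 (0x0000000f)
```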

17:00
Quantifier Projection

ABSTRACT. We take as our starting point the routines for quantifier reasoning in the SMT solver Z3 and analyze these through the lens of quantifier projection: quantifiers are specialized or partially instantiated using information about possible feasible interpretations that is collected during search.

17:30
Finding Conflicting Instances of Quantified Formulas in SMT

ABSTRACT. Modern SMT solvers primarily rely on heuristic methods such as E-matching for answering "unsatisfiable" in the presence of quantified formulas. The success of these solvers is often hindered by an overabundance of instantiations produced by these heuristics, making it difficult for the solver to continue its operation. In this paper, we introduce new techniques that alleviate this shortcoming by first discovering instantiations that are in conflict with the current state of the solver. In these techniques, the solver only resorts to heuristic methods when such an instantiation cannot be found, thus decreasing its dependence upon E-matching. Our experimental results show that our technique significantly reduces the number of instantiations required by the SMT solver for answering "unsatisfiable" on several benchmark libraries, and consequently leads to improvements over state-of-the-art implementations.

16:30-18:00 Session 99AJ (AISS)
Location: FH, Seminarraum 134
16:30
Simulation for infinite state systems
SPEAKER: Piotr Hofman

ABSTRACT. The abstract is in the PDF file. My submission is quite unusual, as I propose two closely related topics. I hope that one of them will suit this year's AISS programme.

17:30
Simulation Over One-Counter Nets is PSPACE-Complete

ABSTRACT. One-counter nets (OCN) are Petri nets with exactly one unbounded place. They are equivalent to a subclass of one-counter automata with just a weak test for zero. Unlike many other semantic equivalences, strong and weak simulation preorder are decidable for OCN, but the computational complexity was an open problem. We show that both strong and weak simulation preorder on OCN are PSPACE-complete.

16:30-18:00 Session 99AK: Implementation Issues (CHR and CICLOPS-WLPE)
Location: FH, Hörsaal 7
16:30
Intelligent Visual Surveillance Logic Programming: Implementation Issues

ABSTRACT. The main idea of the logic programming approach to intelligent video surveillance is to use first-order logic for describing complex events and abstract concepts such as anomalous human activity, e.g. brawls, sudden attacks, armed attacks, left objects, loitering, pickpocketing, personal theft, immobile persons, etc. We consider the main implementation issues of our approach to intelligent video surveillance logic programming: object-oriented logic programming of concurrent stages of video processing, translating video surveillance logic programs to fast Java code, and embedding low-level means for video storage and processing into the logic programming system.

17:00
A Parallel Virtual Machine for Executing Forward-Chaining Linear Logic Programs
SPEAKER: Flavio Cruz

ABSTRACT. Linear Meld is a concurrent forward-chaining linear logic programming language where logical facts can be asserted and retracted in a structured way. The database of facts is partitioned among the nodes of a graph structure, which leads to parallelism if nodes are executed simultaneously. Communication arises whenever nodes send facts to other nodes by fact derivation. We present an overview of the virtual machine that we implemented to run Linear Meld on multicores, including code organization, thread management, rule execution, and database organization for efficient fact insertion, lookup and deletion. Although our virtual machine is a work in progress, our results already show that Linear Meld is not only capable of scaling graph and machine learning programs but also exhibits interesting performance when compared against other programming languages.

17:30
Discussion
SPEAKER: Everyone

ABSTRACT. We have additional discussion time at the end of the workshop.

16:30-18:10 Session 99AM: Program Verification (PAS)
Location: FH, Seminarraum 138A
16:30
A hybrid path constraint solver combined with meta-heuristic search
SPEAKER: unknown

ABSTRACT. Constraint solving is one of the essential parts of symbolic execution. Most constraint solvers are aimed at a particular kind of constraint and may have no way to deal with the complex path constraints derived from real-world applications. To tackle this problem, we propose a hybrid solving strategy for path constraints containing multiple kinds of constraints. In particular, a split approach is used to cut the whole path constraint into several parts that can be solved by different constraint solving techniques. Then, local meta-heuristic search is used to solve the non-linear constraints, guided by the solutions produced by a linear constraint solver. We have extended the constraint solver in the symbolic execution engine KLEE-FP with our approach and evaluated its effectiveness on a set of mathematical programs. The results show that our approach can solve a majority of complex path constraints in reasonable time.
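
A rough Python sketch of the splitting idea, under our own simplifications (a real engine would use an LP solver for the linear part and a stronger meta-heuristic for the residual; the constraints below are invented):

```python
# Sketch only: linear conjuncts define the feasible box; a local search
# then minimizes the violation of the remaining non-linear conjunct.
import math
import random

linear = [lambda v: 1.0 <= v["x"] <= 10.0]          # handled exactly

def violation(v):                                   # non-linear residual:
    return max(0.0, 0.99 - math.sin(v["x"]))        # want sin(x) >= 0.99

def local_search(seed, steps=5000, step_size=0.5):
    random.seed(0)
    best = dict(seed)
    for _ in range(steps):
        cand = {"x": best["x"] + random.uniform(-step_size, step_size)}
        if all(c(cand) for c in linear) and violation(cand) < violation(best):
            best = cand
        if violation(best) == 0.0:
            return best                             # constraint satisfied
    return None                                     # give up

print(local_search({"x": 1.0}))                     # some x near pi/2
```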

17:00
Automatic App Testing of LTL Properties
SPEAKER: unknown

ABSTRACT. We present an automatic testing tool for Android apps. The tool accepts specifications in an extension of LTL (Linear Temporal Logic) and then automatically crawls the screen activities. We report experiments with the random test plan.

17:30
Programming Language Aggregation with Applications in Equivalence Checking
SPEAKER: unknown

ABSTRACT. We show that, given the operational semantics of two programming languages L and R, it is possible to construct an aggregate language in which programs consist of pairs of programs from L and R respectively. In the aggregate language, a program P = (PL, PR) takes a step from either PL or PR. The main difficulty is how to aggregate the two languages so that data common to both languages, such as integers or booleans, has the same representation in the aggregate language. The aggregation is based on the pushout theorem, which allows us to construct a model of the aggregate language from models of the initial languages, while making sure that the interpretation of common data such as integers is the same. The main application of the aggregation result is in equivalence checking. It is possible to check, for example, that two programs PL and PR (written in L and R respectively) compute the same result by checking the partial correctness of the pair (PL, PR) in the aggregate language.

16:30-18:00 Session 99AP: LaSh afternoon 2: Invited Talks - Answer Set Programming (LaSh)
Location: MB, Aufbaulabor
16:30
The D-FLAT System for Dynamic Programming on Tree Decompositions

ABSTRACT. Complex reasoning problems over large amounts of data pose a great challenge for computer science. To overcome the obstacle of high computational complexity, exploiting structure by means of tree decompositions has proved to be effective in many cases. However, the implementation of suitable efficient algorithms is often tedious. D-FLAT is a software system that combines the logic programming language Answer Set Programming with problem solving on tree decompositions and can serve as a rapid prototyping tool for such algorithms. Since we initially proposed D-FLAT, we have made major changes to the system, improving its range of applicability and its usability. In this paper, we present the system resulting from these efforts.

17:10
Cross-Translating Answer Set Programs
SPEAKER: Tomi Janhunen

ABSTRACT. Answer set programming (ASP) is a declarative programming paradigm where computation takes place in two phases. The rules of a logic program are first instantiated, after which the answer sets of the resulting ground program are computed. There are two mainstream approaches to the search for answer sets: either by using native answer set solvers or by translating ground rules into other kinds of constraints and using other off-the-shelf solver technology to perform the search. Over the years, we have developed a number of translations from ASP to Boolean satisfiability (SAT), difference logic, bit-vector logic, and mixed integer programming. If a target formalism does not support extended rule types, they have to be normalized before translation. Our latest translation of ASP into SAT modulo acyclicity enables a cross-compilation architecture where the back-end formalism is determined by the last translation step, generating Clark's completion extended by acyclicity constraints. In this invited talk, I describe the resulting cross-translation framework for ASP and discuss lessons learned from the ten-year development of the framework.

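To make the completion step at the end of the pipeline concrete, here is a toy Python rendering of Clark's completion for a ground normal program (ours, not the actual toolchain; the acyclicity constraints added on top are omitted):

```python
# Sketch only: each atom becomes equivalent to the disjunction of the
# bodies of its rules; atoms with no rules become false.
from collections import defaultdict

# A rule is (head, positive_body, negative_body).
program = [
    ("a", ["b"], []),
    ("a", [], ["c"]),
    ("b", ["a"], []),
]

def completion(rules):
    bodies, atoms = defaultdict(list), set()
    for head, pos, neg in rules:
        atoms.update([head], pos, neg)
        body = pos + ["~" + n for n in neg]
        bodies[head].append(" & ".join(body) if body else "true")
    return ["%s <-> (%s)" % (a, " | ".join(bodies[a]) or "false")
            for a in sorted(atoms)]

for formula in completion(program):
    print(formula)
# a <-> (b | ~c)
# b <-> (a)
# c <-> (false)
# Atoms a and b support each other through a positive loop, which is why
# completion alone is too weak and acyclicity constraints are added.
```
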
16:30-18:00 Session 99AQ: NLCS contributed talks (NLCS)
Location: FH, CAD 2
16:30
Program Extraction Applied to Monadic Parsing
SPEAKER: Alison Jones

ABSTRACT. This paper outlines a proof-theoretic approach to developing correct and terminating monadic parsers. Using a modified realisability interpretation, we extract provably correct and terminating programs from formal proofs. If the proof system is proven to be correct, then any extracted program is guaranteed to be so. By extracting parsers, we can ensure that they are correct, complete and terminating for any input. The work is ongoing, and is being carried out in the interactive proof system Minlog.

17:00
On Translating Context-Free Grammars into Lambek Categorial Grammars

ABSTRACT. We modify Buszkowski's translation of context-free grammars to Lambek categorial grammars in order to deal with Montague-style semantics properly.

16:30-18:00 Session 99AR: Contributed Talks (LATD)
Location: MB, Festsaal
16:30
Cut-free calculus for second-order Gödel logic
SPEAKER: unknown

ABSTRACT. We prove that the extension of the known hypersequent calculus for standard first-order Gödel logic with usual rules for second-order quantifiers is sound and (cut-free) complete for Henkin-style semantics for second-order Gödel logic. The proof is semantic, and it is similar in nature to Schütte and Tait's proof of Takeuti's conjecture.

17:00
Proof Search and Co-NP Completeness for Many-Valued Logics
SPEAKER: unknown

ABSTRACT. We provide a methodology to introduce relational hypersequent calculi for a large class of many-valued logics, and a sufficient condition for their Co-NP completeness. Our results include the most important Co-NP fuzzy logics.

17:30
Cut and completion?
SPEAKER: Sam van Gool

ABSTRACT. During the last decade a connection emerged between proof theory and algebra via which cut-elimination, one of the cornerstones of structural proof theory, can be proved by using completions, in particular, the MacNeille completion. This technique, which has been developed for a wide range of logics, including substructural ones, is far from trivial. Our modest aim in this paper is to establish what the technique of completions boils down to for "strong" logics such as full intuitionistic propositional logic, and what the connection is with other semantical proofs of cut-elimination.

16:30-18:00 Session 99AS: Contributed Talks (LATD)
Location: MB, Hörsaal 15
16:30
Definability of truth predicates in abstract algebraic logic

ABSTRACT. One of the main topics of Abstract Algebraic Logic is the study of the Leibniz hierarchy, in which logics are classified by means of properties of the Leibniz operator that determine how nicely the Leibniz congruences and the truth predicates can be described in models of the logic. In this talk we will introduce and characterize logics whose truth predicates are defined by means of equations with parameters. We will then consider weaker conditions on the truth predicates of a logic. This gives rise to a small hierarchy in which logics are classified according to the way their truth predicates are defined; this new hierarchy can be thought of as an extension of the Leibniz hierarchy, since almost all the conditions we take into account turn out to be characterised by a property of the Leibniz operator.

17:00
Generalizing the Leibniz and Suszko operators

ABSTRACT. In this paper we study the notion of an S-coherent family of S-compatibility operators, for a sentential logic S. This notion is tailored to be a common generalization of the well-known Leibniz and Suszko operators, which have been fundamental tools in recent developments of Abstract Algebraic Logic. The first main result we prove is a General Correspondence Theorem, which generalizes several results of this kind obtained for either the Leibniz operator (Blok and Pigozzi, 1986; Font and Jansana, 2001) or the Suszko operator (Czelakowski, 2003). We apply the general results to obtain several new characterizations of the main classes of logics in the Leibniz hierarchy in terms of the Leibniz operator or in terms of the Suszko operator.

17:30
Church-style type theories over finitary weakly implicative logics

ABSTRACT. In this paper, Church–Henkin simple type theories are constructed for finitary weakly implicative logics. The resulting type theory TT(L) over a given finitary weakly implicative logic L is the minimal (extensional, substitution-invariant) type theory closed under the rules of lambda-conversion and the intersubstitutivity of equals whose propositional fragment coincides with L and whose sound and complete Henkin semantics consists of Henkin-style general models over a generating class of L-algebras. The soundness and completeness theorem for TT(L) is obtained by a schematic adaptation of the proof for the ground theory TT0, which is a common fragment of all TT(L).

16:30-18:00 Session 99AU: iPrA Regular Talks, Discussions, Closing (iPRA)
Location: FH, Seminarraum 101A
16:30
Specification Synthesis via Generalized Abduction
SPEAKER: unknown

ABSTRACT. We consider the problem of verifying safety of a program in the absence of code for called procedures. Our goal is to synthesize maximal specifications of absent procedures that maintain safety of the given program. This is useful when the software to be verified contains calls to external components (e.g., libraries) that cannot be analyzed. Such components are usually (1) proprietary software whose source code is not available or (2) libraries that are too complex to be analyzed along with the client application. Thus, synthesized specifications can be used as stubs for calls to external components.

In this talk, we discuss a solution to this problem that relies on a reduction to a novel form of logical abduction, which we call generalized abduction. Generalized abduction is a powerful generalization of the classical abduction problem that allows us to abduce an individual specification (formula) for each unknown procedure — as opposed to a single formula in classical abduction. The fact that a procedure may be called from more than one call site, along with the interactions between different call sites, makes this problem quite challenging. We discuss the challenges faced when solving generalized abduction problems, present an algorithm for solving generalized abduction, and outline potential uses of interpolation as a way for solving this problem.

17:00
Interpolation from Clausal Proofs
SPEAKER: unknown

ABSTRACT. We present a method for interpolation based on DRUP proofs. Interpolants are widely used in model checking, synthesis and other applications. Most interpolation algorithms rely on a resolution proof produced by a SAT solver for unsatisfiable formulas. The proof is traversed and translated into an interpolant by replacing resolution steps with AND and OR gates. This process is efficient (once there is a proof) and generates interpolants that are linear in the size of the proof. As this method relies on the existence of a resolution proof, it depends on a SAT solver that can log a resolution proof.

In this work, we address three known weaknesses of this approach that stem from its dependence on a resolution proof: (1) the performance degradation experienced by the SAT solver and the extra memory requirements needed when logging a resolution proof; (2) the proof generated by the solver is not necessarily the "best" proof for interpolation; and (3) combining proof logging with pre-processing is complicated. We show that these issues can be remedied by using DRUP proofs.

First, we show how to produce an interpolant from a DRUP proof, even when pre-processing is enabled. Then, we give a novel interpolation algorithm that produces interpolants partially in CNF. More precisely, our algorithm returns the CNF component of the interpolant; when the proof permits it, the interpolant is pure CNF. Lastly, we show how a DRUP proof can be reconstructed on the fly to yield better interpolants. Our reconstruction aims at maximizing the CNF component of the interpolant by forcing a specific type of resolution steps.

We implemented our DRUP-based interpolation framework in MiniSAT and evaluated its effect using Avy, a SAT-based model checking algorithm that relies heavily on interpolation. We show the effect our method has on model checking and demonstrate the usefulness of our different heuristics.

17:30
Concluding discussions
SPEAKER: unknown
16:30-18:00 Session 99AV: Technical Session (FCS-FCC)
Location: MB, Hörsaal 7
16:30
Protocol Indistinguishability and the Computationally Complete Symbolic Attacker
SPEAKER: unknown

ABSTRACT. We consider the problem of the computational indistinguishability of two bounded protocols. We design a symbolic model that is amenable to automated deduction, and such that an inconsistency proof implies computational indistinguishability. Conversely, symbolic models of distinguishability provide clues for likely computational attacks. The idea is to define a computationally complete symbolic attacker in order to achieve unconditional computational soundness of the symbolic analysis. Following our previous work on reachability properties, we achieve this by axiomatizing what an attacker cannot violate. By adding computationally sound, modular axioms, a library of axioms can be built. Despite additional difficulties stemming from equivalences, the models and proofs are much simpler than in the case of reachability properties.

16:50
On Well-Founded Security Protocols

ABSTRACT. We introduce the class of well-founded protocols, and explain why leakiness is decidable for it. We hope that this will help to unify our understanding of a group of results that obtain decidability by imposing conditions that make encrypted messages context-explicit.

17:10
Fitting's Embedding of Classical Logic in S4 and Trace Properties in the Computational Model
SPEAKER: unknown

ABSTRACT. We explain the connection between Fitting's embedding of classical logic in S4 and the non-Tarskian computational semantics that Bana and Comon defined to interpret first-order formulas for their computationally complete symbolic attacker.

17:30
Towards Timed Models for Cyber-Physical Security Protocols
SPEAKER: unknown

ABSTRACT. Many security protocols rely on assumptions about the physical environment in which their sessions are carried out. For instance, distance bounding protocols take into account the round-trip time of messages and the transmission velocity to infer an upper bound on the distance between two agents. We classify such security protocols as cyber-physical. The key elements of such protocols are the use of cryptographic keys, nonces and time. This paper investigates timed models for the verification of such protocols. First, we introduce a multiset rewriting framework with continuous time and fresh values. We demonstrate that in this framework one can specify distance bounding protocols and intruder models for cyber-physical security protocols that take into account the physical properties of the environment. We then prove that models with discrete time are not able to expose as many security flaws as models with continuous time. This is done by proposing a protocol and demonstrating that there is no attack on this protocol in the model with discrete time, but there is an attack in the model with continuous time. For the important class of bounded-memory cyber-physical security protocols with a memory-bounded intruder, the reachability problem is PSPACE-complete if the size of terms is bounded.
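
As a back-of-the-envelope illustration of the distance-bounding principle formalized here (the numbers are our own):

```python
# Sketch only: the verifier bounds the prover's distance from the
# challenge/response round trip; signals travel at most at speed C.
C = 299_792_458.0   # transmission velocity (speed of light, m/s)

def distance_upper_bound(rtt_s: float, processing_s: float = 0.0) -> float:
    """Messages travel for at most (rtt - processing) seconds, half each way."""
    return C * (rtt_s - processing_s) / 2.0

print(distance_upper_bound(200e-9))   # 200 ns round trip -> about 30 m
# If time is measured in coarse discrete ticks, distinct round-trip times
# collapse to the same tick count -- one intuition for why discrete-time
# models can miss attacks that continuous-time models expose.
```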

16:30-18:00 Session 99AX: Socio-Technical Security (STAST)
Location: MB, Seminarraum 212/232
16:30
What You Enter Is What You Sign: input integrity in an online banking environment
SPEAKER: Sven Kiljan

ABSTRACT. One problem with most currently used transaction authentication methods is that they depend on the customer's computer for integrity of the information flow between customer and bank. This allows man-in-the-middle attacks to be conducted using malware for financial fraud. Some banks are implementing new authentication methods that allow customers to verify transactions received by a bank without depending on the customer's computer to provide information integrity. These new methods are more complex compared to traditional authentication methods and need the customer's attention to be effective, since it is up to the customer to verify the information that was received by his or her bank. By examining the intrinsic problems of traditional and new transaction authentication methods as used by banks, we designed an alternative authentication method named 'Entered Single Transaction Authentication'. Our method ensures that the bank receives information as the customer entered it without requiring further verification by the customer. We introduce the concept 'What You Enter Is What You Sign', which ensures the digital integrity of information as soon as it is entered. Our proposal is theoretical and high-level, but opens the way for secure transaction authentication methods that rely less on the authenticating party to provide correct information, thereby reducing errors and improving user friendliness.

17:15
Using Statistical Information to Communicate Android Permission Risks to Users
SPEAKER: Lydia Kraus

ABSTRACT. The Android OS has a permission-based security system that controls third-party applications' access to sensitive information on the smartphone. The risk evaluation is left to the user, who has to evaluate whether or not the requested permissions are appropriate. However, previous work has shown that users lack attention to and understanding of the permissions, which makes it difficult for them to make appropriate decisions. To support users with more understandable information, we provide statistical information about permissions, grouped by functionality. We use methods from health risk communication to communicate this information to the users. In a lab experiment with 48 participants, we find that users more often choose the app with fewer permissions when statistical information is provided together with graphics. We also find that the privacy intrusiveness and trustworthiness of apps are perceived differently when statistical information is given.

16:30-18:00 Session 99AY: NSV Invited Talk: Jim Kapinski (NSV)
Location: FH, Hörsaal 2
16:30
Numerical Challenges in Simulation-guided Dynamical System Analysis
SPEAKER: Jim Kapinski

ABSTRACT. Verification and validation (V&V) for industrial control designs can be a difficult and expensive process. Simulation-guided approaches offer a practical means of performing V&V for industrial applications. One such technology uses simulation traces to discover Lyapunov functions or barrier certificates. This technique has proved promising; however, there are challenges related to the numerical computations involved. For example, the technique translates data obtained from simulation traces into a large collection of linear constraints, but not all of these constraints are necessarily useful for solving the problem. Strategies for selecting an appropriate subset of these constraints are required. In this talk, we present an overview of the simulation-guided analysis approach and discuss these numerical challenges.
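
A minimal sketch of the trace-to-constraints step (our own simplified rendering with numpy/scipy, not the toolchain described in the talk; the system matrix and margins are invented):

```python
# Sketch only: sampled transitions of a discrete-time system yield linear
# constraints on the entries p = (p11, p12, p22) of a candidate quadratic
# Lyapunov function V(x) = x' P x, solved here as a linear program.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.9, 0.2], [-0.1, 0.8]])   # system under analysis

def quad_coeffs(x):
    # x' P x = p11*x1^2 + 2*p12*x1*x2 + p22*x2^2, linear in p
    return np.array([x[0] ** 2, 2 * x[0] * x[1], x[1] ** 2])

rows, rhs, eps = [], [], 1e-3
rng = np.random.default_rng(0)
for _ in range(50):                        # simulated traces
    x = rng.uniform(-1, 1, size=2)
    for _ in range(5):
        y = A @ x
        rows.append(quad_coeffs(y) - quad_coeffs(x))  # V(y)-V(x) <= -eps|x|^2
        rhs.append(-eps * float(x @ x))
        rows.append(-quad_coeffs(x))                  # V(x) >= eps|x|^2
        rhs.append(-eps * float(x @ x))
        x = y

res = linprog(c=[0.0, 0.0, 0.0], A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(-10.0, 10.0)] * 3)
print(res.status, res.x)   # 0 and a candidate P; note that many of the
# generated constraints are redundant, and a sound certificate still
# requires a posteriori verification, as discussed in the talk.
```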

16:30-18:00 Session 99AZ (QED)
Location: FH, Hörsaal 4
16:30
QED and the TPTP World
17:00
MathHub.info: Active Mathematics

ABSTRACT. We present the MathHub.info system, a development environment for active mathematical documents and an archive for flexiformal mathematics. It offers a rich interface for reading, writing, and interacting with mathematical documents and knowledge. The core MathHub.info system is an archive for flexiformal mathematical documents and libraries in the OMDoc/MMT format. Content can be authored or archived in the source format of the respective system, is versioned in GIT repositories, and transformed into OMDoc/MMT for machine-support and further into HTML5 for reading and interaction.

17:30
Hammering towards QED
16:30-18:00 Session 99BA: Joint session with VPT: Web systems (WWV)
Location: FH, Seminarraum 107
16:30
Static Enforcement of Role-Based Access Control
SPEAKER: Asad Ali

ABSTRACT. We propose a new static approach to RBAC policy enforcement. The static approach we advocate includes a new design methodology, for applications involving RBAC, which integrates the security requirements into the system's architecture, helping to ensure that policies are correctly defined and enforced. We apply this new approach to policies restricting calls to methods in Java applications. However, our approach is more general and can be applied to other Object-Oriented languages. We present a language to express RBAC policies on calls to methods in Java, a set of design patterns which Java programs must adhere to for the policy to be enforced statically, and a high-level algorithm for static enforcement.

17:00
A Local Logic for Realizability in Web Service Choreographies
SPEAKER: R. Ramanujam

ABSTRACT. Web service choreographies specify conditions on observable interactions among the services. An important question in this regard is realizability: given a choreography C, does there exist a set of service implementations I that conform to C? Further, if C is realizable, is there an algorithm to construct implementations in I? We propose a local temporal logic in which choreographies can be specified, and for specifications in the logic, we solve the realizability problem by constructing service implementations (when they exist) as communicating automata. These are nondeterministic finite state automata with a coupling relation. We also report on an implementation of the realizability algorithm and discuss experimental results.

16:40-17:30 Session 100: Contributed Talks (DL)
Location: EI, EI 7
16:40
Datalog Rewriting Techniques for Non-Horn Ontologies
SPEAKER: unknown

ABSTRACT. We study the closely related problems of rewriting disjunctive datalog programs and non-Horn DL ontologies into plain datalog programs that entail the same facts for every dataset. We first propose the class of markable disjunctive datalog programs, which is efficiently recognisable and admits polynomial rewritings into datalog. Markability naturally extends to SHI ontologies, and markable ontologies admit (possibly exponential) datalog rewritings. We then turn our attention to resolution-based rewriting techniques. We devise an enhanced rewriting procedure for disjunctive datalog, and propose a second class of SHI ontologies that admits exponential datalog rewritings via resolution. Finally, we evaluate the feasibility of our techniques over a large corpus of ontologies, with encouraging results.

17:05
Succinctness of Query Rewriting in OWL 2 QL: The Case of Tree-like Queries
SPEAKER: unknown

ABSTRACT. This paper further investigates the succinctness landscape of query rewriting in OWL 2 QL. We clarify the size of positive existential (PE), non-recursive datalog (NDL), and first order (FO) rewritings for various classes of tree-like conjunctive queries, ranging from linear queries up to bounded treewidth queries. More specifically, we establish a superpolynomial lower bound on the size of PE rewritings that holds already for linear queries and TBoxes of depth 2. For NDL-rewritings, we show that polynomial-size rewritings always exist for tree-shaped queries with a bounded number of leaves (and arbitrary TBoxes), and for bounded treewidth queries and bounded depth TBoxes. Finally, we show that the succinctness problems concerning FO-rewritings are equivalent to well-known problems in Boolean circuit complexity. Along with known results, this yields a complete picture of the succinctness landscape for the considered classes of queries and TBoxes. 

17:30-18:00 Session 102C: Joint session with WWV (VPT)
Location: FH, Seminarraum 107
17:30
Formal Replay of Translation Validation for Highly Optimised C
SPEAKER: Thomas Sewell

ABSTRACT. In previous work (Sewell et al., 2013) we implemented a translation validation mechanism for checking that a C compiler is adhering to the expected semantics of a verified program. We used this apparatus to check the compilation of the seL4 verified operating system kernel (Klein et al., 2009) by GCC 4.5.1. To get this result, we carefully chose a problem representation that worked well with certain highly optimised SMT solvers. This raises a question of correctness. While we are confident the result is correct, we still aim to replay this result with the most dependable tools available.

In this work we present a formalisation of the proof rules needed to replay the translation check within the theorem prover Isabelle/HOL. This is part of an ongoing effort to bring the entire translation validation result within a single trusted proof engine and derive a single correctness theorem, thus reaching the gold standard level of trustworthiness for program verification.

We had hoped to present the formal rule set in action through a worked example. Unfortunately while we have all the theory we need, the mechanisms for selecting and applying the rules and discharging certain side conditions remain a work in progress, and our example proof is incomplete.

18:15-19:15 Session 104A: Contributed talks A3 (LC)
Location: MB, Seminarraum 225
18:15
A completeness theorem for general relativity

ABSTRACT. We introduce several first-order axiom systems for general relativity and show that they are complete with respect to the standard models of general relativity, i.e., to Lorentzian manifolds having the corresponding smoothness properties.

This is only a sample of our approach (see the references in [2]) to the logical analysis of special and general relativity theory in the axiomatic framework of modern mathematical logic. The aim of our research is to build a flexible hierarchy of axiom systems (instead of one axiom system only), analyzing the logical connections between the different axioms and axiomatizations. We try to formulate simple, logically transparent and intuitively convincing axioms. The questions we study include: What is believed and why? Which axioms are responsible for certain predictions? What happens if we discard some axioms? Can we change the axioms, and at what price?

[1] H. Andréka, J.X. Madarász, I. Németi, and G. Székely, An axiom system for general relativity complete with respect to Lorentzian manifolds, arXiv:1310.1475, 2013.

[2] H. Andréka, J.X. Madarász, I. Németi, and G. Székely, A logic road from special relativity to general relativity, Synthese, vol. 186 (2012), no. 3, pp. 633–649, arXiv:1005.0960.

18:35
More On Completion with Horn Filters
18:15-19:15 Session 104B: Contributed talks B3 (LC)
Location: MB, Prechtlsaal
18:15
Predicative Mathematics via Safety Relations
SPEAKER: Liron Cohen

ABSTRACT. In [1] a new framework for predicative mathematics was developed. The main new features of this framework are that it is type-free, based on a subsystem of ZF, and that the language it employs includes nothing that is not used in ordinary mathematical texts. In particular, it reflects real mathematical practice in making extensive use of statically defined abstract set terms of the form {x|φ}.

In this work we show how large portions of classical analysis can be developed within that framework in a natural, predicatively acceptable way. Among other things, this includes the introduction of the natural numbers, the real line and continuous real functions, as well as formulating and proving the main classical results concerning these notions.

[1] Arnon Avron, A New Approach to Predicative Set Theory, Ways of Proof Theory (R. Schindler, editor), Ontos Series in Mathematical Logic, Ontos Verlag, 2010, pp. 31-63.

18:35
Natural Deduction in Renaissance Geometry
SPEAKER: Ryszard Mirek

ABSTRACT. Moritz Cantor was so impressed by the achievements of Piero della Francesca in mathematics and geometry that in his Vorlesungen über Geschichte der Mathematik he devoted far more attention to him than to any other contemporary algebraist. In Francesca's treatise De prospectiva pingendi we find advanced geometrical exercises presented in the form of propositions. For instance, in Book 1, Proposition 8, he shows that the perspective images of orthogonals converge to a point. Proposition 12 shows how to draw in perspective a surface of undefined shape, which is located in profile as a straight line. The task is to find the image of a line perpendicular to the picture plane. But the most interesting is Proposition 13, which shows how to degrade a square and, more precisely, the sides of the square. It is obvious that most of these propositions are used in the paintings of Francesca. The purpose of the study is to describe these results in the form of a logical system EF. Generally, the logical language is six-sorted, with sorts for points, lines, circles, segments, angles, and areas. For proofs we employ the method of natural deduction. The aim is to demonstrate that such a method is the most useful for the presentation of Francesca's geometric proofs, taking into account also the importance of diagrams within them.

18:55
Aristotle’s conception of demonstration and modern proof theory
SPEAKER: Olga Antonova

ABSTRACT. The history of modern mathematical proof theory begins with Beweistheorie, Hilbert's proof theory. Mathematical theories such as logicism (Frege, Russell), intuitionism (Brouwer, Heyting), and set theory (Cantor, Dedekind) directly influenced the conception of proof and modern proof theory in general. Modern proof theory is based not only on mathematical theories, but also on philosophical and logical theories of proof, such as Aristotle's conception of demonstration. According to Aristotle, a demonstration is a "scientific syllogism", in which the premises are true, first, immediate, more known than the conclusion, prior to the conclusion, and causes of the conclusion. Aristotle's theory of demonstration influenced the development of logic and, in particular, the philosophical and logical conception of proof. Can we say that Aristotle's conception of demonstration is modern? Is the current conception of proof really based on Aristotle's? The purpose of my talk is to analyze Aristotle's definition of demonstration and compare it with the modern approach to demonstration.
[1] Barnes, J., Aristotle: Posterior Analytics, Oxford University Press, 1976.
[2] Hendricks, V. et al., eds, Proof Theory: History and Philosophical Significance, Kluwer, 2000.

18:15-19:15 Session 104C: Contributed talks C3 (LC)
Location: MB, Hörsaal 14
18:15
Modelling Inference in Fiction
SPEAKER: unknown

ABSTRACT. As is widely known, fiction has become a serious problem for several classical conceptions closely related to the philosophy of logic (J. Woods, 2006). This is mainly due to some of the leading features of reasoning in fiction. Firstly, inference in fiction involves reasoning with incomplete information. Stories describe their characters, places, and events only in an incomplete way. Because stories are composed of a finite set of sentences, a large amount of information about them remains unknown. Secondly, inference in fiction also involves reasoning with inconsistent information. Inconsistencies can emerge from two sources. On the one hand, information belonging to a fiction contradicts reality in many aspects. On the other hand, some stories are based on a contradiction or contain inconsistent information. This is the case for stories in which contradictions are an essential part of their plots. In order to cope with the abovementioned features of reasoning in fiction, we propose a semantic approach to fiction based on an intuitionistic modal system. The semantic model is an adaptation of the multiple-expert semantics developed by Melvin Fitting in 1992. Firstly, we consider a propositional language to represent fictional information formally. That propositional language is interpreted in an intuitionistic modal semantics that involves two different perspectives and a partial valuation. On the one hand, the two perspectives make it possible to distinguish two sources of information involved in reasoning in fiction, i.e., fiction and reality. On the other hand, the partial valuation makes it possible to deal with incomplete information. A relation of logical consequence is defined in order to distinguish between valid and invalid inferences within the fictional context. Finally, we explore different proof-theoretical alternatives in order to characterize a deductive system for this semantic approach.

18:35
Metalogical Extensions, Part II: First-order Consequences and Gödel

ABSTRACT. Metalogical extensions incorporate metalogics by means of new logical symbols. Among many other things, they thereby yield a soundness criterion for immanent attempts, which try to reflect provability or truth by means of unary formulae already in stock. However, this benchmark cannot be met within sufficiently strong consistent theories—a further consequence of the Diagonal Lemma. Thus it is particularly questionable whether Prov(x) really expresses provability and whether Con (derived therefrom) really states consistency: the Second Incompleteness Theorem may be doubted.

18:15-18:45 Session 104D (QED)
Location: FH, Hörsaal 4
18:15
Panel Discussion
SPEAKER: John Harrison