ABSTRACT. We introduce a mathematical framework for retrosynthetic analysis, an important research method in synthetic chemistry. Our approach represents molecules and their interactions using string diagrams in layered props - a recently introduced categorical model for partial explanations in scientific reasoning. Such a principled approach allows one to model features currently not available in automated retrosynthesis tools, such as chirality, reaction environments, and protection-deprotection steps.

Closure and Decision Properties for Higher-Dimensional Automata

ABSTRACT. We develop the language theory of higher-dimensional automata (HDAs). We prove a pumping lemma which allows us to expose a class of non-regular ipomset languages. We also give an example of a regular language with unbounded ambiguity. We then turn to decision and closure properties of regular languages. We show that inclusion of regular languages is decidable (hence so is emptiness), and that intersections of regular languages are again regular. On the other hand, complements of regular languages are not regular. We introduce a width-bounded complement and show that width-bounded complements of regular languages are again regular.

ABSTRACT. Generalized metric spaces are obtained by weakening the requirements (e.g., symmetry) on the distance function and by allowing it to take values in structures (e.g., quantales) that are more general than the set of non-negative real numbers. Quantale-valued metric spaces have gained prominence due to their use in quantitative reasoning on programs/systems, and for defining various notions of behavioral metrics. We investigate imprecision and robustness in the framework of quantale-valued metric spaces, when the quantale is continuous. In particular, we study the relation between the robust topology, which captures robustness of analyses, and the Hausdorff-Smyth hemi-metric. To this end, we define a preorder-enriched monad P_S, called the Hausdorff-Smyth monad, and when Q is a continuous quantale and X is a Q-metric space, we relate the topology induced by the metric on P_S(X) with the robust topology on the powerset P(X) defined in terms of the metric on X.
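As a concrete anchor for the Smyth-style lifting the abstract refers to, one common formulation (conventions vary across the literature, and the paper works in the more general quantale-valued setting) lifts a hemi-metric $d$ on $X$ to subsets $A, B \subseteq X$ by

```latex
d_S(A, B) \;=\; \sup_{b \in B} \, \inf_{a \in A} \, d(a, b)
```

This is in general asymmetric, matching the weakened requirements on the distance function mentioned above.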

Synchronous Agents, Verification, and Blame — A Deontic View

ABSTRACT. A question we can ask of multi-agent systems is whether the agents’ collective interaction satisfies a particular set of objectives or specifications. These objectives can be individual or collective. When a collaborative goal is not reached, or a specification is violated, a pertinent question is whether any agent is to blame. We give trace semantics to our logic and use it to define blame assignments for violations. We also provide quantitative semantics to compare different interactions in terms of the required reparations. Finally, we give an automaton construction for the logic, which we use as the basis for model checking and blame analysis.
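To make the idea of trace-based blame assignment concrete, here is a minimal illustrative sketch (hypothetical representation, not the paper's actual deontic semantics): agents are blamed when their action in some step of a violating trace deviates from their obligation.

```python
# Illustrative sketch of blame assignment over a trace (hypothetical encoding,
# not the paper's deontic logic): a trace is a list of steps, each mapping
# agents to the actions they took; obligations map agents to obliged actions.

def blame(trace, obligations):
    """Return the set of agents that violated their obligation at some step."""
    blamed = set()
    for step in trace:                       # step: {agent: action}
        for agent, action in step.items():
            obliged = obligations.get(agent)
            if obliged is not None and action != obliged:
                blamed.add(agent)
    return blamed

trace = [{"a": "send", "b": "ack"}, {"a": "send", "b": "idle"}]
blame(trace, {"b": "ack"})   # -> {"b"}: b failed its obligation at step 2
```

A quantitative variant in the spirit of the abstract could count such deviations per agent to compare interactions by the reparations they require.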

ABSTRACT. The use of message-passing process calculi for the verification of distributed algorithms requires support for state-based reasoning that goes beyond their traditional action-based style: knowledge about (local) states is at best provided implicitly. Therefore, we propose a distributed process calculus with locations, the units of distribution, which we equip with explicit state information in the form of memories. On top of this, we provide a simple formal model for location failure and failure detection, enabling the verification of fault-tolerant distributed algorithms. We demonstrate the use of our calculus by formalizing a simple distributed consensus algorithm and proving its correctness. The proof exploits global invariants through direct access to the local memories.

ABSTRACT. Symbolic execution is a technique to systematically explore all possible paths through a program. This technique can be formally explained by means of small-step transition systems that update symbolic states and compute a precondition corresponding to the taken execution path (called the path condition). To enable swift and robust compositional reasoning, this paper defines a denotational semantics for symbolic execution. We prove the correspondence between the denotational semantics and both the small-step execution semantics and a concrete semantics. The denotational semantics is a function defined piecewise using a partitioning of the input space. Each part of the input space is a path condition obtained from symbolic execution, and the semantics of this part is the corresponding symbolic substitution interpreted as a function on the initial state space. Correctness and completeness of symbolic execution are encapsulated in a graceful identity of functions. We provide mechanizations in Coq for our main results.
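The core mechanism described above (symbolic states plus accumulated path conditions) can be sketched in a few lines. This is a toy executor over a hypothetical AST, not the paper's formal semantics; the constructors and term encoding are assumptions for illustration only.

```python
# Minimal sketch of symbolic execution over a toy language (hypothetical AST):
# statements are ("assign", var, expr) or ("if", cond, then_stmts, else_stmts);
# expressions are nested tuples over program variables, kept as symbolic terms.

def subst(expr, env):
    """Interpret an expression under a symbolic environment (the substitution)."""
    if isinstance(expr, str):                       # a program variable
        return env.get(expr, ("sym", expr))         # unassigned vars stay symbolic
    if isinstance(expr, tuple) and expr[0] in ("+", "-", "<", ">="):
        op, a, b = expr
        return (op, subst(a, env), subst(b, env))
    return expr                                     # a literal

def execute(stmts, env, pc):
    """Yield (path_condition, symbolic_state) pairs, one per explored path."""
    if not stmts:
        yield pc, env
        return
    stmt, rest = stmts[0], stmts[1:]
    if stmt[0] == "assign":
        _, var, expr = stmt
        yield from execute(rest, {**env, var: subst(expr, env)}, pc)
    elif stmt[0] == "if":
        _, cond, then_s, else_s = stmt
        c = subst(cond, env)
        yield from execute(then_s + rest, env, pc + [c])           # true branch
        yield from execute(else_s + rest, env, pc + [("not", c)])  # false branch

# abs-value program: if x < 0 then y := 0 - x else y := x
prog = [("if", ("<", "x", 0),
        [("assign", "y", ("-", 0, "x"))],
        [("assign", "y", "x")])]
paths = list(execute(prog, {}, []))   # two paths, each with its path condition
```

Each resulting pair is exactly a part of the input-space partition from the abstract: the path condition delimits the part, and the symbolic state is the substitution interpreted on it.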

TOOL PAPER: TeSSLa-ROS-Bridge - Runtime Verification of Robotic Systems

ABSTRACT. Runtime verification is a formal method for checking a run of a system against a specification. To this end, a monitor is generated from the specification that checks the system under scrutiny. Typically, runtime verification is used for checking executions of programs. However, it is equally well suited for runs of robotic systems, which are often built and controlled on top of the Robot Operating System (ROS). In stream runtime verification, specifications are given as stream transformations. This approach has become popular recently, and several stream runtime verification systems, starting from LOLA, have emerged. In this paper the TeSSLa-ROS-Bridge is introduced, which allows one to interact with ROS-based robotic systems via the temporal stream specification language TeSSLa.
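To illustrate the stream-transformation style of specification mentioned above, here is a minimal sketch in plain Python (not actual TeSSLa syntax; the property and stream names are made up for illustration): derived streams are computed from an input event stream.

```python
# Illustrative sketch of stream runtime verification (not TeSSLa syntax):
# an input stream of sensor readings is transformed into derived streams,
# e.g. a Boolean violation stream and a running count of violations.

def monitor(distances, threshold=0.5):
    """Transform a stream of distance readings into a 'too_close' flag stream
    and a stream counting violations seen so far."""
    too_close = [d < threshold for d in distances]   # derived Boolean stream
    counts, count = [], 0
    for flag in too_close:                            # fold over the stream
        count += flag
        counts.append(count)
    return too_close, counts

flags, counts = monitor([1.2, 0.4, 0.9, 0.3])
# flags  -> [False, True, False, True]
# counts -> [0, 1, 1, 2]
```

In a ROS setting, the input stream would be fed from ROS topics by a bridge component, and the monitor's outputs could be published back as diagnostic topics.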

Simplifying process parameters by unfolding algebraic data types

ABSTRACT. Complex abstract data types are often used to facilitate creating concise models of the behavior of realistic systems. However, static analysis techniques that aim to optimize such models often consider variables of complex types as a single indivisible unit. The use of complex data types thus negatively affects the optimizations that can be performed. In this paper we revisit and extend a technique by Groote and Lisser that can be used to replace a single, complex variable by multiple variables of simpler data types, improving the effectiveness of other static analyses. We describe the technique in the context of the process algebraic specification language mCRL2, and establish its correctness. We demonstrate using an implementation in the mCRL2 toolset that it sometimes reduces the size of the underlying state spaces, and that it typically reduces verification times when using symbolic model checking.
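The unfolding idea can be sketched as follows (hypothetical sort names and encoding, not mCRL2 syntax or the paper's actual transformation): a parameter of a structured sort is replaced by one parameter per component, so later analyses can treat the components independently.

```python
# Sketch of parameter unfolding (hypothetical encoding, not mCRL2 syntax):
# a process parameter of a structured sort Pair(fst, snd) is replaced by two
# parameters of the simpler component sorts.

def unfold_parameters(params):
    """Flatten parameters of a hypothetical Pair sort into their components."""
    flat = []
    for name, sort in params:
        if isinstance(sort, tuple) and sort[0] == "Pair":
            _, fst_sort, snd_sort = sort
            flat.append((name + "_fst", fst_sort))   # first component
            flat.append((name + "_snd", snd_sort))   # second component
        else:
            flat.append((name, sort))                # simple sorts are kept
    return flat

unfold_parameters([("p", ("Pair", "Nat", "Bool")), ("n", "Nat")])
# -> [("p_fst", "Nat"), ("p_snd", "Bool"), ("n", "Nat")]
```

After such a split, an analysis that eliminates unused parameters can, for example, drop `p_snd` alone even when `p_fst` is still live, which is impossible while both are packed into one variable.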

Modular Soundness Checking of Feature Model Evolution Plans

ABSTRACT. A software product line (SPL) is a family of closely related software systems which capitalizes on the variability and reusability of the software products and can be formalized by a feature model. Feature model evolution plans (FMEPs) capture the current SPL as well as the planned evolution of the SPL to ensure successful long-term development. As business requirements often change, FMEPs should support intermediate updates. Such modifications may cause paradoxes in an FMEP, e.g. a node left without a parent, making the plan impossible to realise. Current tools exist to validate FMEPs, but they require analysis of the entire plan even when a modification affects only small parts of it. There is a need for a method which detects such paradoxes in a more efficient way.

In this paper, we present a representation for FMEPs, called an interval-based feature model (IBFM). This representation enables local validation, which validates only the parts of the plan affected by the change. We define operations for updating FMEPs and the preconditions under which they preserve soundness. We show the correctness of the method.
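The interval idea behind local validation can be sketched as follows (a hypothetical simplification, not the paper's IBFM definition): each feature carries a validity interval, and the parent-paradox from the abstract is excluded when every feature's interval is contained in its parent's, a check that is local to the modified feature.

```python
# Hypothetical sketch of interval-based local validation: each feature has a
# validity interval (start, end), with end=None meaning "forever". A feature
# is locally sound if its interval lies within its parent's, so it is never
# left without a parent at any point of the plan.

def locally_sound(feature, parent):
    """Check containment of validity intervals."""
    f_start, f_end = feature
    p_start, p_end = parent
    starts_ok = p_start <= f_start
    ends_ok = p_end is None or (f_end is not None and f_end <= p_end)
    return starts_ok and ends_ok

locally_sound((2, 5), (0, None))   # child lives within parent's lifetime -> True
locally_sound((2, None), (0, 5))   # child would outlive its parent -> False
```

An update to one feature then only requires re-checking this condition against its parent and children, rather than re-validating the entire plan.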