Accepted Papers with Abstracts
ABSTRACT. Implementing string transformation routines, such as encoders, decoders, and sanitizers, correctly and efficiently is a difficult and error-prone task. Such routines are often used in security-critical settings and must process large amounts of data both efficiently and correctly. We introduce a new declarative language called Bex that builds on elements of regular expressions, symbolic automata, and transducers, and that enables a compilation scheme into C, C#, or JavaScript which avoids many of the potential sources of errors that arise when such routines are implemented directly. The approach allows correctness analysis using symbolic automata theory that is not possible at the level of the generated code. Moreover, our case studies show that the generated code consistently outperforms hand-optimized code.
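For intuition, the guarded-rule style that symbolic automata enable can be sketched as follows (a minimal Haskell sketch of the idea, not Bex's actual syntax or generated code): transitions are guarded by character predicates rather than enumerated per character.

    import Data.Char (isAlphaNum, ord)

    -- Guarded rewrite rules over character predicates, in the spirit of
    -- symbolic transducers: one rule per predicate, not per character.
    htmlEncode :: String -> String
    htmlEncode = concatMap step
      where
        step c | isAlphaNum c = [c]                          -- safe characters pass through
               | otherwise    = "&#" ++ show (ord c) ++ ";"  -- everything else is escaped

    -- Example: htmlEncode "a<b" == "a&#60;b"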
ABSTRACT. This paper shows how the idea of equality saturation can be used to build an inductive prover for a non-total first-order lazy functional language. We adapt the equality-saturation approach to a functional language by using transformations borrowed from supercompilation. A special transformation, merging by bisimilarity, performs proof by induction of the equivalence between nodes of the E-graph; equalities proved this way are simply added to the E-graph. We also experimentally compare our prover with HOSC and HipSpec.
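The core E-graph operation can be pictured as a union-find over equivalence classes of terms (a minimal Haskell sketch under our own simplifying assumptions, not the prover's actual data structure):

    import qualified Data.Map as M

    type Id     = Int
    type EGraph = M.Map Id Id   -- union-find parent map over E-class ids

    -- Follow parent links to the representative of an equivalence class.
    find :: EGraph -> Id -> Id
    find g i = case M.lookup i g of
                 Just p | p /= i -> find g p
                 _               -> i

    -- Record that two E-classes denote equal terms by uniting their roots;
    -- in the paper, merging by bisimilarity justifies such merges.
    merge :: Id -> Id -> EGraph -> EGraph
    merge a b g = M.insert (find g a) (find g b) g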
ABSTRACT. Java 8 extends Java with lambda expressions and functional interfaces, where a functional interface is an interface with exactly one method and serves as the type of a lambda expression. The type inference mechanism is extended so that the types of the parameters of a lambda expression can be inferred, but the types of complete lambda expressions will still not be inferred. In this paper we give a type inference algorithm for complete lambda expressions, which means that fields, local variables, and the parameters and return types of lambda expressions do not have to be typed explicitly. To this end, we define an abstract syntax for a core of Java 8, formalize functional interfaces, and define a type system for expressions and statements. Finally, we give the type inference algorithm and prove its soundness and completeness.
Jeremy W. Bryans (School of Computing Science, Newcastle University)
ABSTRACT. We present a novel method for reasoning about time in state-based proof-oriented formalisms. The method builds on a non-classical model of time, the Leibnizian model, in which time is a relative property determined by the observations of an evolving subject, rather than one of the fundamental dimensions. It proves to be remarkably effective in the context of the Event-B formalism. We illustrate the method with a machine-checked proof of Fischer's algorithm that, to our knowledge, is simpler than other proofs available in the literature.
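For reference, Fischer's algorithm itself is short; a minimal shared-memory sketch in Haskell (with assumed timing constants, and deliberately ignoring the Event-B model developed in the paper) looks as follows:

    import Control.Concurrent (threadDelay)
    import Data.IORef

    -- Process i: wait until the lock variable is free, claim it, wait out
    -- every rival's claim window, and enter only if the claim survived.
    fischer :: IORef Int -> Int -> IO () -> IO ()
    fischer lock i critical = loop
      where
        delta = 1000                   -- assumed bound on claim delay (microseconds)
        loop = do
          v <- readIORef lock
          if v /= 0 then loop else do
            writeIORef lock i          -- claim (must happen within delta)
            threadDelay (2 * delta)    -- wait longer than any rival's claim
            v' <- readIORef lock
            if v' == i
              then critical >> writeIORef lock 0
              else loop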
George Chernishev (SPbSU)
Pavel Fedotovsky (SPbSU)
George Erokhin (SPbSU)
Kirill Cherednik (SPbSU)
ABSTRACT. In this paper we consider the scalability of a classic data structure used for multidimensional indexing: the R-tree. This data structure allows for effective retrieval of records in low-dimensional spaces and is a de facto industry standard. Following the design guidelines of the GiST model, we have developed a prototype implementation that supports concurrent (parallel) access and works at the read-committed isolation level. Using our prototype, we study the impact of threads and cores on the performance of the system by evaluating it in several scenarios that may be encountered during the course of DBMS operation.
ABSTRACT. In this paper we present a novel transformation technique which automatically partitions instances of generic data-types and so facilitates the parallelization of functional programs defined on such data-types. Given a program defined on an arbitrary data-type, we automatically derive conversion functions between instances of this data-type and join-lists, so that the data can be efficiently partitioned for divide-and-conquer parallelization.
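The idea can be sketched in Haskell (our own minimal illustration; the paper derives such conversions automatically for arbitrary data-types): a join-list exposes concatenation as a constructor, so instances can be split for divide-and-conquer evaluation.

    data JoinList a = Empty | Single a | Join (JoinList a) (JoinList a)

    -- Conversion functions between ordinary lists and join-lists.
    toJoinList :: [a] -> JoinList a
    toJoinList []  = Empty
    toJoinList [x] = Single x
    toJoinList xs  = Join (toJoinList l) (toJoinList r)
      where (l, r) = splitAt (length xs `div` 2) xs

    fromJoinList :: JoinList a -> [a]
    fromJoinList Empty      = []
    fromJoinList (Single x) = [x]
    fromJoinList (Join l r) = fromJoinList l ++ fromJoinList r

    -- Divide-and-conquer reduction: the two branches of a Join are
    -- independent and can be evaluated in parallel.
    reduce :: b -> (a -> b) -> (b -> b -> b) -> JoinList a -> b
    reduce e _ _ Empty      = e
    reduce _ f _ (Single x) = f x
    reduce e f g (Join l r) = g (reduce e f g l) (reduce e f g r)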
Denis Ponomaryov (A.P. Ershov Institute of Informatics Systems)
ABSTRACT. A disjoint AND-decomposition of a Boolean formula is its representation as a conjunction of two (or several) formulas having disjoint sets of variables. We show the tractability of computing AND-decompositions of Boolean formulas given in ANF, full DNF, and positive DNF. As an important tool for proving tractability, we describe a polytime algorithm for factoring linear multivariate polynomials over the finite field of order 2 and show that it is related to a variant of the well-known NP-complete Hypergraph 2-Coloring Problem.
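As a small worked example (ours, not taken from the paper), the ANF formula below AND-decomposes into two factors over disjoint variable sets, which is precisely a factorization of the corresponding multilinear polynomial over GF(2):

    $x_1 x_3 \oplus x_1 x_4 \oplus x_2 x_3 \oplus x_2 x_4 \;=\; (x_1 \oplus x_2)\,(x_3 \oplus x_4)$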
Sergei A. Romanenko (Keldysh Institute of Applied Mathematics, Russian Academy of Sciences)
ABSTRACT. The paper describes the design and implementation of a certifying supercompiler, TT Lite SC, which takes an input program and produces a residual program paired with a proof that the residual program is equivalent to the input one. As far as we can judge from the literature, this is the first implementation of a certifying supercompiler for a non-trivial higher-order functional language. The proofs generated by TT Lite SC can be verified by a type checker which is independent of TT Lite SC and is not based on supercompilation. This is essential in cases where the reliability of results obtained by supercompilation is of fundamental importance. Currently, the proofs can be either verified by the type checker built into TT Lite or converted into Agda programs and checked by the Agda system.
ABSTRACT. We describe pretty-printing combinators with choice which provide optimal document layout in polynomial time. Bottom-up tree rewriting and dynamic programming (BURS) are used to calculate a set of possible layouts for the given output width. We also present the results of an evaluation of the suggested approach and discuss its application to the implementation of pretty-printers.
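To fix intuition, combinators with choice can be sketched as follows (a deliberately naive Haskell sketch that enumerates all layouts; the point of the paper is that the BURS-based algorithm avoids this exponential search):

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    data Doc = Text String
             | Doc :<>: Doc   -- horizontal composition
             | Doc :$$: Doc   -- vertical composition (line break)
             | Doc :?: Doc    -- choice between two alternative layouts

    -- Enumerate all candidate layouts of a document.
    render :: Doc -> [String]
    render (Text s)   = [s]
    render (a :<>: b) = [ x ++ y         | x <- render a, y <- render b ]
    render (a :$$: b) = [ x ++ "\n" ++ y | x <- render a, y <- render b ]
    render (a :?: b)  = render a ++ render b

    -- Among the layouts fitting the width, prefer the one with fewest lines.
    best :: Int -> Doc -> Maybe String
    best w d = case filter fits (render d) of
                 []      -> Nothing
                 layouts -> Just (minimumBy (comparing (length . lines)) layouts)
      where fits s = all ((<= w) . length) (lines s)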
ABSTRACT. In recent decades, the open-maps approach has been successfully applied to structure and unify a wide range of behavioral equivalences for concurrency. In this paper, we prove that timed history preserving (thp-) bisimulation can be captured by a natural instance of open-maps bisimilarity when timed causal trees are chosen as the model category. In particular, we define a category of timed causal trees and an accompanying subcategory of causal timed words, for which the notion of open maps is developed. We then use the open-maps framework to obtain an abstract bisimilarity and, finally, establish the coincidence of the obtained abstract bisimilarity and thp-bisimulation.
Mercedes Hidalgo-Herrero (Universidad Complutense)
Yolanda Ortega-Mallén (Universidad Complutense de Madrid)
ABSTRACT. Launchbury defines a natural semantics for lazy evaluation and proposes an alternative call-by-name version which introduces indirections and does not update closures. The equivalence of both semantics is not straightforward.
In the present paper we focus on the introduction of indirections during beta-reduction and study how the heaps, i.e., the sets of bindings, obtained with this kind of evaluation relate to the heaps produced by substitution. As a heap represents the context of evaluation for a term, we first define an equivalence that identifies terms with the same meaning under a given context. This notion of "context" equivalence is then extended to heaps. Finally, we define a relation between heap/term pairs to establish the equivalence between Launchbury's alternative natural semantics and its corresponding version without indirections.
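The difference can be sketched on a normalized lambda calculus where arguments are variables (a minimal Haskell sketch of the two flavors of beta-reduction, not the paper's formal semantics):

    type Var  = String
    data Term = Var Var | Lam Var Term | App Term Var
    type Heap = [(Var, Term)]

    -- With indirections: bind x to the argument variable y in the heap.
    betaIndirect :: Heap -> Term -> (Heap, Term)
    betaIndirect h (App (Lam x e) y) = ((x, Var y) : h, e)
    betaIndirect h t                 = (h, t)

    -- With substitution: replace x by y inside e; no binding is created.
    betaSubst :: Heap -> Term -> (Heap, Term)
    betaSubst h (App (Lam x e) y) = (h, subst x y e)
    betaSubst h t                 = (h, t)

    subst :: Var -> Var -> Term -> Term
    subst x y (Var v)   = Var (if v == x then y else v)
    subst x y (Lam v e) | v == x    = Lam v e
                        | otherwise = Lam v (subst x y e)
    subst x y (App f v) = App (subst x y f) (if v == x then y else v)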
Yifei Chen (School of Information Science, Nanjing Audit University)
ABSTRACT. We combine differential dynamic logic (dL) for reasoning about the possible behavior of hybrid systems with metric temporal logic (MTL) for reasoning about their metric temporal behavior during operation. Our logic supports verification of metric temporal, non-metric temporal, and non-temporal properties of hybrid systems and provides a uniform treatment of discrete transitions and continuous evolution. For our combined logic, we generalize the semantics of dynamic modalities to refer to hybrid traces instead of final states, and prove that this gives a conservative extension of dL for hybrid systems. On this basis, we provide a modular verification calculus that reduces correctness of metric temporal behavior of hybrid systems to generic temporal reasoning and then to non-temporal reasoning, and prove that we obtain a complete axiomatization relative to the non-temporal base logic dL. Using this calculus, we analyze metric temporal properties in a local lane control system.
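For illustration, a hypothetical property of the shape handled by the combined calculus (our example, not a formula from the paper) is

    $[\alpha]\;\Box_{\leq 5}\;(d \geq d_{\min})$

stating that along every trace of the hybrid program $\alpha$, the distance $d$ stays above the safety margin $d_{\min}$ throughout the first 5 time units; here $[\alpha]$ is a dL dynamic modality reinterpreted over traces and $\Box_{\leq 5}$ a metric temporal operator.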
Sandeep Patil (Lulea University of Technology)
Cheng Pang (Lulea University of Technology)
Valeriy Vyatkin (Lulea University of Technology)
ABSTRACT. The Function Blocks Architecture of the IEC 61499 standard is an executable component model for distributed embedded control systems that combines block diagrams and state machines. The standard aims at the portability of control applications, which is however hampered by ambiguities in the descriptions of its execution semantics. In recent years several execution models have been implemented in different software tools that generate mutually incompatible code. This paper proposes a general approach to neutralizing these semantic ambiguities by a formal description of IEC 61499 in abstract state machines (ASM). The model embodies all known execution semantics of function blocks. The ASM model is further translated into the input format of the SMV model checker, which is used to formally verify properties of applications. In this way the proposed verification framework enables portability checking of component-based control applications across different implementation platforms compliant with the IEC 61499 standard. The paper first discusses the different existing execution semantics of function blocks and the portability issues across different IEC 61499 tools. Then a modular formal model of the operational semantics of function blocks in ASM is introduced and exemplified by the case of cyclic execution semantics for a composite function block. Subsequently, the SMV model is generated and model checking is illustrated on a simple test case.
Oleg Kudinov (Sobolev Institute of Mathematics)
ABSTRACT. We introduce and study the theory of index sets in the context of computable analysis. If we consider a class of effectively open sets or a class of computable real-valued functions, a sufficiently simple index set of the class reflects its simplicity. We construct computable numberings for effectively open sets and computable real-valued functions and show that these numberings are principal. We calculate index sets for important problems, such as root realisability, set equivalence and inclusion, and function equivalence, which naturally arise in continuous constraint solvability. Using the developed techniques, we present clear proofs of a generalised Rice-Shapiro theorem for effectively open sets of reals and an analogue of the Rice theorem for computable real-valued functions. We illustrate how index sets can be used to estimate the complexity of continuous constraints in the setting of the Kleene-Mostowski arithmetical hierarchy.
ABSTRACT. A new security concept called process opacity is formalized and studied. For processes which are process opaque with respect to a given predicate over processes, an intruder cannot learn the validity of this predicate for any subsequent state of the computation. We discuss different extensions of this concept as well as its properties, and we put restrictions on the predicates in such a way that we obtain decidable security properties.
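One plausible formal reading (our sketch, not necessarily the paper's exact definition): writing $obs(w)$ for the intruder-visible part of a computation sequence $w$, a process $P$ is process opaque w.r.t. a predicate $\phi$ whenever

    $P \xrightarrow{w} P' \wedge \phi(P') \;\implies\; \exists\, w', P''.\; P \xrightarrow{w'} P'' \wedge obs(w') = obs(w) \wedge \neg\phi(P'')$

so that no observation allows the intruder to conclude that $\phi$ holds in the state reached.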
ABSTRACT. Program specialization is an effective tool for transforming interpreters to compilers. We present the first steps in the construction of a specialization tool chain for JavaScript programs. We report on an application of this tool chain in a case study that transforms a realistic interpreter implemented in JavaScript to a compiler.
The difference from previous work on compiling with program specialization is threefold. First, the interpreter has not been written with specialization in mind. Second, instead of specializing the interpreter, we transform it into a generating extension, i.e., we replace parts of the interpreter's code by a corresponding code generator. Third, the implementation language of the interpreter is not a restricted toy language, but full JavaScript.
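The classic power example illustrates what a generating extension is (a minimal Haskell sketch of the concept; the paper does this for a full JavaScript interpreter):

    -- Ordinary function: both arguments are available only at run time.
    power :: Int -> Int -> Int
    power n x = if n == 0 then 1 else x * power (n - 1) x

    -- Generating extension: the static argument n drives code generation,
    -- unfolding the recursion and leaving the dynamic x in residual code.
    powerGen :: Int -> String
    powerGen 0 = "1"
    powerGen n = "x * " ++ powerGen (n - 1)

    -- powerGen 3 == "x * x * x * 1", the residual program for power 3.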
Bingtian Xue (Department of Computer Science, Aalborg University)
Kim Guldstrand Larsen (Department of Computer Science, Aalborg University)
ABSTRACT. Labeled weighted transition systems (LWSs) are transition systems labeled with actions and real numbers, the numbers representing the costs of the corresponding actions in terms of resources. Recursive Weighted Logic (RWL) is a multimodal logic that expresses qualitative and quantitative properties of LWSs. It is endowed with simultaneous recursive equations, which specify the weakest properties satisfied by the recursive variables. We demonstrate that RWL is sufficiently expressive to characterize weighted bisimilarity of LWSs. In addition, we prove that the logic is decidable, i.e., the satisfiability problem for RWL can be algorithmically solved.
Evgeny Novikov (Institute for System Programming of RAS)
Vadim Mutilin (Institute for System Programming of RAS)
Alexey Khoroshilov (Institute for System Programming of RAS)
ABSTRACT. Linux kernel modules work in an event-driven environment: on initialization each module registers callbacks that are invoked by the kernel when appropriate. Static verification of such software has to take into consideration all feasible scenarios of interaction between modules and their environment. The paper presents a new method for modeling the environment which makes it possible to automatically build an environment model for a particular kernel module based on an analysis of the module's source code and a set of specifications describing patterns of possible interactions. Specifications can describe both generic patterns that are widespread in the Linux kernel and detailed patterns specific to a particular subsystem. The method was implemented in the Linux Driver Verification Toolkit and was used for static verification of modules from almost all Linux kernel subsystems.
Ekawit Nantajeewarawat (Computer Science, Sirindhorn Intl. Inst. of Tech., Thammasat University)
ABSTRACT. A query-answering problem (QA problem) is concerned with finding all ground instances of a query atomic formula that are logical consequences of a given logical formula describing the background knowledge of the problem. A method for solving QA problems on full first-order logic has been developed based on the equivalent transformation (ET) principle, where a given QA problem on first-order logic is converted into a QA problem on extended clauses and is then transformed repeatedly and equivalently into simpler forms until its answer set can be readily obtained. In this paper, such a clause-based solution is extended with a new method for effectively utilizing a universally quantified if-and-only-if statement defining a predicate, called an iff-formula. The background knowledge of a given QA problem is separated into two parts: (i) a conjunction of iff-formulas and (ii) other types of knowledge. Special ET rules for manipulating iff-formulas are introduced. The new solution method deals both with iff-knowledge in first-order logic and with a set of extended clauses, and its application is illustrated.
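For instance, a universally quantified statement of the form

    $\forall x\,\bigl(p(x) \leftrightarrow \exists y\,(q(x,y) \wedge r(y))\bigr)$

is an iff-formula defining the predicate $p$ (our illustrative example of the shape, not one taken from the paper).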
ABSTRACT. Distillation is a fully automatic program transformation that can yield superlinear program speedups. Bisimulation is a key to the proof that distillation is correct, i.e., preserves semantics; however, the proof, based on observational equivalence, is insensitive to program running times. This paper studies some examples of how distillation gives superlinear speedups, showing its working and effects on some "old chestnut" programs well known from the early program transformation literature. An interesting part of the study is the question: how can a program running in time O(n) be bisimilar to a program running in time O(n^2)?
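A classic instance of such a speedup (our Haskell sketch of the kind of "old chestnut" studied, not the paper's actual input or output) is list reversal: the naive version runs in O(n^2) because of repeated appends, while the accumulator version runs in O(n), yet the two are observationally equivalent.

    -- Naive reversal: each recursive step appends to the end, O(n^2) overall.
    nrev :: [a] -> [a]
    nrev []     = []
    nrev (x:xs) = nrev xs ++ [x]

    -- Accumulator version: a single O(n) traversal.
    arev :: [a] -> [a]
    arev xs = go xs []
      where go []     acc = acc
            go (y:ys) acc = go ys (y : acc)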
ABSTRACT. We describe the Lingva tool for generating and proving complex program properties using the recently introduced symbol elimination method. We present implementation details and report on a large number of experiments using academic benchmarks and open-source software programs. Our experiments show that Lingva can automatically generate quantified invariants, possibly with alternation of quantifiers, over integers and arrays. Moreover, Lingva can be used to prove program properties expressing the intended behavior of programs.
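A hypothetical invariant of the kind meant here (our illustration, not Lingva's actual output), for a loop that has copied the positive elements of an array $a$ into the first $k$ cells of an array $b$, exhibits the characteristic quantifier alternation:

    $\forall i\,\exists j\;\bigl(0 \leq i < k \rightarrow (0 \leq j < n \wedge b[i] = a[j] \wedge a[j] > 0)\bigr)$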
Kees Verstoep (Vrije Universiteit, Fac. Sciences, dept. Computer Science)
Henri Bal (Vrije Universiteit)
Wan Fokkink (Vrije Universiteit Amsterdam)
ABSTRACT. In model checking, abstractions can cause spurious results, which need to be verified in the concrete system to gain conclusive results. Verification based on multi-valued model checking can distinguish conclusive and inconclusive results, while increasing precision over traditional two-valued over- and under-abstractions. This paper describes the theory and implementation of multi-valued model checking for Promela specifications. We believe our tool Bonsai is the first four-valued model checker capable of multi-valued verification of parallel models, i.e. consisting of multiple concurrent processes. A novel aspect is the ability to only partially abstract a model, keeping parts of it concrete.
ABSTRACT. Symbolic execution is at the core of many program analysis and transformation techniques, like partial evaluation, test-case generation or model checking. In this paper, we introduce a symbolic execution semantics for the functional and concurrent (based on message passing) language Erlang. We illustrate our approach through some examples. We also discuss the main design decisions and some scalability issues.
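The essence of symbolic execution can be sketched on a toy expression language (a minimal Haskell sketch for intuition; the paper handles Erlang, including message passing): execution forks at conditionals, and every result is paired with the path constraints under which it is reached.

    data Expr = Lit Int | Sym String | Add Expr Expr
              | Not Expr | If Expr Expr Expr

    type Path = [Expr]   -- branch conditions assumed true on this path

    -- Symbolic execution: fork at conditionals, collect path constraints.
    exec :: Expr -> [(Expr, Path)]
    exec (If c t e) = [ (v, c : pc)     | (v, pc) <- exec t ]
                   ++ [ (v, Not c : pc) | (v, pc) <- exec e ]
    exec (Add a b)  = [ (Add va vb, pa ++ pb)
                      | (va, pa) <- exec a, (vb, pb) <- exec b ]
    exec (Not e)    = [ (Not v, pc) | (v, pc) <- exec e ]
    exec e          = [ (e, []) ]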
ABSTRACT. Supercompilation is a method of transforming programs to obtain equivalent programs that perform fewer computation steps and allocate less memory. A transformed program defines new functions that are combinations of functions from the original program, but the datatypes in the transformed program are a subset of the datatypes defined in the original program. We remedy this by extending supercompilation to create new datatypes.
We do this by creating new constructors that combine several constructors from the original program in a way reminiscent of how supercompilation combines several functions to create new functions.
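A hypothetical example of the intended effect (ours, in Haskell; not the paper's transformation): a function consuming two lists in lock step could, after constructor combination, traverse a single datatype whose constructor merges one cons cell from each input.

    -- The new datatype: one constructor combines a cons cell of each list.
    data ZipList = ZNil | ZCons Int Int ZipList

    -- Original program: walks two separate lists in lock step.
    addLists :: [Int] -> [Int] -> [Int]
    addLists (x:xs) (y:ys) = x + y : addLists xs ys
    addLists _      _      = []

    -- After constructor combination: a single traversal of one structure.
    addFused :: ZipList -> [Int]
    addFused ZNil           = []
    addFused (ZCons x y zs) = x + y : addFused zs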
Elena Sidorova (A.P. Ershov Institute of Informatics Systems, RAN)
Evgeny Bodin (A.P. Ershov Institute of Informatics Systems, RAN)
ABSTRACT. The paper presents a multi-agent approach for ontology population based on natural language semantic analysis. In this multi-agent model, agents of two main kinds interact: the information agents correspond to meaningful units of the information being retrieved, and the rule agents implement the population rules of a given ontology and a semantic-syntactic model of the language.
Vitaly Martynovich (Novosibirsk State University)
ABSTRACT. A probabilistic generalization of formal concepts is presented: the construction is resistant to noise in the data and makes it possible to consider both an attribute and its absence (a context with negation). This generalization is obtained from the notion of formal concepts through their definition as fixed points of implications, where implications, possibly with negations, are replaced by probabilistic laws. We prove that the fixed points based on the probabilistic implications are consistent and thus determine probabilistic formal concepts, and we establish the correspondence between probabilistic formal concepts and classic formal concepts. Previously the following was confirmed experimentally: if formal concepts are found on some data and the data is distorted by random noise, then the noisy data context still allows the complete reconstruction of the initial context's concepts. Finally, a demonstration of the formation of probabilistic formal concepts is given.
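For reference, in classical formal concept analysis a formal concept of a context $(G, M, I)$ is a pair $(A, B)$ with $A \subseteq G$ and $B \subseteq M$ such that $A' = B$ and $B' = A$, where

    $A' = \{\, m \in M \mid \forall g \in A\colon (g, m) \in I \,\}, \qquad B' = \{\, g \in G \mid \forall m \in B\colon (g, m) \in I \,\}$

The paper's probabilistic concepts generalize the fixed-point characterization of such pairs, with implications replaced by probabilistic laws.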
ABSTRACT. Diagramming tools range from manual applications with freeform drawings, through notation-specific tools such as those for UML or SDL diagrams, to programmatic tools like Pic or TikZ. In such tools, the diagrams are the end product, not available for further use: if two diagrams are related, we typically cannot derive one from the other, or connect them to the systems they depict. We present Skeblle, a new kind of diagramming tool where diagrams are backed by a simple generic model, a graph. Skeblle can be used as an ordinary diagramming tool with manual control, or as a programmatic tool with a command language. Skeblle diagrams automatically have a backing graph, which can be manipulated declaratively by graph operators or imperatively through a graph API; any graph manipulation is reflected back in the diagram. Furthermore, graph nodes can have related data, retrieved via URLs and processed as part of graph manipulations.
In combination, these facilities give us a novel tool that feels like a simple diagramming tool, but is capable of being rapidly customized to create diagrams that are active: they can change to reflect changes in systems they depict, and may be operated upon to compute related diagrams.
Skeblle diagrams can be sketched, created via commands, or described using a Graph Markup Language (GML) and Graph Style Sheets (GSS). An associated graph library includes operators such as projection, restriction, and composition that can rewrite graphs, while visual customization is possible using GSS. Skeblle is implemented as a client-side JavaScript program, and the full power of JavaScript is available to users, although graph operators and style sheets suffice for common uses.
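A minimal sketch of the kind of graph operator meant here (ours, in Haskell for illustration; Skeblle itself exposes such operators in JavaScript):

    -- A graph as a node list plus an edge list.
    type Graph a = ([a], [(a, a)])

    -- Restriction: keep the nodes satisfying a predicate and drop edges
    -- that lose an endpoint; the diagram is re-rendered from the result.
    restrict :: (a -> Bool) -> Graph a -> Graph a
    restrict p (ns, es) = (filter p ns, [ (u, v) | (u, v) <- es, p u, p v ])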
We illustrate Skeblle in use for software deployment diagrams, software architecture diagrams, and chemical reaction diagrams. We show how Skeblle can be readily customized for such uses, and how it becomes easy to compute diagrams to illustrate changes such as failures in deployed components, data flows, and reactions.
ABSTRACT. Application development for modern high-performance systems with many cores, i.e., comprising multiple Graphics Processing Units (GPUs) and multi-core CPUs, currently relies on low-level programming approaches like CUDA and OpenCL, which leads to complex, lengthy and error-prone programs. In this paper, we advocate a high-level programming approach for such systems, which relies on two main principles: a) the model is based on the current OpenCL standard, so that programs remain portable across various many-core systems, independently of the vendor, and all inherent parallel optimizations can be applied; b) the model extends OpenCL with three high-level features which simplify many-core programming and are automatically translated by the system into OpenCL code. The high-level features of our programming model are as follows: 1) computations are conveniently expressed using parallel algorithmic patterns (skeletons); 2) memory management is simplified and automated using parallel container data types (vectors and matrices); 3) an automatic data (re)distribution mechanism allows for implicit data movements between GPUs and ensures scalability when using multiple GPUs. These well-defined programming constructs allow for formal transformations on programs which can be used both in the process of program development and in the compilation and optimization phase. We demonstrate how our programming model and its implementation are used to express parallel applications on one- and two-dimensional data, and we report first experimental results evaluating our approach in terms of programming effort and performance.
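The flavor of skeleton-based programming can be conveyed by a tiny sketch (ours, in Haskell for illustration only; the paper's model extends OpenCL itself): computations are composed from algorithmic patterns over container types, with no explicit kernels, buffers, or data transfers.

    import Data.Vector (Vector)
    import qualified Data.Vector as V

    -- Dot product composed from patterns (zip, then reduce) over a
    -- container type; partitioning and data movement remain implicit.
    dotProduct :: Vector Float -> Vector Float -> Float
    dotProduct xs ys = V.foldl' (+) 0 (V.zipWith (*) xs ys)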
Dmitry Bushin (A.P. Ershov Institute of Informatics Systems, Siberian Branch of the Russian Academy of Sciences)
ABSTRACT. This note works towards a framework for developing, studying and comparing observational semantics in the setting of a real-time true-concurrent model. In particular, we introduce trace and bisimulation equivalences based on interleaving, step, and causal net semantics in Petri nets with strong timing. We treat the relationships between the equivalences, showing the discriminating power of the approaches of the linear-time vs. branching-time and interleaving vs. partial-order spectra. The results obtained allow one to study in complete detail the timing behaviour of processes in addition to their degrees of relative concurrency and nondeterminism.
Stefania Gnesi (ISTI-CNR)
Franco Mazzanti (Istituto di Scienza e Tecnologie dell'Informazione "A. Faedo" - Consiglio Nazionale delle Ricerche (ISTI-CNR))
ABSTRACT. Formal modelling and verification of variability concepts in product families has been the subject of extensive study in the literature on Software Product Lines. In recent years, we have laid the basis for the use of modal specifications and branching-time temporal logics for the specification and analysis of behavioural variability in product family definitions. A critical limitation of this formalization is the lack of an adequate representation of the data that may need to be described when considering real systems. To this end, we now extend the modelling and verification environment that we have developed for specifications interpreted over Modal Transition Systems by adding the possibility to include data in the specifications. In concert with this, we also extend the variability-specific modal logic and the associated special-purpose model checker VMC. As a result, it becomes possible to efficiently verify formulas over possibly infinite-state systems by using the on-the-fly bounded model-checking algorithms implemented in the model checker. We illustrate our approach by means of a simple yet intuitive example: a bike-sharing system.