PSI 2015: 10TH INTERNATIONAL ANDREI ERSHOV MEMORIAL CONFERENCE
PROGRAM FOR WEDNESDAY, AUGUST 26TH

08:30-09:30 Session 10: Keynote Speech
08:30
Scheduling for Manycore Systems

ABSTRACT. Since the early nineties of the last century, when massively parallel systems became prevalent, scheduling and mapping parallel programs to parallel computers has been a pressing issue. New algorithms had to be developed to partition a large parallel machine among multiple parallel programs. In addition, for each parallel program it has to be decided which process runs on which processor to minimize the overall execution time. Communication patterns between the processes of a program had to be taken into account to find a good match with the interconnect topology of the parallel computer. Nowadays, massive parallelism is no longer limited to the area of high-performance computing. Standard graphics processors offer high degrees of data parallelism. Multi- and manycore processors are commodities, incorporated in every computer, notebook, smartphone, and even in embedded control devices. So scheduling for manycore processors has reached the mainstream. While at the beginning the efficient usage of processor cores and communication links was of major concern, we now additionally have to deal with energy consumption, cache efficiency, and fault tolerance.

The talk gives an overview of 25 years of research in this area, presenting a wide spectrum of algorithmic solutions, including approaches that borrow ideas from other areas such as physics, economics, and neurology.

09:30-10:00 Coffee Break
10:00-12:30 Session 11: Formalisms
10:00
Implementation and Evaluation of Contextual Natural Deduction for Minimal Logic

ABSTRACT. The contextual natural deduction calculus (NDc) extends the usual natural deduction calculus (ND) by allowing the implication introduction and elimination rules to operate on formulas that occur inside contexts. It has been shown that, asymptotically in the best case, NDc-proofs can be quadratically smaller than the smallest ND-proofs of the same theorems. In this paper we describe the first implementation of a theorem prover for minimal logic based on NDc. Furthermore, we empirically compare it to an equally simple ND theorem prover on thousands of randomly generated conjectures.

10:30
Equivalence of Finite-Valued Symbolic Finite Transducers
SPEAKER: unknown

ABSTRACT. Symbolic Finite Transducers, or SFTs, are a representation of finite transducers that annotates transitions with logical formulae to denote sets of concrete transitions. This representation has practical advantages in applications such as web security analysis, where it provides a way to succinctly represent web sanitizers that operate on large alphabets. More importantly, the representation is also conducive to efficient analysis using state-of-the-art theorem proving techniques. Equivalence checking plays a central role in deciding properties of SFTs such as idempotence and commutativity. We show that equivalence of finite-valued SFTs is decidable, i.e., equivalence is decidable when the number of possible outputs for a given input is bounded by a constant.
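As a rough illustration of the SFT idea described in the abstract (a toy sketch, not the authors' implementation; all names here are hypothetical), transitions carry a predicate over input characters and an output function, rather than concrete character labels:

```python
# Minimal SFT sketch: a transition is (source, guard, output, target),
# where guard is a predicate over characters and output maps a character
# to an output string. One guarded transition denotes many concrete ones.
from typing import Callable, List, Set, Tuple

Transition = Tuple[int, Callable[[str], bool], Callable[[str], str], int]

class SFT:
    def __init__(self, start: int, finals: Set[int], transitions: List[Transition]):
        self.start = start
        self.finals = finals
        self.transitions = transitions

    def run(self, word: str) -> str:
        """Transduce `word`, taking the first applicable transition per character."""
        state, out = self.start, []
        for ch in word:
            for (src, guard, output, dst) in self.transitions:
                if src == state and guard(ch):
                    out.append(output(ch))
                    state = dst
                    break
            else:
                raise ValueError(f"no transition for {ch!r} in state {state}")
        if state not in self.finals:
            raise ValueError("ended in non-final state")
        return "".join(out)

# A one-state sanitizer over a large alphabet: escape '<' and '>',
# copy everything else (the kind of web sanitizer the abstract mentions).
escape = SFT(
    start=0,
    finals={0},
    transitions=[
        (0, lambda c: c == "<", lambda c: "&lt;", 0),
        (0, lambda c: c == ">", lambda c: "&gt;", 0),
        (0, lambda c: c not in "<>", lambda c: c, 0),
    ],
)

print(escape.run("<b>hi</b>"))  # &lt;b&gt;hi&lt;/b&gt;
```

Because this sanitizer always produces exactly one output per input, it is in particular finite-valued, the class for which the paper proves equivalence decidable.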

11:00
Behavioural Analysis of Sessions using the Calculus of Structures
SPEAKER: Ross Horne

ABSTRACT. This paper describes an approach to the behavioural analysis of sessions. The approach is made possible by the calculus of structures --- a deep inference proof calculus, generalising the sequent calculus, in which inference rules can be applied in any context. The approach involves specifications of global and local sessions inspired by the Scribble language. The calculus features a novel operator that synchronises parts of a protocol that must be treated atomically. Firstly, the calculus can be used to determine whether local sessions can be composed in a type-safe fashion such that they are capable of successfully terminating. Secondly, the calculus defines a subtyping relation for sessions that allows causal dependencies to be weakened while retaining termination potential. Consistency and complexity results follow from the proof theory of the calculus of structures.

11:30
Relaxed Parsing of Regular Approximations of String-Embedded Languages
SPEAKER: unknown

ABSTRACT. We present a technique for the syntax analysis of a regular set of input strings. This problem is relevant for the analysis of string-embedded languages, where a host program generates clauses of an embedded language at run time. Our technique is based on a generalization of the RNGLR algorithm, which inherently allows us to construct a finite representation of the parse forest for a regularly approximated set of input strings. This representation can be further utilized for semantic analysis and transformations in the context of reengineering, code maintenance, program understanding, etc. The approach in question implements relaxed parsing: non-recognized strings in the approximation set are ignored with no error detection.

12:00
Branching Processes of Timed Petri Nets
SPEAKER: unknown

ABSTRACT. The intention of this note is to extend the semantic framework of branching processes by Couvreur et al. \cite{CPW13}, suitable for describing the behavior of general Petri nets with interleaving semantics, to timed general Petri nets with step semantics, in order to characterize the unfolding as the greatest element of a complete lattice of branching processes. In the case of the maximal step semantics of timed Petri nets, we impose some restrictions on the model behavior and define a new class of branching processes and unfoldings, called apt ones, which are shown to satisfy the desired order-theoretic properties.

12:30-13:00 Session 12: Short Papers
12:30
Towards using exact real arithmetic for initial value problems
SPEAKER: unknown

ABSTRACT. In this paper we report on recent developments of the iRRAM software for exact real computations. We incorporate novel methods and tools to generate solutions of initial value problems for ODE systems with polynomial right-hand sides (PIVPs). The chosen representation allows evaluation of the solutions with arbitrary precision on their complete open intervals of existence. In consequence, the set of operators implemented in the iRRAM software (such as function composition, computation of limits, or evaluation of Taylor series) is expanded by PIVP solving. This increases the future applicability of the iRRAM software to real-world problems such as the verification of safety-critical systems.
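To make the PIVP setting concrete (a toy in exact rational arithmetic, not iRRAM's representation), the simplest polynomial IVP is y' = y, y(0) = 1, whose Taylor coefficients at 0 follow from the standard recurrence and can be computed exactly:

```python
# Toy exact-arithmetic sketch: Taylor coefficients of the solution of the
# PIVP y' = y, y(0) = 1 (i.e. exp), via the recurrence c_{k+1} = c_k / (k+1).
from fractions import Fraction

def pivp_taylor_exp(n: int) -> list:
    """Return exact Taylor coefficients c_0 .. c_n of the solution."""
    c = [Fraction(1)]                 # c_0 = y(0) = 1
    for k in range(n):
        c.append(c[k] / (k + 1))      # differentiate y' = y term by term
    return c

coeffs = pivp_taylor_exp(5)
print(coeffs)  # [1, 1, 1/2, 1/6, 1/24, 1/120]
```

Working with exact rationals means the coefficients carry no rounding error, so the partial sums can later be evaluated to any requested precision, which is the spirit of exact real computation.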

12:45
Rule-based Content Adaptation for Online Communication
SPEAKER: Zaenal Akbar

ABSTRACT. In order to reach the widest audience possible in a multi-channel online communication environment, content has to be fitted to the specifications of each channel and carefully edited to attract as much attention as possible from the audience on every channel. This process, called 'content adaptation', is mainly performed manually and has become more challenging due to the wide variety of channel specifications and audience characteristics. In this work, we present a solution to this challenge by developing a semi-automatic content adaptation system with rule-based technologies at its core. We propose several adaptation operators, where an operator consumes input content, performs a specific adaptation, and produces output content. An adaptation process can thus be performed by using a single operator or by chaining two or more operators. We show the effectiveness of our solution by implementing a widely used best practice of content adaptation for content dissemination to social media channels.
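The operator-chaining idea from the abstract can be sketched as plain function composition (an illustrative toy; the operator names and channel limits below are hypothetical, not from the paper):

```python
# Each adaptation operator is a content -> content function; a process
# is a chain of operators applied left to right.
from functools import reduce
from typing import Callable, List

Operator = Callable[[str], str]

def truncate(limit: int) -> Operator:
    """Fit content to a channel's length limit, appending an ellipsis."""
    def op(content: str) -> str:
        return content if len(content) <= limit else content[: limit - 1] + "…"
    return op

def add_hashtags(tags: List[str]) -> Operator:
    """Append channel-specific hashtags to draw audience attention."""
    def op(content: str) -> str:
        return content + " " + " ".join("#" + t for t in tags)
    return op

def chain(*ops: Operator) -> Operator:
    """Compose operators left to right into a single adaptation process."""
    return lambda content: reduce(lambda c, op: op(c), ops, content)

# Hypothetical adaptation process for a short-message channel:
adapt_for_microblog = chain(truncate(140), add_hashtags(["news"]))
print(adapt_for_microblog("Product launch announced today."))
# Product launch announced today. #news
```

The design choice mirrors the abstract: because every operator has the same input and output type, new channel-specific rules slot into existing chains without changing the other operators.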

13:00-14:00 Lunch Break
14:00-16:00 Session 13: Verification
14:00
Program Scheme Technique to Solve Propositional Program Logics Revised

ABSTRACT. Propositional program (dynamic, temporal, and process) logics are a basis for the logical specification of program systems (including multiagent, distributed, and parallel ones). Therefore, the development of efficient algorithms (decision procedures) for validity checking (identical validity), provability (logical calculi), and verification in models (model checking) is an important task in the theory of programming.

The essence of the program scheme technique consists in the following. Formulas of a program logic are translated into nondeterministic monadic program schemes (so-called Yanov schemata) so that the scheme is total (i.e., terminates) in all special interpretations iff the initial formula is identically true. Since the halting problem in special interpretations is decidable (with an exponential complexity estimate), this implies the decidability of the initial program logic (and yields a complexity upper bound).

The initial variant of the technique was developed by the author and Valery A. Nepomnjaschy in 1983-1987 for variants of Propositional Dynamic Logic (PDL). In 1997 it was extended to the propositional Mu-Calculus. In all these cases a special algorithm was used to solve the generalized halting problem.

The new development of the program schemata technique presented in the paper consists in a revised decision procedure for the halting problem: it is now model checking of a special fairness property (expressed by some fixed Mu-Calculus formula) in finite models represented by Yanov schemata flowcharts. An exponential lower bound for the transformation of Mu-Calculus formulas into equivalent guarded form follows as a consequence of the new version of the decision procedure.

14:30
Lightweight Verification Support for DSLs Defined with XText
SPEAKER: Thomas Baar

ABSTRACT. A Domain-Specific Language (DSL) allows the succinct modeling of phenomena in a problem domain. Modern DSL tools make it easy for a language designer to define the syntax of a new DSL, to specify code generators, or to build a new DSL on top of existing DSLs. Based on the language specification, the DSL tool then generates extensive editors with syntax highlighting, code completion, automatic refactoring, and many other features, including code generators.

Support for specifying a language's semantics and for verification has so far been rather neglected by existing DSL tools. Verification support for a DSL (or for general-purpose modeling languages) has in many cases been realized by adopting a transformational approach: the model is (1) transformed into an input for a verification tool, then (2) the verification tool is executed on the generated input, and (3) the results of this run are translated back into terms of the original modeling language.

In this paper, we adopt the opposite approach: we extend the DSL definition itself by a verification calculus. As a result, the DSL tool can generate adequate verification support that is tightly integrated into the already existing tool support. We base our work on the XText framework; the proof calculus is formulated in the DSL XSemantics.

15:00
Checking Several Requirements at once by CEGAR
SPEAKER: Vitaly Mordan

ABSTRACT. At present, static verifiers based on Counterexample-Guided Abstraction Refinement (CEGAR) can prove the correctness of a program against a specified requirement, find a violation in the program and stop the analysis, or exhaust the given resources without producing any useful result. If we use this approach to check several requirements at once, then finding a first violation of some requirement, or exhausting the resources for some requirement, will prevent checking the program against the other requirements. In particular, we may miss violations of some requirements. That is why, in practice, each requirement is usually checked separately. However, static verifiers perform similar actions when checking the same program against different requirements, so a lot of resources are wasted. This paper presents a new CEGAR-based method for software static verification that is aimed at checking programs against several requirements at once while obtaining the same result as basic CEGAR, which checks requirements one by one. To this end, the suggested method divides resources equally among the requirements and, after finding a violation of a requirement, excludes that requirement and continues the analysis. We conducted experiments on Linux kernel modules, in which an implementation of the suggested method reduced total verification time by a factor of 5. The total number of results divergent from basic CEGAR was about 2%.
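The resource-division idea can be sketched as a scheduling loop (a toy sketch of the scheduling only; the verifier itself is mocked here, and the real method refines abstractions rather than calling an opaque step function):

```python
# Sketch: split a resource budget equally among active requirements and,
# once a requirement is decided (SAFE or VIOLATION), exclude it and keep
# checking the others instead of stopping the whole analysis.
from typing import Callable, Dict, List

def check_all(requirements: List[str],
              verify_step: Callable[[str, int], str],
              total_budget: int) -> Dict[str, str]:
    active = list(requirements)
    results: Dict[str, str] = {}
    while active and total_budget > 0:
        slice_ = max(1, total_budget // len(active))  # equal division
        for req in list(active):
            verdict = verify_step(req, slice_)        # SAFE / VIOLATION / UNKNOWN
            total_budget -= slice_
            if verdict != "UNKNOWN":
                results[req] = verdict
                active.remove(req)                    # exclude, continue others
            if total_budget <= 0:
                break
    for req in active:                                # undecided when budget ran out
        results[req] = "UNKNOWN"
    return results

# Mock verifier: each requirement needs a fixed amount of effort to decide.
effort = {"r1": 2, "r2": 5, "r3": 100}
spent = {r: 0 for r in effort}

def verify_step(req: str, budget: int) -> str:
    spent[req] += budget
    if spent[req] >= effort[req]:
        return "VIOLATION" if req == "r1" else "SAFE"
    return "UNKNOWN"

results = check_all(["r1", "r2", "r3"], verify_step, 20)
print(results)  # {'r1': 'VIOLATION', 'r2': 'SAFE', 'r3': 'UNKNOWN'}
```

Note how the violation of r1 does not abort the run: r2 is still proved safe, and only the expensive r3 remains undecided when the budget is exhausted, matching the behaviour the abstract argues for.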

15:30
Constraint Solving for Verifying Modal Specifications of Workflow Nets with Data
SPEAKER: unknown

ABSTRACT. To improve efficiency and productivity, companies work with workflows that allow them to manage the tasks and steps of business processes. Furthermore, modalities have been designed to allow loose specifications by indicating whether activities are necessary or admissible. This paper aims at verifying modal specifications of coloured workflow nets with data assigned to the tokens and modified by transitions. To this end, executions of coloured workflow nets are modelled using constraint systems, and constraint solving is used to verify modal specifications of necessary or admissible behaviours. An implementation supporting the proposed approach and promising experimental results on an issue tracking system constitute a practical contribution.
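As a small illustration of the underlying question (a hypothetical two-activity net; brute-force search over a finite data domain stands in for the paper's constraint solving), verifying that a behaviour is admissible amounts to asking whether some token data satisfies every guard along a path:

```python
# Toy coloured workflow: transitions guard on the data value x carried by
# the token. A path is admissible if some data value fires every guard.
from typing import Callable, Dict, Iterable, List

# Hypothetical guards of an issue-tracking-style workflow.
guards: Dict[str, Callable[[int], bool]] = {
    "review":  lambda x: x > 0,      # issue has a positive priority
    "approve": lambda x: x >= 10,    # only high-priority issues get approved
}

def admissible(path: List[str], domain: Iterable[int]) -> bool:
    """Does some data value in `domain` satisfy all guards along `path`?"""
    return any(all(guards[t](x) for t in path) for x in domain)

print(admissible(["review", "approve"], range(20)))  # True (e.g. x = 10)
print(admissible(["approve"], range(5)))             # False
```

A constraint solver answers the same satisfiability question symbolically, which is what makes the approach scale beyond tiny finite domains.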

16:00-16:30 Coffee Break