PSI 2014: ERSHOV INFORMATICS CONFERENCE

PROGRAM FOR TUESDAY, JUNE 24TH

09:00-10:25 Session 1
09:00
Conference Opening
09:25
Preserving class invariants in object-oriented programming

ABSTRACT. Class invariants are a fundamental notion of object-oriented programming, establishing the connection with abstract data types. They prove tough to maintain, however, in the case of multi-object structures; in particular the "Dependent Delegate Dilemma" arises if a callback into the class can prevent the original routine from maintaining the invariant. This talk surveys the problem and discusses available solutions, including the recently developed technique of "semantic collaboration" (Polikarpova, Tschannen and Furia, 2014).

 

10:25-10:55 Coffee Break
10:55-12:50 Session 2
10:55
Towards High-Level Programming for Systems with Many Cores

ABSTRACT. Application development for modern high-performance systems with many cores, i.e., systems comprising multiple Graphics Processing Units (GPUs) and multi-core CPUs, currently relies on low-level programming approaches like CUDA and OpenCL, which leads to complex, lengthy and error-prone programs. In this paper, we advocate a high-level programming approach for such systems, which relies on two main principles: a) the model is based on the current OpenCL standard, so that programs remain portable across various many-core systems, independently of the vendor, and all inherent parallel optimizations can be applied; b) the model extends OpenCL with three high-level features which simplify many-core programming and are automatically translated by the system into OpenCL code. The high-level features of our programming model are as follows: 1) computations are conveniently expressed using parallel algorithmic patterns (skeletons); 2) memory management is simplified and automated using parallel container data types (vectors and matrices); 3) an automatic data (re)distribution mechanism allows for implicit data movements between GPUs and ensures scalability when using multiple GPUs. These well-defined programming constructs allow for formal transformations on programs, which can be used both in the process of program development and in the compilation and optimization phase. We demonstrate how our programming model and its implementation are used to express parallel applications on one- and two-dimensional data, and we report first experimental results evaluating our approach in terms of programming effort and performance.
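The skeleton idea in feature 1) can be illustrated with a minimal sketch (the name `map_skeleton` and the thread pool are our own illustration, not the paper's API; a thread pool stands in for the OpenCL back end):

```python
# Minimal sketch of an algorithmic skeleton: the user supplies only the
# per-element function; the skeleton hides the parallel plumbing.
# A thread pool stands in here for the OpenCL code generated by the system.
from concurrent.futures import ThreadPoolExecutor

def map_skeleton(f, vector, workers=4):
    """Parallel map over a container type (a plain list here)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, vector))

print(map_skeleton(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```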

11:20
Symbolic String Transformations with Regular Lookahead and Rollback
SPEAKER: Margus Veanes

ABSTRACT. Implementing string transformation routines, such as encoders, decoders, and sanitizers, correctly and efficiently is a difficult and error-prone task. Such routines are often used in security-critical settings and must process large amounts of data both efficiently and correctly. We introduce a new declarative language called Bex that builds on elements of regular expressions, symbolic automata and transducers, and enables a compilation scheme into C, C# or JavaScript that avoids many of the potential sources of errors that arise when such routines are implemented directly. The approach allows correctness analysis using symbolic automata theory that is not possible at the level of the generated code. Moreover, the case studies show that the generated code consistently outperforms hand-optimized code.

11:45
Skeblle: A Tool for Programmable Active Diagrams

ABSTRACT. Diagramming tools range from manual applications with freeform drawings, through notation-specific tools such as those for UML or SDL diagrams, to programmatic tools like Pic or TikZ. In such tools, the diagrams are the end product, not available for further use. If two diagrams are related, we typically cannot derive one from the other, or connect them to the systems they depict. We present Skeblle, a new kind of diagramming tool where diagrams are backed by a simple generic model: a graph. Skeblle can be used as an ordinary diagramming tool with manual control, or as a programmatic tool with a command language. Skeblle diagrams automatically have a backing graph. The graph can be manipulated declaratively with graph operators, or imperatively with a graph API. Any graph manipulation is reflected back in the diagram it backs. Furthermore, graph nodes can have related data retrieved via URLs and processed as part of graph manipulations.

In combination, these facilities give us a novel tool that feels like a simple diagramming tool, but is capable of being rapidly customized to create diagrams that are active: they can change to reflect changes in systems they depict, and may be operated upon to compute related diagrams.

Skeblle diagrams can be sketched, created via commands, or described using a Graph Markup Language (GML) and Graph Style Sheet (GSS). An associated graph library includes operators such as projection, restriction, and composition that can rewrite graphs, while visual customization is possible using GSS. Skeblle is implemented as a client-side JavaScript program, and the full power of JavaScript is available to users, although graph operators and stylesheets suffice for common uses.

We illustrate Skeblle in use for software deployment diagrams, software architecture diagrams, and chemical reaction diagrams. We show how Skeblle can be readily customized for such uses, and how it becomes easy to compute diagrams to illustrate changes such as failures in deployed components, data flows, and reactions.
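A rough sketch of the backing-graph idea: a diagram is a graph, and a declarative operator rewrites it to compute a derived diagram (the operator name follows the abstract's "restriction"; the graph representation is our own illustration, not Skeblle's):

```python
# Illustrative sketch of a diagram's backing graph and one declarative
# operator (restriction); the representation is ours, not Skeblle's.
def restrict(graph, keep):
    """Keep only the nodes satisfying `keep`, and the edges between them."""
    nodes = {n for n in graph["nodes"] if keep(n)}
    edges = [(a, b) for a, b in graph["edges"] if a in nodes and b in nodes]
    return {"nodes": nodes, "edges": edges}

deploy = {"nodes": {"web", "db", "cache"},
          "edges": [("web", "db"), ("web", "cache")]}
# Compute a derived diagram showing the system after the cache fails:
after_failure = restrict(deploy, lambda n: n != "cache")
print(after_failure["edges"])  # [('web', 'db')]
```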

12:10
Modeling Environment for Static Verification of Linux Kernel Modules

ABSTRACT. Linux kernel modules work in an event-driven environment. On initialization, each module registers callbacks that are invoked by the kernel when appropriate. Static verification of such software has to take into consideration all feasible scenarios of interaction between modules and their environment. The paper presents a new method for modeling the environment which makes it possible to automatically build an environment model for a particular kernel module, based on an analysis of the module's source code and a set of specifications describing patterns of possible interactions. The specifications can describe both generic patterns that are widespread in the Linux kernel and detailed patterns specific to a particular subsystem. The method was implemented in the Linux Driver Verification Toolkit and was used for static verification of modules from almost all Linux kernel subsystems.

12:35
Polynomial-Time Optimal Pretty-Printing Combinators with Choice

ABSTRACT. We describe pretty-printing combinators with choice which provide optimal document layout in polynomial time. Bottom-up tree rewriting and dynamic programming (BURS) is used to calculate a set of possible layouts for the given output width. We also present the results of evaluating the suggested approach and discuss its application to the implementation of pretty-printers.
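The combinators-with-choice idea can be sketched in a few lines (a brute-force enumeration stands in for the paper's polynomial-time dynamic programming; the combinator names are conventional, not the paper's):

```python
# Sketch of pretty-printing combinators with choice: a document may
# offer alternative layouts, and the renderer picks the one that fits
# the output width.  Brute-force enumeration replaces the paper's
# polynomial-time BURS-based algorithm, purely for illustration.
def text(s): return ("text", s)
def concat(a, b): return ("concat", a, b)
def line(): return ("line",)
def choice(a, b): return ("choice", a, b)  # either layout is acceptable

def layouts(doc):
    """Enumerate all concrete layouts of a document with choices."""
    tag = doc[0]
    if tag == "text":
        return [doc[1]]
    if tag == "line":
        return ["\n"]
    if tag == "concat":
        return [x + y for x in layouts(doc[1]) for y in layouts(doc[2])]
    return layouts(doc[1]) + layouts(doc[2])  # choice

def best(doc, width):
    """Prefer layouts whose longest line fits `width`, then fewer lines."""
    def key(s):
        longest = max(len(ln) for ln in s.split("\n"))
        return (longest > width, s.count("\n"), longest)
    return min(layouts(doc), key=key)

# A call f(a, b) rendered either horizontally or vertically:
args = choice(text("a, b"), concat(text("a,"), concat(line(), text("b"))))
doc = concat(text("f("), concat(args, text(")")))
print(best(doc, width=10))  # f(a, b)
print(best(doc, width=5))   # vertical fallback: "f(a,\nb)"
```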

13:00-14:00 Lunch Break
14:00-16:00 Session 3
14:00
The Laws of Concurrent Programming
SPEAKER: Tony Hoare

ABSTRACT. At the beginning of my academic career (1968), I wanted to express the axioms of sequential computer programming in the form of algebraic equations.  But I did not know how.  Instead I formalised the axioms as deduction rules for what became known as Hoare logic. 

I now know how I should have done it as an algebra.  Furthermore, I can add a few neat axioms that extend the algebra to concurrent programming.  From the axioms one can prove the validity of the structural proof rules for Hoare logic, as extended to concurrency by separation logic. Furthermore, one can prove the deductive rules for an operational semantics of process algebra, expressed in the style of Milner.  In fact, there is an algebraic time-reversal duality between the two forms of semantic rule.  For me, this was a long-sought unification.

 

15:00
More type inference in Java 8

ABSTRACT. Java 8 extends the language with lambda expressions and functional interfaces, where functional interfaces are interfaces with exactly one abstract method. Functional interfaces represent the types of lambda expressions. The type inference mechanism is extended so that the types of the parameters of lambda expressions can be inferred, but the types of complete lambda expressions still cannot be inferred. In this paper we give a type inference algorithm for complete lambda expressions. This means that fields, local variables, as well as parameters and return types of lambda expressions do not have to be typed explicitly. We therefore define an abstract syntax for a core of Java 8 and formalize the functional interfaces. Finally, we give the type inference algorithm.

 

15:15
Towards Symbolic Execution in Erlang
SPEAKER: German Vidal

ABSTRACT. Symbolic execution is at the core of many program analysis and transformation techniques, like partial evaluation, test-case generation or model checking. In this paper, we introduce a symbolic execution semantics for the functional and concurrent (based on message passing) language Erlang. We illustrate our approach through some examples. We also discuss the main design decisions and some scalability issues.

15:30
Lingva: Generating and Proving Program Properties using Symbol Elimination
SPEAKER: Ioan Dragan

ABSTRACT. We describe the Lingva tool for generating and proving complex program properties using the recently introduced symbol elimination method. We present implementation details and report on a large number of experiments using academic benchmarks and open-source software programs. Our experiments show that Lingva can automatically generate quantified invariants, possibly with alternation of quantifiers, over integers and arrays. Moreover, Lingva can be used to prove program properties expressing the intended behavior of programs.

15:45
A Study of Multidimensional R-Tree-Based Index Scalability in a Multicore Environment

ABSTRACT. In this paper we consider the scalability issues of a classic data structure used for multidimensional indexing: the R-Tree. This data structure allows for effective retrieval of records in spaces of low dimensionality and is the de facto industry standard. Following the design guidelines of the GiST model, we have developed a prototype implementation which supports concurrent (parallel) access and works at the read committed isolation level. Using our prototype, we study the impact of threads and cores on the performance of the system. To do this, we evaluate it in several scenarios that may be encountered during the course of DBMS operation.

16:00-16:30 Coffee Break
16:30-18:40 Session 4
16:30
Supercompilation for Datatypes

ABSTRACT. Supercompilation is a method of transforming programs to obtain equivalent programs that perform fewer computation steps and allocate less memory. A transformed program defines new functions that are combinations of functions from the original program, but the datatypes in the transformed program are a subset of the datatypes defined in the original program. We remedy this by extending supercompilation to create new datatypes.

We do this by creating new constructors that combine several constructors from the original program in a way reminiscent of how supercompilation combines several functions to create new functions.

16:55
Certifying supercompilation for Martin-Löf's type theory

ABSTRACT. The paper describes the design and implementation of a certifying supercompiler, TT Lite SC, which takes an input program and produces a residual program paired with a proof that the residual program is equivalent to the input one. As far as we can judge from the literature, this is the first implementation of a certifying supercompiler for a non-trivial higher-order functional language. The proofs generated by TT Lite SC can be verified by a type checker that is independent of TT Lite SC and is not based on supercompilation. This is essential in cases where the reliability of results obtained by supercompilation is of fundamental importance. Currently, the proofs can either be verified by the type checker built into TT Lite, or converted into Agda programs and checked by the Agda system.

17:20
Towards Specializing JavaScript Programs

ABSTRACT. Program specialization is an effective tool for transforming interpreters into compilers. We present the first steps in the construction of a specialization tool chain for JavaScript programs. We report on an application of this tool chain in a case study that transforms a realistic interpreter implemented in JavaScript to a compiler.

The difference from previous work on compiling with program specialization is threefold. First, the interpreter has not been written with specialization in mind. Second, instead of specializing the interpreter, we transform it into a generating extension, i.e., we replace parts of the interpreter's code by a corresponding code generator. Third, the implementation language of the interpreter is not a restricted toy language, but full JavaScript.
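The notion of a generating extension can be illustrated on a toy expression language (a minimal sketch, not the tool chain's actual transformation; the emitted string is JavaScript-flavoured purely for illustration):

```python
# Toy contrast between interpreting and a "generating extension":
# the same recursion either computes a value or emits code.
def interp(expr, env):
    """Ordinary interpreter for a tiny expression language."""
    if isinstance(expr, tuple):
        op, a, b = expr
        if op == "+":
            return interp(a, env) + interp(b, env)
        return interp(a, env) * interp(b, env)  # op == "*"
    return env[expr] if isinstance(expr, str) else expr

def gen(expr):
    """Generating extension: emits code instead of computing a value."""
    if isinstance(expr, tuple):
        op, a, b = expr
        return f"({gen(a)} {op} {gen(b)})"
    return str(expr)

e = ("+", "x", ("*", "y", 2))
print(interp(e, {"x": 1, "y": 3}))  # 7
print(gen(e))                       # (x + (y * 2))
```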

17:45
Inductive Prover Based on Equality Saturation for a Lazy Functional Language

ABSTRACT. The present paper shows how the idea of equality saturation can be used to build an inductive prover for a non-total first-order lazy functional language. We adapt the equality saturation approach to a functional language by using transformations borrowed from supercompilation. A special transformation called merging by bisimilarity is used to perform proof by induction of the equivalence between nodes of the E-graph. Equalities proved this way are simply added to the E-graph. We also experimentally compare our prover with HOSC and HipSpec.
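The key move — proved equalities are simply merged into the E-graph — can be sketched with a union-find over expression nodes (an illustrative toy, not the paper's E-graph implementation; the example equality is ours):

```python
# Toy illustration: equalities proved by the inductive prover are
# recorded by merging E-graph nodes.  A union-find over expression
# strings stands in for a full E-graph.
class EGraph:
    def __init__(self):
        self.parent = {}

    def add(self, node):
        self.parent.setdefault(node, node)

    def find(self, node):
        while self.parent[node] != node:
            self.parent[node] = self.parent[self.parent[node]]  # path halving
            node = self.parent[node]
        return node

    def merge(self, a, b):
        """Record a proved equality between two nodes."""
        self.parent[self.find(a)] = self.find(b)

    def equal(self, a, b):
        return self.find(a) == self.find(b)

g = EGraph()
for n in ("map f (map g xs)", "map (f . g) xs", "xs"):
    g.add(n)
g.merge("map f (map g xs)", "map (f . g) xs")  # e.g. proved by induction
print(g.equal("map f (map g xs)", "map (f . g) xs"))  # True
print(g.equal("map f (map g xs)", "xs"))              # False
```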

18:10
Asymptotic Speedups, Bisimulation and Distillation

ABSTRACT. Distillation is a fully automatic program transformation that can yield superlinear program speedups. Bisimulation is key to the proof that distillation is correct, i.e., preserves semantics. However, the proof, based on observational equivalence, is insensitive to program running times. This paper studies some examples of how distillation gives superlinear speedups. As examples, we show its workings and effects on some “old chestnut” programs well known from the early program transformation literature. An interesting part of the study is the question: how can a program running in time O(n) be bisimilar to a program running in time O(n^2)?

18:25
Automatically Partitioning Data to Facilitate the Parallelization of Functional Programs
SPEAKER: Michael Dever

ABSTRACT. In this paper we present a novel transformation technique which automatically partitions instances of generic data-types and so facilitates the parallelization of functional programs defined on such data-types. To achieve this, given a program defined on an arbitrary data-type, we automatically derive conversion functions from instances of this data-type to join-lists, so that they can be efficiently partitioned to facilitate efficient divide-and-conquer parallelization.
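The conversion-to-join-lists idea can be sketched for one concrete data-type (a binary tree; the function names and the sequential divide-and-conquer driver are our own illustration, not the paper's derived code):

```python
# Sketch of the abstract's idea: convert an arbitrary data-type instance
# (a binary tree here) to a join-list, which splits evenly and so suits
# divide-and-conquer parallelization.  The driver below is sequential.
def to_join_list(tree):
    """In-order flattening of a tree (None | (left, value, right))."""
    if tree is None:
        return []
    left, value, right = tree
    return to_join_list(left) + [value] + to_join_list(right)

def divide_and_conquer(xs, f, combine):
    """Split the join-list in half, solve both halves, combine results."""
    if len(xs) <= 1:
        return f(xs)
    mid = len(xs) // 2
    return combine(divide_and_conquer(xs[:mid], f, combine),
                   divide_and_conquer(xs[mid:], f, combine))

tree = ((None, 1, None), 2, (None, 3, None))
print(divide_and_conquer(to_join_list(tree), sum, lambda a, b: a + b))  # 6
```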

19:00-23:00 Welcome Party