PSI 2015: 10TH INTERNATIONAL ANDREI ERSHOV MEMORIAL CONFERENCE
PROGRAM FOR TUESDAY, AUGUST 25TH

09:00-10:00 Session 4: Keynote Speech
09:00
Software Engineering, Why And What
SPEAKER: David Parnas

ABSTRACT. In the early 1960s many of the scientists and mathematicians who were interested in computer software decided that the body of knowledge
about computing was substantive enough to be the subject of a new set of academic programs. They began to lobby for, and then form, the first
Computer Science departments. Most of those who built those departments had been educated as scientists or mathematicians. They
had mastered an organized body of knowledge and been taught how to extend that knowledge. Although they differed in their emphasis, the early
Computer Science programmes were modelled on Science and Mathematics programmes and located in Science Faculties. Some critics
considered that the body of knowledge about computers could be taught within the established disciplines, but time has proven that the new
departments were needed.

In the late 1960s, some of these scientists observed that software development was a profession that had more in common with Engineering
than it did with the fields in which they had been trained. Most of the people who populated those departments had not been taught how to
apply what they had learned when building products that would be used by strangers. In contrast, engineers in the established disciplines had been 
taught a body of knowledge but given little explanation of how to extend that body of knowledge through research. Instead, they had been taught
how to use that knowledge when building products. Because engineers were being prepared to be people who were building products that others
would depend upon, their education stressed discipline and professional responsibility. It was proposed that, in addition to "Computer Science",
there was also a need for a new Engineering discipline called "Software Engineering". Although some Computer Scientists considered a
second programme to be redundant and superfluous, time has revealed that a programme modelled on the older Engineering programmes is
needed. Today our society depends on software in the same way that previous generations depended on physical engineering products. Most of
the graduates will be using the body of knowledge about computing and software to build products. Relatively few will be adding to that knowledge.

While the early Software Engineering pioneers correctly sensed that a new field was needed, they did not succeed in defining the capabilities
required of practitioners in the new field. More recently, there have been several efforts to identify a "body of knowledge" for Software Engineering.
In the speaker’s opinion, none have captured the essence of the field. 

This talk approaches the subject by focussing on capabilities, i.e., by addressing the question, What should a Software Engineer be taught to
do? Starting with two historical characterizations of the field by leading pioneers, it presents a set of capabilities that are required by today's
"Software Engineers" but fundamental enough that they will be required in the foreseeable future. Some examples of the work products expected of
a Software Engineer will be shown.

(After lengthy discussions with Carl Landwehr, Jochen Ludewig, Robert Meersman, Peretz Shoval, Yair Wand, David Weiss and Elaine Weyuker, whose
contributions were substantial and substantive.)

10:00-10:30 Coffee Break
10:30-12:30 Session 5: Modelling
10:30
Applying MDA to Generate Hadoop Based Scientific Computing Applications
SPEAKER: unknown

ABSTRACT. This paper presents an approach to modeling and developing distributed scientific applications using Model-Driven Architecture (MDA) technology. First we introduce an MDA-based development process for distributed scientific computing applications. We designed four high performance scientific computing (HPSC) components in order to create the platform independent model (PIM) and platform specific model (PSM) following MDA concepts. We develop PIM and MapReduce PSM models and transform the MapReduce PSM model into Java code automatically. After that, we specify an iterative MapReduce framework based on Hadoop technology. We then apply the proposed approach to develop a MapReduce application for a scientific computation that solves a 3D problem of fluid dynamics in an anisotropic elastic porous medium. Following the MapReduce paradigm, the iteration processes of the algorithm are realized as a series of MapReduce tasks. The numerical solution of the computational problem with Hadoop MapReduce consists of two stages: an initialization stage, in which a first-level MapReduce job is performed only once, and an iteration stage, in which a cycle of second-level MapReduce jobs is performed. The experimental results show that the distributed application works well and that the performance of the Hadoop implementation improves as the volume of data being processed grows. HPSC applications can be designed and developed with the help of the proposed MDA model and its basic components, offering one way to do distributed scientific computing on high performance heterogeneous systems.
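The two-stage scheme the abstract describes (a one-off initialization job followed by a cycle of iteration jobs) can be sketched in miniature. This is illustrative Python, not the paper's generated Java code; the cell names and the averaging update standing in for the fluid-dynamics computation are invented for the sketch.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-memory MapReduce: group mapper output by key, then reduce."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

def initialization_stage(raw):
    # First-level MapReduce job, performed only once: parse the raw input
    # into the initial numeric state of each grid cell.
    return map_reduce(raw,
                      mapper=lambda kv: [(kv[0], float(kv[1]))],
                      reducer=lambda k, vs: sum(vs) / len(vs))

def iteration_stage(state, steps):
    # Second-level MapReduce jobs, performed in a cycle: each step replaces a
    # cell's value by the average over itself and its neighbours (a toy
    # stand-in for the paper's iterative numerical update).
    cells = sorted(state)
    def mapper(item):
        name, value = item
        i = cells.index(name)
        targets = cells[max(i - 1, 0):i + 2]   # self plus direct neighbours
        return [(t, value) for t in targets]
    reducer = lambda k, vs: sum(vs) / len(vs)
    for _ in range(steps):
        state = map_reduce(state.items(), mapper, reducer)
    return state

raw = [("c0", "0.0"), ("c1", "4.0"), ("c2", "8.0")]
state = initialization_stage(raw)
final = iteration_stage(state, steps=25)
```

On a real cluster each `map_reduce` call would be a Hadoop job; here the point is only the control structure: one initialization job, then repeated iteration jobs over the evolving state.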

11:00
Hybrid Lustre
SPEAKER: unknown

ABSTRACT. Hybrid Lustre is a formal modeling language for mixed discrete-continuous systems, extended from Lustre. Lustre is a data-flow based synchronous language widely used in the development of real-time embedded systems. However, Lustre lacks a mechanism for modeling the continuous behavior of physical processes that are controlled by digital controllers. Hybrid Lustre is proposed as an extension of Lustre that accommodates continuous behaviors in between discrete transitions; the continuous state change can be specified by ordinary differential equations. The syntax and semantics of Hybrid Lustre are formally described, supporting verification of the correctness of models of mixed discrete-continuous systems. It has been successfully used in the development of the new generation Communication Based Zone Controller of Casco Signal Ltd.
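The kind of behaviour Hybrid Lustre targets, continuous flow governed by an ODE between discrete synchronous controller steps, can be illustrated with a small simulation. This is plain Python, not Hybrid Lustre syntax; the thermostat dynamics, threshold, and rates are invented placeholders.

```python
def simulate(t_end=10.0, dt=0.01, period=1.0):
    temp, heater_on = 15.0, True      # continuous state, discrete mode
    t, next_tick = 0.0, 0.0
    while t < t_end:
        if t >= next_tick:            # discrete transition at a synchronous tick
            heater_on = temp < 20.0   # controller reads the plant, picks a mode
            next_tick += period
        # Continuous flow between ticks: dtemp/dt = 5 when heating, -1 when
        # cooling (a toy ODE), integrated here with explicit Euler steps.
        temp += (5.0 if heater_on else -1.0) * dt
        t += dt
    return temp

final_temp = simulate()
```

The discrete part (the `if t >= next_tick` block) is what classic Lustre expresses naturally; the Euler-integrated flow in between is the part Hybrid Lustre adds via ordinary differential equations.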

11:30
Automated Two-Phase Composition of Timed Web Services

ABSTRACT. The paper extends the PlanICS web service composition system by modeling time and the evolution of variables as functions of time. Its distinguishing feature is that it focuses not only on modeling and composing time constraints in services, but covers the whole service definition and composition process: providing an ontology with a strong type system on which definitions of typed stateless timed services are based, timed user queries, offers from service providers, and searching for services and offers matching a query. A novel idea is that services express their timed behavior by producing timed automata as part of their output. Abstract and concrete planning are described, dealing respectively with service types (including time dependencies) at the coarse level, and with the offers corresponding to these types at the detailed level.

12:00
Modeling Actor Systems Using Dynamic I/O Automata

ABSTRACT. Actor-based programming has become an important technique for the development of concurrent and distributed systems. This paper presents a new automaton model for actor systems and demonstrates how the model can be used for compositional verification. The model allows expressing the detailed behavior of actor components where components are built from actors and other components. It abstracts from internal and environment behavior, supports encapsulation of actors, and captures the dynamic aspects of actor creation and exposure of actor names to the component environment, which are crucial for verification. We handle these changes at the component interface by specializing dynamic I/O automata. The model can be used as a foundation of different verification techniques. We illustrate this by combining weakest precondition techniques on the actor level with simulation proofs on the component level.

12:30-13:00 Session 6: Short Papers
12:30
SPEAKER: unknown

ABSTRACT. We describe our research related to the task of building a static analyzer. Our target was to develop a tool suitable for interprocedural analysis of large software projects (Android, for example) in a reasonable amount of time. As a result, we introduce a summary-based interprocedural inter-unit analysis for C/C++ code based on the Clang Static Analyzer symbolic execution framework. We describe our approach to summary-based analysis and inter-unit analysis for a source-based analysis framework. The problems we consider include inter-context translation of symbolic values, building bug reports, and merging ASTs across multiple translation units.
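The core idea of summary-based analysis, and of the inter-context translation of symbolic values the abstract mentions, can be sketched on a toy IR. This is a hedged illustration, not the Clang Static Analyzer API: each one-parameter function is summarized once as an affine symbol `(a, b)` meaning `a*x + b`, and call sites apply the summary by substituting the caller's own symbol instead of re-analyzing the callee.

```python
# Toy IR: a function body is a list of ops applied to its single parameter.
FUNCS = {
    "double":    [("mul", 2)],
    "add_three": [("add", 3)],
    "affine":    [("mul", 2), ("add", 3)],   # computes 2*x + 3
}

def summarize(ops):
    # Symbolic execution of one body over the symbol (a, b) := a*x + b.
    a, b = 1, 0
    for op, c in ops:
        if op == "add":
            b += c
        elif op == "mul":
            a *= c
            b *= c
    return (a, b)

# Each function is analyzed exactly once, independent of its callers.
SUMMARIES = {name: summarize(ops) for name, ops in FUNCS.items()}

def apply_summary(name, actual):
    # Inter-context translation: rename the callee's symbolic parameter to
    # the caller's actual value, itself an affine symbol (a, b).
    ca, cb = SUMMARIES[name]
    aa, ab = actual
    return (ca * aa, ca * ab + cb)

# Caller passes its own symbol y (identity), then feeds double(y) to affine:
inner = apply_summary("double", (1, 0))   # represents 2*y
outer = apply_summary("affine", inner)    # represents 2*(2*y) + 3 = 4*y + 3
```

Real summaries cover heap state, path constraints, and bug conditions rather than a single affine value, but the payoff is the same: interprocedural precision without exponential re-exploration of callees.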

12:45
BTC Formal Model of MAP/REDUCE
SPEAKER: unknown

ABSTRACT. The Hadoop framework is a widely used distributed Map/Reduce framework for big data processing in both industry and academia. In order to execute any Hadoop application within a cloud computing environment, a set of resources is required. The number of resources dedicated to the task determines the Hadoop application's performance. Nevertheless, in a cloud pay-per-use model, this number must be taken into account in order to minimise costs. Demanding an unsuitable number of resources may result in a loss of time and/or money. Thus, it would be very useful to know in advance the exact number of resources needed to meet time requirements.

In this paper a formal model of Map/Reduce is presented. This model has been developed using the Timed Process Algebra BTC. In order to evaluate this model, several applications have been used. In this paper we present the results for a Map/Reduce video processing application, but the model is capable of simulating the behaviour of any Map/Reduce application without additional work. Moreover, the formal model has been validated by carrying out several experiments on a real private cloud environment.

The outcomes of the formal model are used to determine the best performance-cost agreement in a real scenario. Results show that the proposed model makes it possible to determine in advance both the performance of a Hadoop-based application within cloud environments and the best performance-cost agreement.

13:00-14:00 Lunch Break
14:00-15:00 Session 7: Keynote Speech
14:00
Quantitative Analysis of Collective Adaptive Systems
SPEAKER: Jane Hillston

ABSTRACT. Quantitative formal methods, such as stochastic process algebras, have been used for the last twenty years to support modelling of dynamic systems in order to investigate their performance. Application domains have ranged from computer and communication systems, to intracellular signalling pathways in biological cells. Nevertheless this modelling approach is challenged by the demands of modelling modern collective adaptive systems, many of which have a strong spatial aspect, adding to the complexity of both the modelling and the analysis tasks.

In this talk I will give an introduction to formal quantitative analysis and the challenges of modelling collective adaptive systems, together with recent developments to address those challenges using the modelling language CARMA.

15:00-16:00 Session 8: Security
15:00
Dynamics Security Policies and Process Opacity
SPEAKER: Damas Gruska

ABSTRACT. Dynamic security policies with respect to the security property called process opacity are formalised and studied. The dynamic policies are defined in such a way that they influence both what an intruder can observe and which part of the system's behaviour is classified at a given moment. Moreover, to obtain a more realistic scenario we propose a concept of intruders with limited memory. The resulting security properties are investigated and compared with other security notions based on the absence of information flow.
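Opacity itself can be stated in miniature. The following is a hedged simplification over finite trace sets rather than processes, and is not the paper's formulation of process opacity or of dynamic policies: a secret predicate is opaque under an observation function if every observation of a secret trace is also the observation of some non-secret trace, so the intruder can never be certain the secret held.

```python
def observe(trace, hidden=frozenset({"h"})):
    # The intruder's view of a run: hidden actions are invisible.
    return tuple(a for a in trace if a not in hidden)

def opaque(traces, secret):
    # Opacity check: every secret observation must be coverable by a
    # non-secret trace with the same observation.
    obs_secret = {observe(t) for t in traces if secret(t)}
    obs_nonsecret = {observe(t) for t in traces if not secret(t)}
    return obs_secret <= obs_nonsecret

traces = {("h", "a", "b"), ("a", "b"), ("h", "a", "c")}
secret = lambda t: "h" in t

# ("h","a","b") observes as ("a","b"), covered by the non-secret ("a","b");
# but ("h","a","c") observes as ("a","c"), which no non-secret trace
# produces, so the secret leaks and the predicate is not opaque here.
result = opaque(traces, secret)
```

A dynamic policy in the paper's sense would, roughly, make both `observe` and `secret` depend on the current moment, and a limited-memory intruder would compare only bounded suffixes of observations rather than whole ones.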

15:30
Using Refinement in Formal Development of OS Security Model
SPEAKER: unknown

ABSTRACT. The paper presents work in progress on the formal development of an operating system security model. The main goal is to improve our confidence in the quality of the model; an additional goal is to simplify its maintenance. We have successfully completed formalization and verification of the model using the formal method Event-B in two ways, with the refinement technique and without it, in order to compare them and determine which approach meets our purposes better. Refinement is a well-known and widely recommended technique, and it helped us to deal with the complexity of the model by both improving readability and simplifying automatic proofs. However, a deep understanding of the details of the security model and careful planning were absolutely necessary to achieve this. The approach without the refinement technique allowed us to start formalization quickly and helped us to study the details of the security model, but the resulting formal model became hard to maintain and explore.

16:00-16:30 Coffee Break
16:30-17:30 Session 9: Algorithms
16:30
Maximally-Polyvariant Partial Evaluation in Polynomial Time
SPEAKER: Robert Glück

ABSTRACT. Maximally-polyvariant partial evaluation is a strategy for program specialization that is as accurate as possible in propagating static values. The online partial evaluator in this paper achieves this precision in time polynomial in the number of partial-evaluation configurations. This is a significant improvement because a straightforward partial evaluation algorithm can take exponential time, and no fast algorithm was known for maximally-polyvariant specialization. For an important class of quasi-deterministic specialization problems our algorithm performs in linear time, which includes Futamura’s long-standing open challenge: the linear-time specialization of a string matcher. Our results are presented using a recursive flowchart language, and can be viewed as a step towards faster Ershov generating extensions for program staging.
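A far simpler online partial evaluator than the paper's maximally-polyvariant algorithm still shows the core mechanism: propagate static values at specialization time and residualize only the dynamic part. The expression language and constructors below are assumptions for this sketch, not the paper's recursive flowchart language.

```python
def peval(expr, env):
    """Online partial evaluation of a tiny expression language.
    Expressions: ('const', c) | ('var', x) | ('mul', e1, e2) |
    ('if0', cond, then, else). env maps static variables to values."""
    kind = expr[0]
    if kind == "const":
        return expr
    if kind == "var":
        x = expr[1]
        return ("const", env[x]) if x in env else expr
    if kind == "mul":
        a, b = peval(expr[1], env), peval(expr[2], env)
        if a[0] == "const" and b[0] == "const":
            return ("const", a[1] * b[1])      # fully static: compute now
        if a == ("const", 1):
            return b                           # peephole: 1 * e -> e
        if b == ("const", 1):
            return a                           # peephole: e * 1 -> e
        return ("mul", a, b)                   # residualize the dynamic part
    if kind == "if0":
        c = peval(expr[1], env)
        if c[0] == "const":                    # static test: pick a branch
            return peval(expr[2] if c[1] == 0 else expr[3], env)
        return ("if0", c, peval(expr[2], env), peval(expr[3], env))
    raise ValueError("unknown expression: %r" % (expr,))

def power(n):
    # Build x**n as nested multiplications with a static exponent n.
    e = ("const", 1)
    for _ in range(n):
        e = ("mul", ("var", "x"), e)
    return e

# With only the exponent static, the specializer folds away the constant
# and leaves a straight-line product over the dynamic x:
spec = peval(power(3), {})
```

Polyvariance enters when the same program point is specialized separately for each distinct static configuration reaching it; the paper's contribution is doing this maximally while staying polynomial in the number of such configurations, where a naive approach can be exponential.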

17:00
Conflict Resolution in Multi-agent Systems with Typed Connections for Ontology Population
SPEAKER: unknown

ABSTRACT. The paper presents a conflict resolution algorithm for multi-agent systems whose agents are connected by relations of different types and worth. The result of conflict resolution is a conflict-free set of agents. We apply this algorithm to the ambiguity resolution problem in ontology population based on multi-agent natural language text analysis.
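The setting can be sketched with a greedy toy, which is an illustration of the problem shape and not the paper's algorithm: agents accumulate worth from typed supporting relations, and conflicting agents cannot coexist, so stronger agents evict their weaker rivals. The relation types, weights, and eviction rule are invented for the sketch.

```python
# Assumed, illustrative worth per relation type.
TYPE_WORTH = {"syntactic": 2.0, "semantic": 3.0}

def resolve(agents, supports, conflicts):
    """agents: set of names; supports: (a, b, type) edges adding worth to
    both endpoints; conflicts: frozensets {a, b} that cannot coexist.
    Returns a conflict-free subset, preferring agents of higher worth."""
    def worth(a):
        return sum(TYPE_WORTH[t] for (x, y, t) in supports if a in (x, y))
    alive = set(agents)
    for a in sorted(alive, key=worth, reverse=True):
        if a in alive:                 # a survived stronger rivals: keep it,
            for pair in conflicts:     # and evict everything it conflicts with
                if a in pair:
                    alive -= (pair - {a})
    return alive

# Two candidate interpretations A and B conflict; C supports both readings.
agents = {"A", "B", "C"}
supports = [("A", "C", "semantic"), ("B", "C", "syntactic")]
conflicts = [frozenset({"A", "B"})]
kept = resolve(agents, supports, conflicts)
```

In the ontology-population application, agents would stand for candidate entities or relation instances extracted from text, and the conflict-free survivors form the consistent interpretation that populates the ontology.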