10:30 | P: Modular and Safe Asynchronous Programming (Continued) SPEAKER: Shaz Qadeer |
11:30 | Machine-learning State Properties SPEAKER: Madhusudan Parthasarathy ABSTRACT. Several applications of machine learning in the realm of software engineering have emerged in recent years. One set of such applications revolves around learning properties of program states from a sample of concrete states. There are many ways to obtain concrete states a program can exhibit: through runtime information gathered on test runs, through symbolic execution, or through verification engines. Given such a source of program states, a machine learning algorithm that learns an appropriate generalization of the discovered states can help us synthesize preconditions, contracts, and invariants. This tutorial will cover methods from machine learning that can be used to learn such state predicates. In particular, we will discuss various combinations of learning across the following dimensions: |
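To make the idea concrete, here is a minimal sketch (illustrative only, not taken from the tutorial) of learning a simple state predicate from sampled concrete states: a conjunctive "box" invariant that records, per variable, the interval of values observed across the sample. All names and the choice of hypothesis class are assumptions for illustration.

```python
# Hypothetical sketch: learn a conjunction of per-variable interval
# bounds (a "box" invariant) from sampled concrete program states.

def learn_box_invariant(states):
    """Given sampled states (dicts mapping variable -> int), return
    interval bounds that every observed state satisfies."""
    inv = {}
    for state in states:
        for var, val in state.items():
            lo, hi = inv.get(var, (val, val))
            inv[var] = (min(lo, val), max(hi, val))
    return inv

def satisfies(state, inv):
    """Check whether a new state satisfies the learned invariant."""
    return all(lo <= state[v] <= hi for v, (lo, hi) in inv.items())

# States gathered, e.g., from test runs at a loop head.
sample = [{"i": 0, "n": 10}, {"i": 3, "n": 10}, {"i": 10, "n": 10}]
inv = learn_box_invariant(sample)   # {'i': (0, 10), 'n': (10, 10)}
print(satisfies({"i": 5, "n": 10}, inv))   # True
print(satisfies({"i": 11, "n": 10}, inv))  # False
```

The generalization here is deliberately crude; richer hypothesis classes (octagons, arbitrary boolean combinations of predicates) trade off expressiveness against the amount of sample data needed, which is exactly the kind of dimension the tutorial discusses.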
13:30 | Machine-learning State Properties (Continued) SPEAKER: Madhusudan Parthasarathy |
15:30 | Foundations For Runtime Monitoring SPEAKER: Adrian Francalanza ABSTRACT. Runtime Verification is a lightweight technique that complements other verification methods in a multi-pronged approach towards ensuring software correctness. The technique poses novel questions to software engineers: it is not easy to see which specifications are amenable to runtime monitoring, nor is it clear which monitors correctly carry out the required runtime analysis. In this tutorial, we strive towards a foundational understanding of these questions. This enables us to distill core concepts that can be extrapolated to various runtime verification settings. In particular, we consider a specification logic that is agnostic to the verification method used and is general enough to embed commonly used specification logics. We also present an operational semantics for an elemental framework that describes the runtime analysis carried out by monitors. This allows us to establish a correspondence between property satisfaction in the logic and the verdicts reached by the monitors carrying out the analysis. We show how this correspondence is used to identify which subsets of the logic can be adequately monitored for. The monitoring framework also serves as a basis for defining various notions of monitor correctness. This exposition assumes no prior knowledge of the specification logic or the monitoring framework considered. |
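As a small illustration of the monitor/verdict correspondence described above, here is a sketch (hypothetical, not the tutorial's formal framework) of a monitor for the safety property "no 'send' event occurs after 'close'". It analyses a trace event by event and, once it reaches a verdict, never retracts it; safety properties like this one are a classic example of what can be adequately monitored for, since a violation is witnessed by a finite prefix.

```python
# Illustrative sketch: a runtime monitor that observes a trace of
# events and reaches an irrevocable verdict for the safety property
# "no 'send' after 'close'".

class Monitor:
    def __init__(self):
        self.closed = False
        self.verdict = "inconclusive"  # no verdict reached yet

    def observe(self, event):
        if self.verdict != "inconclusive":
            return self.verdict        # verdicts are irrevocable
        if event == "close":
            self.closed = True
        elif event == "send" and self.closed:
            self.verdict = "reject"    # violation detected
        return self.verdict

m = Monitor()
for e in ["send", "close", "send"]:
    v = m.observe(e)
print(v)  # "reject": this trace violates the property
```

Note that this monitor can only ever reject: no finite prefix proves that a violation will never occur later, which is exactly the kind of monitorability limit the correspondence between logic satisfaction and monitor verdicts makes precise.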