RV-CUBES 2017: AN INTERNATIONAL WORKSHOP ON COMPETITIONS, USABILITY, BENCHMARKS, EVALUATION, AND STANDARDISATION FOR RUNTIME VERIFICATION TOOLS
PROGRAM FOR FRIDAY, SEPTEMBER 15TH


15:00-15:30 Session 1: Poster Session
15:00
R2U2: Tool Overview
SPEAKER: unknown

ABSTRACT. R2U2 (Realizable, Responsive, Unobtrusive Unit) is an extensible framework for runtime System Health Management (SHM) of cyber-physical systems. R2U2 can run in hardware (e.g., FPGAs) or software; can monitor hardware, software, or a combination of the two; and can analyze a range of different types of system requirements at runtime. An R2U2 requirement is specified using a hierarchical combination of building blocks: temporal formula runtime observers (in LTL or MTL), Bayesian networks, sensor filters, and Boolean testers. Importantly, the framework is extensible; it is designed to enable definitions of new building blocks in combination with the core structure. Originally deployed on Unmanned Aerial Systems (UAS), R2U2 is designed to run on a wide range of embedded platforms, from autonomous systems like rovers, satellites, and robots, to human-assistive ground systems and cockpits.

R2U2 is named after the requirements it satisfies; while the exact requirements vary by platform and mission, the ability to formally reason about realizability, responsiveness, and unobtrusiveness is necessary for flight certifiability, safety-critical system assurance, and achievement of technology readiness levels for target systems. Realizability ensures that R2U2 is sufficiently expressive to encapsulate meaningful runtime requirements while maintaining adaptability to run on different platforms, transition between different mission stages, and update quickly between missions. Responsiveness entails continuously monitoring the system under test, real-time reasoning, reporting intermediate status, and early-as-possible requirements evaluation. Unobtrusiveness ensures compliance with the crucial properties of the target architecture: functionality, certifiability, timing, tolerances, cost, or other constraints.
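As a schematic illustration of the simplest kind of building block mentioned above, a bounded-eventually MTL runtime observer can be sketched in a few lines. This is an illustrative sketch only, not R2U2's actual implementation; the trace contents and the `battery_ok` proposition are invented for the example:

```python
# Illustrative sketch only (not R2U2's implementation): a discrete-time
# observer for the bounded-eventually operator F[0,b] p over a finite trace.
def eventually_within(trace, prop, bound):
    """At each step i, report whether prop holds at some step in [i, i + bound]."""
    n = len(trace)
    return [any(prop(s) for s in trace[i:min(n, i + bound + 1)])
            for i in range(n)]

# Hypothetical example: monitor F[0,2] battery_ok over a short sensor trace.
trace = [{"battery_ok": False}, {"battery_ok": False},
         {"battery_ok": True},  {"battery_ok": False}]
print(eventually_within(trace, lambda s: s["battery_ok"], 2))
# → [True, True, True, False]
```

A hardware realization of such an observer would evaluate the same window condition in parallel logic rather than in a loop, which is one reason the framework targets both FPGAs and software.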

15:00
The MonPoly Monitoring Tool
SPEAKER: unknown

ABSTRACT. MonPoly is a monitoring tool for checking logs against formulas of metric first-order temporal logic. We provide here an overview of the tool, including its usage and history.
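For illustration, a typical metric first-order temporal logic policy of the kind MonPoly checks against logs (the predicate names and the constant 100 are hypothetical, chosen for the example) might state that every withdrawal over 100 must be reported within 5 time units:

```latex
\square\, \forall u, a.\;
  \mathit{withdraw}(u, a) \wedge a > 100
  \;\rightarrow\; \Diamond_{[0,5]}\, \mathit{report}(u, a)
```

The metric interval on the eventually operator bounds the future lookahead, which is what keeps such a policy monitorable over a log.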

15:00
Lola: A Tool for Stream-based Monitoring
SPEAKER: unknown

ABSTRACT. We present a stream-based monitoring tool for complex system properties defined in the stream-based specification language Lola. Lola is a simple and expressive language: it can describe both correctness/failure assertions and interesting statistical measures. It combines the ease of use of rule-based specification languages with the expressive power of the heavy-weight scripting languages or temporal logics previously needed to describe complex stateful dependencies. The language comes with two key features: template stream expressions, which allow parameterization with data, and dynamic stream generation, where new properties can be monitored on their own time scale. We give an overview of the development and current state of our tool, along with a series of applications that demonstrate the capabilities of the specification language Lola and its descendants.
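The stream-based idea can be sketched in ordinary code. This is an illustrative sketch, not Lola syntax, and the input values and limit are invented: one output stream is a pointwise correctness check over an input stream, while a second output keeps a running statistic by referring to its own previous value:

```python
# Illustrative sketch of stream-based monitoring in the spirit of Lola
# (not actual Lola syntax): one output stream is a pointwise correctness
# check; another accumulates a statistic over the run so far.
def monitor(values, limit):
    """For each input value, emit (violation_flag, violations_so_far)."""
    count, out = 0, []
    for v in values:
        violated = v > limit            # pointwise assertion on the input stream
        count += 1 if violated else 0   # statistic defined via its previous value
        out.append((violated, count))
    return out

print(monitor([3, 7, 2, 9], limit=5))
# → [(False, 0), (True, 1), (False, 1), (True, 2)]
```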

15:00
E-ACSL, a Runtime Verification Tool for Safety and Security of C Programs (tool paper)
SPEAKER: unknown

ABSTRACT. This tool paper presents E-ACSL, a runtime verification tool for C programs capable of checking a broad range of safety and security properties expressed using a formal specification language. E-ACSL consumes a C program annotated with formal specifications and generates a new C program that behaves similarly to the original if the formal properties are satisfied, or aborts its execution whenever a property does not hold. This paper presents an overview of E-ACSL and its specification language.
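The transformation principle described above (run as normal while the properties hold, abort on violation) can be sketched generically. E-ACSL itself instruments C programs carrying ACSL annotations; this Python sketch, with invented function names, only illustrates the idea:

```python
# Illustrative sketch of the instrument-and-abort principle (E-ACSL itself
# works on annotated C programs; the names here are invented for the example).
import sys

def instrument(func, precondition, message):
    """Return a version of func that aborts when its precondition fails."""
    def checked(*args):
        if not precondition(*args):
            sys.exit(f"property violated: {message}")  # abort on violation
        return func(*args)
    return checked

# Behavior is preserved as long as the stated property "y != 0" holds.
safe_div = instrument(lambda x, y: x // y, lambda x, y: y != 0, "y != 0")
print(safe_div(10, 2))
# → 5
```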

15:00
Aerial: Almost Event-Rate Independent Algorithms for Monitoring Metric Regular Properties

ABSTRACT. We present Aerial, a tool for the online monitoring of metric regular properties. Aerial supports both the standard metric temporal logic (MTL) and the more expressive metric dynamic logic (MDL) as its property specification language. Unlike MTL, which is restricted to star-free properties, MDL can express all metric regular properties by generalizing MTL's temporal operators to arbitrary regular expressions. Aerial's distinguishing feature is its ability to monitor an event stream using memory logarithmic in the event rate. This memory efficiency is achieved by altering how Aerial outputs its monitoring verdicts.
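Schematically (the notation here follows the style of linear dynamic logics and is only illustrative, not Aerial's concrete syntax), the generalization replaces MTL's fixed operators with regular-expression modalities, while genuinely regular properties that are not star-free need a counting regex that MTL cannot express:

```latex
% Schematic only: MTL's until seen as a regular-expression modality
\varphi \;\mathcal{U}_{I}\; \psi
  \;\equiv\; \langle (\varphi?\,;\,\top)^{*} \rangle_{I}\, \psi
% A non-star-free (hence non-MTL) property:
% "psi holds an even number of steps from now"
\langle (\top\,;\,\top)^{*} \rangle\, \psi
```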

15:00
Event Stream Processing with BeepBeep 3
SPEAKER: unknown

ABSTRACT. This paper is a short introduction to the BeepBeep 3 event stream processor. It highlights the main design decisions that informed its development, and the features that distinguish it from other Runtime Verification tools.

15:40-16:15 Session 3: Panel on "Making RV Useful"

Questions:

  • How can we improve the applicability of RV in industry?
  • Does the RV community need a joint repository and a joint format?
  • Should RV embrace the Open Data movement in Computer Science, and how?
15:40
COEMS — open traces from the industry
SPEAKER: unknown

ABSTRACT. The runtime verification community is still fragmented, with non-interoperable specifications and tools. Within the EU H2020 project COEMS “Continuous Observation of Embedded Multicore Systems”, we are contributing to the European Open Research Data Pilot, which makes scientific data available to other researchers. We describe our first contributions and experience with the required data management, and discuss technical issues such as metadata management, formats, and storage using practical examples. Based on our experience, we make suggestions on tools and formats for future RV Competitions and for desired artefacts from the EU COST Action IC1402 "ARVI -- Runtime Verification Beyond Monitoring".

15:40
A Few Things We Heard About RV Tools (Position Paper)
SPEAKER: unknown

ABSTRACT. Over the past five years, we have met with representatives of more than a dozen companies in various domains, trying to showcase the features of our runtime verification tool. These meetings gave us some very instructive feedback about the way RV is perceived by people in industry, and about the shortcomings of current RV solutions. In this paper, we discuss a few of the comments we heard, and use them to suggest new ways of developing and evaluating RV tools in the future.

16:15-16:50 Session 4: Panel on "The Future of CRV"

Questions:

  • What does an RV hardware track benchmark look like?
  • How should RV hardware tools be evaluated?
  • What are the current issues with online runtime verification competitions, and what are their possible solutions?
  • Can we decrease the risk of tool over-tuning by only providing the specification (syntax and semantics) of the property languages as part of the benchmark?
16:15
On the Evaluation and Comparison of Runtime Verification Tools for Hardware and Cyber-Physical Systems

ABSTRACT. The need for runtime verification (RV), and tools that enable RV in practice, is widely recognized. Systems that need to operate autonomously necessitate on-board RV technologies, from Mars rovers that need to sustain operation despite delayed communication from operators on Earth, to Unmanned Aerial Systems (UAS) that must fly without a human on-board, to robots operating in dynamic or hazardous environments that must take care to preserve both themselves and their surroundings. Enabling all forms of autonomy, from tele-operation to automated control to decision-making to learning, requires some ability for the autonomous system to reason about itself. The broader class of safety-critical systems requires means of runtime self-checking to ensure their critical functions have not degraded during use.

Runtime verification addresses a vital need for self-referential reasoning and system health management, but there is not currently a generalized approach that answers the lower-level questions. What are the inputs to RV? What are the outputs? What level(s) of the system do we need RV tools to verify, from bits and sensor signals to high-level architectures, and at what temporal frequency? How do we know our runtime verdicts are correct? How do the answers to these questions change for software, hardware, or cyber-physical systems (CPS)? How do we benchmark RV tools to assess their (comparative) suitability for particular platforms? The goal of this position paper is to fuel the discussion of ways to improve how we evaluate and compare tools for runtime verification.

16:15
On the Risk of Tool Over-tuning in Run-time Verification Competitions (position paper)
SPEAKER: unknown

ABSTRACT. In this position paper we discuss the risk of tool over-tuning in runtime verification competitions. We believe that the risk is inherently enabled by the format of the competitions, and we discuss it in the context of the "First International Competition on Software for Runtime Verification (CSRV 2014)", in which we participated.

16:15
Online Runtime Verification Competitions: How To Possibly Deal With Their Issues (position paper)

ABSTRACT. This short paper presents feedback on online runtime verification competitions from an active contestant. In particular, it points out several issues and how they could be fixed.