09:00 | SPEAKER: Christoph Matheja ABSTRACT. We present a graph-based tool for analysing Java programs operating on dynamic data structures. It involves the generation of an abstract state space employing a user-defined graph grammar. LTL model checking is then applied to this state space, supporting both structural and functional correctness properties. The analysis is fully automated, procedure-modular, and provides informative visual feedback including counterexamples in the case of property violations. |
09:15 | SPEAKER: Marco Eilers ABSTRACT. We present Typpete, a sound type inferencer that automatically infers Python 3 type annotations. Typpete encodes type constraints as a MaxSMT problem and uses a system of optional constraints and specific quantifier instantiation patterns to make the constraint solving process efficient. Our experimental evaluation shows that Typpete scales to real world Python programs and outperforms state-of-the-art tools. |
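To make the inference task concrete, the sketch below is hand-written for this overview (it is not Typpete's API or actual output): the unannotated function admits the displayed annotation, which a MaxSMT encoding would discover by constraining the types flowing through `+` and the `for` loop.

```python
# Hand-written sketch of the inference task (not Typpete's API or
# output). The constraints "xs is iterable over some T", "acc and x
# support +", and "the result has the type of acc" pin down one
# satisfying annotation, shown in total_inferred.
from typing import List

def total(xs):                              # unannotated input program
    acc = 0
    for x in xs:
        acc = acc + x
    return acc

def total_inferred(xs: List[int]) -> int:   # one satisfying annotation
    acc = 0
    for x in xs:
        acc = acc + x
    return acc

assert total(range(4)) == total_inferred([0, 1, 2, 3]) == 6
```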
09:30 | The JKind Model Checker SPEAKER: Andrew Gacek ABSTRACT. JKind is an open-source industrial model checker. JKind uses multiple parallel engines to prove or falsify safety properties of infinite state models. It is portable, easy to install, performance competitive with other state-of-the-art model checkers, and has features designed to improve the results presented to users: *inductive validity cores* for proofs and *counterexample smoothing* for test-case generation. It serves as the back-end for various industrial applications. |
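One of the engine families JKind runs in parallel is k-induction; the sketch below illustrates the idea for k = 1 on a toy counter using z3 (an illustration of the technique only, not JKind's code or its Lustre input format).

```python
# Minimal sketch of k-induction (k = 1) on a toy transition system,
# using z3 directly; not JKind's implementation or input language.
from z3 import Int, Solver, Not, unsat

x, x_next = Int("x"), Int("x_next")
init = x == 0                      # initial state
trans = x_next == x + 1            # the counter only increments
prop = lambda s: s >= 0            # safety property to prove

base = Solver()                    # base case: no initial bad state
base.add(init, Not(prop(x)))
step = Solver()                    # step: good states reach good states
step.add(prop(x), trans, Not(prop(x_next)))

if base.check() == unsat and step.check() == unsat:
    print("property proved by 1-induction")
```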
09:45 | SPEAKER: Itsaka Rakotonirina ABSTRACT. In this paper we describe the DeepSec prover, a tool for security-protocol analysis deciding equivalence properties, modelled as trace equivalence of two processes in a dialect of the applied pi calculus. |
10:00 | SPEAKER: Rohit Dureja ABSTRACT. We present a new safety hardware model checker, SimpleCAR, that serves as the “bottom line” for evaluating and extending Complementary Approximate Reachability (CAR), a new SAT-based model checking framework inspired by classical reachability analysis. We demonstrate the performance of SimpleCAR on challenging benchmarks from the Hardware Model Checking Competition. Our experiments indicate that SimpleCAR is particularly suited for unsafety checking, or bug-finding; it is able to solve 7 unsafe instances within 1 hour that are not solvable by any other state-of-the-art technique, including BMC and IC3/PDR, within 8 hours. We also report 1 bug and 48 counterexample generation errors in the model checkers compared in our analysis. |
10:15 | SPEAKER: Dmitry Blotsky ABSTRACT. In this paper, we introduce StringFuzz: a modular SMT-LIB problem instance transformer and generator for string solvers. We supply a repository of instances generated by StringFuzz in SMT-LIB 2.0/2.5 format. We systematically compare Z3str3, CVC4, Z3str2, and Norn on groups of such instances, and identify those that are particularly challenging for some solvers. We briefly explain our observations and show how StringFuzz helped discover causes of performance degradations in Z3str3. |
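The sketch below conveys the flavour of instance generation (a toy written for this overview, not StringFuzz's actual code or options): it emits a random SMT-LIB 2.x string problem over concatenation and length.

```python
# Toy generator in the spirit of StringFuzz (not its actual code or
# command-line options): emit a random SMT-LIB 2.x string instance
# over concatenation (str.++) and length (str.len) constraints.
import random

def random_instance(num_vars=3, seed=0):
    rng = random.Random(seed)
    names = ["x%d" % i for i in range(num_vars)]
    lines = ["(declare-const %s String)" % n for n in names]
    a, b = rng.sample(names, 2)
    word = "ab" * rng.randint(1, 4)
    lines.append('(assert (= (str.++ %s %s) "%s"))' % (a, b, word))
    lines.append("(assert (> (str.len %s) %d))" % (a, rng.randint(0, 3)))
    lines += ["(check-sat)", "(get-model)"]
    return "\n".join(lines)

print(random_instance())   # feed the output to a solver such as CVC4
```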
09:00 | Efficient verification and metaprogramming in Lean ABSTRACT. In this talk, we provide a short introduction to the Lean theorem prover and its metaprogramming framework. We also describe how this framework extends Lean's object language with an API to many of Lean's internal structures and procedures, and provides ways of reflecting object-level expressions into the metalanguage. We provide evidence to show that our implementation is performant, and that it provides a convenient and flexible way of writing not only small-scale interactive tactics, but also more substantial kinds of automation. We view this as important progress towards our overarching goal of bridging the gap between interactive and automated reasoning. Users who develop libraries for interactive use can now more easily develop special-purpose automation to go with them, thereby encoding procedural heuristics and expertise alongside factual knowledge. At the same time, users who want to use Lean as a back end to assist in complex verification tasks now have flexible means of adapting Lean's libraries and automation to their specific needs. As a result, our metaprogramming language opens up new opportunities, allowing for more natural and intuitive forms of interactive reasoning, as well as for more flexible and reliable forms of automation. More information about Lean can be found at http://leanprover.github.io. The interactive book "Theorem Proving in Lean" (https://leanprover.github.io/theorem_proving_in_lean) is the standard reference for Lean. The book is available in PDF and HTML formats. In the HTML version, all examples and exercises can be executed in the reader's web browser. |
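For a taste of the metaprogramming framework, here is a toy Lean 3 example written for this overview (not taken from the talk): a user-defined tactic is an ordinary `meta` definition in the `tactic` monad and is immediately usable in interactive proofs.

```lean
-- Toy Lean 3 metaprogram (not from the talk): a custom tactic is a
-- `meta` definition in the `tactic` monad, usable right away with `by`.
meta def solve_trivial : tactic unit :=
tactic.reflexivity <|> tactic.assumption

example (n : ℕ) : n + 0 = n + 0 := by solve_trivial
example (p : Prop) (h : p) : p := by solve_trivial
```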
09:00 | SPEAKER: Daniela Inclezan ABSTRACT. We describe an application of Answer Set Programming to the understanding of narratives about stereotypical activities, demonstrated via question answering. Substantial work in this direction was done by Erik Mueller, who modeled stereotypical activities as scripts. His systems were able to understand a good number of narratives, but could not process texts describing exceptional scenarios. We propose addressing this problem by using a theory of intentions developed by Blount, Gelfond, and Balduccini. We present a methodology in which we substitute scripts by activities (i.e., hierarchical plans associated with goals) and employ the concept of an intentional agent to reason about both normal and exceptional scenarios. We exemplify the application of this methodology by answering questions about a number of restaurant stories. |
09:30 | ABSTRACT. We show how a bi-directional grammar can be used to specify and verbalise answer set programs in controlled natural language. We start from a program specification in controlled natural language and translate this specification automatically into an executable answer set program. The resulting answer set program can be modified following certain naming conventions and the revised version of the program can then be verbalised in the same subset of natural language that was used as specification language. The bi-directional grammar is parametrised for processing and generation, deals with referring expressions, and exploits symmetries in the data structure of the grammar rules whenever these grammar rules need to be duplicated. We demonstrate that verbalisation requires sentence planning in order to aggregate similar structures with the aim to improve the readability of the generated specification. Without modifications, the generated specification is always semantically equivalent to the original one; our bi-directional grammar is the first one that allows for semantic round-tripping in the context of controlled natural language processing. |
10:00 | SPEAKER: Carmine Dodaro ABSTRACT. Answer Set Programming (ASP) is a logic-based knowledge representation framework, supporting, among other reasoning modes, the central task of query answering. In the propositional case, query answering amounts to computing cautious consequences of the input program among the atoms in a given set of candidates, where a cautious consequence is an atom belonging to all stable models. Currently, the most efficient algorithms either iteratively verify the existence of a stable model of the input program extended with the complement of one candidate, where the candidate is heuristically selected, or introduce a clause enforcing the falsity of at least one candidate, so that the solver is free to choose which candidate to falsify at any time during the computation of a stable model. This paper introduces new algorithms for the computation of cautious consequences, with the aim of driving the solver to search for stable models discarding more candidates. Specifically, one of these algorithms enforces minimality on the set of true candidates, where different notions of minimality can be used, and another takes advantage of unsatisfiable core computation. The algorithms are implemented in WASP, and experiments on benchmarks from the latest ASP competitions show that the new algorithms perform better than the state of the art. |
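For background, the baseline enumeration scheme that such algorithms refine can be sketched in a few lines with the clingo Python API (a minimal illustration, not WASP's implementation):

```python
# Minimal sketch (not WASP) of the baseline scheme for cautious
# consequences: keep an overapproximation and shrink it with every
# stable model found. Assumes the clingo Python API is installed.
# The example program has stable models {a, b} and {a, b, c}, so the
# cautious consequences among the candidates are {a, b}.
import clingo

def cautious(program, candidates):
    ctl = clingo.Control(["0"])          # "0": enumerate all models
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    upper = [set(candidates)]            # current overapproximation

    def on_model(model):
        atoms = {str(s) for s in model.symbols(atoms=True)}
        upper[0] &= atoms                # discard refuted candidates

    ctl.solve(on_model=on_model)
    return upper[0]

print(cautious("a. b :- a. {c}.", {"a", "b", "c"}))   # {'a', 'b'}
```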
09:00 | SPEAKER: Nicolas Jeannerod ABSTRACT. We investigate a logic of an algebra of trees including the update operation, which expresses that a tree is obtained from an input tree by replacing a particular direct subtree of the input tree, while leaving the rest intact. This operation improves on the expressivity of existing logics of tree algebras in our case of feature trees, which allow for an unbounded number of children of a node in a tree. We show that the first-order theory of this algebra is decidable via a weak quantifier elimination procedure which is allowed to swap existential quantifiers for universal quantifiers. This study is motivated by the logical modeling of transformations on UNIX file system trees expressed in a simple programming language. |
09:30 | SPEAKER: Johannes Greiner ABSTRACT. The CSP of a first-order theory $T$ is the problem of deciding for a given finite set $S$ of atomic formulas whether $T \cup S$ is satisfiable. Let $T_1$ and $T_2$ be two theories with countably infinite models and disjoint signatures. Nelson and Oppen presented conditions that imply decidability (or polynomial-time decidability) of $\mathrm{CSP}(T_1 \cup T_2)$ under the assumption that $\mathrm{CSP}(T_1)$ and $\mathrm{CSP}(T_2)$ are decidable (or polynomial-time decidable). We show that for a large class of $\omega$-categorical theories $T_1, T_2$ the Nelson-Oppen conditions are not only sufficient, but also necessary for polynomial-time tractability of $\mathrm{CSP}(T_1 \cup T_2)$ (unless P=NP). |
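For readers who have not seen the combination procedure itself, the classic Nelson-Oppen interaction on a mixed constraint runs as follows (standard background, not this paper's construction):

```latex
% Standard Nelson--Oppen background (not this paper's construction):
% each solver works on its pure part and the solvers exchange
% entailed equalities between shared variables.
\[
  \underbrace{x \le y \;\wedge\; y \le x}_{\text{arithmetic part}}
  \;\wedge\;
  \underbrace{f(x) \neq f(y)}_{\text{uninterpreted part}}
\]
% The arithmetic solver entails the shared equality $x = y$ and
% propagates it; the uninterpreted-function solver then derives
% $f(x) = f(y)$, contradicting $f(x) \neq f(y)$, so the combined
% constraint is unsatisfiable.
```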
10:00 | Focussing, MALL and the polynomial hierarchy ABSTRACT. We investigate how to extract alternating time bounds from focussed proofs, treating synchronous phases as nondeterministic computation and asynchronous phases as co-nondeterministic computation. We refine the usual presentation of focussing to account for deterministic computations in proof search, which correspond to invertible rules that do not branch, more faithfully associating phases of focussed proof search to their alternating time complexity. As our main result, we give a focussed system for affine MALL and give encodings to and from true quantified Boolean formulas (QBFs): in one direction we encode QBF satisfiability and in the other we encode focussed proof search. Moreover we show that the composition of the two encodings preserves quantifier alternation, hence yielding fragments of affine MALL complete for each level of the polynomial hierarchy. This refines the well-known result that affine MALL is PSPACE-complete. |
10:00 | SPEAKER: Uli Fahrenberg ABSTRACT. In this paper, we propose a novel framework for the synthesis of robust and optimal energy-aware controllers. The framework is based on energy timed automata, allowing for easy expression of timing constraints and variable energy rates. We prove decidability of the energy-constrained infinite-run problem in settings with both certainty and uncertainty of the energy rates. We also consider the optimization problem of identifying the minimal upper bound that will permit the existence of energy-constrained infinite runs. Our algorithms are based on quantifier elimination for linear real arithmetic. Using Mathematica and Mjollnir, we illustrate our framework through a real industrial example of a hydraulic oil pump. Compared with previous approaches, our method is completely automated and provides improved results. |
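The core computational step, quantifier elimination over linear real arithmetic, can be illustrated with z3's `qe` tactic (a toy stand-in for the Mathematica and Mjollnir back ends used in the paper):

```python
# Toy illustration (not the authors' Mathematica/Mjollnir pipeline) of
# quantifier elimination over linear real arithmetic via z3's "qe"
# tactic. We ask which energy bounds b admit some dwell time t in a
# location draining energy at rate 2 from an initial level of 3.
from z3 import Reals, Exists, And, Tactic

b, t = Reals("b t")
formula = Exists(t, And(t >= 0, t <= 2,
                        3 - 2 * t >= 0,      # energy stays non-negative
                        3 - 2 * t <= b))     # ...and below the bound b
print(Tactic("qe")(formula))                 # quantifier-free in b alone
```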
11:00 | Encoding fairness in a synchronous concurrent program algebra SPEAKER: Ian J. Hayes ABSTRACT. Concurrent program refinement algebra provides a suitable basis for supporting mechanised reasoning about shared-memory concurrent programs in a compositional manner; for example, it supports the rely/guarantee approach of Jones. The algebra makes use of a synchronous parallel operator motivated by Aczel’s trace model of concurrency and with similarities to Milner’s SCCS. This paper looks at defining a form of fairness within the program algebra. The encoding allows one to reason about the fair execution of a single process in isolation as well as to define fair parallel in terms of the base parallel operator, of which no fairness properties are assumed. An algebraic theory to support fairness and fair parallel is developed. |
11:30 | SPEAKER: Robert Colvin ABSTRACT. Modern processors deploy a variety of weak memory models, which for efficiency reasons may (appear to) execute instructions in an order different to that specified by the program text. The consequences of instruction reordering can be complex and subtle, and can make correctness difficult to ensure. Previous work on the semantics of weak memory models has focussed on the behaviour of assembler-level programs. In this paper we utilise that work to extract some general principles underlying instruction reordering, and apply those principles to a wide-spectrum language encompassing abstract data types as well as low-level assembler code. The goal is to support reasoning about implementations of data structures for modern processors with respect to an abstract specification. Specifically, we define an operational semantics, from which we derive some properties of program refinement, and encode the semantics in the rewriting engine Maude as a model checking tool. The tool is used to validate the semantics against the behaviour of a set of litmus tests (small assembler programs) reported on hardware, and also to model check implementations of data structures from the literature against their abstract specifications. |
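The flavour of such litmus tests is easy to reproduce in a few lines (a self-contained sketch, not the paper's Maude encoding): under sequential consistency the store-buffering outcome r1 = r2 = 0 is unreachable, but it becomes reachable once one thread's independent store and load may be reordered.

```python
# The classic store-buffering (SB) litmus test, simulated by brute
# force. Thread 1: x = 1; r1 = y.  Thread 2: y = 1; r2 = x.
# No sequentially consistent interleaving yields r1 = r2 = 0, but
# reordering the independent store and load of one thread (as
# TSO-like processors may) makes that outcome reachable.
from itertools import permutations

T1 = [("store", "x", 1), ("load", "y", "r1")]
T2 = [("store", "y", 1), ("load", "x", "r2")]

def outcomes(t1, t2):
    results = set()
    for order in set(permutations([0] * len(t1) + [1] * len(t2))):
        mem, regs, idx = {"x": 0, "y": 0}, {}, [0, 0]
        for tid in order:
            op, loc, arg = (t1, t2)[tid][idx[tid]]
            idx[tid] += 1
            if op == "store":
                mem[loc] = arg
            else:
                regs[arg] = mem[loc]
        results.add((regs["r1"], regs["r2"]))
    return results

print((0, 0) in outcomes(T1, T2))        # False: forbidden under SC
print((0, 0) in outcomes(T1[::-1], T2))  # True: thread 1 reordered
```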
12:00 | SPEAKER: Daniel Fava ABSTRACT. A multitude of weak memory models exists supporting various types of relaxations and different synchronization primitives. On one hand, such models must be lax enough to account for hardware and compiler optimizations; on the other, the more lax the model, the harder it is to understand and program for. Though the right balance is up for debate, a memory model should provide what is known as the SC-DRF guarantee, meaning that data-race free programs behave in a sequentially consistent manner. |
11:00 | Stepwise Development and Model Checking of a Distributed Interlocking System - using RAISE ABSTRACT. This paper considers the challenge of designing and verifying control protocols for geographically distributed railway interlocking systems. It describes for a real-world case study how this can be tackled by stepwise development and model checking of state transition models in an extension of the RAISE Specification Language (RSL). This method also allows different variants of the control protocols to be explored. |
11:30 | Resource-aware Design for Reliable Autonomous Applications with Multiple Periods SPEAKER: Rongjie Yan ABSTRACT. Reliability is the most important design issue for current autonomous vehicles. Guaranteeing reliability while reducing hardware cost is key to the design of such complex control systems, which intertwine scenario-related multi-period timing behaviors. The paper presents a reliability- and resource-aware design framework for embedded implementation of such autonomous applications, where each scenario may have its own timing constraints. The constraints are formalized with consideration of different redundancy-based fault-tolerant techniques and software-to-hardware allocation choices, which capture the static and various causality relations of such systems. Both exact and heuristic methods have been implemented to derive the lower bound of hardware usage, in terms of processor count, for a given reliability requirement. The case study on a realistic autonomous vehicle controller demonstrates the effectiveness and feasibility of the framework. |
12:00 | SPEAKER: Philipp Berger ABSTRACT. This paper presents our experience with formal verification of C code that is automatically generated from Simulink open-loop controller models. We apply the state-of-the-art commercial model checker BTC EmbeddedPlatform to two industrial case studies: Driveline State Request and E-Clutch Control. These case studies contain various features (decision logic, floating-point arithmetic, rate limiters and state-flow systems) implemented in discrete-time logic. The diverse features and the extensive use of floating-point variables make formal code verification more challenging. The paper reports our findings, identifying shortcomings and strengths of formal verification when adopted in an automotive setting. We also provide recommendations to tool developers and requirements engineers so as to integrate formal code verification into automotive mass-product development. |
11:00 | SPEAKER: Martin Gebser ABSTRACT. Automated storage and retrieval systems are principal components of modern production and warehouse facilities. In particular, automated guided vehicles nowadays substitute human-operated pallet trucks in transporting production materials between storage locations and assembly stations. While low-level control systems take care of navigating such driverless vehicles along programmed routes and avoiding collisions even under unforeseen circumstances, in the common case of multiple vehicles sharing the same operation area, the problem remains how to set up routes such that a collection of transport tasks is accomplished most effectively. We address this prevalent problem in the context of car assembly at Mercedes-Benz Ludwigsfelde GmbH, a large-scale producer of commercial vehicles, where routes for automated guided vehicles used in the production process have traditionally been hand-coded by human engineers. Such ad-hoc methods may suffice as long as a running production process remains in place, while any change in the factory layout or production targets necessitates tedious manual reconfiguration, not to mention the lack of portability between different production plants. In contrast, we propose a declarative approach based on Answer Set Programming to optimize the routes taken by automated guided vehicles for accomplishing transport tasks. The advantages include a transparent and executable problem formalization, provable optimality of routes relative to objective criteria, as well as elaboration tolerance towards particular factory layouts and production targets. Moreover, we demonstrate that our approach is efficient enough to deal with the transport tasks evolving in realistic production processes at the car factory of Mercedes-Benz Ludwigsfelde GmbH. |
11:00 | SPEAKER: Simon Robillard ABSTRACT. The absence of a finite axiomatization of the first-order theory of datatypes and codatatypes represents a challenge for automatic theorem provers. We propose two approaches to reason by saturation in this theory: one is a conservative theory extension with a finite number of axioms; the other is an extension of the superposition calculus, in conjunction with axioms. Both techniques are refutationally complete with respect to nonstandard models of datatypes and non-branching codatatypes. They take into account the acyclicity of datatype values and the existence and uniqueness of cyclic codatatype values. We implemented them in the first-order prover Vampire and compare them experimentally. |
11:30 | SPEAKER: Alexander Bentkamp ABSTRACT. We introduce refutationally complete superposition calculi for intensional and extensional lambda-free higher-order logic, a formalism that allows partial application and applied variables. The intensional variants perfectly coincide with standard superposition on first-order clauses. The calculi are parameterized by a well-founded term order that need not be compatible with arguments, making it possible to employ the lambda-free higher-order lexicographic path and Knuth-Bendix orders. We implemented the calculi in the Zipperposition prover and evaluated them on TPTP benchmarks. They appear promising as a stepping stone towards complete, efficient automatic theorem provers for full higher-order logic. |
12:00 | SPEAKER: Nicholas Smallbone ABSTRACT. We present several translations from first-order Horn formulas to equational logic. The goal of these translations is to allow equational theorem provers to efficiently reason about non-equational problems. Using the translations we were able to solve 33 problems of rating 1.0 from the TPTP. |
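One folklore-style translation conveys the flavour of the idea (a sketch for intuition only; the paper's translations differ in their details and efficiency):

```latex
% Sketch of one folklore-style encoding (for intuition; the paper's
% translations differ in detail). A single axiom
\[
  \mathit{ifeq}(X, X, Y, Z) \approx Y
\]
% lets the Horn clause $p(x) \leftarrow q(x)$, read as
% $q(x) \approx \mathit{true} \Rightarrow p(x) \approx \mathit{true}$,
% be stated as the unconditional equation
\[
  \mathit{ifeq}(q(x), \mathit{true}, p(x), \mathit{true}) \approx \mathit{true},
\]
% which forces $p(x) \approx \mathit{true}$ whenever
% $q(x) \approx \mathit{true}$, and is inert otherwise.
```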
12:00 | SPEAKER: Jérôme Dohrau ABSTRACT. Information about the memory locations accessed by a program is, for instance, required for program parallelisation and program verification. Existing inference techniques for this information provide only partial solutions for the important class of array-manipulating programs. In this paper, we present a static analysis that infers the memory footprint of an array program in terms of permission pre- and postconditions as used, for example, in separation logic. This formulation allows our analysis to handle concurrent programs and produces specifications that can be used by verification tools. Our analysis expresses the permissions required by a loop via maximum expressions over the individual loop iterations. These maximum expressions are then solved by a novel maximum elimination algorithm, in the spirit of quantifier elimination. Our approach is sound and is implemented; an evaluation on existing benchmarks for memory safety of array programs demonstrates accurate results, even for programs with complex access patterns and nested loops. |
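The central idea, permissions as maxima over loop iterations that are then symbolically eliminated, can be mimicked concretely (a toy rendering written for this overview, not the paper's algorithm):

```python
# Toy rendering (not the paper's algorithm) of permission maxima: the
# permission a loop "for i in range(n): a[i] = 0" needs for cell a[j]
# is the maximum, over iterations i, of what iteration i needs.
def per_iteration_perm(i, j):
    return 1 if i == j else 0   # iteration i writes only a[i]

def loop_perm(j, n):            # max over i in [0, n) ...
    return max((per_iteration_perm(i, j) for i in range(n)), default=0)

# ... which maximum elimination turns into a closed form free of the
# quantified iteration variable i:
def eliminated(j, n):
    return 1 if 0 <= j < n else 0

assert all(loop_perm(j, 5) == eliminated(j, 5) for j in range(-2, 8))
```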
12:15 | SPEAKER: Francesco Ranzato ABSTRACT. We study static program analysis, namely the detection of sound program assertions, and verification, namely the sound checking of program assertions, from a computability perspective. We first provide a general computability model for domains of program assertions and corresponding program analysers and verifiers. Next, we formalize and prove an instantiation of Rice's Theorem for static program analysis and verification. Then, within this general model, we give a precise statement of the popular belief that program analysis is a harder problem than program verification: we prove that for finite domains of program assertions, program analysis and verification are equivalent problems, while for infinite domains, program analysis is strictly harder than verification. |
Public debate on "Ethics & Morality of Robotics" with panelists specializing in ethics, law, computer science, data security and privacy:
- Prof Matthias Scheutz, Dr Sandra Wachter, Prof Jeannette Wing, Prof Francesca Rossi, Prof Luciano Floridi & Prof Ben Kuipers
See http://www.floc2018.org/public-debate/ for further details and to register.
16:00 | SPEAKER: Andrea Vandin ABSTRACT. QFLan offers modeling and analysis of highly reconfigurable systems, like product lines, which are characterized by combinatorially many system variants (or products) that can be obtained via different combinations of installed features. The tool offers a modern integrated development environment for the probabilistic feature-oriented language of the same name. QFLan allows the specification of a family of products in terms of a feature model with quantitative attributes, which defines the valid feature combinations, and probabilistic behavior subject to quantitative constraints. The language's behavioral part enables dynamic installation, removal and replacement of features. QFLan has a discrete-time Markov chain semantics, permitting quantitative analyses. Thanks to a seamless integration with the statistical model checker MultiVeStA, it allows for analyses like the likelihood of specific behavior or the expected average value of non-functional aspects related to feature attributes. |
16:30 | SPEAKER: Thomas Letan ABSTRACT. Modern systems have grown in complexity, and the attack surface has increased accordingly. Even though system components are generally carefully designed, and even verified, by different groups of people, the composition of these components is often regarded with less attention. This paves the way for “architectural attacks,” a class of security vulnerabilities where the attacker is able to threaten the security of the system even if each of its components continues to act according to its specification. In this article, we introduce FreeSpec, a Coq framework built upon the key idea that components can be modelled as programs with algebraic effects to be realised by other components. FreeSpec allows for the modular modelling of a complex system, by defining idealised components connected together, and the modular verification of the properties of their composition. In doing so, we propose a novel approach for the Coq proof assistant to reason about programs with effects in a modular way. |
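The key idea translates loosely into any language with suspendable computations; the Python sketch below (an analogue for intuition only; FreeSpec itself is a Coq framework) models a component as a program that yields effect requests for another component to realise.

```python
# Loose Python analogue of the FreeSpec idea: a component requests
# effects, and another component realises them. Effects are yielded
# requests; the handler is an interpreter we can swap out.
def read_then_double(addr):
    value = yield ("read", addr)       # effect request
    yield ("write", addr, 2 * value)   # another request

def run(program, memory):
    try:
        request = next(program)
        while True:
            if request[0] == "read":
                request = program.send(memory[request[1]])
            else:                      # "write"
                _, addr, val = request
                memory[addr] = val
                request = next(program)
    except StopIteration:
        return memory

print(run(read_then_double(0), {0: 21}))  # {0: 42}
```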
17:00 | SPEAKER: Heiko Becker ABSTRACT. Recent renewed interest in optimization and analysis of floating-point programs has led to a diverse array of new tools for numerical programs. These tools are often complementary, each focusing on a distinct aspect of numerical programming. Building reliable floating-point applications typically requires addressing several of these aspects, which makes easy composition essential. This paper describes the composition of two recent floating-point tools: Herbie, which performs accuracy optimization, and Daisy, which performs accuracy verification. We find that the combination provides numerous benefits to users, such as being able to use Daisy to check whether Herbie's unsound optimizations improved the worst-case roundoff error, as well as benefits to tool authors, including uncovering a number of bugs in both tools. The combination also allowed us to compare the different program rewriting techniques implemented by these tools for the first time. The paper lays out a road map for combining other floating-point tools and for surmounting common challenges. |
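A textbook example of the kind of rewrite involved (chosen for this overview, not taken from the paper): for large x, the expression sqrt(x+1) - sqrt(x) cancels catastrophically, while an algebraically equal form does not. Herbie searches for such rewrites; Daisy can then bound the worst-case roundoff error of the result.

```python
# Classic accuracy rewrite: sqrt(x+1) - sqrt(x) loses almost all
# significant digits for large x, whereas the algebraically equal
# 1/(sqrt(x+1) + sqrt(x)) stays accurate.
import math

def naive(x):
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(naive(x))      # heavily cancelled, barely any correct digits
print(rewritten(x))  # close to the true value, about 1.58e-8
```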
17:30 | A Formally Verified Floating-Point Implementation of the Compact Position Reporting Algorithm SPEAKER: Cesar Munoz ABSTRACT. The Automatic Dependent Surveillance-Broadcast (ADS-B) system allows aircraft to communicate their current state, including position and velocity information, to other aircraft in their vicinity and to ground stations. The Compact Position Reporting (CPR) algorithm is the ADS-B module responsible for the encoding and decoding of aircraft positions. CPR is highly sensitive to computer arithmetic since it heavily relies on functions that are intrinsically unstable such as floor and modulo. In this paper, a formally-verified double-precision floating-point implementation of the CPR algorithm is presented. The verification proceeds in three steps. First, an alternative version of CPR, which reduces the floating-point rounding error is proposed. Then, the Prototype Verification System (PVS) is used to formally prove that the ideal real-number counterpart of the improved algorithm is mathematically equivalent to the standard CPR definition. Finally, the static analyzer Frama-C is used to verify that the double-precision implementation of the improved algorithm is correct with respect to its operational requirement. The alternative algorithm is currently being considered for inclusion in the revised version of the ADS-B standards document as the reference implementation of the CPR algorithm. |
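Why floor is so hostile to floating-point reasoning can be seen in one line (a generic illustration, not CPR itself):

```python
# Tiny, generic demonstration (not CPR itself) of why floor is
# numerically treacherous: a value that is exactly 8 in real
# arithmetic lands just below 8 in binary64, and floor turns that
# last-bit rounding error into an off-by-one result.
import math

computed = (0.7 + 0.1) * 10   # exactly 8 in real arithmetic
print(computed)               # 7.999999999999999
print(math.floor(computed))   # 7, not the mathematically correct 8
```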
16:00 | Formal Verification of Automotive Simulink Controller Models: Empirical Technical Challenges, Evaluation and Recommendations SPEAKER: Johanna Nellen ABSTRACT. The automotive industry makes increasing use of Simulink-based software development. Typically, automotive Simulink designs are analyzed using non-formal test methods, which do not guarantee the absence of errors. In contrast, formal verification techniques aim at providing formal guarantees or counterexamples showing whether the analyzed designs fulfill their requirements for all possible inputs and parameters. Therefore, the safety standard ISO 26262 recommends the use of formal methods in the software development process for safety-critical systems. In this paper, we report on the application of formal verification to check discrete-time properties of a previously tested Simulink model for a park assistant feature, using the commercial Simulink Design Verifier tool. During our evaluation, we experienced a gap between the offered functionalities and typical industrial needs, which hindered the successful application of this tool in the context of model-based development. We discuss these issues and propose solutions related to system development, requirements specification and verification tools, in order to prepare the ground for the effective integration of computer-assisted formal verification into automotive Simulink-based development. |
16:30 | SPEAKER: Claudio Menghi ABSTRACT. Robot applications are increasingly based on teams of robots that collaborate to perform a desired mission. Such applications call for decentralized techniques that allow for tractable automated planning. Another aspect that current robot applications must consider is partial knowledge about the environment in which the robots are operating, and the uncertainty associated with the outcome of the robots' actions. Current planning techniques used for teams of robots that perform complex missions do not systematically address these challenges: they are either based on centralized solutions and hence not scalable, consider rather simple missions such as A-to-B travel, or do not work in partially known environments. We present a planning solution that decomposes the team of robots into subclasses, considers high-level missions given in temporal logic, and at the same time works when only partial knowledge of the environment is available. We prove the correctness of the solution and evaluate its effectiveness on a set of realistic examples. |
17:00 | SPEAKER: Khalil Ghorbal ABSTRACT. Vector Lyapunov functions are a multi-dimensional extension of the more familiar (scalar) Lyapunov functions, commonly used to prove stability properties in systems described by non-linear ordinary differential equations (ODEs). This paper explores an analogous vector extension for so-called barrier certificates used in safety verification. As with vector Lyapunov functions, the approach hinges on constructing appropriate comparison systems, i.e., related differential equation systems from which properties of the original system may be inferred. The paper presents an accessible development of the approach, demonstrates that most previous notions of barrier certificate are special cases of comparison systems, and discusses the potential applications of vector barrier certificates in safety verification and invariant synthesis. |
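For orientation, here is one common formulation of the scalar notion being generalised (standard background; the paper's vector definitions refine this):

```latex
% One common formulation of a (scalar) barrier certificate for the
% system $\dot{x} = f(x)$; standard background, the paper's vector
% notion refines it. $B$ separates initial from unsafe states and
% cannot increase along trajectories:
\begin{align*}
  B(x) &\le 0 && \text{for all } x \in \mathit{Init},\\
  B(x) &> 0  && \text{for all } x \in \mathit{Unsafe},\\
  \nabla B(x) \cdot f(x) &\le 0 && \text{for all } x.
\end{align*}
% A vector barrier certificate replaces $B$ by a vector of functions
% constrained componentwise through a comparison system, roughly
% $\dot{B} \le A\,B$ for a suitable matrix $A$.
```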
17:30 | SPEAKER: Hana Chockler ABSTRACT. Vacuity is a leading sanity check in model-checking, applied when the system is found to satisfy the specification. The check detects situations where the specification passes in a trivial way, say when a specification that requires every request to be followed by a grant is satisfied in a system with no requests. Such situations typically reveal serious problems in the modelling of the system or the specification, and indeed vacuity detection is a part of most industrial model-checking tools. Existing research and tools for vacuity concern discrete-time systems and specification formalisms. We introduce real-time vacuity, which aims to detect problems with real-time modelling. Real-time logics are used for the specification and verification of systems with a continuous-time behavior. We study vacuity for the branching real-time logic TCTL, and focus on vacuity with respect to the time constraints in the specification. Specifically, the logic TCTL includes a single temporal operator $U^J$, which specifies real-time eventualities in real-time systems: the parameter $J \subseteq \mathbb{R}_{\ge 0}$ is an interval with integral boundaries that bounds the time in which the eventuality should hold. We define several tightenings for the $U^J$ operator. These tightenings require the eventuality to hold within a strict subset of $J$. We prove that vacuity detection for TCTL is PSPACE-complete, thus it does not increase the complexity of model-checking of TCTL. Our contribution involves an extension, termed TCTL+, of TCTL, which allows the interval $J$ not to be continuous, and for which model-checking stays in PSPACE. Finally, we discuss ways to rank vacuity results by their significance, and extend the framework of ranking vacuity to TCTL. |
17:30 | IJCAR competition presentations |
FLoC banquet at Ashmolean Museum. Drinks and food available from 7pm (pre-booking via FLoC registration system required; guests welcome).