FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
FOMLAS ON SUNDAY, JULY 31ST

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 1D

FoMLAS Session 1

Chair:
Location: Ullmann 201
09:00
Why Robust Natural Language Understanding is a Challenge
PRESENTER: Marco Casadio

ABSTRACT. With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural networks notoriously present low robustness and can be highly sensitive to small input perturbations. Recently, many methods for verifying general robustness properties of networks have been proposed, but they are mostly applied in Computer Vision. In this paper we propose a verification method for Natural Language Understanding classification based on larger regions of interest, and we discuss the challenges of such a task. We observe that, although the data is almost linearly separable, the verifier does not output positive results, and we explain the problems and implications.

(Submitted as Extended Abstract)
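
As a rough illustration of the kind of property at stake, the sketch below (ours, not the authors') samples points from a hyper-rectangular region of interest around a sentence embedding and checks whether a classifier's decision is preserved; a verifier must prove this for every point in the region rather than for a finite sample. The `model.predict` interface and the region bounds are hypothetical placeholders.

    import numpy as np

    def sample_check_region(model, lower, upper, label, n_samples=1000, rng=None):
        """Sample embeddings uniformly from the box [lower, upper] and check whether
        `model` keeps predicting `label`. This is a falsification-style sanity check,
        not a proof: a verifier must cover every point of the region."""
        rng = rng or np.random.default_rng(0)
        for _ in range(n_samples):
            x = rng.uniform(lower, upper)      # a point inside the region of interest
            if model.predict(x) != label:      # hypothetical classifier interface
                return False, x                # counterexample found
        return True, None                      # no counterexample among the samples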

09:30
An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks
PRESENTER: Matan Ostrovsky

ABSTRACT. Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others. Unfortunately, it is now well known that convolutional networks often produce erroneous results --- for example, minor perturbations of the inputs of these networks can result in severe classification errors. Numerous verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared for fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present here the CNN-Abs framework, which is particularly aimed at the verification of convolutional networks. The core of CNN-Abs is an abstraction-refinement technique, which simplifies the verification problem through the removal of convolutional connections in a way that soundly creates an over-approximation of the original problem; and which restores these connections if the resulting problem becomes too abstract. CNN-Abs is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average.
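
The abstraction-refinement loop described above can be sketched as follows; `abstract`, `verify`, `is_spurious`, and `refine` are placeholders for CNN-Abs's actual components and backend verifier, so this is an illustrative skeleton rather than the tool's implementation.

    def verify_with_abstraction(network, prop, abstract, verify, is_spurious, refine):
        """Abstraction-refinement skeleton in the spirit of CNN-Abs (illustrative only).
        `abstract` drops convolutional connections to build a sound over-approximation;
        `verify` calls an off-the-shelf backend; `refine` restores dropped connections."""
        abstract_net = abstract(network)
        while True:
            holds, counterexample = verify(abstract_net, prop)
            if holds:                          # no violation in the over-approximation,
                return "property holds", None  # hence none in the original network
            if not is_spurious(counterexample, network, prop):
                return "property violated", counterexample
            # the violation is an artifact of over-abstraction: add connections back
            abstract_net = refine(abstract_net, counterexample)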

10:00
Neural Networks in Imandra: Matrix Representation as a Verification Choice
PRESENTER: Remi Desmartin

ABSTRACT. The demand for formal verification tools for neural networks has increased as neural networks have been deployed in a growing number of safety-critical applications. Matrices are a data structure essential to formalising neural networks. Functional programming languages encourage diverse approaches to matrix definitions, a feature that has already been successfully exploited in different applications. The question we ask is whether, and how, these ideas can be applied in neural network verification. The functional programming language Imandra combines the syntax of a functional language with the power of an automated theorem prover. Using these two key features of Imandra, we explore how different implementations of matrices can influence the automation of neural network verification.
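
To make the design choice concrete, here is a small Python analogue (not Imandra code, which is OCaml-like) of two matrix representations a functional formalisation might choose between: an explicit list-of-rows encoding versus a function from indices to values. The talk's point is that such representational choices affect how well proofs about the network automate.

    from typing import Callable, List

    # Representation 1: a matrix as an explicit list of rows.
    MatrixAsLists = List[List[float]]

    # Representation 2: a matrix as a function from (row, column) to value,
    # carried together with its dimensions.
    MatrixAsFunction = Callable[[int, int], float]

    def matvec_lists(m: MatrixAsLists, v: List[float]) -> List[float]:
        return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

    def matvec_function(m: MatrixAsFunction, rows: int, cols: int, v: List[float]) -> List[float]:
        return [sum(m(i, j) * v[j] for j in range(cols)) for i in range(rows)]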

10:30-11:00 Coffee Break
11:00-12:30 Session 10E

FoMLAS Session 2

Chair:
Location: Ullmann 201
11:00
Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations
PRESENTER: Jianglin Lan

ABSTRACT. We present a novel semidefinite programming (SDP) relaxation that enables tight and efficient verification of neural networks. The tightness is achieved by combining SDP relaxations with valid linear cuts, constructed using the reformulation-linearisation technique (RLT). The computational efficiency results from a layerwise SDP formulation and an iterative algorithm for incrementally adding RLT-generated linear cuts to the verification formulation. The layerwise RLT-SDP relaxation presented here is shown to produce the tightest SDP relaxation for ReLU neural networks available in the literature. We report experimental results based on MNIST neural networks showing that the method outperforms state-of-the-art methods while maintaining acceptable computational overheads. For networks of approximately 10k nodes (respectively, 1k nodes), the proposed method improved the ratio of certified robustness cases from 0% to 82% (respectively, from 35% to 70%).
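
For readers less familiar with the ingredients, the following is a generic sketch of how an SDP relaxation of a ReLU layer can be tightened with RLT cuts; it is illustrative only and not the paper's exact formulation. For a layer $y = \max(0, Wx)$ with input bounds $l \le x \le u$, a lifted relaxation introduces the moment matrix

$$
P \;=\; \begin{pmatrix} 1 & x^\top & y^\top \\ x & X & Z^\top \\ y & Z & Y \end{pmatrix} \succeq 0,
$$

where $X$, $Z$, $Y$ relax $xx^\top$, $yx^\top$, $yy^\top$. The ReLU conditions $y \ge 0$, $y \ge Wx$, and $y \circ (y - Wx) = 0$ then become linear constraints on the entries of $P$ (e.g. $\operatorname{diag}(Y - ZW^\top) = 0$). RLT cuts linearise products of the bound constraints, for example

$$
(x_i - l_i)(u_j - x_j) \ge 0 \;\Rightarrow\; u_j x_i + l_i x_j - X_{ij} - l_i u_j \ge 0,
$$

and incrementally adding a selection of such valid inequalities is what tightens the layerwise relaxation.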

11:30
A Cascade of Checkers for Run-time Certification of Local Robustness
PRESENTER: Ravi Mangal

ABSTRACT. Neural networks are known to be susceptible to adversarial examples. Different techniques have been proposed in the literature to address the problem, ranging from adversarial training with robustness guarantees to post-training and run-time certification of local robustness using either inexpensive but incomplete verification or sound, complete, but expensive constraint solving. We advocate for the use of a run-time cascade of over-approximate, under-approximate, and exact local robustness checkers. The exact check in the cascade ensures that no unnecessary alarms are raised, an important requirement for autonomous systems where resorting to fail-safe mechanisms is highly undesirable. Though exact checks are expensive, via two case studies, we demonstrate that the exact check in a cascade is rarely invoked in practice.
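
The cascade can be pictured with the following minimal sketch (names and interfaces are ours, not the paper's): an inexpensive sound-but-incomplete check runs first, then a cheap falsifier such as an adversarial attack, and the expensive exact check is invoked only if both are inconclusive.

    def cascaded_robustness_check(x, over_approx_check, attack, exact_check):
        """Run-time cascade of local-robustness checkers (illustrative sketch).
        over_approx_check: sound but incomplete; a "robust" verdict is trusted.
        attack:            cheap falsifier; a found adversarial example is trusted.
        exact_check:       sound and complete but expensive; used as a last resort."""
        if over_approx_check(x):       # certified robust by the cheap verifier
            return True
        if attack(x) is not None:      # concrete adversarial example found
            return False
        return exact_check(x)          # rare, expensive exact decision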

12:00
RoMA: a Method for Neural Network Robustness Measurement and Assessment
PRESENTER: Natan Levy

ABSTRACT. Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, and others. However, their reliability is heavily plagued by adversarial inputs: inputs generated by adding tiny perturbations to correctly-classified inputs, and for which the neural network produces erroneous results. In this paper, we present a new method called Robustness Measurement and Assessment (RoMA), which measures the robustness of a neural network model against such adversarial inputs. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. The type of robustness assessment afforded by RoMA is inspired by state-of-the-art certification practices, and could constitute an important step towards integrating neural networks in safety-critical systems.
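
As a rough illustration of the quantity RoMA targets, the sketch below estimates the probability that a random perturbation flips the classification by plain Monte Carlo sampling; the `model.predict` interface is a hypothetical placeholder, and the paper's actual method is statistically more refined than this naive estimator.

    import numpy as np

    def estimate_misclassification_probability(model, x, label, epsilon,
                                               n_samples=10_000, rng=None):
        """Monte Carlo estimate of P[model(x + delta) != label] for perturbations
        delta drawn uniformly from [-epsilon, epsilon]^d. Illustrative only: it
        cannot certify probabilities far below 1 / n_samples."""
        rng = rng or np.random.default_rng(0)
        errors = 0
        for _ in range(n_samples):
            delta = rng.uniform(-epsilon, epsilon, size=x.shape)
            if model.predict(x + delta) != label:   # hypothetical classifier interface
                errors += 1
        return errors / n_samples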

12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-15:30 Session 14F

FoMLAS Session 3

Chair:
Location: Ullmann 201
14:00
VNN-COMP 2022
PRESENTER: Mark Müller

ABSTRACT. Over the last years, the formal verification of safety properties of neural networks has received ever-increasing attention from the research community leading to a vast range of applicable tools.

To help practitioners decide which verifier to apply, we hosted the ‘Verification of Neural Networks Competition’ (VNN-COMP’22), facilitating the comparison of current state-of-the-art systems on a range of diverse benchmarks using (cost) equivalent hardware. In this talk, we will present an analysis of the results and have successful participants present the core ideas underlying their methods. Furthermore, we will gather feedback on this year's processes and discuss ideas for future iterations of the competition.

This talk will be delivered jointly by Christopher Brix and Mark Müller.

15:30-16:00 Coffee Break
16:00-17:00 Session 19D

FoMLAS Session 4

Location: Ullmann 201
16:00
Counter-Example Guided Neural Network Compression Refinement (CEG4N)

ABSTRACT. Neural networks are essential components of learning-based software systems. However, their high compute, memory, and power requirements make using them in low-resource domains challenging. For this reason, neural networks are often compressed before deployment. Existing compression techniques tend to degrade the network accuracy. We propose Counter-Example Guided Neural Network Compression Refinement (CEG4N). This technique combines search-based quantization and equivalence verification: the former minimizes the computational requirements, while the latter guarantees that the network's output does not change after compression. We evaluate CEG4N on a diverse set of benchmarks that include large and small networks. Our technique successfully compressed 80% of the networks in our evaluation while producing models with up to 72% better accuracy than state-of-the-art techniques.
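
The counter-example guided loop that the abstract describes can be sketched as below, with hypothetical component names: a search procedure proposes a quantised candidate, an equivalence verifier checks it against the original network, and any counterexample is fed back to constrain the next proposal.

    def ceg_compression(network, propose_quantisation, check_equivalence, max_iterations=100):
        """Counter-example guided compression loop (illustrative sketch, not CEG4N's code).
        propose_quantisation(network, counterexamples): searches for a small quantised model
            that agrees with `network` on all counterexamples collected so far.
        check_equivalence(network, candidate): proves equivalence or returns a counterexample."""
        counterexamples = []
        for _ in range(max_iterations):
            candidate = propose_quantisation(network, counterexamples)
            equivalent, counterexample = check_equivalence(network, candidate)
            if equivalent:
                return candidate                    # verified-equivalent compressed network
            counterexamples.append(counterexample)  # refine the search with the new witness
        return None                                 # no verified compression found within budget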

16:30
Goal-Aware RSS for Complex Scenarios via Program Logic

ABSTRACT. I'd like to present the content of the attached manuscript, currently in press for IEEE Transactions on Intelligent Vehicles.

The work establishes a formal logic foundation for RSS, a recent methodology for "theorem proving" the safety of automated driving systems. RSS was proposed by researchers at Intel/Mobileye and is attracting a lot of interest from industry and academia alike, but its formal aspects have been underdeveloped (although RSS is really about theorem proving). In the paper, we introduce a logic dFHL---it can be seen as an adaptation of Platzer's differential dynamic logic---and a workflow for deriving formally verified safety rules. This extension of RSS allows one to prove the safety of much more complicated scenarios than RSS could previously handle. The workflow is experimentally evaluated, too.

The work treats automated vehicles as black boxes, so that it accommodates ML components that cannot be logically modeled. I believe the work will be of interest to the audience 1) because of its methodology, in which black-box models are "verified" by imposing formally guaranteed rules/contracts, and 2) because automated driving is a hot application topic.
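
For context, a representative example of the kind of rule RSS is built around (taken from the original RSS proposal, and not necessarily in the exact form used in the manuscript) is the longitudinal safe-distance condition between a rear vehicle at speed $v_r$ and a front vehicle at speed $v_f$, with response time $\rho$, maximal acceleration $a_{\max}$, and braking rates $b_{\min}$ (guaranteed by the rear vehicle) and $b_{\max}$ (worst case of the front vehicle):

$$
d_{\min} \;=\; \left[\, v_r\,\rho \;+\; \tfrac{1}{2}\, a_{\max}\,\rho^2 \;+\; \frac{(v_r + \rho\, a_{\max})^2}{2\, b_{\min}} \;-\; \frac{v_f^2}{2\, b_{\max}} \,\right]_{+}.
$$

The dFHL-based workflow described above is what allows rules of this kind to be derived and formally justified for considerably more complex scenarios.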