Editors: Nina Narodytska, Omri Isac and Guy Katz
Authors, Title and Abstract | Paper | Talk |
---|---|---|
ABSTRACT. We present a novel semidefinite programming (SDP) relaxation that enables tight and efficient verification of neural networks. The tightness is achieved by combining SDP relaxations with valid linear cuts, constructed using the reformulation-linearisation technique (RLT). The computational efficiency results from a layerwise SDP formulation and an iterative algorithm for incrementally adding RLT-generated linear cuts to the verification formulation. The layerwise RLT-SDP relaxation presented here is shown to produce the tightest SDP relaxation for ReLU neural networks available in the literature. We report experimental results on MNIST neural networks showing that the method outperforms state-of-the-art methods while maintaining acceptable computational overheads. For networks of approximately 10k nodes (1k nodes, respectively), the proposed method improved the ratio of certified robustness cases from 0% to 82% (from 35% to 70%, respectively). (An illustrative sketch of an RLT cut appears below the table.) | Jul 31 11:00 | |
ABSTRACT. Neural networks are successfully used in many domains, including safety- and security-critical applications. As a result, researchers have proposed formal verification techniques for verifying neural network properties. A large majority of previous efforts have focused on checking local robustness of neural networks. We instead focus on another neural network security issue, namely data poisoning, whereby an attacker inserts a trigger into a subset of the training data in such a way that, at test time, this trigger causes the classifier to predict some target class. In this paper, we show how to formulate the absence of data poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum. Counterexamples of failed checks constitute potential triggers that we validate through testing. We further show that the discovered triggers are ‘transferable’ from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art, performant models trained for image classification tasks. | Aug 01 09:30 | |
ABSTRACT. Neural networks are known to be susceptible to adversarial examples. Different techniques have been proposed in the literature to address the problem, ranging from adversarial training with robustness guarantees to post-training and run-time certification of local robustness using either inexpensive but incomplete verification or sound, complete, but expensive constraint solving. We advocate for the use of a run-time cascade of over-approximate, under-approximate, and exact local robustness checkers. The exact check in the cascade ensures that no unnecessary alarms are raised, an important requirement for autonomous systems where resorting to fail-safe mechanisms is highly undesirable. Though exact checks are expensive, we demonstrate via two case studies that the exact check in a cascade is rarely invoked in practice. | Jul 31 11:30 | |
ABSTRACT. Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods. | Aug 01 09:00 | |
ABSTRACT. Neural networks are essential components of learning-based software systems. However, their high compute, memory, and power requirements make using them in low-resource domains challenging. For this reason, neural networks are often compressed before deployment. Existing compression techniques tend to degrade the network accuracy. We propose Counter-Example Guided Neural Network Compression Refinement (CEG4N). This technique combines search-based quantization and equivalence verification: the former minimizes the computational requirements, while the latter guarantees that the network’s output does not change after compression. We evaluate CEG4N on a diverse set of benchmarks that includes large and small networks. Our technique successfully compressed 80% of the networks in our evaluation while producing models with up to 72% better accuracy than state-of-the-art techniques. | Jul 31 16:00 | |
ABSTRACT. Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, and others. However, their reliability is heavily plagued by adversarial inputs: inputs generated by adding tiny perturbations to correctly-classified inputs, and for which the neural network produces erroneous results. In this paper, we present a new method called Robustness Measurement and Assessment (RoMA), which measures the robustness of a neural network model against such adversarial inputs. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. The type of robustness assessment afforded by RoMA is inspired by state-of-the-art certification practices, and could constitute an important step towards integrating neural networks in safety-critical systems. | Jul 31 12:00 | |
ABSTRACT. Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks. However, as highlighted by many recent studies, even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN. This renders DNNs vulnerable to strategic input manipulations by attackers, and also prone to oversensitivity to environmental noise. To mitigate this phenomenon, practitioners apply joint classification by an ensemble of DNNs. By aggregating the classification outputs of different individual DNNs for the same input, ensemble-based classification reduces the risk of misclassifications due to the specific realization of the stochastic training process of any single DNN. However, the effectiveness of a DNN ensemble is highly dependent on its members not simultaneously erring on many different inputs. In this case study, we harness recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed --- resulting in more robustly-accurate ensemble-based classification. Our proposed framework uses a DNN verifier as a backend, and includes heuristics that help reduce the high complexity of directly verifying ensembles. More broadly, our work puts forth a novel universal objective for formal verification that can potentially improve the robustness of real-world, deep-learning-based systems across a variety of application domains. | Aug 01 10:00 | |
ABSTRACT. Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others. Unfortunately, it is now well known that convolutional networks often produce erroneous results --- for example, minor perturbations of the inputs of these networks can result in severe classification errors. Numerous verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared for fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present here the CNN-Abs framework, which is particularly aimed at the verification of convolutional networks. The core of CNN-Abs is an abstraction-refinement technique, which simplifies the verification problem through the removal of convolutional connections in a way that soundly creates an over-approximation of the original problem; and which restores these connections if the resulting problem becomes too abstract. CNN-Abs is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average. | Jul 31 09:30 | |
ABSTRACT. Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present here a new framework, called 3M-DNN, for repairing a given DNN, which is known to err on some set of inputs. The novel repair procedure employed by 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine. To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying the weights of multiple layers. This is achieved by splitting the network into sub-networks, and applying a single-layer repairing technique to each component. We evaluated 3M-DNN on an extensive set of benchmarks, obtaining promising results. | Aug 01 16:00 | |
ABSTRACT. Verification of neural network specifications is currently an active field of research in automated theorem proving. However, the actual act of verification is merely one step in the process of constructing a verified network. Prior to verification, the specification should influence the training of the network, and afterwards users may want to export the verified specification to other verification environments in order to prove a specification about a larger system that uses the network. Currently there is little consensus on how best to connect these different stages. In this talk we will describe our proposed solution to this problem: the Vehicle specification system. Vehicle allows the user to write a single specification in a high-level, human-readable form, and the Vehicle compiler then compiles it down to different targets, including training frameworks, verifiers, and interactive theorem provers. We will also discuss the various design decisions involved in the specification language itself and hope to solicit feedback from the verification community. (Submitted as Extended Abstract) | Aug 01 11:30 | |
ABSTRACT. Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, this is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases, and entails only a small overhead. (A reference statement of Farkas’ lemma appears below the table.) | Aug 01 14:00 | |
ABSTRACT. With the proliferation of deep machine learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural networks notoriously present low robustness and can be highly sensitive to small input perturbations. Recently, many methods for verifying networks’ general robustness properties have been proposed, but they are mostly applied in computer vision. In this paper we propose a verification method for Natural Language Understanding classification based on larger regions of interest, and we discuss the challenges of such a task. We observe that, although the data is almost linearly separable, the verifier does not output positive results, and we explain the problems and implications. (Submitted as Extended Abstract) | Jul 31 09:00 | |
ABSTRACT. The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications have drawn attention to the importance of their verification. While verification is known to be computationally difficult in theory, many techniques have been proposed for solving it in practice. It has been observed in the literature that, by default, neural networks rarely satisfy the logical constraints that we want to verify. A good course of action is to train the given NN to satisfy said constraints prior to verifying them. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification. Usually, training with constraints is implemented by specifying a translation from a given formal logic language into loss functions. These loss functions are then used to train neural networks. Because for training purposes these functions need to be differentiable, these translations are called differentiable logics (DL). (An illustrative sketch of such a translation appears below the table.) This raises several research questions on the technical side of "training with constraints". What kinds of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL, viewed from the point of view of the resulting loss function? In this extended abstract we will discuss and answer these questions. (Submitted as Extended Abstract) | Aug 01 12:00 | |
ABSTRACT. The demand for formal verification tools for neural networks has increased as neural networks have been deployed in a growing number of safety-critical applications. Matrices are a data structure essential to formalising neural networks. Functional programming languages encourage diverse approaches to matrix definitions. This feature has already been successfully exploited in different applications. The question we ask is whether, and how, these ideas can be applied to neural network verification. The functional programming language Imandra combines the syntax of a functional language with the power of an automated theorem prover. Using these two key features of Imandra, we explore how different implementations of matrices can influence the automation of neural network verification. | Jul 31 10:00 | |
ABSTRACT. Classifiers learnt from data are increasingly being used as components in systems where safety is a critical concern. In this work, we present a formal notion of safety for classifiers via constraints called safe-ordering constraints. These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature. For classifiers implemented using neural networks, we also present a run-time mechanism for the enforcement of safe-ordering constraints. Our approach is based on a self-correcting layer, which provably yields safe outputs regardless of the characteristics of the classifier input. We compose this layer with an existing neural network classifier to construct a self-correcting network (SC-Net), and show that in addition to providing safe outputs, the SC-Net is guaranteed to preserve the classification accuracy of the original network whenever possible. Our approach is independent of the size and architecture of the neural network used for classification, depending only on the specified property and the dimension of the network’s output; thus it is scalable to large state-of-the-art networks. We show that our approach can be optimized for a GPU, introducing run-time overhead of less than 1ms on current hardware, even on large, widely-used networks containing hundreds of thousands of neurons and millions of parameters. | Aug 01 16:30 | |
ABSTRACT. Formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and sometimes also automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which involve components based on neural networks. The formalism applies temporal logic with predicates for characterizing events and uses universal quantification to allow enumeration of objects. While we have applied the formalism successfully to two complex use cases, several limitations, such as monitorability and the lack of quantitative satisfaction, reveal potential for further improvement. | Aug 01 11:00 | |
ABSTRACT. I'd like to present the content of the attached manuscript, currently in press for IEEE Transactions on Intelligent Vehicles. The work establishes a formal logic foundation for RSS, a recent methodology for "theorem proving" the safety of automated driving systems. RSS was proposed by researchers at Intel/Mobileye and is attracting a lot of interest from industry and academia alike, but its formal aspects have been underdeveloped (although RSS is really about theorem proving). In the paper, we introduce a logic dFHL (it can be seen as an adaptation of Platzer's differential dynamic logic) and a workflow for deriving formally verified safety rules. This extension of RSS allows one to prove the safety of much more complicated scenarios than RSS could previously handle. The workflow is experimentally evaluated, too. The work treats automated vehicles as black boxes, so that it accommodates ML components that cannot be logically modeled. I believe the work will be of interest to the audience 1) because of the methodology whereby black-box models are "verified" by imposing formally guaranteed rules/contracts, and 2) because automated driving is a hot application topic. | Jul 31 16:30 | |
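
The RLT-SDP abstract above combines an SDP relaxation with RLT-generated linear cuts. The block below is a generic sketch of the reformulation-linearisation step (a standard McCormick-style cut over box bounds), not necessarily the paper's exact cuts: given bounds l ≤ x ≤ u on a layer's pre-activations and a lifted matrix variable X standing in for the products of pre-activations, products of bound constraints are linearised by substitution.

```latex
% Illustrative RLT cut over box bounds (a sketch, not the paper's exact formulation).
% Bounds on layer pre-activations: l <= x <= u; X plays the role of x x^T.
\[
  (u_i - x_i)(x_j - l_j) \ge 0
  \quad\Longrightarrow\quad
  u_i x_j + l_j x_i - X_{ij} - u_i l_j \ge 0
  \qquad (\text{substituting } X_{ij} \text{ for } x_i x_j),
\]
\[
  \text{added alongside the SDP lifting constraint}\quad
  \begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0 .
\]
```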
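
The proof-production abstract above bases its unsatisfiability witnesses on Farkas' lemma. One standard variant is stated below for reference; the paper adapts it to piecewise-linear constraints and numerical precision errors.

```latex
% One standard variant of Farkas' lemma (for reference; the paper uses an adaptation of it).
\[
  \{\, x \in \mathbb{R}^n \mid A x \le b \,\} = \emptyset
  \quad\Longleftrightarrow\quad
  \exists\, y \ge 0 \ \text{such that}\ y^{\top} A = 0 \ \text{and}\ y^{\top} b < 0 .
\]
% The vector y is the easy-to-check witness: checking it only requires
% matrix-vector products and a sign test.
```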
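
The differentiable-logics abstract above describes translating logical constraints into loss functions. The sketch below shows one possible translation in the style of DL2 (hinge penalty for comparisons, sum for conjunction, product for disjunction). It is only an illustration of the general idea using plain Python floats, not the abstract's own definition; a real implementation would operate on tensors so the loss can be backpropagated.

```python
# Sketch of a differentiable-logic (DL) translation, DL2-style (illustrative only).
# Each constraint maps to a non-negative loss that is 0 iff the constraint holds.

def leq(a, b):
    """Translate the atom a <= b into a hinge-style penalty."""
    return max(a - b, 0.0)

def land(*losses):
    """Translate conjunction: every conjunct must reach zero loss."""
    return sum(losses)

def lor(*losses):
    """Translate disjunction: it suffices that one disjunct reaches zero loss."""
    result = 1.0
    for loss in losses:
        result *= loss
    return result

# Hypothetical constraint on a network output y = f(x):
#   (y[0] <= y[1]) or (y[0] <= y[2])
def constraint_loss(y):
    return lor(leq(y[0], y[1]), leq(y[0], y[2]))

if __name__ == "__main__":
    print(constraint_loss([0.2, 0.5, 0.1]))  # 0.0: constraint satisfied
    print(constraint_loss([0.9, 0.5, 0.1]))  # > 0: violated, usable as a training penalty
```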