CPAIOR2021: 18TH INTERNATIONAL CONFERENCE ON THE INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE AND OPERATIONS RESEARCH
PROGRAM FOR MONDAY, JULY 5TH


11:00-12:15 Registration and Lunch

Registration for the physical part of the conference. Lunch is served during this time at the conference location.

12:30-14:30 Session 2: Master Class - Part 1
12:30
Safety in AI Systems: SMT-Based Verification of Deep Neural Networks

ABSTRACT. Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present an SMT-based technique for formally verifying properties of deep neural networks (or providing counter-examples). The technique can be used to prove various safety properties of neural networks, or to measure a network's robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle piecewise-linear constraints, such as Rectified Linear Units (ReLUs), which are a crucial ingredient in many modern neural networks. We evaluate our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). We will also briefly discuss some of the cutting edge research in this field.
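The core idea mentioned in the abstract is to treat each ReLU as a piecewise-linear constraint inside the solver. A minimal, purely illustrative sketch of that encoding (using the off-the-shelf Z3 SMT solver rather than the specialised simplex-based technique described in the talk, with a made-up two-neuron network and property):

```python
# Illustrative sketch only: encoding a tiny ReLU network and a safety
# property as an SMT query with Z3. Network weights, input region, and
# the property bound are invented for illustration.
from z3 import Real, If, Solver, And, sat

x = Real("x")                            # single input
h = If(2 * x + 1 >= 0, 2 * x + 1, 0)     # hidden unit: ReLU(2x + 1)
y = 3 * h - 2                            # output neuron (linear)

s = Solver()
s.add(And(x >= -1, x <= 1))              # input region of interest
s.add(y > 10)                            # negation of the property "y <= 10"

if s.check() == sat:
    print("counterexample:", s.model())  # property violated
else:
    print("property holds on the input region")
```

If the query is unsatisfiable, the property holds on the whole input region; a satisfying assignment is a concrete counterexample.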

13:30
Formal Reasoning Methods in Machine Learning Explainability

ABSTRACT. The envisioned applications of machine learning (ML) in safety-critical settings hinge on systems that are robust in their operation and that can be trusted. This talk overviews recent efforts on applying automated reasoning tools to explaining non-interpretable (black-box) ML models, and relates such efforts with past work on reasoning about inconsistent logic formulas. Moreover, the talk details the computation of rigorous explanations of black-box models, and how these serve for assessing the quality of widely used heuristic explanation approaches. The talk also covers important properties of rigorous explanations, namely duality relationships between different kinds of explanations. Finally, the talk briefly overviews ongoing work on tractable approaches to explainability.
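To give a feel for what a "rigorous" explanation is, here is a small, made-up sketch (not the speaker's tools): a brute-force search for the smallest subset of an instance's features that logically guarantees the prediction, whatever values the remaining features take.

```python
# Illustrative sketch: a minimal sufficient (abductive-style) explanation
# for a toy Boolean classifier, found by exhaustive search. The classifier,
# feature names, and instance are invented for illustration.
from itertools import combinations, product

FEATURES = ["fever", "cough", "fatigue"]

def model(v):                               # toy decision rule
    return int(v["fever"] and v["cough"])

instance = {"fever": 1, "cough": 1, "fatigue": 0}
target = model(instance)

def is_sufficient(subset):
    """Does fixing `subset` to the instance's values force the prediction?"""
    free = [f for f in FEATURES if f not in subset]
    for values in product([0, 1], repeat=len(free)):
        v = dict(instance)
        v.update(zip(free, values))
        if model(v) != target:
            return False
    return True

for size in range(len(FEATURES) + 1):
    expl = next((set(c) for c in combinations(FEATURES, size)
                 if is_sufficient(set(c))), None)
    if expl is not None:
        print("smallest sufficient explanation:", expl)
        break
```

Real approaches replace the exhaustive enumeration with automated reasoning (SAT/SMT/MILP) calls, which is what makes them scale beyond toy models.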

14:30-15:00 Coffee Break
15:00-17:00 Session 3: Master Class - Part 2
15:00
Towards verifying AI systems based on deep neural networks

ABSTRACT. There has been considerable interest of late in the implementation of AI methods in a broad range of applications, including autonomous systems. A key difficulty in the use of AI is its certification. Formal verification has long been developed to assess and debug traditional software and hardware systems, but its extension to AI systems remains problematic. A particularly severe open challenge is the verification of AI systems based on neural networks. In this talk I will try to summarise some of the recent work in the community towards the verification of neural systems. I will cover both the verification of deep neural networks in isolation and of closed-loop systems, where some of the agent components are realised via deep neural networks. I will survey both exact and approximate methods, including MILP-based approaches, dependency analysis, linear relaxations, and symbolic interval propagation. Time permitting, I will demo some of the open-source toolkits for the verification of autonomous systems developed and maintained at Imperial College London, including Venus, Verinet, and VenMAS. We will observe that the verification of neural models with hundreds of thousands of nodes is now feasible, with further advances likely to be achieved in the near future.
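One of the approximate techniques the abstract mentions, interval bound propagation, is simple enough to sketch directly. A minimal illustration (not the toolkits named in the talk; the network weights and input box below are made up):

```python
# Illustrative sketch: naive interval bound propagation through a tiny
# two-layer ReLU network. Produces sound (possibly loose) output bounds
# for all inputs in the given box.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate a box [lo, hi] through the affine map x -> Wx + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# made-up network: R^2 -> R^2 -> R
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # input box
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print("output bounds:", lo, hi)
```

Exact methods (MILP, SMT) close the gap left by such over-approximations at a higher computational cost, which is the trade-off the talk surveys.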

16:00
Reasoning About the Probabilistic Behavior of Classifiers

ABSTRACT. This talk will overview our recent work on using probabilistic models of the feature distribution to reason about the expected behavior of learned classifiers. This includes probabilistic sufficient explanations, which formulate explaining an instance of classification as choosing the 'simplest' subset of features such that only observing those features is probabilistically 'sufficient' to explain the classification. That is, sufficient to give us strong probabilistic guarantees that the model will behave similarly when all features are observed under the data distribution. We leverage tractable probabilistic reasoning tools such as probabilistic circuits and expected predictions to design scalable algorithms for these tasks.
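A rough, made-up sketch of the notion of probabilistic sufficiency (not the speaker's probabilistic-circuits machinery): assuming independent binary features with known marginals, find the smallest observed subset under which the classifier keeps its prediction with probability at least some threshold delta.

```python
# Illustrative sketch: brute-force probabilistic sufficient explanation for
# a toy classifier under an assumed fully independent feature distribution.
# Features, marginals, classifier, instance, and delta are all invented.
from itertools import combinations, product

FEATURES = ["a", "b", "c"]
MARGINALS = {"a": 0.7, "b": 0.4, "c": 0.5}      # P(feature = 1)

def model(v):                                   # toy majority vote
    return int(v["a"] + v["b"] + v["c"] >= 2)

instance = {"a": 1, "b": 1, "c": 0}
target, delta = model(instance), 0.9

def agreement_prob(subset):
    """P(model keeps its prediction | observed features fixed to instance)."""
    free = [f for f in FEATURES if f not in subset]
    prob = 0.0
    for values in product([0, 1], repeat=len(free)):
        v = dict(instance)
        v.update(zip(free, values))
        p = 1.0
        for f, val in zip(free, values):
            p *= MARGINALS[f] if val == 1 else 1 - MARGINALS[f]
        if model(v) == target:
            prob += p
    return prob

def smallest_explanation():
    for size in range(len(FEATURES) + 1):
        for c in combinations(FEATURES, size):
            if agreement_prob(set(c)) >= delta:
                return set(c)

print("probabilistic sufficient explanation:", smallest_explanation())
```

The work described in the talk replaces both the independence assumption and the exhaustive enumeration with tractable probabilistic circuits, which is what makes the computation scale.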

17:00-17:30 Coffee Break
17:30-19:30 Session 4: Master Class - Part 3
17:30
Certified Artificial Intelligence

ABSTRACT. Despite modern data-driven AI systems surpassing human-level performance in many challenging domains such as vision, planning, and the natural sciences, concerns remain about their fragility when applied in the real world, which poses a threat to their wider adoption. Indeed, obtaining AI systems theoretically guaranteed to be safe and reliable is a fundamental challenge of critical importance. In this talk, I will present a path towards addressing this fundamental problem. Specifically, I will introduce new mathematical methods combining convex optimization with the classical abstract interpretation framework that enable scalable and precise logical reasoning about the (potentially infinite number of) behaviors of an AI system (e.g., a deep neural network). I will then show how these methods enable both the creation of state-of-the-art automated verifiers for modern AI systems and the design of new provable training techniques. Finally, I will outline several promising future research directions.

18:30
Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities

ABSTRACT. As machine learning is deployed in all aspects of society, it has become increasingly important to ensure stakeholders understand and trust these models. Decision makers must have a clear understanding of the model behavior so they can diagnose errors and potential biases in these models, and decide when and how to employ them. However, the most accurate models deployed in practice are not interpretable, making it difficult for users to understand where the predictions are coming from, and thus difficult to trust. Recent work on explanation techniques in machine learning offers an attractive solution: they provide intuitive explanations for “any” machine learning model by approximating complex machine learning models with simpler ones. In this tutorial, we will discuss several post hoc explanation methods, and focus on their advantages and shortcomings. For each family of explanation techniques, we will provide their problem setup, prominent methods, example applications, and finally, discuss their vulnerabilities and shortcomings. We hope to provide a practical and insightful introduction to explainability in machine learning. (This material was developed in collaboration with Himabindu Lakkaraju and Julius Adebayo.)
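The "approximate a complex model with a simpler one" idea behind post hoc methods such as LIME can be illustrated in a few lines. A minimal sketch (not the speakers' code; the black-box model, instance, kernel width, and sample count below are made up):

```python
# Illustrative sketch of a local linear surrogate explanation:
# 1) perturb the instance, 2) query the black box, 3) weight samples by
# proximity, 4) fit an interpretable linear model locally.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):                          # stand-in for an opaque model
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.2, -0.5])                 # instance to explain

X = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(X)
weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.1)

surrogate = Ridge(alpha=1e-3).fit(X, y, sample_weight=weights)
print("local feature attributions:", surrogate.coef_)
```

The surrogate's coefficients are read as local feature attributions; the tutorial's discussion of vulnerabilities concerns precisely how sensitive such explanations are to the sampling and weighting choices above.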

20:00-22:00 Dinner at Rathauskeller

Dinner will be served at the Rathauskeller. We will walk there together after the master class.