FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
SOMLMFM ON FRIDAY, JULY 13TH


09:00-10:30 Session 83K: Machine learning and logic
Location: Maths LT1
09:00
Machine learning and logic: Fast and slow thinking

ABSTRACT. There is a recent perception that computer science is undergoing a Kuhnian paradigm shift, with CS as a model-driven science being replaced by CS as a data-driven science. I will argue that, in general, new scientific theories refine old scientific theories rather than replace them. Thus, data-driven CS and model-driven CS complement each other, just as fast thinking and slow thinking complement each other in human thinking, as explicated by Daniel Kahneman. I will then use automated vehicles as an example, where in recent years car makers and tech companies have been racing to be the first to market. In this rush there has been little discussion of how to obtain scalable standardization of safety assurance, without which this technology will never be commercially deployable. Such assurance requires formal methods, and combining machine learning with logic is the challenge of the day.

09:30
Provably beneficial artificial intelligence

ABSTRACT. I will briefly survey recent and expected developments in AI and their implications. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help, as well as the ability to learn more about those objectives from observation of human behaviour.

10:00
Towards Robust and Explainable Artificial Intelligence

ABSTRACT. Deep learning has led to rapid progress being made in the field of machine learning and artificial intelligence, leading to dramatically improved solutions of many challenging problems such as image understanding, speech recognition, and automatic game playing. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples which make learning-based systems fail in unexpected ways. Such behaviour and the difficulty of interpreting the behaviour of neural networks is a serious hindrance in the deployment of these models for safety-critical applications. In this talk, I will review the challenges in developing models that are robust and explainable and discuss the opportunities for collaboration between the formal methods and machine learning communities.
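
For readers unfamiliar with the phenomenon, the sketch below shows one standard way such adversarial examples are constructed, the fast gradient sign method. It is only an illustration: the toy model, image, label, and epsilon are placeholders, not material from the talk.

    # Minimal FGSM sketch: perturb the input in the direction that increases the loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, eps):
        """Return x shifted by eps along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()   # keep pixel values valid

    # Toy usage with a hypothetical classifier over flattened 28x28 images.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)                # stand-in "image"
    label = torch.tensor([3])                   # its assumed true class
    x_adv = fgsm_perturb(model, x, label, eps=0.1)
    print(model(x).argmax().item(), model(x_adv).argmax().item())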

10:30-11:00 Coffee Break
11:00-12:30 Session 86L: Ethical and fair AI
Location: Maths LT1
11:00
Ethically Aligned AI Systems

ABSTRACT. AI systems are increasingly acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Humans and machines will often need to work together and agree on common decisions. This requires alignment around some common moral values and ethical principles. Humans will be more likely to accept and trust machines that behave at least as ethically as other humans in the same environment. This would also make it easier for machines to explain their behavior in terms understandable by humans. Moreover, shared moral values and ethical principles will help in collective human-machine decision making, where consensus or compromise is needed. In this talk I will present some challenges in designing ethically aligned AI systems that humans can trust, and describe technical solutions for such challenges.

11:30
Deep Learning Methods for Large Scale Theorem Proving

ABSTRACT. Machine learning (with hand-engineered features) has become an enabling factor for reasoning in large theories. Here I will give an overview of some recent work utilizing deep neural networks for both premise selection and guiding automated theorem provers. We will also highlight connections to, and recent efforts on, utilizing natural language corpora for automated reasoning.
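
As a rough illustration of what premise selection means, it can be framed as ranking available axioms by a learned similarity score against the conjecture. The sketch below is a toy stand-in (a hashed bag-of-tokens embedding), not the deep encoders referred to in the talk; all names and formulas are made up.

    # Toy premise selection: rank premises by cosine similarity to the conjecture.
    import numpy as np

    def embed(formula, dim=64):
        """Hash each token into a fixed-size vector (stand-in for a learned encoder)."""
        v = np.zeros(dim)
        for tok in formula.replace("(", " ").replace(")", " ").split():
            v[hash(tok) % dim] += 1.0
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def select_premises(conjecture, premises, k=2):
        """Keep the k premises most similar to the conjecture."""
        c = embed(conjecture)
        return sorted(premises, key=lambda p: -float(embed(p) @ c))[:k]

    axioms = ["subset(A, union(A, B))",
              "member(X, A) => member(X, union(A, B))",
              "commutative(plus)"]
    print(select_premises("subset(A, union(B, A))", axioms))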

12:00
AI2: AI Safety and Robustness with Abstract Interpretation

ABSTRACT. I will present AI2 (ai2.ethz.ch), a new approach for ensuring safety and robustness of AI systems based on abstract interpretation. AI2 is based on symbolic over-approximation and can automatically reason about safety properties (e.g., robustness) of realistic deep neural networks (e.g., convolutional). In the talk I will discuss how to soundly and precisely approximate the behavior of fully-connected layers with rectified linear unit activations (ReLU), convolutional layers with ReLU, and max pooling layers, allowing AI2 to scale to larger networks. I will also present a new abstraction which allows a trade-off between scalability and precision. Finally, I will discuss several open research challenges in applying symbolic methods to AI systems.
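
To make the underlying idea concrete, here is a minimal sketch of sound bound propagation through affine and ReLU layers using the simple interval domain. AI2 itself uses richer abstract domains (e.g., zonotopes), so this is only an illustration of symbolic over-approximation; the two-layer network and input box are invented.

    # Interval-domain sketch of sound over-approximation for affine + ReLU layers.
    import numpy as np

    def affine_bounds(lo, hi, W, b):
        """Sound bounds for W @ x + b when lo <= x <= hi elementwise."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def relu_bounds(lo, hi):
        """ReLU is monotone, so applying it to the bounds stays sound."""
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Toy 2-layer network; check that class 0 wins for every input in the box.
    W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
    W2, b2 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.zeros(2)
    lo, hi = np.array([0.4, 0.0]), np.array([0.6, 0.1])   # perturbation region
    lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
    lo, hi = affine_bounds(lo, hi, W2, b2)
    print("class 0 provably wins:", lo[0] > hi[1])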

12:30-14:00 Lunch Break
14:00-15:30 Session 87L: Applications of AI
Location: Maths LT1
14:00
The realities of applied AI

ABSTRACT. Artificial intelligence will improve productivity across a broad range of applications and industries. The benefit to humanity will be substantial, but there are important lessons to learn and steps to take. NVIDIA works closely with researchers, enterprises, and startups worldwide, both on getting started and on problem solving. This talk will address some of the issues to be considered and explain why software and community are key to success with AI.

14:30
Safe reinforcement learning via formal methods

ABSTRACT. Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true in Cyber-Physical Systems because high-fidelity physical models of systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but do not provide guarantees of safe operation.

This talk presents joint work with Nathan Fulton on provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification. The main insight is that formal verification combined with verified runtime monitoring can ensure the safety of a learning agent. Verification results are preserved whenever learning agents limit exploration within the confines of verified control choices, as long as observed reality comports with the model used for off-line verification. When a model violation is detected, the agent abandons efficiency and instead attempts to learn a control strategy that guides the agent back to a modeled portion of the state space.
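
A minimal sketch of this shielded-exploration idea follows. It is illustrative only: the one-dimensional dynamics, thresholds, and fallback behaviour are assumptions, not the verified hybrid-systems monitors used in the actual work.

    # Sketch: explore only among monitor-approved actions; fall back when the
    # model monitor reports that reality no longer matches the verified model.
    import random

    def monitor_ok(state, action):
        """Stand-in controller monitor: keep enough distance to brake before x = 10."""
        pos, vel = state
        next_vel = vel + action                 # assumed one-step dynamics
        next_pos = pos + next_vel
        braking_distance = next_vel ** 2 / 2.0
        return next_pos + braking_distance < 10.0

    def model_ok(predicted_state, observed_state, tol=0.5):
        """Stand-in model monitor: does observed reality match the model's prediction?"""
        return abs(predicted_state[0] - observed_state[0]) <= tol

    def choose_action(state, actions, q_value, observed_state, predicted_state):
        if not model_ok(predicted_state, observed_state):
            return min(actions)                 # placeholder recovery: brake hard
        safe = [a for a in actions if monitor_ok(state, a)] or [min(actions)]
        # Explore/exploit only within the verified-safe choices.
        return random.choice(safe) if random.random() < 0.1 else max(safe, key=q_value)

    print(choose_action((0.0, 1.0), [-1.0, 0.0, 1.0], q_value=lambda a: a,
                        observed_state=(0.9, 1.0), predicted_state=(1.0, 1.0)))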

15:00
Programming from examples: PL meets ML

ABSTRACT. Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from example-based specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquer-based deductive search paradigm to inductively reduce the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well as of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these three PBE components leverages both formal methods and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This not only leads to better heuristics, but also enables easier development, maintenance, and even personalization of a PBE system.
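
The deductive search component can be illustrated on a one-operator DSL: rather than testing candidate programs blindly, the example specification is pushed down onto the operator's parameters and the per-example solutions are intersected. The toy sketch below synthesizes a fixed-position Substring program; it is not the actual system described in the talk, and the DSL and examples are invented.

    # Toy deductive synthesis for a DSL with one operator, Substring(i, j).
    def witnesses(inp, out):
        """All (i, j) with inp[i:j] == out: the spec pushed down onto the parameters."""
        return {(i, i + len(out)) for i in range(len(inp) - len(out) + 1)
                if inp[i:i + len(out)] == out}

    def synthesize_substring(examples):
        """Return parameters consistent with every example, or None."""
        candidate_sets = [witnesses(inp, out) for inp, out in examples]
        consistent = set.intersection(*candidate_sets) if candidate_sets else set()
        # A real system would now rank the surviving programs; here we take any one.
        return min(consistent) if consistent else None

    examples = [("2018-07-13", "07"), ("1999-12-31", "12")]
    prog = synthesize_substring(examples)
    print(prog, "->", "2020-06-01"[prog[0]:prog[1]] if prog else None)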

15:30-16:00 Coffee Break
16:00-16:40 Session 88I: Explainable machine learning
Location: Maths LT1
16:00
Influence-directed explanations for machine learning systems

ABSTRACT. Machine learning systems are often inscrutable black boxes: they are effective predictors but lack human-understandable explanations. I will describe influence-directed explanations, a new paradigm that addresses this problem. Influence-directed explanations shed light on the inner workings of black-box machine learning systems by identifying components that causally influence system behavior and by providing human-understandable interpretation to the concepts represented by these components. I will describe instances of this paradigm that are model-agnostic and instances that are specific to deep neural networks. Influence-directed explanations serve as a foundation for reasoning about fairness and privacy of machine learning models. Our initial exploration suggests that formal methods for analysis of probabilistic systems have an important role to play in efficient generation of influence-directed explanations. Joint work with colleagues at Carnegie Mellon University.
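
One common, model-specific way to instantiate such an influence measure for deep networks is gradient-based: how sensitive is a chosen quantity of interest (here, one class score) to each unit in an internal layer? The sketch below illustrates that idea under those assumptions; it is not necessarily the exact measure used in the work, and the toy network is invented.

    # Gradient-based influence of internal units on one output class.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    internal = model[:2]    # layers up to the units we want to explain
    head = model[2:]        # rest of the network

    def influence(x, target_class):
        """Average gradient of the class score w.r.t. each internal unit."""
        h = internal(x).detach().requires_grad_(True)
        score = head(h)[:, target_class].sum()
        score.backward()
        return h.grad.mean(dim=0)   # one influence value per internal unit

    x = torch.rand(16, 4)           # a batch of assumed inputs
    print(influence(x, target_class=1))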

16:20
Safety verification for deep neural networks with provable guarantees

ABSTRACT. Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress with developing automated verification techniques for deep neural networks to ensure safety of their classification decisions with respect to image manipulations, for example scratches or changes to camera angle or lighting conditions, that should not affect the classification. The techniques exploit Lipschitz continuity of the networks and aim to approximate, for a given set of inputs, the reachable set of network outputs in terms of lower and upper bounds, in an anytime manner, with provable guarantees. We develop novel algorithms based on games and global optimisation, and evaluate them on state-of-the-art networks.
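
A crude but concrete consequence of the Lipschitz continuity mentioned above (far coarser than the anytime reachability bounds the lecture describes): if each output logit is L-Lipschitz, a perturbation of norm at most eps moves each logit by at most L * eps, so a prediction margin above 2 * L * eps certifies the decision. The logits and constant below are placeholders.

    # Lipschitz-based robustness certificate for a single input.
    import numpy as np

    def certified_radius(logits, lipschitz_const):
        """Largest eps for which the predicted class provably cannot change."""
        top, runner_up = np.sort(logits)[-1], np.sort(logits)[-2]
        return (top - runner_up) / (2.0 * lipschitz_const)

    logits = np.array([3.2, 1.1, 0.4])   # assumed network outputs at some image x
    L = 5.0                              # assumed (or computed) Lipschitz constant
    print("decision unchanged for all ||delta|| <=", certified_radius(logits, L))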

16:40-18:00 Session 91: Panel Session & Summary

Panel session preceded by an introductory talk.

Location: Maths LT1
16:40
What Just Happened in AI

ABSTRACT. In this talk I will discuss my perspective on recent developments in artificial intelligence, which have been triggered by the successes of deep learning. The discussion will be based on a position paper that will appear in CACM with the title “Human-Level Intelligence or Animal-Like Abilities?” My presentation will hinge on a distinction between model-based and function-based approaches to AI, and will address a spectrum of issues and considerations, while placing current developments in a historical perspective. I will also share thoughts on how best to capitalize on current progress as we move forward, touching on both educational and research needs.

17:00
Panel session

ABSTRACT. The panel session will be moderated by:

  • Francesca Rossi, AI Ethics Global Leader / Distinguished Research Staff Member at IBM Research AI, and Professor of Computer Science (specializing in artificial intelligence) at the University of Padova

and the panelists are:

  • Adnan Darwiche, Professor and Chairman of the Computer Science Department at UCLA, who directs the Automated Reasoning Group, which focuses on probabilistic and logical reasoning and their applications to machine learning
  • Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley, and Adjunct Professor of Neurological Surgery, University of California, San Francisco, who is interested in building systems that can act intelligently in the real world
  • Vasu Singh, who manages safety and rigorous software engineering for Deep Learning software at NVIDIA and previously worked in the embedded/automotive industry (Lam Research, BMW)
  • Chris Holmes, Professor of Biostatistics at the Department of Statistics and the Nuffield Department of Medicine at the University of Oxford, whose research explores the potential of computational statistics and statistical machine learning to assist in the medical and health sciences
19:00-21:30 Workshops dinner at Keble College

Workshops dinner at Keble College. Drinks reception from 7pm, to be seated by 7:30 (pre-booking via FLoC registration system required; guests welcome).

Location: Keble College