09:00 | Case Study: Runtime Safety Verification of Neural Network Controlled System ABSTRACT. Neural networks are increasingly used to control and make decisions in fields such as robotics and autonomous vehicles. Despite their effectiveness, the deployment of neural network-controlled systems (NNCSs) in safety-critical applications raises significant safety concerns. Recent advances in neural network verification and reachability analysis tools have begun to address these issues, yet the majority of these efforts focus on offline, time-irrelevant verification tasks. This gap overlooks critical aspects of verifying control and ensuring safety in real-time scenarios. This paper presents a detailed case study on using a state-of-the-art NNCS reachability analysis tool, POLAR-Express, for runtime safety verification in a Turtlebot navigation system. The Turtlebot, equipped with a neural network controller for steering, operates in a complex environment with obstacles. We therefore developed a safe online controller switching strategy that switches between the original NNCS controller and an obstacle avoidance controller based on the verification results, to ensure safety while maintaining control performance. Our experiments, conducted in a ROS2 Flatland simulation environment, explore the capabilities and limitations of using POLAR-Express for runtime verification in dynamic environments and demonstrate the effectiveness of our switching strategy.
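The switching strategy described in this abstract can be illustrated with a short sketch. This is a minimal illustration, not the paper's implementation: the reachability check below is a crude disc over-approximation standing in for POLAR-Express, and all names (reachable_disc_is_safe, safe_control_step, the toy controllers) are hypothetical.

import math

# Minimal sketch: over-approximate the positions reachable within the
# horizon by a disc of radius v_max * horizon around the robot, and use
# the learned controller only when that disc provably avoids every
# obstacle. In the paper this role is played by POLAR-Express.

def reachable_disc_is_safe(pos, v_max, horizon, obstacles):
    """obstacles: list of (center_x, center_y, radius) circles."""
    reach_radius = v_max * horizon
    for ox, oy, orad in obstacles:
        if math.hypot(pos[0] - ox, pos[1] - oy) <= reach_radius + orad:
            return False  # possible intersection: safety not provable
    return True

def safe_control_step(pos, v_max, horizon, obstacles,
                      nn_controller, avoidance_controller):
    if reachable_disc_is_safe(pos, v_max, horizon, obstacles):
        return nn_controller(pos)        # safety proven: keep the NNCS
    return avoidance_controller(pos)     # otherwise switch to the fallback

# Toy usage: the NN drives forward; the fallback rotates in place.
nn = lambda p: (0.5, 0.0)   # (linear, angular) velocity command
avoid = lambda p: (0.0, 1.0)
print(safe_control_step((0.0, 0.0), v_max=0.5, horizon=1.0,
                        obstacles=[(3.0, 0.0, 0.5)],
                        nn_controller=nn, avoidance_controller=avoid))
# -> (0.5, 0.0): the 0.5-radius reach disc is disjoint from the obstacle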
09:30 | Gaussian-Based and Outside-the-Box Runtime Monitoring Join Forces ABSTRACT. Since neural networks can make wrong predictions even with high confidence, monitoring their behavior at runtime is important, especially in safety-critical domains like autonomous driving. In this paper, we combine ideas from previous monitoring approaches based on observing the activation values of hidden neurons. In particular, we combine the Gaussian-based approach, which checks whether the current value of each monitored neuron is similar to the typical values observed during training, with the Outside-the-Box monitor, which builds clusters of acceptable activation values and thus accounts for correlations among the neurons' values.
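A rough sketch of the two monitor families follows. It illustrates the general recipes only; the clustering method, thresholds, and the way the verdicts are combined are assumptions, and the class and function names are hypothetical.

import numpy as np

# Both monitors are fitted on hidden activations collected from training
# inputs: rows are samples, columns are the monitored neurons.

class GaussianMonitor:
    def fit(self, acts):
        self.mu = acts.mean(axis=0)
        self.sigma = acts.std(axis=0) + 1e-8  # avoid division by zero
        return self

    def is_anomalous(self, x, z_max=3.0):
        # Flag if any neuron deviates strongly from its training profile.
        return bool(np.any(np.abs((x - self.mu) / self.sigma) > z_max))

class BoxMonitor:
    """Outside-the-Box style: axis-aligned boxes around activation clusters."""
    def fit(self, acts, n_clusters=3, seed=0):
        rng = np.random.default_rng(seed)
        centers = acts[rng.choice(len(acts), n_clusters, replace=False)]
        for _ in range(20):  # a few plain k-means iterations
            labels = ((acts[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for k in range(n_clusters):
                members = acts[labels == k]
                if len(members):
                    centers[k] = members.mean(axis=0)
        # A joint (min, max) box per cluster captures neuron correlations.
        self.boxes = [(acts[labels == k].min(axis=0),
                       acts[labels == k].max(axis=0))
                      for k in range(n_clusters) if np.any(labels == k)]
        return self

    def is_anomalous(self, x):
        return not any(np.all(lo <= x) and np.all(x <= hi)
                       for lo, hi in self.boxes)

# One possible combination: warn only when both monitors raise a flag.
def combined_verdict(gauss, box, x):
    return gauss.is_anomalous(x) and box.is_anomalous(x)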
10:00 | Box-based Monitor Approach for Out-of-Distribution Detection in YOLO: An Exploratory Study PRESENTER: Weicheng He ABSTRACT. Deep neural networks, despite their impressive performance across various tasks, often produce overconfident predictions on out-of-distribution (OoD) data, which can lead to severe consequences, especially in safety-critical applications. Monitoring OoD samples at runtime is thus essential. While this problem has been extensively studied in image classification and recently in object detection with the Faster R-CNN architecture, the state-of-the-art YOLO series remains underexplored. In this short paper, we present an initial exploration into OoD detection for YOLO models, proposing a box-based monitor approach. Our preliminary results demonstrate that this box-based monitor outperforms several existing logits-based scoring methods, achieving a significant 20% reduction in false positive rates for OoD samples while maintaining a high true positive rate for in-distribution samples. This work introduces novel, yet not fully developed, ideas and emerging techniques in the realm of monitoring OoD inputs for YOLO series object detection models. Future research will focus on leveraging feature space information to enhance our results further. This paper aims to spark productive debate and provide impetus for future research, highlighting both the potential and the challenges of integrating OoD detection with the YOLO architecture for effective runtime monitoring. |
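As an illustration of what a box-based OoD monitor for detections might look like, the sketch below fits one axis-aligned feature box per class on in-distribution detections and flags detections whose features fall outside their predicted class's box. This is an assumption-laden reading of the abstract, not the authors' method: the feature choice, the enlargement margin, and all names (fit_class_boxes, is_ood) are hypothetical.

import numpy as np

def fit_class_boxes(features, labels, margin=0.05):
    """features: (n, d) per-detection feature vectors from in-distribution
    data; labels: (n,) predicted class ids. Returns {class: (lo, hi)}."""
    boxes = {}
    for c in np.unique(labels):
        f = features[labels == c]
        lo, hi = f.min(axis=0), f.max(axis=0)
        pad = margin * (hi - lo)  # small enlargement to reduce false alarms
        boxes[int(c)] = (lo - pad, hi + pad)
    return boxes

def is_ood(feature, pred_class, boxes):
    if pred_class not in boxes:
        return True  # class never observed in-distribution
    lo, hi = boxes[pred_class]
    return not (np.all(lo <= feature) and np.all(feature <= hi))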