Trainify: a CEGAR-Driven Training and Verification Framework for Safe Deep Reinforcement Learning

Tags: abstraction and refinement, correct-by-construction, deep reinforcement learning, model checking
Abstract:
Deep reinforcement learning (DRL) has demonstrated its power in developing intelligent systems. When such systems are applied to safety-critical domains, their trustworthiness must be formally guaranteed, which is typically achieved by formal verification performed after training. This train-then-verify process has two limitations: (i) trained systems are difficult to verify because their state spaces are continuous and infinite and their AI components (i.e., neural networks) are hard to interpret, and (ii) detecting bugs only after training increases the cost, in both time and money, of training and deployment. In this paper, we propose Trainify, a novel verification-in-the-loop training framework driven by counterexample-guided abstraction and refinement (CEGAR) for developing verifiable DRL systems. Specifically, Trainify trains a DRL system on a finite set of coarse abstract states that can be verified efficiently. When verification fails, we refine the abstraction based on the returned counterexamples and train again on the refined states. The process iterates until all properties are verified against the trained system. We demonstrate the effectiveness of our framework by training six classic control systems. Compared with conventional DRL approaches, our framework produces more reliable DRL systems with provable guarantees without sacrificing system performance in terms of accumulated reward and robustness.
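The abstract describes the train-verify-refine loop in prose; the following is a minimal, self-contained sketch of that loop under toy assumptions. Every name here (the 1-D interval abstraction, train, verify, refine, cegar_train) is a hypothetical placeholder for illustration, not the actual Trainify implementation or API.

# Toy sketch of the CEGAR-driven training loop from the abstract.
# The 1-D interval abstraction and the stub train/verify/refine
# procedures are illustrative placeholders, not the Trainify code.
from typing import Dict, List, Tuple

Interval = Tuple[float, float]  # one abstract state: [low, high)

def train(abstraction: List[Interval]) -> Dict[int, int]:
    # Placeholder for DRL training over abstract states: map each
    # abstract state to an action.
    return {i: 0 for i in range(len(abstraction))}

def verify(policy: Dict[int, int], abstraction: List[Interval]) -> List[int]:
    # Placeholder for model checking the specified properties: here we
    # simply flag abstract states that are still too coarse as
    # counterexamples.
    return [i for i, (lo, hi) in enumerate(abstraction) if hi - lo > 0.25]

def refine(abstraction: List[Interval], cex: List[int]) -> List[Interval]:
    # Split every abstract state named in a counterexample in half.
    refined: List[Interval] = []
    for i, (lo, hi) in enumerate(abstraction):
        if i in cex:
            mid = (lo + hi) / 2
            refined += [(lo, mid), (mid, hi)]
        else:
            refined.append((lo, hi))
    return refined

def cegar_train(initial: List[Interval]):
    abstraction = initial
    while True:
        policy = train(abstraction)        # train on the abstract states
        cex = verify(policy, abstraction)  # model-check the properties
        if not cex:                        # all properties verified: done
            return policy, abstraction
        abstraction = refine(abstraction, cex)  # refine, then train again

policy, abstraction = cegar_train([(0.0, 1.0)])
print(f"verified with {len(abstraction)} abstract states")

In the actual framework, training operates over abstract states (so the learned system is verifiable by construction) and the verifier is a model checker for the stated properties; the toy verify above merely mimics a failed check by flagging abstract states that are too coarse.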