Tags: AI Safety, Apprenticeship Learning, Counterexample-Guided Inductive Synthesis, Inverse Reinforcement Learning, Probabilistic Model Checking, Reinforcement Learning
Abstract:
Apprenticeship learning (AL) is a class of "learning from demonstrations" techniques where the reward function of a Markov Decision Process (MDP) is unknown to the learning agent, and the agent must derive a good policy by observing an expert's demonstrations. In this paper, we study the problem of how to make AL algorithms inherently safe while still meeting their learning objective. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure both safety and performance of the learnt policy. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.
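To make the loop described in the abstract concrete, below is a minimal, self-contained sketch in Python. Everything in it is an illustrative assumption rather than the paper's actual algorithm or benchmarks: a five-state chain MDP, one-hot state features, an exact bounded-reachability computation standing in for a PCTL check of the form P<=p [ F<=T unsafe ], and a simple penalty on the unsafe state's feature weight standing in for the paper's counterexample-guided refinement.

```python
import numpy as np

# Toy illustration of counterexample-guided apprenticeship learning.
# All modelling choices here (chain MDP, one-hot features, penalty update)
# are assumptions for the sketch, not the paper's method.

N, GAMMA, T, P_MAX = 5, 0.9, 20, 0.05   # states, discount, horizon, safety bound
UNSAFE, GOAL = 0, N - 1
SLIP = 0.1                               # chance an action moves the wrong way

def transition(a):
    """Row-stochastic transition matrix for action a in {-1, +1}."""
    P = np.zeros((N, N))
    for s in range(N):
        if s in (UNSAFE, GOAL):          # absorbing end states
            P[s, s] = 1.0
            continue
        P[s, np.clip(s + a, 0, N - 1)] += 1 - SLIP
        P[s, np.clip(s - a, 0, N - 1)] += SLIP
    return P

P_ACTIONS = [transition(-1), transition(+1)]
PHI = np.eye(N)                          # one-hot state features

def greedy_policy(w, iters=200):
    """Value iteration under R(s) = w . phi(s); returns P_pi and action map."""
    r, v = PHI @ w, np.zeros(N)
    for _ in range(iters):
        v = r + GAMMA * np.max([P @ v for P in P_ACTIONS], axis=0)
    acts = np.argmax([P @ v for P in P_ACTIONS], axis=0)
    return np.array([P_ACTIONS[acts[s]][s] for s in range(N)]), acts

def reach_unsafe_prob(P_pi, start=2):
    """Exact bounded reachability P(F<=T unsafe), a PCTL-style check."""
    d = np.zeros(N); d[start] = 1.0
    for _ in range(T):
        d = d @ P_pi
    return d[UNSAFE]

def feature_expectations(P_pi, start=2, horizon=200):
    d, mu = np.eye(N)[start], np.zeros(N)
    for t in range(horizon):
        mu += (GAMMA ** t) * d @ PHI
        d = d @ P_pi
    return mu

# "Expert": always move right; its feature expectations are the target.
expert_mu = feature_expectations(P_ACTIONS[1])

w = np.random.default_rng(0).normal(size=N)
for _ in range(100):
    P_pi, acts = greedy_policy(w)
    p_bad = reach_unsafe_prob(P_pi)
    if p_bad > P_MAX:                    # counterexample: too much unsafe mass
        w[UNSAFE] -= 1.0                 # steer reward away from the unsafe state
        continue
    mu = feature_expectations(P_pi)
    if np.linalg.norm(expert_mu - mu) < 1e-2:
        break                            # safe and near-expert
    w = expert_mu - mu                   # projection-style IRL update
print("policy:", acts, "P(F<=T unsafe) =", round(float(p_bad), 4))
```

The structure mirrors the abstract: an inner IRL-style update drives the learner's feature expectations toward the expert's, while a model-checking step vetoes any candidate policy whose probability of reaching the unsafe state exceeds the PCTL bound and uses that evidence to adjust the reward weights.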