The Revised Practitioner's Guide to MDP Model Checking Algorithms

Tags: Linear programming, Markov decision process, Policy iteration, Quantitative model checking, Value iteration
Abstract:
Model checking undiscounted reachability and expected-reward properties on Markov decision processes (MDPs) is key to the verification of systems that act under uncertainty. We give a detailed overview of today's state-of-the-art algorithms for MDP model checking, with a focus on performance and correctness. We highlight their fundamental differences and describe various optimizations and implementation variants. We experimentally compare floating-point and exact-arithmetic implementations of all algorithms on three benchmark sets, using two probabilistic model checkers. Our results show that (optimistic) value iteration is a sensible default, but other algorithms are preferable in specific settings. This presentation thereby provides a guide for MDP verification practitioners: tool builders and users alike.
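To make the algorithmic setting concrete, below is a minimal Python sketch of naive value iteration for maximal reachability probabilities on a toy MDP. The MDP encoding and all names are illustrative assumptions for this example, not taken from the paper; its stopping-criterion caveat is exactly the soundness issue that variants such as optimistic value iteration address.

# Naive value iteration for maximal reachability on a toy MDP.
# Encoding (an assumption for this sketch): state -> action -> list of
# (successor, probability) pairs.

def value_iteration(mdp, target, eps=1e-6):
    # Start from the trivial lower bound: 1 on target states, 0 elsewhere.
    v = {s: (1.0 if s in target else 0.0) for s in mdp}
    while True:
        v_new = {}
        for s, actions in mdp.items():
            if s in target:
                v_new[s] = 1.0
                continue
            # Bellman update: best action w.r.t. the current value vector.
            v_new[s] = max(
                sum(p * v[t] for t, p in succ)
                for succ in actions.values()
            )
        # Caveat: this standard criterion only bounds the change per
        # iteration, not the true error; naive VI is therefore unsound.
        if max(abs(v_new[s] - v[s]) for s in mdp) < eps:
            return v_new
        v = v_new

# Toy MDP: from state 0, action 'a' reaches the target 1 with probability
# 0.5 and the sink 2 otherwise; action 'b' self-loops.
mdp = {
    0: {"a": [(1, 0.5), (2, 0.5)], "b": [(0, 1.0)]},
    1: {"a": [(1, 1.0)]},  # absorbing target
    2: {"a": [(2, 1.0)]},  # absorbing sink
}
print(value_iteration(mdp, {1}))  # expected: {0: 0.5, 1: 1.0, 2: 0.0}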
The original Practitioner's Guide to MDP Model Checking Algorithms (https://link.springer.com/chapter/10.1007/978-3-031-30823-9_24) was nominated for a best paper award at TACAS'23. This presentation is based on the revised version of that paper, which will appear in the invited TACAS'23 special issue. The intended presenter is M. Weininger.