A SAT-Based Approach to Learn Explainable Decision Sets
Tags: Boolean satisfiability, explainable decision sets, machine learning
Abstract:
The successes of machine learning in recent years have triggered a fast-growing range of applications. In important settings, including safety-critical applications, accurate predictions do not suffice; the machine learning model is also expected to explain the predictions made, in a form understandable by humans. Recent work proposed explainable models based on decision sets, which can be viewed as unordered sets of rules respecting some form of rule non-overlap constraint. This paper investigates existing solutions for computing decision sets and identifies a number of drawbacks, related to rule overlap and the succinctness of explanations, to the accuracy of the results achieved, and to the efficiency of the proposed approaches. To address these drawbacks, the paper develops novel SAT-based solutions for learning decision sets. Experimental results on computing decision sets for representative datasets demonstrate that the SAT-based solutions are not only the most efficient, but also offer stronger guarantees in terms of rule non-overlap.
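To make the rule non-overlap constraint concrete, the following is a minimal sketch of how a SAT solver can select an overlap-free decision set, written with the PySAT library. It assumes a pool of candidate rules with known coverage, which is a simplification: the candidate rules, coverage sets, and class labels below are illustrative toy data, and this encoding is only a stand-in for the paper's actual SAT formulations.

```python
# Minimal sketch (not the paper's encoding): pick a subset of candidate
# rules so that every example is covered by exactly one selected rule
# of its own class. Requires python-sat (pip install python-sat).
from itertools import combinations
from pysat.solvers import Glucose3

# Toy candidate rules: (set of covered example indices, predicted class).
rules = [
    ({0, 1}, "yes"),
    ({1, 2}, "yes"),
    ({2},    "yes"),
    ({3, 4}, "no"),
]
labels = ["yes", "yes", "yes", "no", "no"]  # true class of each example

# SAT variable j+1 means "rule j is selected".
solver = Glucose3()

# Coverage: every example is covered by some selected rule of its class.
for i, cls in enumerate(labels):
    clause = [j + 1 for j, (cov, c) in enumerate(rules) if i in cov and c == cls]
    solver.add_clause(clause)

# Non-overlap: no example may be covered by two selected rules.
for i in range(len(labels)):
    covering = [j + 1 for j, (cov, _) in enumerate(rules) if i in cov]
    for a, b in combinations(covering, 2):
        solver.add_clause([-a, -b])

if solver.solve():
    chosen = [v - 1 for v in solver.get_model() if v > 0]
    print("selected rules:", chosen)  # here: [0, 2, 3]
else:
    print("no overlap-free decision set exists for these candidates")
```

The non-overlap requirement is enforced here with pairwise at-most-one clauses per example; any standard at-most-one encoding would serve equally well.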