Complexity of Minimum-Size Arc-Inconsistency Explanations

Tags: complexity, constraint programming, constraint propagation, minimum explanations
Abstract:
Explaining the outcome of programs has become one of the main concerns in AI research. In constraint programming, a user may want the system to explain why a given variable assignment is infeasible, or how it came to the conclusion that the problem has no solution. One answer to the latter is to return to the user a sequence of simple reasoning steps that leads to inconsistency. Arc consistency is a well-known form of reasoning that a human can understand. We consider explanations as sequences of propagation steps of a constraint on a variable (i.e., applications of the ubiquitous revise function in arc consistency algorithms). On binary CSPs, we characterize the cases in which computing a shortest such explanation is easy: when domains are Boolean, or when the constraint graph has maximum degree two. In general, however, the problem is NP-hard. Surprisingly, it remains NP-hard with as few as four variables. It also remains NP-hard when the problem has a tree structure, even though arc consistency is a decision procedure on trees.
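The revise step referenced above is simple to state: one propagation step removes from a variable's domain every value that has no support in a neighboring variable's domain under the constraint between them, and an explanation is a sequence of such steps ending in an empty domain. Below is a minimal illustrative sketch in Python; the function name and data layout are our own assumptions for exposition, not the paper's implementation.

def revise(domains, x, y, constraint):
    """Prune from x's domain every value with no support in y's domain.

    Each call that prunes something is one step of an explanation;
    reaching an empty domain certifies inconsistency.
    """
    unsupported = {a for a in domains[x]
                   if not any(constraint(a, b) for b in domains[y])}
    domains[x] -= unsupported
    return bool(unsupported)

# Example: under the constraint x != y, a singleton domain on y
# removes the shared value from x's domain.
domains = {"x": {1, 2}, "y": {1}}
revise(domains, "x", "y", lambda a, b: a != b)
print(domains["x"])  # {2}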