Tags: bipolar argumentation, explainable AI, gradual semantics
Abstract:
Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models’ outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, i.e. as means for argumentatively characterising the relations in the causal model. We demonstrate our methodology by reinterpreting the property of Bi-Variate Reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
Explaining Causal Models with Argumentation: The Case of Bi-Variate Reinforcement