MAI-XAI25: 2nd Workshop on Multimodal, Interactive and Affective eXplainable Artificial Intelligence
European Conference on Artificial Intelligence (ECAI), Bologna, Italy, October 25-26, 2025

Conference website: https://sites.google.com/view/mai-xai25/
Submission link: https://easychair.org/conferences/?conf=maixai25
Abstract registration deadline: May 15, 2025
Submission deadline: May 21, 2025
MAI-XAI25 is a workshop co-located with the European Conference on Artificial Intelligence (ECAI 2025).
It offers a distinctive perspective on Explainable AI (XAI), continuing the series started in 2024 with the first edition, MAI-XAI@ECAI2024.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference.
Accepted manuscripts will be published in the CEUR Workshop Proceedings (CEUR-WS.org). Papers must be written in English and prepared for double-blind review using the CEUR-WS template.
We aim to offer researchers and practitioners the opportunity to identify promising new research directions in XAI. Attendees are encouraged to present case studies of real-world applications where XAI has been successfully applied, emphasizing the practical benefits and the challenges encountered.
The following paper categories are welcome:
- Full papers (10-15 pages) describing novel techniques that provide insights into the explainability of data and models, while also addressing the nuanced challenges of real-world applications, where multi-modal affective interaction is emerging as a crucial requirement.
- Short papers (5-9 pages) describing work in progress.
List of Topics
Multimodal XAI
- XAI for multi-modal data retrieval, collection, augmentation, generation, and validation: From data explainability to understanding and mitigating data bias
- XAI for Human-Computer Interaction (HCI): From explanatory user interfaces to interactive and interpretable machine learning with human-in-the-loop and machine-in-the-loop approaches
- Augmented reality for multi-modal XAI
- XAI approaches leveraging application-specific domain knowledge: From concepts to large knowledge repositories (ontologies) and corpora
- Design and validation of multi-modal explainers: From endowing explainable models with multi-modal explanation interfaces to measuring model explainability and evaluating the quality of XAI systems
- Quantifying XAI: From defining metrics and methodologies to assessing the effectiveness of explanations in enhancing user understanding, reliance, and trust
- Large knowledge bases and graphs that can be used for multi-modal explanation generation
- Large language models and their generative power for multi-modal XAI
- Proofs of concept and demonstrators showing how to integrate effective and efficient XAI into real-world human decision-making processes
- Ethical, Legal, Socio-Economic and Cultural (ELSEC) considerations in XAI: Examining the ethical implications of high-risk AI applications, including potential biases and the responsible deployment of sustainable “green” AI in sensitive domains
Affective XAI
- Explainable affective computing in healthcare, psychology, physiology, education, entertainment, and gaming
- Privacy, fairness, and ethical considerations in affective computing
- Multimodal (textual, visual, vocal, physiological) emotion recognition systems
- User environments for the design of systems to better detect and classify affect
- Sentiment analysis and explainability
- Social robots and explainability
- Emotion-aware recommender systems
- Accuracy and explainability in emotion recognition
- Machine learning using biometric data to classify biosignals
- Virtual reality in affective computing
- Human–Computer Interaction (HCI) and Human-in-the-Loop (HITL) approaches in affective computing
Interactive XAI
- Dialogue-based approaches to XAI
- Use of multiple modalities in XAI systems
- Approaches to dynamically adapt explainability in interaction with a user
- XAI approaches that use a model of the partner to adapt explanations
- XAI approaches for collaborative decision-making between humans and AI models
- Methods to measure and evaluate users’ understanding of a model
- Methods to measure and evaluate users’ ability to use models effectively in downstream tasks
- Interactive methods by which a system and a user can negotiate what is to be explained
- Modelling the social functions and aspects of an explanation
- Methods to identify users’ information and explainability needs
Committees
Organizing committee
- Philipp Cimiano
- Fosca Giannotti
- Tim Miller
- Barbara Hammer
- Alejandro Catalá Bolos
- Peter Flach
- Jose M. Alonso-Moral
Contact
All questions about submissions should be addressed to Philipp Cimiano and Jose Maria Alonso-Moral (https://sites.google.com/view/mai-xai25/contact).