SemEx 2019: 1st Workshop on Semantic Explainability
Newport Beach, CA, United States, January 29-February 1, 2019

Conference website: http://www.semantic-explainability.com/
Submission link: https://easychair.org/conferences/?conf=semex2019
Submission deadline: December 3, 2018
In recent years, the explainability of complex systems such as decision support systems, automatic decision systems, machine learning-based/trained systems, and artificial intelligence in general has been recognized not only as a desirable property, but also as a property required by law. For example, the General Data Protection Regulation's (GDPR) "right to explanation" demands that the results of ML/AI-based decisions be explained. The explainability of complex systems, especially of ML-based and AI-based systems, becomes increasingly relevant as more and more aspects of our lives are influenced by these systems' actions and decisions.
Several workshops address the problem of explainable AI. However, none of them focuses on semantic technologies such as ontologies and reasoning. We believe that semantic technologies and explainability intersect in two ways. First, systems that are based on semantic technologies must be explainable, like all other AI systems. Second, semantic technologies seem predestined to help render explainable those systems that are not themselves based on semantic technologies.
Turning a system that already makes use of ontologies into an explainable system could be supported by those ontologies, as ideally they capture some aspects of the users' conceptualizations of a problem domain. However, how can such systems use these ontologies to generate explanations of the actions they performed and the decisions they took? Which criteria must an ontology fulfill so that it supports the generation of explanations? Do we have adequate ontologies that make it possible to express explanations and to model and reason about what is understandable or comprehensible for a certain user? What kind of lexicographic information is necessary to generate linguistic utterances? How can a system's understandability be evaluated? How can ontologies be designed for system understandability? What are models of human-machine interaction in which the user can interact with the system until they understand a certain action or decision? How can explanatory components be reused with other systems for which they were not designed?
Turning systems that are not yet based on ontologies but on sub-symbolic representations/distributed semantics, such as deep learning-based approaches, into explainable systems might likewise be supported by the use of ontologies. Some efforts in this field have been referred to as neural-symbolic integration.
This workshop aims to bring together international experts interested in the application of semantic technologies to the explainability of artificial intelligence/machine learning, in order to stimulate research, engineering, and evaluation – towards making machine decisions transparent, re-traceable, comprehensible, interpretable, explainable, and reproducible. Semantic technologies have the potential to play an important role in the field of explainability, as they lend themselves very well to the task: they make it possible to model users' conceptualizations of the problem domain. However, this field has so far been only rarely explored.
Submission Guidelines
Please note that there is another ICSC workshop which also has a focus on explainability and transparency: the 1st International Workshop on Intelligence & Interaction in Knowledge Engineering – IIKE. If your work is in the area of explainability/transparency of intelligent systems but does not make use of semantic technologies, then consider submitting your publication to the IIKE workshop.
Manuscripts must be written in English, must not be longer than eight (8) pages, and must follow the instructions found here.
All paper submissions will be carefully reviewed by at least three experts, and reviews will be returned to the author(s) with comments to ensure the high quality of the accepted papers. The authors of accepted papers must guarantee that their paper will be presented at the workshop. Please submit only original material whose copyright is owned entirely by the declared authors and which is not currently under review elsewhere. Please see the IEEE policies for further information.
Only electronic submissions will be accepted. Technical paper authors MUST submit their manuscripts through EasyChair. Please follow this link (please register if you are not an EasyChair user). Manuscripts may only be submitted in PDF format.
List of Topics
Topics of Interest include but are not limited to:
- Explainability of machine learning models based on semantics/ontologies
- Exploiting semantics/ontologies for explainable/traceable recommendations
- Explanations based on semantics/ontologies in the context of decision making/decision support systems
- Semantic user modelling for personalized explanations
- Design criteria for explainability-supporting ontologies
- Dialogue management and natural language generation based on semantics/ontologies
- Visual explanations based on semantics/ontologies
- Multi-modal explanations using semantics/ontologies
- Interactive/incremental explanations based on semantics/ontologies
- Ontological modeling of explanations and user profiles
Committees
Program Committee
- Ahmet Soylu – Norwegian University of Science and Technology / SINTEF Digital, Norway
- Amrapali Zaveri – Maastricht University, Netherlands
- Andreas Harth – Fraunhofer IIS, Germany
- Anisa Rula – University of Milano – Bicocca, Italy
- Axel-Cyrille Ngonga Ngomo – Paderborn University, Germany
- Axel Polleres – Wirtschaftsuniversität Wien, Austria
- Basil Ell – Bielefeld University, Germany and University of Oslo, Norway
- Benno Stein – Bauhaus-Universität Weimar, Germany
- Elena Cabrio – Université Côte d’Azur, Inria, CNRS, I3S, France
- Ernesto Jimenez-Ruiz – The Alan Turing Institute, UK
- Francesco Osborne – The Open University, UK
- Gong Cheng – Nanjing University, China
- Heiner Stuckenschmidt – University of Mannheim, Germany
- Jürgen Ziegler – University of Duisburg-Essen, Germany
- Mariano Rico – Universidad Politécnica de Madrid, Spain
- Maribel Acosta – Karlsruhe Institute of Technology, Germany
- Martin G. Skjæveland – University of Oslo, Norway
- Mathieu d’Aquin – National University of Ireland Galway, Ireland
- Menna El-Assady – University of Konstanz, Germany
- Michael Kohlhase – Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Pascal Hitzler – Wright State University, USA
- Philipp Cimiano – Bielefeld University, Germany
- Serena Villata – Université Côte d’Azur, CNRS, Inria, I3S, France
- Stefan Schlobach – Vrije Universiteit Amsterdam, The Netherlands
- Steffen Staab – University of Koblenz-Landau, Germany
Organizing committee
- Philipp Cimiano – Bielefeld University
- Basil Ell – Bielefeld University, Oslo University
- Axel-Cyrille Ngonga Ngomo – Paderborn University
Contact
All questions about submissions should be emailed to Basil Ell: bell AT techfak DOT uni-bielefeld DOT de
Also, check the workshop website for updates: http://www.semantic-explainability.com/