IMA2019: First International Workshop on Interpretability: Methodologies and Algorithms
Website: https://sites.google.com/view/ima2019
Submission link: https://easychair.org/conferences/?conf=ima2019
Abstract registration deadline: October 25, 2019
Submission deadline: October 28, 2019
Call for Papers
First International Workshop on Interpretability: Methodologies and Algorithms (IMA 2019), 2 December 2019, Adelaide, Australia
in conjunction with
32nd Australasian Joint Conference on Artificial Intelligence and 17th Australasian Data Mining Conference,
2-5 December 2019, Adelaide, Australia.
Important dates
Abstract registration: October 25, 2019
Manuscript submission: October 28, 2019
Aims and Scope
Algorithms and methodologies for machine learning, automated evaluation, matching, security identification and discrimination, and other data-driven tasks are essential components of contemporary decision engineering. From the perspective of the developers of such methods and algorithms, the highest priority over the years has been system performance, measured in terms of accuracy, efficiency and robustness. From the perspective of the stakeholders involved in decision engineering, there are a number of additional important criteria: the interpretability of the behaviour of the system and the outcomes it produces, the comprehensibility of the derived models and the capacity to explain their outputs, and the ability to estimate the short-term and long-term impact of model utilisation. Understanding the discrimination mechanisms and biases of methods and algorithms in project settings, on the one hand, and ensuring an adequate level of model comprehensibility and result interpretability, on the other, are key in clinical decision making, medicine, security systems, defence, the space industry, financial assessment, and many other areas.
Hence, this is a significant and impactful area of research, practice and technology development. It offers the machine learning, data science, analytics and decision-making communities methodologies that ensure the necessary level of understanding of how decisions have been generated, thus enabling a systematic approach to identifying and responding to bias, errors, noise, disturbance and other problems. If such frameworks are not in place, the organisations that use these systems risk losing the foundation of their practices. Major technology developers, including Google, IBM, Microsoft, Tesla and Amazon, have been developing practices in this area (for instance, Google's Responsible AI Practices on interpretability). Similar efforts on AI and ethics have also been initiated by major professional societies, including the ACS, ACM and IEEE.
The main goal of the workshop is to provide a joint forum for industry, government and academia to present and discuss the newest mature and greenhouse ideas, research and practical developments, including practical methodologies that address the challenges of interpretability and comprehensibility in machine learning and broader artificial intelligence in industry settings. The workshop aims to connect experts in explainable AI, experts in the interpretability of machine learning algorithms, and experts in data science project methodologies. It will also identify short- and long-term research directions, best implementation practices in the field, and the preferences of potential end users. In addition, the workshop will provide a forum for legislators involved in the development and implementation of legislation related to the rights and rules of explanation, such as the European Union's General Data Protection Regulation (GDPR).
Topics of interest
The major topics include but are not limited to:
- The concepts of interpretability, comprehensibility and explainability in machine learning and broader data-driven algorithmic decision making;
- Degrees of interpretability and respective interpretability features;
- Methodologies supporting interpretability/comprehensibility in data science projects;
- Interpretability/comprehensibility as core part of user experience;
- Interactive and iterative methods supporting interpretability and comprehensibility;
- Design of interpretable models;
- Interpretability methods for 'black-box' machine learning models;
- Impact of data characteristics on the solution interpretability;
- Data preprocessing and its effect on interpretability;
- Interpretability issues across text, image, audio and video data;
- Interpretability and accuracy;
- Local and global explainability techniques for AI/ML models;
- Practical aspects of achieving ML/AI solution interpretability in industry settings;
- Transparency in machine learning and data-driven decision algorithms;
- Design of symbolic and visual analytics means for support of interpretability;
- Psychological and cultural aspects of interpretability/comprehensibility;
- Causality in predictive modelling and interpretability of causal relationships.
Workshop organisation
Workshop Chairs
Inna Kolyshkina, Analytikk Consulting
Simeon Simoff, Western Sydney University
Program committee
Shlomo Berkovsky, Australian Institute of Health Innovation, Macquarie University
Volker Gruhn, Lehrstuhl für Software Engineering, Universität Duisburg-Essen
Warwick Graco, Operational Analytics, Australian Taxation Office
Helen Chen, Professional Practice Centre for Health Systems, University of Waterloo
Jerzy Korczak, Wroclaw University of Economics
Reza Abbasi-Asl, University of California, San Francisco
Riccardo Guidotti, KDD Lab, ISTI-CNR and University of Pisa
Cengiz Oztireli, Disney Research and ETH Zürich
Przemyslaw Biecek, Faculty of Mathematics, Informatics and Mechanics, University of Warsaw
Jake Hofman, Microsoft Research
Submission, review process and publication
All submitted papers must follow Springer's author guidelines. Authors should prepare their papers using Springer's proceedings templates, available for both LaTeX and Word. Springer encourages authors to include their ORCIDs in their papers.
Submissions must be in PDF only. There is no page limit.
Each contributed paper will undergo a double-blind peer review and will be assessed by three reviewers from the Program Committee. Peer-reviewed papers accepted for presentation will be published in the workshop proceedings on Cornell University's arXiv.org open-access repository and made available at the workshop. Revised versions of the contributions will be published in a special issue of a related high-profile journal or as an edited book with Springer or Kluwer.