TADM 2021: Trusted Automated Decision-Making
Co-located with ETAPS 2021, Luxembourg, Luxembourg, March 27-28, 2021

Conference website | https://tadm.3drationality.com/
Submission link | https://easychair.org/conferences/?conf=tadm2021
Abstract registration deadline | March 5, 2021
Submission deadline | March 5, 2021
When can automated decision-making systems or processes be trusted? Is it sufficient if all decisions are explainable, secure, and fair? As more and more life-defining decisions are relegated to algorithms based on Machine Learning (ML), it is increasingly clear that the touted benefits of introducing novel algorithms, especially those based on Artificial Intelligence (AI), into our daily lives are accompanied by serious negative societal consequences. Corporations are incentivized to promote opacity rather than transparency in their decision-making processes, owing to the proprietary nature of their algorithms. Which disciplines can help software professionals establish trust in automated decision-making systems?

The decision-making logic of black-box approaches -- such as those based on deep learning or deep neural networks -- cannot be comprehended by humans. The field of "explainable AI," which prescribes the use of adjunct explainable models, partially mitigates this problem. But do adjunct models make the whole process trustworthy enough? Detractors of explainable AI propose that decisions which could potentially impact human safety be restricted to interpretable and transparent algorithms. Although there have been a few recent successes in the creation of interpretable models -- including decision trees and case-based reasoning approaches -- it is not clear whether they are sufficiently accurate or practical.

To initiate discussion of this pressing societal need, we invite transdisciplinary researchers, computer scientists, and practitioners with novel research ideas. We particularly encourage research of a nascent or speculative nature, in order to chart a way forward.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Abstracts
- Position Papers
List of Topics
- Creation of interpretable models for specific domains
- Extraction of interpretable models of comparable accuracy from black box models
- Unique and novel approaches to learning sparse models
- Formal approaches for synthesis of interpretable models from specifications
- Metrics to assess veracity of recent approaches in interpretable model creation
- Challenge problems in finance, criminal justice, or social and industrial robotics
Committees
Program Committee
- Luís Alexandre http://www.di.ubi.pt/~lfbaa/
- Baptiste Le Feure baptiste.lefevre@fr.thalesgroup.com
- Raj Dasgupta https://www.linkedin.com/in/prithviraj-raj-dasgupta-9122a5
- James Miller https://www.smith.edu/academics/faculty/james-miller
- Madhavan Mukund https://www.cmi.ac.in/~madhavan/
- Claire Pagetti https://www.onera.fr/fr/staff/claire-pagetti
- Martin Vechev http://www.sri.inf.ethz.ch/
Organizing committee
- Dr. Ramesh Bharadwaj
- Ms. Ilya Parker
Invited Speakers
- Prof. Cynthia Rudin (Duke)
- Prof. Wendell Wallach (Yale)
Publication
The TADM 2021 proceedings will be published by ETAPS 2021.
Venue
The conference will be held virtually.
Contact
All questions about submissions should be emailed to ramesh.bharadwaj@nrl.navy.mil.