TADM 2023: Second International Workshop on Trusted Automated Decision-Making
Co-located with ETAPS 2023, Paris, France, April 22, 2023
Submission link: https://easychair.org/conferences/?conf=tadm2023
Trust is critical to the safe implementation of new technologies. As a community of software researchers, what can we do to ensure automated decision-making systems are designed through trustworthy processes? How can we assess machine learning, large language models, and other emerging artificial intelligence (AI) technologies? Is explainability enough? If not, what further measures can be taken?
Safety and trust challenges arise with unique specificities in each new context. The recent explosion in the use of large language models (LLMs) such as Codex, derived from GPT-3, for generating source code for software applications poses distinctive questions. It is estimated that Codex gets it right only about 30% of the time, increasing the likelihood that the internet becomes riddled with vulnerable code that could readily be exploited by malicious actors. Is this inevitable, or can we learn to avoid these potentially dangerous pitfalls of AI-based code generation? Can we invent methods and tools for error mitigation and containment? Are formal methods the answer for finding and fixing flaws in generated code? Could we exploit adversarial learning to retrain the models and prevent them from repeating their mistakes? Are there benchmarks or tests to certify freedom from exploitable vulnerabilities?
To initiate discussion of this important societal need, we invite interdisciplinary researchers, computer scientists, and practitioners with novel research ideas. We particularly encourage research of a nascent or speculative nature, in order to chart a way forward.
Submission Guidelines
We solicit and encourage a wide range of talks in multiple formats, addressing new ideas or works in progress (possibly already published). Researchers wishing to give a talk should submit a short abstract (between half a page and 2 pages). Submissions will be reviewed for their appropriateness and relevance to the workshop.
- Abstracts and posters describing work in progress or aiming to initiate discussion.
- Regular papers (up to 12 pages in EPTCS format) presenting previously unpublished work, including descriptions of research, tools, and applications.
Important Dates
- Submission deadline: March 6, 2023
- Notification of acceptance: March 17, 2023
- Final paper submission deadline: April 7, 2023
- Discounted registration deadline: March 22, 2023
List of Topics
- Approaches for assuring security and safety of software created by LLMs
- Creation of interpretable or explainable models for specific domains
- Extraction of interpretable models of comparable accuracy from black box models
- Unique and novel approaches to learning sparse models
- Approaches for synthesis of interpretable or explainable models from specifications
- Metrics to assess trustworthiness and safety in emerging AI-based systems
- Challenge problems in finance, criminal justice, or social and industrial robotics
Committees
Program committee
- Raj Dasgupta, NRL
- Giacomo Gentile, Collins
- Madhavan Mukund, CMI
- Claire Pagetti, ONERA
- Ngoc Tran, NRL
- TBD
Organizing committee
- Dr. Ramesh Bharadwaj
- Ms. Ilya Parker
Invited Speakers
- Prof. Edward Ashford Lee (Berkeley)
- TBD
Publication
Recordings of the talks will be published on the TADM website.
Publication of formal proceedings is TBD.
Venue
Paris, France, in conjunction with ETAPS 2023
Contact
All questions about submissions should be emailed to ramesh.bharadwaj@nrl.navy.mil