TRUSTWORTHY AI 2022: Trustworthy AI in Science and Society
Hamburg, Germany, September 29, 2022
Conference website: https://informatik2022.gi.de/workshops#panel-78370
Submission link: https://easychair.org/conferences/?conf=trustworthyai2022
Abstract registration deadline: May 15, 2022
Submission deadline: May 15, 2022
TRUSTWORTHY AI 2022 is a workshop taking place on September 29, 2022.
Artificial intelligence (AI) has made its way into a broad variety of sensitive applications, such as health care, hiring processes, and autonomous services. It thus has a direct impact on our daily lives, and potential malfunctions could cause severe harm to individuals and society. The topic of trustworthiness in AI has therefore moved into focus. With this workshop, we aim to cover different perspectives on trustworthy AI, from technical to societal, including security, fairness, transparency, explainability, safety, and privacy.
How can we technically evaluate the distinct aspects of trustworthiness? How do they interact with one another, and how can we improve them? How can methods for implementing trustworthiness be applied in practice across a broad spectrum of users and applications, and how do we ensure that risks are eliminated? What does it take to include and educate non-technical users on AI trustworthiness, and how can society benefit from these insights? Finally, what is needed to create trustworthiness in AI?
Submission Guidelines
We accept both short papers (up to 4 pages, poster) and long papers (up to 10 pages, presentation & proceedings). The working language is English. We require authors to use the GI template (https://gi.de/service/publikationen/lni). The review process will be double-blind. Submissions are made via EasyChair until 30.04.2022 (AoE).
- Short Papers: Authors of short papers are required to participate in the poster session of the workshop and present their work through a poster. Note: short papers are not included in the conference proceedings.
- Long Papers: Authors of long papers are required to hold an oral presentation of their work during the workshop. Note: long papers are included in the conference proceedings.
Topics of Interest
This workshop is planned as a forum to discuss the different facets of trustworthy AI. Hence, we welcome contributions from both research and industry. Topics include, but are not limited to:
- Security, safety, reliability, or robustness in AI
- Privacy in AI
- Transparency in AI
- Bias and fairness in AI
- Explainability and interpretability of AI
- Interplay of different AI-trustworthiness aspects
- Human factors, usability, and user-centered design and application of AI
- Autonomy and control
- Building and deploying trustworthy AI systems
- Auditing and certification of trustworthy AI systems
Course of the Workshop
The workshop will open with an invited talk, followed by oral presentations of the accepted long papers and poster presentations of the short papers.
We plan to hold the event in person. However, we will also try to make it accessible online.
Important Dates
- 30.04.2022 (AOE) Deadline for long and short papers
- 17.06.2022 Author notification
- 05.07.2022 Camera-ready version
- 29.09.2022 Workshop Trustworthy AI in Science and Society
Programme and Organization Committee
- Franziska Boenisch (Fraunhofer AISEC, Freie Universität Berlin)
- Rebekka Görge (Fraunhofer IAIS)
- Prof. Marian Margraf (Fraunhofer AISEC, Freie Universität Berlin)
- Karla Markert (Fraunhofer AISEC, TU München)
- Prof. Eirini Ntoutsi (Freie Universität Berlin)
- Dr. Maximilian Poretschkin (Fraunhofer IAIS)
- Prof. Gerhard Wunder (Freie Universität Berlin)
Contact
All questions about submissions should be emailed to franziska.boenisch@aisec.fraunhofer.de or karla.markert@aisec.fraunhofer.de.