USDAI 2021: 2nd International Workshop on Underpinnings for Safe Distributed AI
University of York, York, UK, September 7, 2021

Conference website | https://safecomp2021.hosted.york.ac.uk/
Submission link | https://easychair.org/conferences/?conf=usdai2021
Abstract registration deadline | May 11, 2021
Submission deadline | May 18, 2021
2nd International Workshop on Underpinnings for Safe Distributed AI (USDAI 2021)
Workshop Theme
Cooperation of humans and artificial intelligence (AI) requires a reliable and secure underpinning in order to be safe. AI-assisted operation has an abundance of application areas, including medicine, manufacturing and the financial sector. The underlying idea is the evaluation of events by an AI that makes suggestions regarding actions to the human operator. Such close cooperation requires enabling technologies as well as regulatory conditions. AI-aided decision making is often implemented in a distributed fashion, requiring computing capabilities in edge and handheld devices, i.e. in lightweight environments. Furthermore, using similar AIs in different devices requires transferable algorithms and profits from federated approaches that combine the gained knowledge. Europe needs to develop its own capabilities in this area, as witnessed by the increasingly frequent calls for a “European digital sovereignty”. This will involve a significant effort to develop the required enabling technologies, such as methods for providing data sovereignty and privacy. To protect the investment, it must be ensured that the enabling technologies developed provide a general value for the involved stakeholders and create a lasting impact.
There are several ways to achieve meaningful and constructive cooperation of humans and AI, but in all cases the right algorithms must meet the right data at exactly the right time to provide intuitive and interpretable results. Similarly, in order to learn from distributed "experiences", a distributed learning approach (federated, or central with redistribution of results) is needed.
The basic challenges of meaningful and safe distributed human-AI cooperation are therefore the reliable collection of data and local (pre-)processing, reliable transport of this data to other relevant nodes as well as orchestration of distributed algorithms across the network of nodes, combined with an intuitive and actionable expression of results. To support this orchestration from an operational point of view, it must be ensured that the system is secure, can be securely operated and updated (secure DevOps) and that it respects the privacy of operators and the general public.
This workshop will address a wide range of enabling methods and technologies to ensure trustworthiness of data as well as the processing and use of the resulting information. Topics will range from advanced computational methods to the legal and regulatory framework in which they must function. There will be a session open for presenters to pitch project ideas for further work on the topics related to the workshop theme.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Full papers describing scientific advances in one of the topics below
- Technical Demonstrations of prototypical or novel technologies of distributed AI or AI-assisted decision making used in an industrial application
List of Topics
- Data collection, transport and storage for safe human-AI cooperation, such as:
  - Sensing and initial data analytics on the end node
  - Data transport for AI between distributed nodes and the cloud
  - Data visualisation for human operators
- Data processing and analytics for safe and distributed human-AI cooperation, such as:
  - Safe and reliable data processing and machine learning in hardware and software
  - Methods for traceability, accountability and explainability in AI
  - Data expression to create meaningful and actionable results
- Application of AI-assisted decision making in lightweight, embedded and edge-devices
- Secure DevOps for safe distributed AI
- Application-level considerations of AI-assisted decision making in relevant fields such as healthcare, mobility, manufacturing, space, etc.
- Considerations related to schemes for approval, qualification and certification of AI-aided decision-making tools
- Legal and regulatory systems of AI-assisted decision-making, including but not limited to issues concerning privacy and data protection, intellectual property, freedom of expression, cybersecurity, liability (e.g. in relation to autonomous cars, drones, and robots), competition, consumer protection, equality and discrimination, healthcare, internet of things, smart offices and cities, energy, and the environment.
Committees
Program Committee
- Morten Larsen, AnyWi Technologies
- Simon Duque Anton, DFKI
- Alan Sears, Leiden Law School, Leiden University
- Anna Hristoskova, SIRRIS
- Reda Nouacer, CEA
- Ricardo Reis, Embraer
- Merijn van Tooren, Almende
- Raúl Santos de la Cámara, Hi-Iberia
- Valeriu Codreanu, SURFsara
- Raj Thilak Rajan, TU Delft
- Tobias Koch, consider-it.de
- George Dimitrakopoulos, Harokopio University
Contact
All questions about submissions should be emailed to:

Morten Larsen
AnyWi Technologies
3e Binnenvestgracht 23H
2312 NR Leiden, the Netherlands

Uta
Institute of Advanced Computer Science
Niels Bohrweg 1
2333 CA Leiden