OSAI21: The culture of Trustworthy AI. Public debate, education and practical learning
Venice, Italy, September 2-3, 2021
Submission link: https://easychair.org/conferences/?conf=osai21
Abstract registration deadline: July 23, 2021
Submission deadline: July 23, 2021
Workshop Title: The culture of Trustworthy AI. Public debate, education, practical learning
Date: 2-3 September 2021
Location: Venice - San Servolo Island (Venice International University)
Format: physical and online sessions
In the last few years, the European Union has taken important steps towards responsible and sustainable Artificial Intelligence (AI) innovation. In 2019, the High-Level Expert Group delivered the Ethics Guidelines for Trustworthy AI and, more recently, the European Commission put forward a proposal for a regulation addressing different levels of AI risk. However, these initiatives alone may not be sufficient to ensure Trustworthy AI. Beyond rules and principles, “Trustworthy AI requires us to build and maintain an ethical culture and mind-set through public debate, education and practical learning.” (HLEG, 2019).
The development of a Trustworthy AI culture poses several challenges to the whole AI ecosystem. A first issue regards how to create meaningful discussions involving not only experts but also citizens and people with limited knowledge of AI and its wider implications (public debate). A second concerns how to equip future generations to cope with the social and economic changes driven by AI and data-intensive applications (education). Finally, a third relates to the capacity of AI actors to become familiar with the principles, rules and plethora of methodologies supporting ethical, legal and robust AI (practical learning).
This workshop will explore how we can move closer to a culture of Trustworthy AI by sharing investigations and good practices along the trajectories suggested by the Trustworthy AI guidelines: public debate, education and practical learning. The event is part of the AI4EU project and will provide an opportunity to share the results achieved by the AI4EU working groups on the Ethical, Legal, Social, Economic and Cultural issues of AI (ELSEC AI). Launched at the end of 2020, these working groups gather people from different backgrounds and sectors to collaborate on ELSEC AI in a genuinely multidisciplinary spirit.
The workshop will host a mix of invited and contributed presentations, as well as panel discussions, and is open to scholars and practitioners working on Trustworthy and Responsible AI. The event will be held in Venice (Italy) and streamed online to allow remote participation. Registration is free of charge. More details about the workshop location and the program will be announced soon. Interested people can send an email to osai@unive.it.
Organizers
Teresa Scantamburlo (Ca' Foscari University)
Francesca Foffano (Ca' Foscari University)
Atia Cortés (Barcelona Supercomputing Center)
Cristian Barrué (Universitat Politècnica de Catalunya)
Andrea Aler Tubella (Umeå University)
Call for Extended Abstracts
The workshop welcomes contributions from scholars and professionals who are interested in the development of a culture of Trustworthy AI. The questions we want to address include, but are not limited to, the following:
Public debate
“The benefits of AI systems are many, and Europe needs to ensure that they are available to all. This requires an open discussion and the involvement of social partners and stakeholders, including the general public.” (Ethics Guidelines for Trustworthy AI, p. 23)
- What do AI experts mean by “Trustworthy AI”? What do they think about key concepts such as “Responsible AI” or “Human-centred AI”?
- How do experts in different disciplines (e.g. philosophy, law, psychology, sociology, management, the arts) view the current debate around Trustworthy and Responsible AI? What is missing in contemporary discussions?
- What are the disciplinary gaps between AI/computer science and the other disciplines involved in present debates on AI and ELSEC issues? How can we support multidisciplinary dialogue?
- How do European citizens view AI applications? Does this view reflect actual AI capabilities? Do they know where AI is in place (e.g. in their daily life)?
- Do lay people know the opportunities and risks associated with AI applications? Are they aware of the values at stake?
- Are citizens aware of European initiatives aimed at ensuring ethical and legal AI? Do they feel “protected” by them?
Education
“Interdisciplinarity should also be supported (by encouraging joint degrees, for example in law or psychology and AI). The importance of ethics in the development and use of new technologies should also be featured in programmes and courses” (AI for Europe, COMM(2018) 237, p. 13)
- How do we train future AI developers in Europe?
- What do European Universities offer to teach ethical and legal AI?
- What objectives / contents / material are proposed in such courses? What is the teaching methodology? What is missing?
- What tips and resources can help instructors set up an AI ethics course?
- How are companies dealing with ethical and legal training of AI developers?
- What are the working experiences in AI ethics education and training?
- How do we prepare young generations for future AI transformations?
- Do we need to introduce AI in compulsory education? If so, how?
- How can we increase equity and diversity in AI education?
Practical learning
“A European governance structure could have a variety of tasks, as a forum for a regular exchange of information and best practice, identifying emerging trends, advising on standardisation activity as well as on certification. It should also play a key role in facilitating the implementation of the legal framework, such as through issuing guidance, opinions and expertise.” (White paper, COM(2020) 65, p. 24)
- How do we put the European ethics guidelines into practice?
- How do the guidelines impact the work of AI developers and other roles involved in the development and/or deployment of an AI system (managers, data officers, etc.)?
- Which strategies and approaches can help us apply ethical principles in the design and assessment of AI systems? Which ones can help us tackle tensions between different ethical requirements?
- How do the methods and the tools proposed so far work in concrete AI applications?
- How do these methods concretely interact with companies’ internal protocols and regulatory mechanisms? How can they integrate into existing practices and policies?
Researchers and professionals engaged in any of the above topics are invited to share their work, either completed or in progress, by sending an extended abstract of at most 1000 words (references included) to osai@unive.it. Selected submissions will be considered for either oral or poster presentation.
Important dates
Extended abstract submission: 5 August 2021
Author notification date: 15 August 2021