SafeAI 2022: The AAAI Workshop on Artificial Intelligence Safety 2022
Vancouver, Canada, February 28 – March 1, 2022
Conference website: http://www.safeaiw.org
Submission link: https://easychair.org/conferences/?conf=safeai2022
Abstract registration deadline: November 12, 2021
Submission deadline: November 19, 2021
Scope
The accelerated development of Artificial Intelligence (AI) points to the need to treat safety as a design principle rather than an option. However, theoreticians and practitioners of AI and safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability that force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, and must account for potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are demonstrably safe.
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
- What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
- How can we engineer trustable AI software architectures?
- How can we make AI-based systems more ethically aligned?
- What safety engineering considerations are required to develop safe human-machine interaction?
- What AI safety considerations and experiences are relevant from industry?
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can we develop solid technical visions and new paradigms about AI Safety?
- How do metrics of capability and generality, and their trade-offs with performance, affect safety?
The main interest of the workshop is a new perspective on systems engineering in which multiple disciplines, such as AI and safety engineering, are viewed as a larger whole, while considering ethical and legal issues, in order to build trustworthy intelligent autonomy.
List of Topics
Contributions are sought in (but are not limited to) the following topics:
- Safety in AI-based system architectures
- Continuous V&V and predictability of AI safety properties
- Runtime monitoring and (self-)adaptation of AI safety
- Accountability, responsibility and liability of AI-based systems
- Effect of uncertainty in AI safety
- Avoiding negative side effects in AI-based systems
- Role and effectiveness of oversight: corrigibility and interruptibility
- Loss of values and the catastrophic forgetting problem
- Confidence, self-esteem and the distributional shift problem
- Safety of Artificial General Intelligence (AGI) systems and the role of generality
- Reward hacking and training corruption
- Self-explanation, self-criticism and the transparency problem
- Human-machine interaction safety
- Regulating AI-based systems: safety standards and certification
- Human-in-the-loop and the scalable oversight problem
- Evaluation platforms for AI safety
- AI safety education and awareness
- Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others
Submission Guidelines
You are invited to submit:
- Full technical papers (6-8 pages),
- Proposals for technical talks (up to a one-page abstract, including a short bio of the main speaker), without an associated paper, or
- Position papers (4-6 pages).
Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=safeai2022
Please format your paper according to the AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from: https://www.aaai.org/Publications/Templates/AuthorKit22.zip
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process. However, we will also accept anonymized submissions.
The workshop proceedings will be published on CEUR-WS. CEUR-WS is “archival” in the sense that a paper cannot be removed once it is published. Authors retain the copyright of their papers under CC BY 4.0; in this respect, CEUR-WS is similar to arXiv. In any case, authors of accepted papers can opt out and decide not to include their paper in the proceedings. We will inform the authors about the procedure in due course.
Important Dates (Extended)
- Abstract Submission: Nov 12, 2021 – AoE (Anywhere on Earth)
- Paper Submission: Nov 19, 2021 – AoE (extended)
- Acceptance Notification: Dec 6, 2021 – AoE (delayed)
- Camera Ready Version: Dec 15, 2021
Committees
Program Committee
- See the workshop website: https://www.safeaiw.org
Organizing Committee
- Gabriel Pedroza, CEA LIST, France
- José Hernández-Orallo, Universitat Politècnica de València, Spain
- Xin Cynthia Chen, University of Hong Kong, China
- Xiaowei Huang, University of Liverpool, UK
- Huáscar Espinoza, ECSEL JU, Belgium
- Mauricio Castillo-Effen, Lockheed Martin, USA
- Seán Ó hÉigeartaigh, University of Cambridge, UK
- Richard Mallah, Future of Life Institute, USA
- John McDermid, University of York, UK
Contact
All questions about submissions should be emailed to: safeai2022 at easychair dot org