SafeAI 2023: The AAAI Workshop on Artificial Intelligence Safety 2023
Walter E. Washington Convention Center, Washington, DC, United States, February 13-14, 2023
Conference website: http://www.safeaiw.org
Submission link: https://easychair.org/conferences/?conf=safeai2023
Submission deadline: November 4, 2022
Scope
The accelerated developments in the field of Artificial Intelligence (AI) underscore the need to consider safety as a design principle rather than an option. However, theoreticians and practitioners of AI and safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability, which force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, while considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, considering potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are demonstrably safe.
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
- What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
- How can we engineer trustworthy AI software architectures?
- How can we make AI-based systems more ethically aligned?
- What safety engineering considerations are required to develop safe human-machine interaction?
- What AI safety considerations and experiences are relevant from industry?
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can we develop solid technical visions and new paradigms about AI Safety?
- How do metrics of capability and generality, and their trade-offs with performance, affect safety?
The main interest of the workshop is a new perspective on systems engineering in which multiple disciplines, such as AI and safety engineering, are viewed as a larger whole, while considering ethical and legal issues, in order to build trustworthy intelligent autonomy.
List of Topics
Contributions are sought in (but are not limited to) the following topics:
- Safety in AI-based system architectures
- Continuous V&V and predictability of AI safety properties
- Runtime monitoring and (self-)adaptation of AI safety
- Accountability, responsibility and liability of AI-based systems
- Effect of uncertainty in AI safety
- Avoiding negative side effects in AI-based systems
- Role and effectiveness of oversight: corrigibility and interruptibility
- Loss of values and the catastrophic forgetting problem
- Confidence, self-esteem and the distributional shift problem
- Safety of Artificial General Intelligence (AGI) systems and the role of generality
- Reward hacking and training corruption
- Self-explanation, self-criticism and the transparency problem
- Human-machine interaction safety
- Regulating AI-based systems: safety standards and certification
- Human-in-the-loop and the scalable oversight problem
- Evaluation platforms for AI safety
- AI safety education and awareness
- Experiences in AI-based safety-critical systems, including industrial processes, healthcare, automotive systems, robotics, and critical infrastructures, among others
Submission Guidelines
You are invited to submit:
- Full technical papers (7-9 pages, including references),
- Proposals for technical talks (up to a one-page abstract including a short bio of the main speaker), without an associated paper, or
- Position papers (5-7 pages, including references).
Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=safeai2023
Please format your paper according to the CEUR formatting instructions (two-column format). The CEUR author kit can be downloaded from: http://ceur-ws.org/Vol-XXX/CEURART.zip
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process, but anonymized submissions are also accepted.
The workshop proceedings will be published on CEUR-WS. CEUR-WS is “archival” in the sense that a paper cannot be removed once it has been published. Authors retain the copyright of their papers under the CC BY 4.0 license; in this respect, CEUR-WS is similar to arXiv. In any case, authors of accepted papers can opt out and decide not to include their paper in the proceedings. We will inform authors about the procedure in due course.
Important Dates
- Abstract submission: November 4, 2022, AoE (updated)
- Full paper submission: November 11, 2022, AoE (extended)
- Acceptance notification: November 29, 2022, AoE (extended)
- Camera-ready version: December 13, 2022, AoE (extended)
Committees
Program Committee
- Please see the workshop website: http://www.safeaiw.org
Organizing committee
- Gabriel Pedroza, CEA LIST, France
- Xiaowei Huang, University of Liverpool, UK
- Xin Cynthia Chen, ETH Zurich, Switzerland
- Andreas Theodorou, Umeå University, Sweden
Steering committee
- José Hernández-Orallo, Universitat Politècnica de València, Spain
- Mauricio Castillo-Effen, Lockheed Martin, USA
- Richard Mallah, Future of Life Institute, USA
- John McDermid, University of York, UK
Contact
All questions about submissions should be emailed to: safeai2023 at easychair dot org