SafeAI 2020: The AAAI Workshop on Artificial Intelligence Safety 2020
Hilton New York Midtown, New York, NY, United States, February 7, 2020

Conference website: http://www.safeaiw.org
Submission link: https://easychair.org/conferences/?conf=safeai2020
Abstract registration deadline: November 15, 2019
Submission deadline: November 20, 2019
Scope
Safety in Artificial Intelligence (AI) should not be an option, but a design principle. There are, however, varying levels of safety, diverse sets of ethical standards and values, and varying degrees of liability, which can only be addressed by considering trade-offs and alternative solutions. A holistic analysis should integrate the technological and ethical perspectives into the engineering problem, considering both the theoretical and practical challenges of AI safety. This view must cover a wide range of AI paradigms, including both application-specific systems and more general ones, and provide information about risk. We must bridge the short-term with the long-term perspective, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate, and maintain AI-based systems that are truly safe.
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
* What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety and what are the gaps?
* How can we engineer trustworthy AI software architectures?
* How can we make AI-based systems more ethically aligned?
* What safety engineering considerations are required to develop safe human-machine interaction?
* What AI safety considerations and experiences are relevant from industry?
* How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
* How can we develop solid technical visions and new paradigms about AI Safety?
* How do metrics of capability and generality, and trade-offs with performance affect safety?
The main interest of the workshop is a holistic view of AI and safety engineering, together with the associated ethical and legal issues, in order to build trustworthy intelligent autonomous machines.
As part of a “sister” workshop (AISafety 2019, https://www.ai-safety.org/), we started the “AI Safety Landscape” initiative, which aims to define a multi-faceted, integrated view of the current needs, challenges, and state of the art and practice of the field. We will follow up on the landscape discussions at SafeAI 2020.
List of Topics
Contributions are sought on (but are not limited to) the following topics:
* Safety in AI-based system architectures
* Continuous verification and validation (V&V) and predictability of AI safety properties
* Runtime monitoring and (self-)adaptation of AI safety
* Accountability, responsibility and liability of AI-based systems
* Effect of uncertainty in AI safety
* Avoiding negative side effects in AI-based systems
* Role and effectiveness of oversight: corrigibility and interruptibility
* Loss of values and the catastrophic forgetting problem
* Confidence, self-esteem and the distributional shift problem
* Safety of Artificial General Intelligence (AGI) systems and the role of generality
* Reward hacking and training corruption
* Self-explanation, self-criticism and the transparency problem
* Human-machine interaction safety
* Regulating AI-based systems: safety standards and certification
* Human-in-the-loop and the scalable oversight problem
* Evaluation platforms for AI safety
* AI safety education and awareness
* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others
Submission Guidelines
You are invited to submit:
- Full technical papers (6-8 pages),
- Proposals for technical talks (up to a one-page abstract, including a short bio of the main speaker),
- Position papers for general topics (2-4 pages), and
- Position papers for the AI Safety Landscape: https://www.ai-safety.org/ai-safety-landscape (2-4 pages).
Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=SafeAI2020
Please format your paper according to the AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from: https://www.aaai.org/Publications/Templates/AuthorKit20.zip
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind review process; however, anonymized submissions will also be accepted.
Important Dates (Extended)
- Paper submission (extended): November 20, 2019 (mandatory abstract submission: November 15, 2019)
- Notification of acceptance (delayed): December 8, 2019
- Camera-ready submission: December 15, 2019
Committees
Program Committee
- Please see the workshop website: www.safeaiw.org
Organizing Committee
- Huascar Espinoza, CEA LIST, France
- José Hernández-Orallo, Universitat Politècnica de València, Spain
- Xin Cynthia Chen, University of Hong Kong, China
- Seán Ó hÉigeartaigh, University of Cambridge, UK
- Xiaowei Huang, University of Liverpool, UK
- Mauricio Castillo-Effen, Lockheed Martin, USA
- Richard Mallah, Future of Life Institute, USA
- John McDermid, University of York, UK