AISafety 2019: The IJCAI-19 Workshop on Artificial Intelligence Safety
Venetian Macao Hotel Resort, Macao, China, August 11-12, 2019
Conference website: https://www.ai-safety.org/
Submission link: https://easychair.org/conferences/?conf=aisafety2019
Submission deadline: May 18, 2019
Scope
Over the last decade, concern about the risks of Artificial Intelligence (AI) has grown. Safety is becoming increasingly relevant as humans are progressively removed from the decision and control loops of intelligent, learning-enabled machines. In particular, the technical foundations and assumptions on which traditional safety engineering principles are based are inadequate for systems in which AI algorithms, especially Machine Learning (ML) algorithms, interact with the physical world at increasingly high levels of autonomy. We must also consider the connection between the safety challenges posed by present-day AI systems and more forward-looking research on more capable future AI systems, up to and including Artificial General Intelligence (AGI).
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
* How can we engineer trustable AI software architectures?
* Do we need to specify and use bounded morality in system engineering to make AI-based systems more ethically aligned?
* What is the status of existing approaches in ensuring AI and ML safety and what are the gaps?
* What safety engineering considerations are required to develop safe human-machine interaction in automated decision-making systems?
* What AI safety considerations and experiences are relevant from industry?
* How can we characterise or evaluate AI systems according to their potential risks and vulnerabilities?
* How can we develop solid technical visions and paradigm shift articles about AI Safety?
* How do metrics of capability and generality affect a system's level of risk, and how can trade-offs with performance be found?
* How do features of an AI system, for example ethics, explainability, transparency, and accountability, relate to, or contribute to, its safety?
* How can AI safety be evaluated?
Topics
We invite theoretical, experimental, and position papers covering any aspect of AI Safety, including but not limited to:
* Safety in AI-based system architectures
* Continuous V&V and predictability of AI safety properties
* Runtime monitoring and (self-)adaptation of AI safety
* Accountability, responsibility and liability of AI-based systems
* Explainable AI and interpretable AI
* Avoiding negative side effects in AI-based systems
* Role and effectiveness of oversight: corrigibility and interruptibility
* Loss of values and the catastrophic forgetting problem
* Confidence, self-esteem and the distributional shift problem
* Safety of AGI systems and the role of generality
* Reward hacking and training corruption
* Self-explanation, self-criticism and the transparency problem
* Human-machine interaction safety
* Regulating AI-based systems: safety standards and certification
* Human-in-the-loop and the scalable oversight problem
* Evaluation platforms for AI safety
* AI safety education and awareness
* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others.
Important Dates [extended]
* Paper submission: May 18, 2019
* Notification of acceptance: June 7, 2019
* Camera-ready submission: June 20, 2019
Format
We plan a two-day workshop, with general AI Safety topics on the first day and AI Safety Landscape (https://www.ai-safety.org/the-ai-safety-landscape) talks and discussions on the second day.
At AISafety, we believe that delivering a truly memorable event requires a highly interactive format, with much-needed debate to keep participants engaged and energized throughout the workshop. The sessions on the first day will be structured into short paper pitches and a common panel slot for discussing both individual paper contributions and shared topic issues.
The AI Safety Landscape sessions will consist of invited pitches for each of the landscape Dimensions, followed by panels with structured discussions.
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submission and Selection
You are invited to submit:
* Full Technical Papers (6-7 pages),
* Proposals for Technical Talks (up to a one-page abstract, including a short bio of the main speaker),
* Position Papers for General Topics (2-4 pages), or
* Position Papers for The AI Safety Landscape (2-4 pages). Further information: https://www.ai-safety.org/the-ai-safety-landscape
Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=aisafety2019
Papers must follow the IJCAI Formatting Instructions. The formatting guidelines, LaTeX styles, and Word template are available at https://www.ijcai.org/authors_kit
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process; however, anonymized submissions will also be accepted.
Committees
Organizing Committee:
* Huáscar Espinoza, Commissariat à l'énergie atomique (CEA), France
* Han Yu, Nanyang Technological University, Singapore
* Xiaowei Huang, University of Liverpool, UK
* Cynthia Chen, University of Hong Kong, China
* José Hernández-Orallo, Universitat Politècnica de València, Spain
* Seán Ó hÉigeartaigh, University of Cambridge, UK
* Freddy Lecue, Thales, Canada
* Richard Mallah, Future of Life Institute, USA
Programme Committee: please look at the website: https://www.ai-safety.org/