DeceptECAI2020: International Workshop on Deceptive AI @ECAI2020
University of Santiago de Compostela, Santiago de Compostela, Spain, August 30, 2020

Conference website: https://sites.google.com/view/deceptecai2020/home
Submission link: https://easychair.org/conferences/?conf=deceptecai2020
Submission deadline: May 25, 2020
Notification of acceptance: June 15, 2020
There is no dominant theory of deception. The literature on deception treats its different aspects and components separately, sometimes offering contradictory evidence and opinions on them. Emerging AI techniques offer an exciting and novel opportunity to expand our understanding of deception from a computational perspective. However, the design, modelling, and engineering of deceptive machines is non-trivial from conceptual, engineering, scientific, and ethical perspectives. The aim of DeceptECAI is to bring together people from academia, industry, and policy-making to discuss and disseminate the current and future threats, risks, and even benefits of designing deceptive AI. The workshop takes a multidisciplinary approach (Computer Science, Psychology, Sociology, Philosophy & Ethics, Military Studies, Law, etc.) to discuss the following aspects of deceptive AI:
1) Behaviour - What type of machine behaviour should be considered deceptive? How do we study deceptive behaviour in machines as opposed to humans?
2) Reasoning - What kind of reasoning mechanisms lie behind deceptive behaviour? Also, what type of reasoning mechanisms are more prone to deception?
3) Cognition - How does cognition affect deception and how does deception affect cognition? Also, what function, if any, do agent cognitive architectures play in deception?
4) AI & Society - How does the ability of machines to deceive influence society? What kinds of measures do we need to take in order to neutralise or mitigate the negative effects of deceptive AI?
5) Engineering Principles - How should we engineer autonomous agents such that we are able to know why and when they deceive? Also, why should or shouldn’t we engineer or model deceptive machines?
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference.
Submissions are NOT anonymous. The names and affiliations of the authors should be stated in the manuscript.
All papers should be formatted following the Springer LNCS/LNAI guidelines and submitted through EasyChair.
The following paper categories are welcome:
- Long papers (12 pages + 1 page references): Long papers should present original research work and be no longer than thirteen pages in total: twelve pages for the main text (including all figures but excluding references), plus one additional page for references.
- Short papers (7 pages + 1 page references): Short papers may report on work in progress and should be no longer than eight pages in total: seven pages for the main text (including all figures but excluding references), plus one additional page for references.
- Position papers on potential research challenges are also welcome, in either the long or the short paper format.
List of Topics
- Deceptive Machines
- Multi-Agent Systems and Agent-Based Models
- Trust and Security in AI
- Machine Behaviour
- Argumentation
- Machine Learning
- Explainable AI - XAI
- Human-Computer (Agent) Interaction - HCI/HAI
- Philosophical, Psychological, and Sociological aspects
- Ethical, Moral, Political, Economical, and Legal aspects
- Storytelling and Narration in AI
- Computational Social Science
- Applications related to deceptive AI
Committees
Organizing committee
- Stefan Sarkadi - King’s College London, UK
- Peter McBurney - King’s College London, UK
- Liz Sonenberg - University of Melbourne, Australia
- Iyad Rahwan - Max Planck Institute for Human Development & MIT, Germany & USA
Publication
DeceptECAI2020 proceedings will be published in the Springer CCIS series. We are also planning a special issue on the topic of "Deceptive AI" in a highly ranked journal.
Contact
All questions about submissions should be emailed to stefan.sarkadi@kcl.ac.uk.