
10:30-11:00 Coffee Break
12:30-14:00 Lunch Break
15:30-16:00 Coffee Break
16:05-17:30 Session 8: Risks of AI


INVITED TALK: Autonomous Weapon Systems – Dangers and Need for an International Prohibition

ABSTRACT. Technological advances in ICT, robotics and sensors bring autonomous weapon systems (AWS) within reach. AI is foreseen to play a prominent role in detecting, identifying, selecting and engaging targets. Shooting without the need for control by a human operator has military advantages: reaction times can be much shorter; without a permanent communication link, AWS would be more difficult to detect, and there would be no link to jam; and personnel can be saved and devoted to higher-level tasks. But there are also military disadvantages if combat decisions are transferred to computers: human understanding of the situation and control of events would suffer. On a more general level, there are bigger dangers. Compliance with the law of armed conflict is questionable. There is the ethical question of whether a machine should be given the authority to take a human life. And the drastically increased pace of battle may overburden human capacities for understanding and decision-making. This is particularly problematic in a severe crisis, where interaction between two AWS fleets, whose control programs can never be tested together, can lead to fast escalation from crisis to war and, if war is already ongoing, to higher levels of violence. Such dangers have motivated an international campaign to stop killer robots, as well as an increasing number of professionals and enterprises in the IT, robotics and AI communities, to call for an international prohibition of AWS.

States have discussed limitations or a ban in the UN context, but no consensus has emerged so far. The US, UK, Israel and Russia are among the states opposing a prohibition. Germany has on the one hand argued for a ban on fully autonomous weapons, but has not joined the countries proposing an AWS ban in the UN (28 by late 2018), and has used a definition that calls a system autonomous only if it has the ability to learn and develop self-awareness.

An international AWS ban could comprise two basic obligations: a positive requirement that each single use of force must be under meaningful human control, and a negative prohibition of autonomous weapons covering the stages of development, testing, acquisition and use, with narrow exceptions for automatic last-second systems for the defence of human-inhabited sites.

As long as remotely controlled uninhabited weapon systems remain allowed, verification that they do not attack under computer control cannot rely on distinguishing vehicle types by outer appearance. Checking the control software would be much too intrusive to be accepted, and a modified version allowing autonomous attack could be downloaded very fast. Compliance could instead be proven via an obligation to securely store all communication and sensor data, together with the actions of the human operator/commander, and to make the records available later for checking by an international verification organisation.

The AI and robotics communities could make significant contributions to the international discussion. Specific research projects could be devoted, among others, to: identifying dual-use AI research of concern; the potential for errors in machine learning for target recognition; proliferation risks, including to non-state actors; scenarios of interaction between two fleets of AWS; commonalities and differences between AWS and cyber forces in this respect; and the possibility of "circuit breakers" to prevent uncontrolled escalation.

In order to prevent AWS proliferation and dangers to the law of armed conflict as well as to military stability and peace, an international AWS ban is urgently needed. Because of the high military, political and economic interests in AWS, this needs active support by an alert general public as well as by specific communities. Here the AI and robotics communities are particularly relevant.

Generation of Explanations for Moral Judgments Under Various Ethical Principles

ABSTRACT. We present an approach to the computational generation of explanations for moral judgments in the context of a hybrid ethical reasoning agent (HERA). The HERA agent employs logical representations of ethical principles to make judgments about the moral permissibility or impermissibility of actions, and uses the same logical formulae to come up with explanations for these judgments. We motivate the distinction between sufficient reasons, necessary reasons, and necessary parts of sufficient reasons yielding different types of explanations, and we provide algorithms to extract these reasons.
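The idea of a "sufficient reason" can be illustrated with a small brute-force sketch: given a propositional formula that holds under some truth assignment, find the minimal sets of literals from that assignment which entail the formula regardless of the remaining variables. This is only an illustration of the general concept, not the paper's actual algorithm, and all names and the toy ethical principle below are hypothetical.

```python
from itertools import combinations, product

def sufficient_reasons(formula, variables, assignment):
    """Minimal subsets of literals from `assignment` that make
    formula(...) True for every completion of the other variables."""
    literals = [(v, assignment[v]) for v in variables]

    def entails(subset):
        fixed = dict(subset)
        free = [v for v in variables if v not in fixed]
        for values in product([False, True], repeat=len(free)):
            env = dict(fixed)
            env.update(zip(free, values))
            if not formula(env):
                return False
        return True

    minimal = []
    for size in range(1, len(literals) + 1):
        for subset in combinations(literals, size):
            # a strict superset of a known reason is never minimal
            if any(set(m) <= set(subset) for m in minimal):
                continue
            if entails(subset):
                minimal.append(subset)
    return minimal

# toy principle (hypothetical): an action is permissible iff it
# causes no harm and serves a good end
phi = lambda env: env["no_harm"] and env["good_end"]
v = {"no_harm": True, "good_end": True, "consented": True}
reasons = sufficient_reasons(phi, list(v), v)
# literals common to all minimal sufficient reasons approximate
# the "necessary parts of sufficient reasons"
necessary_parts = set.intersection(*map(set, reasons))
```

Here the single minimal sufficient reason is {no_harm=True, good_end=True}: the literal consented=True plays no role in the judgment, so it appears in no explanation, while both remaining literals are necessary parts of the sufficient reason.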