11:00 | How to Free Yourself from the Curse of Bad Equilibria
ABSTRACT. A standard problem in game theory is the existence of equilibria with undesirable properties. To pick a famous example, in the Prisoner's Dilemma, the unique dominant-strategy equilibrium leads to the only Pareto-inefficient outcome in the game, which is strictly worse for all players than an alternative outcome. If we are interested in applying game-theoretic solution concepts to the analysis of distributed and multi-agent systems, then these undesirable outcomes correspond to undesirable system properties. So, what can we do about this? In this talk, I describe the work we have initiated on this problem and discuss three approaches to dealing with it in multi-agent systems: the design of norms or social laws; the use of communication to manipulate the beliefs of players; and, in particular, the use of taxation schemes to incentivise desirable behaviours.
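The taxation idea can be made concrete with a toy example. The sketch below is our illustration, not the speaker's formulation: the payoffs, the tax level, and all names are assumptions. It shows how taxing defection in the Prisoner's Dilemma makes cooperation the dominant strategy, so the unique equilibrium shifts to the Pareto-efficient outcome.

```python
# A minimal sketch of a taxation scheme removing a bad dominant-strategy
# equilibrium in the Prisoner's Dilemma. Payoffs and the tax amount are
# illustrative assumptions, not taken from the talk.

COOPERATE, DEFECT = 0, 1

# Row player's payoffs; the game is symmetric, so the column player's
# payoff at (a, b) is payoff[b][a].
payoff = [
    [3, 0],  # I cooperate: 3 if you cooperate, 0 if you defect
    [5, 1],  # I defect:    5 if you cooperate, 1 if you defect
]

def best_response(opponent_action, tax_on_defect=0):
    """Return my best action against opponent_action, after taxes."""
    utility = lambda a: payoff[a][opponent_action] - (tax_on_defect if a == DEFECT else 0)
    return max((COOPERATE, DEFECT), key=utility)

# Without taxation, defection is a best response to everything (dominant),
# so play converges on the Pareto-inefficient (DEFECT, DEFECT).
assert best_response(COOPERATE) == DEFECT
assert best_response(DEFECT) == DEFECT

# Taxing defection by 3 makes cooperation dominant, so the unique
# equilibrium becomes the Pareto-efficient (COOPERATE, COOPERATE).
assert best_response(COOPERATE, tax_on_defect=3) == COOPERATE
assert best_response(DEFECT, tax_on_defect=3) == COOPERATE
```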
11:30 | Specification and Verification for Robots that Learn
12:00 | Moral Permissibility of Actions in Smart Home Systems
SPEAKER: Michael Fisher
ABSTRACT. With this poster, we present ongoing work on how to operate a smart home via a Hybrid Ethical Reasoning Agent (HERA). This work is part of the broader scientific effort, known as machine ethics, to implement ethics on computer systems. We showcase an everyday example involving a mother and a child living in the smart home. Our formal theory and implementation allow us to evaluate actions proposed by the smart home from different ethical points of view, namely utilitarianism, Kantian ethics, and the principle of double effect. We discuss how formal verification, in the form of model checking, can be used to check that the modeling of a problem for reasoning by HERA conforms to our intuitions about ethical action.
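For intuition, here is a toy sketch of the utilitarian view mentioned above: an action is deemed permissible iff no available alternative yields strictly higher total utility. This is our own illustration under that assumption, not the HERA implementation or its API; the actions, utilities, and agent names are invented.

```python
# Toy utilitarian permissibility check for smart-home actions.
# All data below is invented for illustration; it is not HERA code.

actions = {
    # action -> utility for each affected person (hypothetical values)
    "turn_heating_on":  {"mother": 2, "child": 1},
    "keep_heating_off": {"mother": -1, "child": -2},
}

def total_utility(action):
    """Sum of utilities over everyone affected by the action."""
    return sum(actions[action].values())

def utilitarian_permissible(action):
    """Permissible iff the action maximises total utility over alternatives."""
    return total_utility(action) == max(total_utility(a) for a in actions)

print(utilitarian_permissible("turn_heating_on"))   # True
print(utilitarian_permissible("keep_heating_off"))  # False
```

Kantian ethics and the principle of double effect would need different predicates over the same action model, which is what makes a common formal representation, as in HERA, attractive.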
14:00 | TBA
14:30 | The computational neuroscience of human-robot relationships
15:00 | Trust, Failure, and Self-Assessment During Human-Robot Interaction
16:00 | Safety and Trust in Autonomous Driving