HHAI-AI&Behaviour2025: Research agenda for responsible AI-supported behaviour change
Website: https://ai-behaviour.nl/hhai-2025/
Submission link: https://easychair.org/conferences/?conf=hhaiaibehaviour2025
Submission deadline: April 11, 2025
A prominent area of research within the Hybrid Intelligence domain concerns Artificial Intelligence systems that support individuals in voluntarily adapting their behaviour. Such technological support can be relevant in different domains such as health, sustainability, and justice, e.g., to help people adopt healthier lifestyle patterns, support people in making more sustainable choices, empower people in managing their chronic disease, or support victims of crimes in their healing process.
Key to developing effective behaviour support technologies is understanding why people do what they do (by learning about their motivations, habits, capabilities and needs), so that the offered support is timely and targeted at a pivotal mechanism. Many AI-related technologies can contribute to deepening this understanding, e.g., machine learning on observational data to gain insights into behavioural patterns, cognitive models to reason about cognitive aspects such as motivation and self-efficacy, conversational AI such as LLMs and chatbots to engage with people, or VR/AR approaches for training people or for providing visual insight into possible scenarios.
Effectiveness, however, is not the only relevant consideration when designing behaviour change support systems. Key to developing responsible behaviour support technologies is identifying and implementing strategies that support individuals in their behaviour change trajectory in ways that align with core societal values such as liberty, autonomy, and (social) justice. Especially at a time when there are many (commercial) efforts to use AI technology to subconsciously influence individual and group behaviour (e.g., consumer behaviour, voting behaviour), it is important to work on a research agenda that offers a counterbalancing narrative: one focused on the design and development of AI technologies that users can trust because they align with both public values and the users’ own values.
The aim of this interactive event is to develop a research agenda for the next 5 years for the field of AI-supported behaviour change. We aim to do this in three steps. First, we will create an overview of the state of the art. Second, we will identify the most important challenges for the field. Finally, based on these two components, we aim to define a research agenda.
Topics of Interest
We encourage submissions on the following topics:
- AI approaches for understanding behaviour from sensor data;
- AI-based techniques for deriving cognitions, preferences, or personal values from observational data;
- Empirical studies on computer-supported coaching using reasoning and ML techniques;
- Techniques or UI design strategies to support (perceived) autonomy;
- The use of Explainable AI for behaviour change support;
- Techniques for personalized recommendations that encourage individuals to make positive and sustained changes;
- AI techniques that aim to improve adherence to recommendations;
- Methods for providing tailored and personalized feedback in a transparent way that yields new insights.
Submission Guidelines
We welcome two types of submissions:
- First, we invite participants to submit short papers (5 pages excluding references) formatted using the IOS formatting guidelines. Papers will be reviewed by the organizing committee and selected on the basis of their relevance to the workshop topics and their potential for encouraging fruitful discussion. Accepted papers will be published with CEUR-WS.
- We also welcome extended abstracts (2 pages excluding references). Like the short papers, these submissions may be selected for lightning talks (see below); however, they are not eligible for publication.
Committees
Program Committee
- Lize Alberts, Computer Science, University of Oxford
- Tessa Beinema, Communication Science, VU Amsterdam
- Willem-Paul Brinkman, Interactive Intelligence, TU Delft
- Aart van Halteren, Social AI, VU Amsterdam
- Marcos Oliveira, Computer Science, University of Exeter
- Nimat Ullah, AI & Behaviour, Vrije Universiteit Amsterdam
Organizing committee
- Charlotte Gerritsen, Associate Professor AI & Behaviour, Vrije Universiteit Amsterdam
- Bart Kamphorst, Senior Researcher AI Ethics, Data School, Utrecht University
- Michel Klein, Associate Professor AI & Behaviour, Vrije Universiteit Amsterdam
Publication
Accepted short papers will be published in the CEUR-WS.org HHAI 2025 workshop proceedings. Extended abstracts will not be published, but will be presented as lightning talks during the morning session of the event.
Contact
All questions about submissions should be emailed to hhai2025-aibehaviour [at] easychair.org.