Philosophical Foundations of Digital Humanism
The core of humanism in general is human agency - its anthropological preconditions and its ethical, social and political implications. Digital humanism applies this core to the challenges of digital transformation. This talk will present the main arguments in favor of (1) humanism in general and (2) digital humanism specifically. It will focus on the philosophical foundations of digital humanism and exemplify their relevance by discussing some of its practical implications.
| 13:30 | Visual Neuroprosthetics, Digital Humans and the Law of Evidence ABSTRACT. The paper explores some of the evidential implications of neuro-implants that assist or restore vision from the perspective of evidence law. As people with disabilities are disproportionately victims of crime, and often experience secondary victimisation during trials where their credibility is questioned, these technologies raise the question of how evidence law should treat “technologically mediated” witness accounts. |
| 13:50 | Beyond the Digital Judge: Legal Reasoning in Compliance Checking and Compliance Choices ABSTRACT. This paper investigates the practical reasoning involved in compliance-related decisions, distinguishing between two scenarios where a state of affairs is evaluated in the light of applicable norms: ex post compliance checking and ex ante compliance choices. While the literature on legal reasoning representation is exclusively focused on compliance checking scenarios, i.e., simulating a digital judge, different factors seem to play a role in the inner deliberation of compliance choices. In this paper, we show how human agents are influenced in their compliance choices by their own value ranking and risk assessment; in turn, the choice affects their preference among alternative interpretations of the law. We contend that contributions from the literature, such as the value-based argumentation framework, while focused on ex post judgments, may be able to provide a comprehensive framework for ex ante compliance decisions. The main goal of this work is to represent legal reasoning in a comprehensive framework that can be used as a reference for the explanation of automated compliance decisions. |
| 14:10 | What if the Avatar Can Read My Mind? Possibilities and Ethical Pitfalls of Human-Virtual Reality Interaction Integrating Artificial Intelligence ABSTRACT. Adaptive Virtual Reality (VR) scenarios are already being designed for implementation in neurofeedback and brain-computer interface applications. In these scenarios, the neurophysiological signals of VR users are recorded and processed in real-time to control a virtual environment. Machine learning algorithms and artificial intelligence play a crucial role in detecting specific brain states and translating them into control commands, thereby enabling the VR system to adapt to the user’s brain state. Through this neurotechnology, virtual avatars or agents that users interact with in VR can adjust their behavior and reactions based on the users' neurophysiological states. This advancement holds potential benefits, particularly in educational settings, where virtual tutors can tailor their interactions to the learner's cognitive load. However, it's important to acknowledge the potential ethical concerns associated with this technology since it could also be utilized for subliminal manipulation of VR users for commercial or political purposes. In the following discussion, we will explore both the positive aspects of adaptive human-VR interfaces and the potential ethical pitfalls as well as implications for public policy. |
| 14:20 | Understanding the Humanist Notion of Trust in the Age of Generative AI PRESENTER: Pia-Zoe Hahne ABSTRACT. This research stresses the humanist conceptualisation of trust, rooted in human self-trust and communication, which classifies trust as an ethical and epistemic condition for the possibility of cooperative life. Considering artificial intelligence, we assess that AI does not meet the criteria to qualify for trust. From a digital humanist perspective, trust in artificial intelligence lacks the foundational communicative and ethical grounding that characterises humanistic trust. |
| 14:30 | Start Using Justifications When Explaining AI Systems to Decision Subjects ABSTRACT. Every AI system that makes decisions about people has stakeholders who are affected by its outcomes. These stakeholders, whom we call decision subjects, have a right to understand how their outcome was produced and to challenge it. Explanations should support this process by making the algorithmic system transparent and creating an understanding of its inner workings. However, we argue that while current explanation approaches focus on descriptive explanations, decision subjects also require normative explanations or justifications. In this position paper, we advocate for justifications as a key component in explanation approaches for decision subjects and make three claims to this end, namely that justifications i) fulfill decision subjects' information needs, ii) shape their intent to accept or contest decisions, and iii) encourage accountability considerations throughout the system's lifecycle. We propose four guiding principles for the design of justifications, provide two design examples, and close with directions for future work. With this paper, we aim to provoke thoughts on the role, value, and design of normative information in explainable AI for decision subjects. |
| 14:40 | A two-axis framework to map reasons for neurotechnology use ABSTRACT. Neurotechnologies, tools for recording, analyzing, and manipulating brain activity, are increasingly used for enhancement beyond their traditional roles in diagnosis and rehabilitation, introducing new challenges and risks. To navigate these ethical complexities, we propose a two-axis framework: “recovery” and “discovery”. Recovery refers to rehabilitative applications, while discovery pertains to enhancement-related uses. These concepts are interconnected, and their overlap can create an ethical “gray zone”. Our framework aids in analyzing neurotechnology use at individual, social, and cultural levels, helping to answer the “what for, how, and why” questions. We also address “neuroenchantment”—the persuasive influence of these technologies—which can distort beliefs and impair critical thinking, leading to poor risk assessment. This framework helps identify such dangers, offering insights for medical, therapeutic, and regulatory strategies. |
| 15:30 | Narrated future: How narratives shape our digital present PRESENTER: Betina Aumair ABSTRACT. This article explores how competing narratives shape our understanding and governance of digitalisation, focusing on the ideological opposition between Cyberlibertarianism and Digital Humanism. While technological and economic frameworks often dominate public discourse, the underlying narratives that inform these perspectives exert significant influence on policy, societal norms, and democratic possibilities. Cyberlibertarianism promotes a vision of individual freedom rooted in technological autonomy and market deregulation, often marginalising ethical concerns and reinforcing corporate power. In contrast, Digital Humanism offers a counter-narrative that centres human dignity, democratic participation, and social responsibility in the design and application of digital technologies. The article examines these narratives as ideologically charged patterns of interpretation that structure what is perceived as technologically inevitable or politically possible. It critiques the discursive framing of Cyberlibertarianism, particularly its depoliticisation of democracy and appropriation of emancipatory language, which masks power imbalances and limits public deliberation. Digital Humanism is presented as a necessary ideological intervention that reclaims digital spaces as culturally, ethically, and politically negotiable. Education emerges as a central arena for cultivating critical awareness of these narratives. Beyond digital literacy, it calls for an education that fosters ethical reflection, democratic agency, and narrative sensitivity. Through such an approach, individuals can be empowered to critically engage with technological systems and participate meaningfully in shaping digital futures.
Ultimately, the article argues that reflecting on and contesting dominant digital narratives is essential to safeguarding democracy and enabling collective self-determination in an increasingly digital world. |
| 15:50 | Why Digital Humanism Needs a Social Psychology – and How You Can Use Digital Data to Study Social Identities in Socio-Technical Systems ABSTRACT. Digital Humanism aspires to align technological innovation with human values, yet its psychological underpinnings remain predominantly individualistic, positioning the human as a “flawed” agent within socio-technical systems. This paper proposes a shift towards group-level social psychological dynamics as a foundation for designing socially responsive socio-technical systems. Building on the Social Identity Approach (SIA), which integrates the Social Identity Theory and Self-Categorisation Theory, we argue that human behaviour in complex socio-technical systems is shaped by dynamic group memberships and context-dependent identity processes. We demonstrate how digital traces, such as language, sensor data, and interactional patterns can serve as behavioural proxies for identifying and analysing such identity processes. Through interdisciplinary research, we present applications in system design, safety management, privacy protection, and ethical evaluation to show how identity-aware computational models and frameworks operationalise SIA to enhance inclusivity, resilience, and ethical responsiveness in socio-technical systems. Embedding social identity dynamics into the design of emerging socio-technical systems offers a transformative potential for advancing the normative goals of Digital Humanism. |
| 16:10 | TiBaLLi: Internet Inclusion Through Artificial Intelligence ABSTRACT. In recent years, Artificial Intelligence has aided the development of many solutions around the world in numerous fields, and it has become imperative to ask how advanced AI methods such as Machine Learning and Natural Language Processing can be reconstructed to make the Internet more inclusive in practice for communities in low-resource environments in the Global South. This paper outlines the [Anon] project in Ghana, which utilizes a Participatory Action Research approach to build a local voice-based Automatic Speech Recognition system that provides domain-focused web-based information to local communities in Dagbani (a local Ghanaian language). We look at the methodology and how it utilized insights from community engagement to build an inclusive system. We also consider the broader implications of this design process for the Web and AI in the context of the decolonization of the Internet. |
| 16:20 | Designing Deliberative Digital Communication Platforms ABSTRACT. Despite the enormous potential of digital communication platforms (DCPs) to disseminate information and facilitate deliberative public discourse without interference from traditional gatekeepers, they are subject to widespread criticism due to problems such as misinformation and societal polarization. While this criticism is supported by numerous empirical studies, current research lacks an alternative positive vision that integrates digital communication platforms and normative ideals for public discourse. To this end, we develop design knowledge for deliberative DCPs according to the ideals of Habermas' theory of communicative action (TCA). Specifically, we develop TCA-based meta-requirements and utilize network gatekeeping theory to develop suitable design principles for their implementation. Thereby, we contribute design knowledge of a normatively grounded DCP that can assist researchers, system designers, and policy makers seeking a foundation for the conception, development, and regulation of DCPs. |
| 16:30 | Readiness-Centered AI in Practice: Findings from a Pilot Chatbot for Digital Skilling of Older Adults in Low-Readiness Contexts ABSTRACT. This paper presents findings from an initial deployment of a readiness-centered AI-driven chatbot designed to support digital skilling amongst older adults in low-resource settings. Drawing on Human-Centered AI (HCAI) principles and adapting them through a readiness lens, the tool integrated multilingual support, voice-based interactions and age-sensitive design. A cross-sectional study was conducted in urban and rural Kenyan counties, involving 388 participants, using observational methods and semi-structured interviews to document user interactions. Thematic analysis revealed key barriers to adoption, including emotional discomfort, language-related confusion, usability breakdowns and cultural perceptions. These findings demonstrate that, while ethical design is necessary, it is insufficient when foundational precursors such as digital readiness, cognitive diversity and socio-cultural beliefs are not met. This study contributes by identifying critical gaps within the current HCAI framework and proposing a readiness-centered reframing that reorients the design of AI systems around users’ actual capacities, cultural norms, and infrastructural realities. It introduces digital readiness assessment, cognitive scaffolding, and cultural usability as essential design pillars for AI systems that are not only ethical, but truly inclusive, usable and effective in low-resource contexts. |
| 16:40 | Climate Disasters and Risks in Online Expressions in South Africa ABSTRACT. Digital platforms such as X (formerly Twitter) increasingly serve as arenas for civic expression during climate-related disasters. However, studies on digital-governance-related critique around climate crises in South Africa are limited, even though the country has a volatile online-offline dynamic in which such expressions carry inherent risks. This paper responds by analysing X conversations surrounding the 2022 KwaZulu-Natal (KZN) floods. Using a computational social science approach, it examines tweets and user accounts involved in propagating high-risk content and situates the dynamics within the risk and digital performativity framework. The study reveals long-standing socio-political tensions which are articulated through contested boundaries of digital freedom, particularly through xenophobic rhetoric, racialised critique of political actors and antagonism towards corporate institutions. The paper draws on broader comparative cases to contextualise these risks and illustrate the potentially harmful dimensions of digital freedoms during crises. It reveals how the X platform enables civic participation but also backlash and polarisation. These dynamics expose the social vulnerabilities embedded in digital expression, given South Africa’s volatile online-offline landscape. It also calls for further studies on both independent and networked user behaviours to strengthen climate risk communication. The paper contributes to digital humanities by offering a grounded account of how publics contest authority, assert rights, and navigate vulnerability during crisis-mediated online engagement. |
Human Power - A Politics for the AI Machine Age
The rapid and tumultuous introduction of AI into our everyday lives has triggered a self-exploratory public debate about what it means to be human. What are our human potential, talents, and powers – what essentially is our place in the modern world? Are we nothing but outdated machines in dire need of a technological fix?
The keynote is based on the book Human Power - Seven Traits for the Politics of the AI Machine Age and reflects on the shifting power dynamics between humans and the AI-powered technologies, and their industrial complexes, increasingly shaping our world. Exploring the distinctiveness of human power, it argues for a new foundation for the politics needed in the AI Machine Age.
Based on: Gry Hasselbalch, "Human Power - Seven Traits for the Politics of the AI Machine Age", © CRC Press