DIGHUM-RES25: DIGITAL HUMANISM – INTERDISCIPLINARY SCIENCE AND RESEARCH CONFERENCE
PROGRAM FOR THURSDAY, NOVEMBER 20TH

09:30-10:30 Session 4: Keynote 1: Julian Nida-Rümelin, "Philosophical Foundations of Digital Humanism"

Philosophical Foundations of Digital Humanism

The core of humanism in general is human agency: its anthropological preconditions and its ethical, social, and political implications. Digital humanism is the application of this idea to the challenges of digital transformation. This talk will present the main arguments in favor of (1) humanism in general and (2) digital humanism specifically. It will focus on the philosophical foundations of digital humanism and exemplify their relevance by discussing some of its practical implications.

11:00-12:30 Session 5: LONG PAPERS I
11:00
Breaking Disciplinary Silos: The Case of Software Engineering

ABSTRACT. Digital Humanism aims to influence the complex interplay between technology and humankind, striving for a better society while fully respecting universal human rights. While software is the crucial technological component in this interplay, current mainstream approaches in software engineering research and education are largely agnostic about the goals of Digital Humanism. Scientists and engineers tend to view, design, and study software systems from a purely technical perspective, often disregarding human and societal implications. In this position paper, we argue that research and education should be profoundly revisited by broadening the disciplinary landscape through cross-disciplinary collaborations, particularly with the humanities and social sciences, to ensure that the human and societal implications of new designs, as well as the principles of Digital Humanism, are properly considered. We provide an initial roadmap for this effort, mainly addressing software engineering. The roadmap is intended as a basis for further discussion and refinement, and serves as a call for increased community action in this area.

11:20
The Architecture of Academic Overproduction: Toward Post-AI Scholarship

ABSTRACT. This article critically examines the accelerating phenomenon of academic overproduction, tracing its roots from exponential publication growth in the late twentieth century to the contemporary landscape overwhelmed by digitization, global competition, shifting publication economics, and now artificial intelligence. The surge of scholarly output, enabled by advanced digital infrastructures, open-access models, and mega-journals, has fueled not only greater access and collaboration but also mounting information overload, declining editorial standards, and the evolution of a research workforce that spends more and more time chasing metrics. Against this backdrop, the rise of generative artificial intelligence is poised to further intensify these dynamics through both “flattening” (the homogenization and proliferation of scholarly writing) and “enslopification,” defined as the mass production of low-quality academic content optimized for metrics rather than insight. These issues reflect deeper epistemological tensions within academic research, between cultures of “knowledge sharing” and “knowledge transfer”. Rather than simply blaming digital technologies or AI, we argue that quantification pressures, institutional incentives, and the commodification of research have primed the academy for a crisis of relevance and authenticity. It is thus imperative to reimagine research beyond compliance-driven production and superficial debates about AI integration, instead advocating for multimodal, participatory, and dialogical scholarship. Meaningful reform demands a shift from metric-driven output toward research that cultivates agency, reflection, and genuine public engagement, urging institutions and scholars to reclaim the value and purpose of scholarly inquiry in a post-AI world.

11:40
Thinking along the lines generated by GenAI? A systematic mapping study on academic writing

ABSTRACT. As generative AI (GenAI) tools such as ChatGPT enter higher education, they challenge fundamental assumptions about academic authorship, originality, and the cultivation of critical thinking through writing. Academic writing serves not only to communicate knowledge but also as a method of knowledge construction and a site of epistemic and cognitive development. This systematic mapping study synthesizes emerging research on how GenAI is reshaping academic writing and its relationship to critical thinking in higher education. Drawing on 25 peer-reviewed studies published between 2023 and 2025, we analyze conceptualizations of critical thinking and academic writing, identify pedagogical approaches, and examine the use of GenAI tools across diverse disciplines, educational levels, and geographic contexts. Findings reveal a global interest in the use of GenAI in academic writing, spanning all disciplines and educational levels. However, only a minority of studies draw on robust theoretical frameworks. Critical thinking is predominantly conceptualized within a cognitive skills paradigm, while broader understandings linked to criticality and critical pedagogy are largely absent. Pedagogical models remain rare and mostly untested, with educators' perspectives notably underrepresented. While GenAI is seen to support writing processes, its potential to foster or hinder critical thinking remains contested. We conclude that future research should broaden conceptual foundations, include educator perspectives, and prioritize pedagogical design and evaluation. Rethinking academic writing as a transformative practice in the digital age may require moving beyond the cognitive paradigm toward more reflexive, participatory, and ethically grounded approaches to critical thinking.

11:50
Reclaiming Agency through Cyber Humanism: A European Agenda for AI, Education and Culture

ABSTRACT. The rise of generative artificial intelligence (AI), together with increasingly data-centric and computational infrastructures, demands more than ethical critique; it calls for a reconfiguration of society. Cyber Humanism is a new paradigm: humanists act as active epistemic agents, capable of shaping intelligent systems in ways that embed cultural reflexivity, plural epistemologies, and democratic values into their architecture and that amplify, rather than reduce, human agency. Building on the Cyber Humanities Manifesto and recent European frameworks such as the AI Act, DigComp 3.0, and the Digital Rights Declaration, this paper examines Cyber Humanism through three core dimensions that shape and sustain its agenda: first, the redesign of digital competence around reflexivity and cognitive sovereignty; second, the reimagining of curricula and institutions as democratic infrastructures for participatory AI; and third, the translation of normative principles into prototypable practices through interdisciplinary governance and cultural experimentation. The argument posits that cognitive sovereignty, defined as the right to shape one’s informational environment, is a fundamental principle of algorithmic citizenship that should be established. Europe’s institutional depth, cultural infrastructure, and regulatory ecosystem put it in a position to prototype a third way, offering an alternative to both market-driven technosolutionism and authoritarian control. In contrast to either resistance or acquiescence to technological change, Cyber Humanism employs a logic of dialogic design, in which human values and algorithmic form evolve in tandem. The future of intelligence is not predetermined; it is to be collectively authored, urgently and democratically.

12:00
AI Research is not Magic, it has to be Reproducible and Responsible: Challenges in the AI field from the Perspective of its PhD Students

ABSTRACT. Unlocking the full societal potential of artificial intelligence demands a fundamental shift towards responsible and reproducible research. Understanding that PhD students are pivotal in conducting and reproducing experiments, we investigated the challenges faced by 28 AI doctoral candidates from 13 European countries. We identify three critical areas where current practices fall short: (1) the findability and quality of AI resources such as datasets, models, and experiments; (2) the difficulties in replicating the experiments in AI papers; and (3) the lack of trustworthiness and interdisciplinarity. After uncovering some of the underlying reasons behind the challenges, we propose a combination of social and technical recommendations to overcome the identified challenges and foster a more transparent and reliable AI research ecosystem. Socially, we recommend the general adoption of reproducibility initiatives in AI conferences and journals, as well as improved interdisciplinary scientific collaboration, especially in data governance practices. On the technical front, we call for enhanced tools to better support version control of datasets and code, and a computing infrastructure that facilitates the sharing and discovery of AI resources, as well as the sharing, execution, and verification of experiments.

12:10
Economies of Labor in the Age of AI: The Case of YouTube

ABSTRACT. The structure of the labor market has shifted in recent years, with waged employment giving way increasingly to “alternative work arrangements” (AWA). Largely driven by computing technologies, the exact nature of this shift remains underexplored. This paper examines the shift from the perspectives of the discursive economy and political economy. To that end, we first propose a discursive framework to account for current displacements in the labor market. Then, we extend the notion of “heteromation” to discuss various mechanisms of value creation and value extraction in current capitalism, including not only waged labor but also the varieties of non-waged labor that fall under AWA. To ground our conceptual analysis, we examine YouTube as the largest digital global labor platform and a pioneer in the use of AI computing technologies. YouTube offers insight into how mature AI-driven platforms interact with labor, and into what a successful Generative AI platform may become if it likewise turns infrastructural and no longer depends on outside investment.

13:30-15:00 Session 6: LONG PAPERS II
13:30
Visual Neuroprosthetics, Digital Humans and the Law of Evidence

ABSTRACT. The paper explores some of the evidential implications of neuro-implants that assist or restore vision from the perspective of evidence law. As people with disabilities are disproportionately victims of crime, and often experience secondary victimisation during trials where their credibility is frequently questioned, these technologies raise the question of how evidence law should treat “technologically mediated” witness accounts.

13:50
Beyond the Digital Judge: Legal Reasoning in Compliance Checking and Compliance Choices

ABSTRACT. This paper investigates the practical reasoning involved in compliance-related decisions, distinguishing between two scenarios where a state of affairs is evaluated in the light of applicable norms: ex post compliance checking and ex ante compliance choices. While the literature on legal reasoning representation is exclusively focused on compliance checking scenarios, i.e., simulating a digital judge, different factors seem to play a role in the inner deliberation of compliance choices. In this paper, we show how human agents are influenced in their compliance choices by their own value ranking and risk assessment; in turn, the choice affects their preference among alternative interpretations of the law. We contend that contributions from the literature such as the value-based argumentation framework, while focused on ex post judgments, may be able to provide a comprehensive framework for ex ante compliance decisions. The main goal of this work is to represent legal reasoning in a comprehensive framework that can be used as a reference for the explanation of automated compliance decisions.

14:10
What if the Avatar Can Read My Mind? Possibilities and Ethical Pitfalls of Human-Virtual Reality Interaction Integrating Artificial Intelligence

ABSTRACT. Adaptive Virtual Reality (VR) scenarios are already being designed for implementation in neurofeedback and brain-computer interface applications. In these scenarios, the neurophysiological signals of VR users are recorded and processed in real-time to control a virtual environment. Machine learning algorithms and artificial intelligence play a crucial role in detecting specific brain states and translating them into control commands, thereby enabling the VR system to adapt to the user’s brain state. Through this neurotechnology, virtual avatars or agents that users interact with in VR can adjust their behavior and reactions based on the users' neurophysiological states. This advancement holds potential benefits, particularly in educational settings, where virtual tutors can tailor their interactions to the learner's cognitive load. However, it's important to acknowledge the potential ethical concerns associated with this technology since it could also be utilized for subliminal manipulation of VR users for commercial or political purposes. In the following discussion, we will explore both the positive aspects of adaptive human-VR interfaces and the potential ethical pitfalls as well as implications for public policy.

14:20
Understanding the Humanist Notion of Trust in the Age of Generative AI
PRESENTER: Pia-Zoe Hahne

ABSTRACT. This research stresses the humanist conceptualisation of trust, rooted in human self-trust and communication, which classifies trust as an ethical and epistemic condition for the possibility of cooperative life. Considering artificial intelligence, we assess that AI does not meet the criteria to qualify for trust. From a digital humanist perspective, trust in artificial intelligence lacks the foundational communicative and ethical grounding that characterises humanistic trust.

14:30
Start Using Justifications When Explaining AI Systems to Decision Subjects

ABSTRACT. Every AI system that makes decisions about people has stakeholders who are affected by its outcomes. These stakeholders, whom we call decision subjects, have a right to understand how their outcome was produced and to challenge it. Explanations should support this process by making the algorithmic system transparent and creating an understanding of its inner workings. However, we argue that while current explanation approaches focus on descriptive explanations, decision subjects also require normative explanations or justifications. In this position paper, we advocate for justifications as a key component in explanation approaches for decision subjects and make three claims to this end, namely that justifications i) fulfill decision subjects' information needs, ii) shape their intent to accept or contest decisions, and iii) encourage accountability considerations throughout the system's lifecycle. We propose four guiding principles for the design of justifications, provide two design examples, and close with directions for future work. With this paper, we aim to provoke thoughts on the role, value, and design of normative information in explainable AI for decision subjects.

14:40
A two-axis framework to map reasons for neurotechnology use

ABSTRACT. Neurotechnologies, tools for recording, analyzing, and manipulating brain activity, are increasingly used for enhancement beyond their traditional roles in diagnosis and rehabilitation, introducing new challenges and risks. To navigate these ethical complexities, we propose a two-axis framework: “recovery” and “discovery”. Recovery refers to rehabilitative applications, while discovery pertains to enhancement-related uses. These concepts are interconnected, and their overlap can create an ethical “gray zone”. Our framework aids in analyzing neurotechnology use at individual, social, and cultural levels, helping to answer the “what for, how, and why” questions. We also address “neuroenchantment” (the persuasive influence of these technologies), which can distort beliefs and impair critical thinking, leading to poor risk assessment. This framework helps identify such dangers, offering insights for medical, therapeutic, and regulatory strategies.

15:30-17:00 Session 7: LONG PAPERS III
15:30
Narrated future: How narratives shape our digital present
PRESENTER: Betina Aumair

ABSTRACT. This article explores how competing narratives shape our understanding and governance of digitalisation, focusing on the ideological opposition between Cyberlibertarianism and Digital Humanism. While technological and economic frameworks often dominate public discourse, the underlying narratives that inform these perspectives exert significant influence on policy, societal norms, and democratic possibilities. Cyberlibertarianism promotes a vision of individual freedom rooted in technological autonomy and market deregulation, often marginalising ethical concerns and reinforcing corporate power. In contrast, Digital Humanism offers a counter-narrative that centres human dignity, democratic participation, and social responsibility in the design and application of digital technologies. The article examines these narratives as ideologically charged patterns of interpretation that structure what is perceived as technologically inevitable or politically possible. It critiques the discursive framing of Cyberlibertarianism, particularly its depoliticisation of democracy and appropriation of emancipatory language, which masks power imbalances and limits public deliberation. Digital Humanism is presented as a necessary ideological intervention that reclaims digital spaces as culturally, ethically, and politically negotiable. Education emerges as a central arena for cultivating critical awareness of these narratives. Beyond digital literacy, the article calls for an education that fosters ethical reflection, democratic agency, and narrative sensitivity. Through such an approach, individuals can be empowered to critically engage with technological systems and participate meaningfully in shaping digital futures. Ultimately, the article argues that reflecting on and contesting dominant digital narratives is essential to safeguarding democracy and enabling collective self-determination in an increasingly digital world.

15:50
Why Digital Humanism Needs a Social Psychology – and How You Can Use Digital Data to Study Social Identities in Socio-Technical Systems

ABSTRACT. Digital Humanism aspires to align technological innovation with human values, yet its psychological underpinnings remain predominantly individualistic, positioning the human as a “flawed” agent within socio-technical systems. This paper proposes a shift towards group-level social psychological dynamics as a foundation for designing socially responsive socio-technical systems. Building on the Social Identity Approach (SIA), which integrates Social Identity Theory and Self-Categorisation Theory, we argue that human behaviour in complex socio-technical systems is shaped by dynamic group memberships and context-dependent identity processes. We demonstrate how digital traces, such as language, sensor data, and interactional patterns, can serve as behavioural proxies for identifying and analysing such identity processes. Through interdisciplinary research, we present applications in system design, safety management, privacy protection, and ethical evaluation to show how identity-aware computational models and frameworks operationalise SIA to enhance inclusivity, resilience, and ethical responsiveness in socio-technical systems. Embedding social identity dynamics into the design of emerging socio-technical systems offers transformative potential for advancing the normative goals of Digital Humanism.

16:10
TiBaLLi: Internet Inclusion Through Artificial Intelligence

ABSTRACT. In recent years, Artificial Intelligence has aided the development of many solutions around the world in numerous fields, and it has become imperative to ask how advanced AI methods such as Machine Learning and Natural Language Processing can be reconstructed to make the Internet practically more inclusive for communities in low-resource environments in the Global South. This paper outlines the [Anon] project in Ghana, which utilizes a Participatory Action Research approach to build a local voice-based Automatic Speech Recognition system that provides domain-focused web-based information to local communities in Dagbani (a local Ghanaian language). We describe the methodology and how it utilized insights from community engagement to build an inclusive system. We also consider the broader implications of this design process for the Web and AI in the context of decolonization of the Internet.

16:20
Designing Deliberative Digital Communication Platforms

ABSTRACT. Despite the enormous potential of digital communication platforms (DCPs) to disseminate information and facilitate deliberative public discourse without interference from traditional gatekeepers, they are subject to widespread criticism due to problems such as misinformation and societal polarization. While this criticism is supported by numerous empirical studies, current research lacks an alternative positive vision that integrates digital communication platforms and normative ideals for public discourse. To this end, we develop design knowledge for deliberative DCPs according to the ideals of Habermas' theory of communicative action (TCA). Specifically, we develop TCA-based meta-requirements and utilize network gatekeeping theory to develop suitable design principles for their implementation. Thereby, we contribute design knowledge of a normatively grounded DCP that can assist researchers, system designers, and policy makers seeking a foundation for the conception, development, and regulation of DCPs.

16:30
Readiness-Centered AI in Practice: Findings from a Pilot Chatbot for Digital Skilling of Older Adults in Low-Readiness Contexts

ABSTRACT. This paper presents findings from an initial deployment of a readiness-centered, AI-driven chatbot designed to support digital skilling amongst older adults in low-resource settings. Drawing on Human-Centered AI (HCAI) principles and adapting them through a readiness lens, the tool integrated multilingual support, voice-based interactions, and age-sensitive design. A cross-sectional study was conducted in urban and rural Kenyan counties, involving 388 participants, using observational methods and semi-structured interviews to document user interactions. Thematic analysis revealed key barriers to adoption, including emotional discomfort, language-related confusion, usability breakdowns, and cultural perceptions. These findings demonstrate that while ethical design is necessary, it is insufficient when foundational precursors like digital readiness, cognitive diversity, and socio-cultural beliefs are not addressed. This study contributes by identifying critical gaps within the current HCAI framework and proposing a readiness-centered reframing that reorients the design of AI systems around users’ actual capacities, cultural norms, and infrastructural realities. It introduces digital readiness assessment, cognitive scaffolding, and cultural usability as essential design pillars for AI systems that are not only ethical but also inclusive, usable, and effective in low-resource contexts.

16:40
Climate Disasters and Risks in Online Expressions in South Africa

ABSTRACT. Digital platforms such as X (formerly Twitter) increasingly serve as arenas for civic expression during climate-related disasters. However, studies of digital-governance critique around climate crises in South Africa are limited, even though the country has a volatile online-offline dynamic in which such expressions carry inherent risks. This paper responds by analysing X conversations surrounding the 2022 KwaZulu-Natal (KZN) floods. Using a computational social science approach, it examines tweets and user accounts involved in propagating high-risk content and situates these dynamics within the risk and digital performativity framework. The study reveals long-standing socio-political tensions articulated through contested boundaries of digital freedom, particularly through xenophobic rhetoric, racialised critique of political actors, and antagonism towards corporate institutions. The paper draws on broader comparative cases to contextualise these risks and illustrate the potentially harmful dimensions of digital freedoms during crises. It reveals how the X platform enables civic participation but also backlash and polarisation. These dynamics expose the social vulnerabilities embedded in digital expression, given South Africa’s volatile online-offline landscape. The paper also calls for further studies of both independent and networked user behaviours to strengthen climate risk communication. It contributes to the digital humanities by offering a grounded account of how publics contest authority, assert rights, and navigate vulnerability during crisis-mediated online engagement.

17:15-18:15 Session 8: Keynote 2: Gry Hasselbalch, "Human Power - A Politics for the AI Machine Age"

Human Power - A Politics for the AI Machine Age

The rapid and tumultuous introduction of AI into our everyday lives has triggered a self-exploratory public debate about what it means to be human. What are our human potentials, talents, and powers – what, essentially, is our place in the modern world? Are we nothing but outdated machines in dire need of a technological fix?

The keynote is based on the book Human Power - Seven Traits for the Politics of the AI Machine Age and reflects on the shifting power dynamics between humans and the AI-powered technologies and industrial complexes increasingly shaping our world. Exploring the distinctiveness of human power, it argues for a new foundation for the politics needed in the AI Machine Age.

Based on: Gry Hasselbalch, "Human Power - Seven Traits for the Politics of the AI Machine Age", © CRC Press