STS NL 2026: STS NL CONFERENCE 2026
PROGRAM FOR THURSDAY, APRIL 16TH


09:00-10:30 Session 11A: T26: What If We Talk Politics a Little More?
09:00
Democratic futuring: an inquiry in Nederland Verbeeld(t)

ABSTRACT. As the Netherlands confronts climate adaptation, housing insecurity, and rural transformation, democratic challenges increasingly emerge around how futures are imagined and governed. This paper addresses what we conceptualize as an imagination gap: a growing distance between policymakers and citizens concerning whose visions of the future matter. Building on STS scholarship on sociotechnical imaginaries and recent work on “captured futures” (Hajer & Oomen, 2025), the paper argues that dominant dramaturgies of governance tend to stabilize particular futures while foreclosing others, turning futuring into a technocratic rather than a political and democratic practice.

Here, we develop the concept of democratic futuring to understand how futures can be organized as objects of collective inquiry rather than pre-scripted through arrangements that determine whose imagination matters in advance. Democratic futuring foregrounds the provisional character of institutions and sustains spaces where multiple, competing imaginaries can circulate, confront one another, and remain contestable. Through agonistic engagement, it cultivates the productive tensions that arise when futures are treated as political matters, keeping open whose imagination shapes collective horizons.

Empirically, the inquiry takes place in Nederland Verbeeld(t), a three-phase intervention (2025–2027) that creates exhibitions, workshops, and regional encounters, and culminates in a public manifestation around societal and spatial futures. These situations are conceptualised as soft spaces outside formal policy arenas that function as "democratic situations" (Birkbak & Papazu, 2022): experimental environments in which futures are materially and narratively made investigable together through collective engagement. Through form-giving practices such as spatial/social design, narrative formats, and dramaturgical choices, these soft spaces co-produce what counts as valid knowledge about the future, expanding epistemic authority to include affective and embodied experience. By tracing how imaginaries circulate between experimental and institutional settings, the paper shows how democratic futuring emerges through sustained material experimentation that keeps futures politically open by reconfiguring whose imagination counts.

09:22
Still time for a dress rehearsal? Geopolitical ecologies in a changing world order

ABSTRACT. The world is experiencing a rapidly fragmenting world order, rousing Europe from its geopolitical slumber (WRR 2025). No longer can Europe pretend that its values, prosperity and security are easily and comfortably aligned. While the urgency of this message is widely felt in both public and policy debates, the question of ‘what’s next’ is not answered as easily. Moreover, there is a risk that this return of the term ‘geopolitics’ naturalizes particular power dynamics and hides salient ecological relations. These issues call for seeds of ‘good’ geopolitical ecologies to be sown (cf. Lawhon et al. 2021; Graddy-Lovelace & Ranganathan 2024). In this paper-in-development, we present work from an ongoing project at PBL, the Dutch national institute for environmental policy research, that seeks to do just this. The project itself is primarily oriented at policymakers at Dutch ministries. It takes as a starting point that even though modes of global environmental governance are in flux, policy actors have agency in shaping what’s next. The project’s aim is to develop forms of rehearsing such agency, so as to empower policy actors both in terms of the expertise they hold around such modes of governance – after all, they are their daily practice – and in terms of their ability to make normative arguments about which modes are more desirable than others. In our paper, we aim to take such rehearsals to also be rehearsals of forms of democracy-in-practice, acknowledging that bureaucracies are part of how democracy is enacted. We are particularly interested in whether and how such rehearsals are facilitated by expertise that politicizes modes of environmental governance while simultaneously trying to come to terms with ecological change as a political actor in its own right (cf. Maas et al. forthcoming).

09:44
The global digipolis and a return to self-preservation: on Simmel, the home and the city

ABSTRACT. The “global digipolis” refers to digital culture and information technologies as integral to the modern city. It draws on the late 19th-century sociologist Georg Simmel’s view of “metropolitan individuality”.

Discussions of “smart cities” and “smart housing” often include appeals to create policies that are more responsive to the lives of their inhabitants. This is often approached critically, for example, in the literature on “platform urbanization” that looks at the transformation of urban space from the standpoint of capital accumulation (following Marx) and surveillance (following Foucault). There is a more positive spin in a closely related STS literature that affirms the need for new approaches and related epistemological strategies. Again, priority goes to the direct involvement of citizens as users of new technologies, experimental local practitioners, and active (co-)creators of participatory spaces.

Simmel, however, pointed to a “blasé attitude”, which frames indifference and apathy as a necessity, a matter of survival. It is not simply something to overcome. Drawing on political and legal theory, he framed the attitude as resistance to “the social-technological mechanism” and contrasted it with the individualism of the 19th century.

The paper draws on Simmel while considering real-world examples associated with the “Emphatic housing” project (Radboud/Maastricht) and a research collaboration involving Dublin City Council’s Smart City Unit and ADAPT Research Ireland. These examples foreground how (legal) concepts like privacy, autonomy, and fundamental rights are under pressure. With Simmel’s help, the focus can be placed on their “minimal morality”, maintained as a function of the intensely negotiated boundaries of inner and outer, of personhood and property, and of the self that is seeking its survival in a global digipolis.

10:06
The Machinery of Participation: How the Neighbourhood Approach formats local heat transitions in the Netherlands

ABSTRACT. In the Netherlands, decarbonisation has moved from abstract national targets to the "neighbourhood approach" (Wijkgerichte aanpak). Municipalities must now make the 2050 gas-free housing target "land" neighbourhood by neighbourhood, behind individual front doors. This implementation strategy moves the "front line" of the heat transition into local communities, making the neighbourhood the primary site for a democratic question: how do abstract system transitions become politically articulable — matters that citizens can understand, contest, and act upon?

Drawing on extensive participatory action research into situated interaction processes between citizens, experts, and policymakers in the pilot transition neighbourhoods of Maastricht and Heerlen, I argue that participation is a constitutive political process. The question is not whether participation is "good" or "desirable", but what participation does in making the energy transition land. More specifically, I am working towards a heuristic to understand this machinery through three dimensions: 1) Delegation: the structuring of responsibility and agency, determining who is invited to think along and who is merely expected to follow; 2) Legitimation: the process of framing problems and solutions, where technical expertise often overrides experiential knowledge, defining what "counts" as a valid concern; 3) Synchronisation: the temporal politics of aligning abstract policy targets with local infrastructural works, daily routines, and household investment cycles.

Through these dimensions, I show how the heat transition is produced through fragile, contested processes of gradual concretisation. The neighbourhood approach "formats" the opportunity for political engagement, structuring who can participate, what forms of knowledge matter, and who bears the costs when transitions hit home. Understanding this machinery is essential for imagining sustainability transitions that do not treat households as passive implementation sites, but as political actors with legitimate grounds for contestation and concern.

09:00-10:30 Session 11B: T32: Histories in Times of Global Shifts
09:00
Preparing for urban crises: Public shelters as ‘infrastructures of preparedness’ in times of turmoil

ABSTRACT. Current geopolitical tensions, the threat of (hybrid) warfare in Europe, and climate-related disasters have spawned discussion about the need to become better prepared for compound crises. In these discussions, the revitalization of public bunkers and shelters as critical infrastructures for the protection of citizens has emerged as a potential strategy. Focusing on both the Cold War period and the 2020s, this paper studies how the uses and meanings of shelters in Dutch cities have changed over time (as objects for civil protection, cultural heritage, obsolete objects, (military) strategic objects, repurposed objects, etc.) and which crisis imaginaries they embody. Conceptually, the paper takes an infrastructural lens, focusing on the debate about the ‘de-infrastructuring’ and ‘re-infrastructuring’ of shelters and the role of obduracy in this process of transformation. Based on an analysis of archival documents on Dutch shelter design and use in the Cold War era, public safety reports and campaigns, newspaper articles, and Dutch and EU policy reports on civil preparedness, it addresses questions such as: What are the underlying discourses and assumptions regarding the nature, scope and potential impacts of the crises for which public bunkers and shelters were considered as (potential) solutions? How do past governmental decisions regarding bunker and shelter maintenance, investment, technological design, or austerity policies shape the current debate on their role in crisis preparedness in cities? By addressing these questions, this paper hopes to explore how ‘the past’ can be made usable for today’s societal challenges.

09:22
Foreground and background relationality: addressing chemical design through technoscience and value regimes

ABSTRACT. We are all chemically polluted. This is a serious and urgent matter. Chemical pollution is not new; concerns have been present since at least the early 1960s. Yet despite decades of environmental debate, the situation has worsened. This paper reframes pollution not as a failure of molecules but as a crisis of historical and material imagination. Chemicals gain meaning and power through technoscientific apparatuses, value regimes, and infrastructures that connect past industrial formations with present interventions. We demonstrate that dominant responses, such as bans, restrictions, and redesign strategies, locate agency at the molecular level, moralise the chemist, and leave decision-making, mechanism exchange, and value creation intact.

Adopting a relational and historical STS perspective, we ask how chemicals become socially relevant across geographies and temporalities. Three moments anchor our analysis. First, the rise of the corporate research laboratory connected chemistry to stable regimes of value through patenting, managerial hierarchies, and industrial rationality. Second, the emergence of synthetic polymers, exemplified by Bakelite, showed that material success hinged on strategic connectivity to modernity’s industry rather than intrinsic properties. Third, the consolidation of oil as the hegemonic feedstock established a technological zone that aligned pipelines, refineries, standards, capital, and governance, directing chemical design toward value extraction and cementing petrochemical power.

We conclude by contrasting polypropylene with bio-polypropylene: the latter’s marginality stems less from technical limits than from weak social and infrastructural connectivity to prevailing regimes. For transformative change, “better” molecules are insufficient; what is required is contesting and reconfiguring the systems through which chemicals come into context, understanding the wider political-economic reality under which molecules gain value, circulate, and shape worlds.

09:44
Governing Crisis in Private: Environmental Imaginaries, Sustainable Development, and Alternative Sites of Global Governance since the 1970s

ABSTRACT. This paper reframes established histories of sustainable development by shifting attention from formal institutions and policy frameworks to the private and alternative settings in which environmental and energy imaginaries were shaped during the global crises of the 1970s. It examines how postwar environmental and energy crises, shaped by decolonization, oil shocks, and growing concern over ecological limits, produced imaginaries that sought to reconcile environmental protection with continued economic growth. Rather than treating energy as a purely technical or geopolitical issue, the paper approaches it as a moral and epistemic site in which questions of development, planetary limits, and global responsibility were negotiated. Focusing on the private networks of Maurice Strong and Hanne Marstrand, the paper moves beyond official arenas such as UN conferences and international organizations to examine informal gatherings in private homes and experimental settings in Geneva, Nairobi, the Canadian Rockies, and Crestone, Colorado. Drawing on archival correspondence, personal papers, and institutional records, it shows how these alternative sites that have been largely neglected in the historiography and literature on crisis governance functioned as spaces of experimentation where industrial philanthropy, cybernetics, New Age spiritualism, and development thinking intersected. Using the Strongs’ model village of Crestone as a microhistorical lens, the paper reconstructs how histories of development and future energy possibilities were mobilized together to articulate flexible visions of planetary governance. While the ecological crisis was acknowledged, environmental limits were reframed as adaptable through technology, corporate participation, and moral reform. 
By foregrounding private governance networks and informal knowledge-making practices, the paper challenges linear, institution-centered narratives of sustainable development and highlights how crisis imaginaries forged outside formal policy arenas shaped the evolution of environmental and energy imaginaries, extending to oil-backed techno-utopian or post-humanist environmental visions that resemble those of present-day elites.

10:06
Relational preservation: conceptualising our digital legacy amidst an ecological crisis

ABSTRACT. Big Tech’s discourse taps into both the fear of forgetting and the promise of control over our personal digital past, resulting in save-by-default strategies that push planetary limits. Academic literature echoes these sentiments, frequently assuming that there is a common desire to preserve every memory. This talk aims to complicate this perspective by exploring the tension between the long-term valuation of our personal digital heritage and its long-term ecological effects. What are the consequences of these present valuation practices for future archivists, historians, and storytellers more broadly?

Personal digital heritage consists of personal, born-digital, and digitised material that is both consciously and unconsciously preserved. This presentation is informed by one of the result chapters of my dissertation and draws on long-term assemblage ethnography in the Netherlands. It involves two distinct contexts: (1) community centre visitors experiencing (temporary) socioeconomic difficulties, and (2) industry and public service professionals with whom I explored digital legacy, death, and heritage across different scales, sites, and practices.

With this talk, I hope to show how lived experiences with personal digital heritage practices in the present could impact historical analysis in the future. My observations point to different, more relational ways of preservation that challenge capitalist accumulation narratives. I witnessed the refusal of rationality, effect, and efficiency, and instead felt space for emotion, affect, and sufficiency. This paper aims to offer a different starting point for a heritage conceptualisation – one that reflects on how colonial and patriarchal views on technology permeate conversations about both the ecological impact of these technologies and the proposed solutions to them.

09:00-10:30 Session 11C: OT: Infrastructures of Knowing
09:00
Gender as an Epistemological Lens in Biodiversity Data Production: Rethinking Open-Air Laboratories

ABSTRACT. Recent international efforts to enhance the validity and reproducibility of biodiversity data—such as Nature Portfolio’s 2022 guidelines and the SAGER framework—recognize that ignoring sex and gender considerations can weaken scientific findings. However, in biodiversity sciences, gender has mostly been viewed as a sociological or demographic variable rather than an epistemological one. This presentation examines how incorporating gender as an epistemological perspective can change the way biodiversity data is produced, validated, and interpreted in open-air laboratories, with implications for both epistemic justice and environmental governance. Grounded in feminist epistemologies (Haraway, Barad, Schiebinger, Harding) and Science and Technology Studies (STS) of environmental knowledge (Turnhout, Bowker), this paper presents two case studies: the Patagonia UC Research Station and Escudero Antarctic Base. These sites, situated in sub-Antarctic and Antarctic ecosystems, function as “open-air laboratories”: environments where the material, institutional, and climatic contingencies of research are inseparable from its epistemic outcomes. Methodologically, the study combines ethnographic observation with scientific fieldwork, as well as reflexive documentation of sampling practices, taxonomic classifications, and metadata recording. Through iterative “knowledge cycles,” interdisciplinary teams—including microbiologists, ecologists, philosophers of science, and gender theorists—co-analyze how decisions made in the field (e.g., which specimens are considered representative, how metadata categories are filled out, what constitutes a “complete” dataset) reflect gendered assumptions about neutrality, scale, and value. Preliminary findings show that including gender as an epistemological category reveals subtle but consistent biases in field protocols and data systems.
For example, classifications based on binary sexual dimorphism in fauna often conceal intersex or sex-changing species, while team hierarchies and data recording practices reinforce masculine ideas of authority and detachment. Conversely, using reflexive, gender-aware observation criteria enhances data transparency, traceability, and representativeness—dimensions that are increasingly important for global biodiversity databases. Conceptually, this work redefines biodiversity not as a neutral object of measurement but as a relational epistemic practice, co-created by human and non-human actors, technologies, and institutional norms. It aligns with feminist and posthuman approaches that emphasize situated knowledge and the interconnectedness of matter, meaning, and method. Beyond its technical contribution, the project aims to promote epistemic justice by questioning whose categories of life, difference, and value underpin global biodiversity infrastructures. Ultimately, this paper argues that including gender as an epistemological lens is not an optional addition to biodiversity science but a necessary approach for developing more robust, inclusive, and context-aware environmental knowledge. In doing so, it repositions open-air laboratories as sites where the politics of data, gender, and ecology converge—offering both empirical insights and a conceptual pathway toward more equitable biodiversity management.

09:18
Uncovering hidden assumptions in Digital Twins

ABSTRACT. Policy makers increasingly use algorithmic tools in their decision-making processes. It is argued that these tools hold a myriad of positive outcomes, such as enhanced efficiency, cost-effectiveness, improved predictive accuracy, and, overall, the surpassing of human decision-making capabilities. The assumption that algorithms generate objective, value-free outcomes, together with the promise that “more granular data equals more accurate predictions”, keeps feeding digitization processes. However, seeing algorithms as tools holds multiple ontological beliefs: seeing digital twins as “accurate”, “one-to-one” “representations” that humans can control implies a dependent relationship between what is being simulated and what is being acted out in the physical world. In this relationship, for the simulation to be an accurate description of the physical is to assume that physical reality can be reduced to digits. Rarely are these assumptions verbalized or even researched. This lack of conceptualization is equally dominant in current narratives of ‘smart city’ design and ‘digital twins as a service’. What is meant by “smart” and “as a service”, and what precisely is the problem in current practices that they try to solve? The aim of this ethnographic study is to observe the lifecycle of a mathematical model implemented in the Digital Twin of a Dutch municipality, in order to understand the role that modelling algorithms play in connecting the physical to the social domain in policy making. Using a processual lens, I try to capture how data transform into actionable insights for policy-making, and how this differs from actual (expert) knowledge. Questions of particular interest: first, what is reality represented as in these modelling algorithms? Second, what is the role of meaning in this process of reduction? Lastly, in addition to researching decision-makers’ understanding of accuracy, I am interested in their validating practices as opposed to their evaluating practices.

09:36
Quality of science journalism explored in Japan and the Netherlands in times of uncertainty

ABSTRACT. Worldwide, it is clear from research and practice that science and society are intertwined in complex and multiple ways. In this complex context, access to high-quality information can be considered vital, since the way citizens encounter scientific information matters (e.g. Dunwoody, 2021). Traditionally, science journalists’ role is to deliver scientific information to a wider audience. Recent years, however, have seen several changes, or shifts, in the field of science journalism, as reported for example by Dijkstra et al. (2024). Changes in media ecosystems and landscapes, and the growing influence of mis- and disinformation, pose challenges to reliable news production by science journalists, while scientific knowledge itself is increasingly disputed. For topics on the boundaries between science and society, such as vaccination, climate change and artificial intelligence, it is highly important to better understand these developments and the challenges they bring, as well as the positions of the key actors, in order to enable high-quality science journalism. Understanding these shifts, which take place around the world, is increasingly important.

In our study, we aimed to better understand the position of and developments in science journalism by comparing two understudied countries, Japan and the Netherlands. Using a mixed methodology consisting of a Q-sort study followed up by focus group discussions, we gained in-depth insights into journalists’ behaviours and views on quality reporting and the shifts they must deal with. Q-sort methodology combines qualitative and quantitative elements, allowing consideration of the breadth of the topic at hand (Brown, 1980; Watts & Stenner, 2012). Ranking statements according to participants’ personal views allowed for identifying shared views as well as outlining variety (Watts & Stenner, 2012). Our findings will be presented along the major identified changes for science journalism, the changing media landscape, and the process of medialization of science.

References

Brown, S. R. (1980). Political Subjectivity: Applications of Q Methodology in Political Science. Yale University Press.

Dijkstra, A. M., de Jong, A., & Boscolo, M. (2024). Quality of science journalism in the age of Artificial Intelligence explored with a mixed methodology. PLOS ONE, 19(6), e0303367. https://doi.org/10.1371/journal.pone.0303367

Dunwoody, S. (2021). Science journalism: Prospects in the digital age. In M. Bucchi & B. Trench (Eds.), Routledge Handbook of Public Communication of Science and Technology (3rd ed., pp. 14–32). Routledge. https://doi.org/10.4324/9781003039242-2

Watts, S., & Stenner, P. (2012). Doing Q Methodological Research: Theory, Method and Interpretation. SAGE Publications Ltd. https://doi.org/10.4135/9781446251911

09:54
From Open Science to Platform Enclosure? Sociotechnical Mechanisms of Closing in Decentralized Science (DeSci)

ABSTRACT. Decentralized science (DeSci) has emerged as a digitally native organizational field that positions itself as an alternative to closed scientific data platforms, commercial publishing, and opaque clinical research infrastructures. Early DeSci initiatives explicitly rejected Big Tech involvement, promoted transparent blockchain-based governance, and mobilized citizen science as a means to resist rent-seeking and enclosure in scholarly communication. Yet by early 2026, many DeSci organizations display a striking transformation: governance has shifted from open, on-chain participation to closed working groups with appointed membership; scientific platforms have been fragmented into multiple startups across jurisdictions; and communities increasingly function as sources of visibility and legitimacy rather than decision-making authority. This paper asks why these shifts occur and which sociotechnical mechanisms drive the re-emergence of enclosure and centralized control within a field explicitly committed to openness. Theoretically, the study conceptualizes DeSci as a sociotechnical organizational field by combining strategic action field theory with insights from actor-network theory. This integrated approach allows analysis of how governance tools, infrastructures, and spatial arrangements actively structure power, legitimacy, and value capture, rather than merely supporting organizational coordination. Empirically, the paper draws on a comparative qualitative study of four decentralized autonomous organizations (DAOs), using interviews, participant observation at online and offline events, organizational documents, platform materials, and social media data. The analysis shows how one specific network of funding infrastructures, governance tools, platform architectures, founders, developers and spatial arrangements became dominant in the emerging DeSci field. 
This dominance emerged through processes of translation that produced technological path dependencies and stabilized authority via infrastructural reliance rather than formal hierarchy. As a result, practices initially framed as decentralized and open increasingly reproduced centralized control, illustrating how sociotechnical arrangements can stabilize power and limit alternatives even within a movement that started from open science.

09:00-10:30 Session 11D: T24: Making Futures Tangible? Sociomaterial Practices of Prediction
09:00
Model realities: a case study on model use and governance in Dutch water management

ABSTRACT. Dutch river management is characterized by its top-down and flood-risk-centered culture, and relies heavily on computer models to facilitate anticipatory governance. Model-informed flood-risk management has effectively gone uncontested in the Netherlands, despite the substantial impacts of decisions derived from it; while actual river widening or dike heightening implementations have been challenged, the broader narrative remains largely untouched. Other political stakes and aims are only allowed to claim space when their goals manage to exist within the logic and metrics of the flood-risk paradigm. The computer models underpinning flood-risk management reinforce this dynamic: they have a legal and authoritative status and are perceived as objective, yet, given their design and implementation, they cannot capture alternative knowledges or perspectives. As such, they co-shape the socio-ecological systems that they are argued to neutrally represent, thereby perpetuating the dominant knowledge politics of the flood-risk paradigm.

In order to research the epistemological role of flood-risk models in water management, I employ Actor-Network Theory methodology and ask: how do these anticipatory hydrological models come to be seen as legitimate, and what role do they play in shaping riverine landscapes? Empirically, I use ethnographic methods to study the dike strengthening project between Olst and Zwolle, conducted by IJsselwerken. In this way, I will elucidate naturalized assumptions, contestations, and the reinforcing role of computer models in understanding flood risk.

09:30
The Making of Neuro-Futures – BCI-figurations in Science-Fiction film

ABSTRACT. For half a century, research on brain-computer interfaces (BCIs) has inspired Science-Fiction filmmakers to explore questions of human nature, embodiment, and technological possibility. SF film renders these speculative artefacts as tangible expectations and establishes a horizon of social and technological plausibility that guides our imagination of the future. What remains overlooked is that these images are themselves constructed, following techniques such as montage and continuity to create affective credibility and the illusion of experientiality.

Drawing on the concept of diegetic prototypes and building on a hermeneutic approach to technology assessment, the paper analyses the socio-technical imaginaries surrounding BCIs in SF cinema through which contemporary hopes, anxieties, and debates on emerging technologies take narrative and aesthetic form. Across a corpus of films from different eras, the analysis identifies three recurring tropes: the BCI as media apparatus, enabling cognitive immersion and sensory replay; the BCI as transhuman interconnection, merging the human mind with the machine; and the neurointerface as a path to immortality, where mind-uploading promises life beyond biological limits. These tropes operate as media-technical configurations of prediction that reorganise bodies, data, and agency while foregrounding larger ethical issues of ownership, commodification, control, and human–machine integration.

By combining an STS perspective with a media theoretical approach, the paper highlights how predictive artefacts and cinematic infrastructures shape and stabilise imaginaries of futures and technological development.

10:00
Knitting Work of Prediction: The Praxeological Grounds of Forecasting

ABSTRACT. Drawing on fieldwork at an AI company, this paper explores the mundane, situated, and methodical work of computation involved in the "production of prediction" (Mackenzie, 2015). My method presents an extension of the established ways of doing workplace ethnography: rather than simply shadowing practitioners, I worked alongside data scientists as a colleague to examine the technical activities constituting their everyday work. I focus on how these practitioners use computer simulation to forecast inventory demand for their e-commerce customer, creating a "digital twin" of their supply chain to test hypothetical strategies before applying them in the real world. I examine the team meetings between the AI company and its customer as “perspicuous settings” (Garfinkel, 2002) where predictive futures are made tangible. Here, data scientists present forecasts as unassailable numbers, yet the validity of the model is routinely contested. In the resulting "oscillation" (Coopmans, 2018) between trusting the forecast and questioning its reliability, I show how the validity of prediction rests on a display of statistical mastery: recovering evidence from data tables to justify findings while obscuring the "trade secrets" of the shop floor. I characterise these practices as "knitting work" (Saha, 2025): the continuous, practical activity of weaving together disparate elements—mathematics, models, code, and worldly data—into a coherent, functional, and accountable whole. I demonstrate how this knitting work places commonsense understandings of business logistics in active dialogue with the technical work of making uncertain futures computationally operational. Finally, I argue that the "intelligence" of the AI system is not located in the algorithm alone but is a product of the extensive division of labour that sustains the sociotechnical infrastructure. 
It is this invisible work, spanning engineering, management, and data science, that sustains the intelligent appearance of the system, transforming messy, ad hoc realities into actionable outcomes.

09:00-10:30 Session 11E: T10: Resisting Rent-Seeking: Nonprofit Platforms and the Politics of Open Science in Scholarly Communication and Evaluation
09:00
Why is change in scholarly communication so hard to imagine? Findings from a stakeholder consultation for the cOAlition S proposal ‘Towards Responsible Publishing’

ABSTRACT. We analyse focus group discussions and free-text survey responses from a multi-stakeholder consultation conducted following the October 2023 publication of the proposal Towards Responsible Publishing by cOAlition S. The proposal advocates for a systemic reform of scholarly communication, aiming to reduce barriers to knowledge dissemination, encourage the early sharing of research outputs through preprints, and shift peer review toward an open, post-publication model. Our analysis focuses on how different stakeholder groups—including researchers, scholarly infrastructure providers, academic institutions, funders, and publishers—perceive the obstacles to the large-scale, coordinated transformation envisioned in the proposal. We interpret these accounts as articulations of collective action problems that arise from the deep entrenchment of actors within existing academic reward structures and established commercial revenue models. These conditions make transitions toward a more open and economically sustainable system of scholarly communication difficult, even in contexts where there is broad recognition of the need for change. By comparing perspectives across stakeholder groups, our approach highlights areas of alignment as well as points of tension and conflict. It also underscores the performative nature of discourse surrounding collective action problems in scholarly communication: in articulating barriers to reform, participants simultaneously construct, reinforce, or contest their own roles, responsibilities, and capacities within the system. These discursive dynamics, in turn, shape the possibilities for coordinated action and meaningful systemic change.

09:30
Advancing Open Research Information: Challenges and Opportunities in the Era of Platform Capitalism

ABSTRACT. Platform capitalism is an evolved form of capitalism in which the concept of wealth shifts—from the production of goods to the control and management of goods. Due to society's increasing interaction with the internet, the process of platformization—the expanding reach of platform-based infrastructures, business models, and policies across multiple facets of society—continues to accelerate. In the context of research, platform capitalism can be explored through the domain of research information (RI). RI is generally used to gain insights into the current research landscape, such as research impact and current research trends. These insights inform decision-making in science and guide its future direction. Nevertheless, RI has mostly been locked inside pay-to-access databases such as Scopus and Web of Science (WoS). These proprietary databases operate with low transparency, making research evaluation based on proprietary RI non-reproducible and limiting the ability to identify and address errors, biases, and gaps. Despite the availability of open research information (ORI) infrastructures such as OpenAlex and OpenAIRE, the emergence of an ORI movement exemplified by the Barcelona Declaration on ORI, and the high prices of proprietary platforms, the global research community continues to rely heavily on services provided by proprietary platforms and to prioritize publication within them. This ultimately supplies more RI to these proprietary platforms, which can be extracted and turned into proprietary indicators. To explore this phenomenon, an ongoing interview study is being conducted with university staff across universities in the Netherlands, such as librarians and research information specialists.
The aim of the study is to gain insights into the current perspectives on and practices surrounding (open) research information, as well as to explore how potential financial, technical, institutional, and cultural challenges associated with adopting ORI within a university setting can be understood through the lens of platform capitalism.

10:00
Reviving enlightenment values in post-big-tech knowledge infrastructures?

ABSTRACT. The new digital information and communication infrastructures that became known as the Internet in the 1990s were originally the home of academics (which they of course had to share with the military). Influenced by academic norms that embody enlightenment ideas, these infrastructures were explicitly designed to enable the universal sharing of knowledge on a global scale. Today, 35 years later, we know that anti-enlightenment ideas, misinformation, conspiracy theories and anti-science sentiments are at least as easily spread through these infrastructures as their scientific counterparts.

The shift away from an academic Internet was accompanied by the massive growth of proprietary services that were provided by what would become some of the most powerful corporations of our time. Instead of pursuing enlightenment values of universalism, reason, equality, and cosmopolitanism, these new owners valued profit, engagement, and attention above all.

Academics did not go away during this shift but readily adopted the new, proprietary services, engaging with and producing "content" for Google Scholar, Twitter, and other platforms. They bought into specialised proprietary tools like EndNote and SPSS, to mention only a few. And they accepted that their text production, communication, and publication happened on platforms owned and controlled by Microsoft, Google, and major academic publishers. These proprietary tools and platforms have become infrastructural in the strong sense of no longer being optional but effectively controlling the standards that enable academic collaboration and exchange.

In this contribution, I examine whether a growing skepticism towards Internet services provided by so-called "big tech" corporations can be harnessed to support the development of knowledge infrastructures that re-engage positively with classic enlightenment values. This question is addressed through a literature study in which the relation between enlightenment ideals and various formations of socio-technical knowledge infrastructures is scrutinised. This work is rooted in STS approaches to infrastructures, what they "do", and their use, spearheaded by Star and Bowker (1999), but also in newer contributions more specifically discussing knowledge infrastructures (e.g., Hirsch and Ribes 2021). In addition, perspectives from media studies are introduced, above all Kittler's "systems of inscription" (Aufschreibesysteme). In parallel, the results of this study are experimented with in an action research project conducted together with representatives from NTNU's university library and the university's central IT services. Here we experiment with different notions of universalism, reason, openness, and freedom in relation to knowledge infrastructures, and organise local workshops with our colleagues in which alternative ways of community-led knowledge infrastructuring are presented, discussed, and experimented with.

References:

Bowker, Geoffrey C., and Susan Leigh Star. 1999. Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.

Hirsch, Shana, and David Ribes. 2021. “Innovation and Legacy in Energy Knowledge Infrastructures.” Energy Research & Social Science 80: 102218. doi:10.1016/j.erss.2021.102218.

09:00-10:30 Session 11G: T15: Health and Care Shifts in Times of AI
09:00
Awkward Alliances: How Algorithm Developers Navigate Contradictions in Healthcare

ABSTRACT. My research examines how developers of vocal biomarker technology for mental health navigate fundamental contradictions between their technological aspirations and institutional healthcare realities. Drawing on interviews with business leaders, researchers, and healthcare professionals, alongside conference observations and corporate discourse analysis, I reveal a striking tension: while developers publicly champion "objective, quantifiable measurements" and dimensional approaches to diagnosis aligning with biomedical perspectives, their technologies overwhelmingly rely on traditional categorical diagnostic frameworks like the DSM.

Through the lens of expectation work (Konrad et al., 2017) and discourse coalitions (Hajer, 1993), I demonstrate that this contradiction is deeply political. Developers are embedded in coalitions with insurers, regulators, and evidence-based medicine frameworks demanding categorical diagnoses for reimbursement and validation, despite their own incentives toward dimensional, symptom-based approaches that would expand market potential and better suit technological capabilities.

Whereas literature treats developers as autonomous shapers of technological futures (e.g. Brown & Michael, 2003; Katzenbach & Mager, 2021), I show they perform nuanced, multi-sided expectation work: discursively reconstructing "objectivity" as "practically objective," temporally postponing system change, pivoting to symptom severity measurement, integrating vocal biomarkers into other technologies, or exiting clinical markets for wellness applications. This reveals developers as simultaneously enactors and selectors of expectations (cf. Bakker et al., 2011), caught between technological possibilities and coalitional constraints while attempting to redirect coalitional values.

My analysis contributes to STS scholarship on health and AI by: (1) addressing underrepresented research on industrial actors in healthcare innovation, showing how algorithmic expectations interact with psychiatric classification debates; (2) demonstrating that discourse coalitions can accommodate contradictory expectations when politically expedient; (3) exposing the political choices made in early-stage development design and discourse. This research illuminates how AI developers balance innovation promises with institutional inertia, raising critical questions about whose expectations shape mental healthcare's future and whom expectation makers actually consider.

References:

Bakker, S., Van Lente, H., & Meeus, M. (2011). Arenas of expectations for hydrogen technologies. Technological Forecasting and Social Change, 78(1), 152–162. https://doi.org/10.1016/j.techfore.2010.09.001

Brown, N., & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(3), 3–18.

Hajer, M. A. (1993). Discourse coalitions and the institutionalization of practice: The case of acid rain in Britain. In F. Fischer & J. Forester (Eds.), The Argumentative Turn in Policy Analysis and Planning (pp. 43–76). Duke University Press.

Katzenbach, C., & Mager, A. (2021). Power and subversion in the algorithmic society. Internet Policy Review, 10(3).

Konrad, K., Van Lente, H., Groves, C., & Selin, C. (2017). Performing and governing the future in science and technology. In U. Felt, R. Fouché, C. A. Miller, & L. Smith-Doerr (Eds.), The Handbook of Science and Technology Studies (4th ed., pp. 465–493). MIT Press.

09:22
Hypothetical enrollment - An anticipatory situated method to assess the implementation of AI diagnostics in clinical settings

ABSTRACT. Despite the supposed potentialities of AI tools for medical diagnosis, their adoption is a slow and troubled process. Recent empirical studies (Carboni et al. 2023; Kusta et al. 2024; Lebovitz et al. 2022) illustrate the misalignment between the narratives and expectations about these tools and how they are integrated in clinical settings. Yet, there is a substantial lack of methods for studying the transition of AI diagnostic tools from research settings to real-world settings (Williams et al. 2024). Drawing on this emerging body of scholarship, in this paper we ask: what methods can be used to research the implementation of AI diagnostics in clinical settings? We hence propose “hypothetical enrolment” as a methodological framework to assess the organizational and epistemic challenges of adopting AI tools. We conceive of “hypothetical enrolment” as a situated, anticipatory and performative approach. It is anticipatory as it aims to provide an opportunity to appreciate the potential uses of AI diagnostic tools not yet implemented by prodding actors’ expectations and the potential interactions, alliances and transformations that could emerge. It is situated because expectations and potential alliances are mapped and analyzed against the specific disease network in which the technology is to be adopted. It is performative since it prompts actors to reflect on the possible implications of the innovation for their daily diagnostic tasks and triggers reflection about new modalities of knowledge production. We tested the validity of our method against an empirical case, the company Autism Scope (AS). AS applies machine learning models for the early detection of autism-spectrum disorder in children below two years of age. We conducted interviews with AS developers and with three neuropsychiatrists, exploring the “hypothetical enrolment” of AS in clinical settings.
Notwithstanding a generally positive attitude, our method surfaced several organizational, professional and epistemic challenges.

09:44
Aporetic Intelligence: Puzzlement as Care in AI-Augmented Neurosurgery

ABSTRACT. AI in medicine risks inaugurating an era in which we "produce more and understand less" (Messeri & Crockett 2024). Yet beyond familiar critiques of algorithmic bias and black-box opacity lies a less examined transformation: how AI reshapes the experiential core of clinical practice—the hesitations, doubts, and productive uncertainties that constitute skilled judgment. This paper develops Aporetic Intelligence as a framework for understanding what practitioners stand to lose when AI tools promise to eliminate the "cognitive friction" of complex decision-making. Drawing on ongoing ethnographic fieldwork at a Berlin neurosurgery department, I examine how surgeons navigate AI-assisted brain mapping for tumor operations. Tractography—computational modeling of white matter pathways—exemplifies the tension between automation and embodied expertise. While startups like Omniscient.ai offer fully externalized planning designed to offload "cognitive weight," leading practitioners warn that surgeons who rely on such tools forgo the apprenticeship necessary for intraoperative adaptation: the capacity to respond when the patient's brain behaves differently than the preoperative model predicted. The research combines long-term ethnographic immersion with two experimental approaches. Graphic ethnography—live sketching during surgical observations and planning sessions—captures the gestural, spatial, and improvisational dimensions of neurosurgical thinking that text-based fieldnotes struggle to render. Design experiments, developed collaboratively with creative coders and artists, produce speculative interfaces that deliberately disclose the uncertainty in the outputs, testing whether tools might amplify rather than foreclose clinical imagination. I argue that puzzlement is not an obstacle to care but partly constitutive of it. 
Aporetic Intelligence names both the situated practices through which clinicians sustain productive uncertainty and a broader challenge to the frictionless ideal embedded in contemporary AI design.

10:06
Sensing Sepsis: responsibly developing and embedding an AI-assisted diagnostic device

ABSTRACT. Sepsis is a major cause of death and adverse health outcomes worldwide. The NWO-funded SepsPIC project aims to develop a diagnostic device based on blood biomarker testing and machine learning to enable faster and more accurate diagnosis of sepsis, and specifically its various subphenotypes. This could improve survival rates and mitigate adverse health outcomes.

If successful, the device will affect clinical decision-making practices in sepsis care, thereby reconfiguring clinical work, relationships, responsibilities, and required knowledge and skills. This raises questions about how such an AI-assisted device could be responsibly embedded within these existing practices.

In this project we are interested in how the responsible development and embedding of this device can be realized. To this end, we apply Constructive Technology Assessment (CTA), by including different stakeholders in workshops, interviews and surveys during the development of the device to explore, among other things, needs, attitudes and expectations. In this presentation, the results from this research will be discussed to highlight challenges related to 1) how the AI-assisted device can be embedded in clinical decision-making practices when users do not fully understand how blood biomarker data are interpreted, 2) how responsibilities are reconfigured and what medical professionals require to be able to take responsibility and 3) how needs and expectations of medical professionals can be anticipated in the development of the device and the training of the AI component.

09:00-10:30 Session 11H: T21: Orbiting and Hovering – Critical Remote Sensing and Volumetric Regimes
09:00
More Accurate, Less Meaningful? Reflexive Remote Sensing

ABSTRACT. Over the past decades, remote sensing has undergone a remarkable technological acceleration. Advances in sensor resolution, data availability, cloud computing, and algorithmic sophistication—most recently through machine learning and artificial intelligence—have substantially increased the technical accuracy of spatial data products. At the same time, however, critical questions arise as to whether this growing precision is accompanied by a commensurate increase in geographical meaning, explanatory power, or societal relevance. This paper revisits and synthesizes a series of critical interventions that interrogate the epistemological foundations of contemporary remote sensing under the guiding question: does more accuracy necessarily result in more meaningful knowledge? I argue that dominant evaluation practices in remote sensing—centered on accuracy metrics, validation scores, and model performance benchmarks—risk narrowing the epistemic horizon of the field. In many applications, complex socio-ecological processes are reduced to technically tractable proxies, while assumptions about scale, categorization, causality, and normativity remain implicit and largely unscrutinized. As a result, methodological refinement may coincide with conceptual simplification, distancing remote sensing products from the socio-environmental contexts they claim to represent.

09:30
Between remote sensing and yak herding: grassland changes on the Tibetan plateau

ABSTRACT. The ongoing vegetational change on the Tibetan Plateau, where pastoralism has been the predominant way of life, is of regional and global importance. Although recent influential research suggests that the vegetation on the Tibetan Plateau has been greening, or improving, local yak herders in Nagchu (Tibetan Autonomous Region, China) report that their grassland has deteriorated. To understand this discrepancy, we critically analysed and contrasted remote sensing observations and ethnographic accounts within the framework of valuation studies. We argue that these seemingly contradictory observations are not mutually exclusive because the remote sensing data mainly capture spatial vegetation coverage, whereas herders care about vegetation height and its nutritional quality as yak fodder. Given that these two sets of data evade direct comparison, valuation studies help to understand in what respect the underlying perspectives and observations—i.e. remote sensing and local experiences—can be understood as social activities in which assessments are made according to different criteria. Our study argues that a pluralistic understanding of grassland dynamics helps to capture the complexity of the changing environment.

10:00
Exploring the Limits of Remote Sensing: Transdisciplinarity in Land Conflict Research in Mozambique

ABSTRACT. This research critically examines the methodological limits of remote sensing and explores how transdisciplinary approaches can extend its analytical reach, enrich its interpretations, and challenge epistemic assumptions. The implementation of the Nacala Development Corridor in northern Mozambique provides a compelling site for this question, where biophysical transformations are consequences and/or drivers of complex socio-political dynamics. Here, the expansion of the agricultural and extractive frontier goes hand in hand with socio-political conflicts, leading to new arrangements among local social actors and revealing the entanglement of environmental change and political conflict.

To address these complexities, the study explores transdisciplinarity to navigate spatially explicit and non-explicit methods. Drawing on ethnographic traditions of participant observation, with special emphasis on transect walks, the research foregrounds embodied experiences, local perceptions of resource use, and situated knowledge of production practices. Remote sensing analyses employ multiscale datasets (Planet imagery for community-level insights and Landsat data for regional patterns), alongside multiple indices to capture land cover dynamics.

Two empirical cases illustrate the phenomenon of socio-spatial transformations. The first, at the regional scale, cross-references land-related conflicts involving local communities and large-scale agribusiness with satellite data on land cover change. This comparison reveals the spatial and temporal patterns of politically driven conflicts and land-cover changes, a biophysical expression of human activities. The second case, located at the community level in Ribaue district, examines interactions among peasants, rural workers, middle-scale farmers, and non-human agents of spatial change, including technologies, market networks, and infrastructure. Long-term qualitative fieldwork is combined with remote sensing to validate or contest participants’ narratives, surface hidden processes, and critically reflect on the researcher's own bias towards presumed findings. Together, these cases demonstrate how transdisciplinary practices can complicate, ground, and ultimately expand the interpretive power of remote sensing in contexts of land conflict.

10:30-10:45 Coffee Break
10:45-12:15 Session 12A: T7: Making Science Better?
10:45
Who Assesses What? Framing Expertise and Decision-Making in Peer Review

ABSTRACT. Peer review is a ubiquitous feature of science evaluation, valued for quality control and legitimacy; however, its processes and procedures remain complex and understudied (Reinhart & Schendzielorz, 2024), particularly in relation to how appropriate experts are selected and what gets assessed by them. This study examines how editors and reviewers frame, stretch, and delimit their expertise in relation to specific manuscripts, and how these practices shape what ultimately gets assessed in peer review. We conducted about 35 semi-structured qualitative interviews with editors, reviewers and authors of PLOS journals using a manuscript-centered approach, i.e., using a manuscript they handled recently as the focal point of the interview questions. To triangulate the interview data, the corresponding review reports for these manuscripts were also coded and analyzed.

Expertise is essentially knowledge delivered at the request of someone else who wants it and who configures the provider as knowledgeable (Grundmann, 2017). In the case of peer review, this depends on how the editor frames the manuscript, and their selection of reviewers often embeds assumptions about what kinds of expertise are valid or required to assess the paper. The analysis shows that manuscripts are divided into assessable components such as topics or methods. Rather than any single actor claiming comprehensive expertise over the manuscript, assessment emerges through a division of tasks and responsibilities. Although this allows diverse forms of expertise and perspectives to contribute to peer review, it can also produce gaps in assessment, leaving certain aspects of manuscripts underexamined: reviewers, for instance, contribute to those aspects they feel competent to assess while assuming someone else will look into other areas. It also creates the need to integrate these diverse forms of expertise, which tends to be a particularly challenging aspect of the editorial process. By foregrounding how expertise is situationally enacted and framed, the study contributes to a more nuanced understanding of peer review as a collaborative yet fragmented process of knowledge evaluation in contemporary scholarly publishing.

References:

Grundmann, R. (2017). The Problem of Expertise in Knowledge Societies. Minerva, 55(1), 25–48. https://doi.org/10.1007/s11024-016-9308-7

Reinhart, M., & Schendzielorz, C. (2024). Peer-review procedures as practice, decision, and governance—The road to theories of peer review. Science and Public Policy, 51(3), 543–552. https://doi.org/10.1093/scipol/scad089

11:15
Imagining the Bureaucratic Modularity of Good Science

ABSTRACT. The pursuit of "good science" has long been a central ethos in the scientific community. Struggles over how to do science, and how to do it well, are commonplace. However, what constitutes "good science," and how it is institutionalised in practice, differs remarkably across time and place. Institutions and organisations have established a range of mechanisms and structures to guide, support and evaluate the qualities of research, often under labels such as research integrity, research ethics or research quality.

While all these mechanisms and structures seek to create an environment in which science aspires to be “good”, technically, methodologically, morally and epistemically, their co-existence and the sometimes unclear boundaries between their mandates can also create vagueness and confusion among practising scientists.

Scientists hold diverse views on what these structures mean for their work. For some, they represent invaluable safeguards against error and misconduct, bolstering public trust in science. For others, they may appear as bureaucratic hurdles that stifle creativity or delay progress. This paper explores these tensions and examines how various support structures and perspectives intersect to highlight different "good sciences" and the different moral economies that produce them. Preliminary findings suggest that scientists recognise different roles for institutional infrastructures in terms of temporality and targets, yet reject institutional suggestions of complementarity or modularity and instead articulate incommensurable imaginaries of good, or better, science.

Through investigating the perspectives of scientists and the frameworks within which they operate, we display the diversity and plurality of operationalisations of “good science” in practice and can connect existing infrastructures with evolving valuation regimes.

11:45
Research Integrity at Scale: Paper Mills, Screening Tools, and the Reconfiguration of Publishability

ABSTRACT. Paper mills—black market entities selling fraudulent authorship and/or manuscript content for publications in academic journals—are attracting growing attention as a major threat to research integrity. Building on earlier issues like ghostwriting and predatory journals, paper mills exploit vulnerabilities in academic publishing at scale, shifting the focus of scientific misconduct from individual misbehavior to systematic manipulation. This crisis has prompted coordinated responses from multiple stakeholder groups including publishers, research institutions, and funders, with interventions that are actively reshaping the field.

This research specifically examines how research integrity tools designed and developed to combat paper mills are reconfiguring the concept of "publishability", as it is situated and performed within industrial scholarly communication infrastructures, by automating and augmenting integrity checks within submission and editorial workflows. The study draws on a scoping review of empirical studies on paper mills, focusing on their operational patterns and proposed countermeasures, complemented by a document analysis of grey literature and industry materials that catalogues how solutions are positioned and promoted.

Preliminary findings show that paper mill detection and screening tools increasingly rely on AI-based automation, framed by vendors as scalable solutions that are also efficient, saving time and scarce resources in editorial and peer review efforts. As paper mills are also widely alleged to utilize generative AI, this dynamic fosters an AI arms race that generates a new commercial market in research integrity, in which both paper mills and vendors of research integrity tools pursue business opportunities through technological innovation and hence co-constitute an emerging research integrity economy. This market-making process, together with the adoption of these tools within integrity and quality assurance systems in academic publishing, reshapes not only how publishability is operationalized but also broader conceptions of research integrity and quality.

10:45-12:15 Session 12B: T15: Health and Care Shifts in Times of AI
10:45
Is participatory design enough? Methodologies for co-designing inclusive AI-based health technologies for and with people with COPD

ABSTRACT. Participatory design (PD) is often presented as the gold standard for co-developing inclusive AI-based health technologies, particularly when it comes to patients as users. PD involves multiple stakeholders, including patients, across iterative phases of observation, ideation, and prototyping. However, even participatory processes face challenges: marginalized groups often remain underrepresented because their needs are harder to accommodate than those of tech-savvy participants. As in any participatory process, micro-decisions about whom to involve in the development can have a huge impact on the outcome. How can we work within the participatory design framework and leverage other methods that might lead to a more thorough analysis of the field, to an (even) more reflexive development process, and to more inclusive technologies? This presentation draws on the development process of an AI-based mobile application for and with people with chronic obstructive pulmonary disease (COPD) within DACIL, a Dutch project funded by NWO. The project particularly centers people living in a low socio-economic position in its co-development process. COPD patients in a low socio-economic position often have low digital and health literacy and are thus particularly vulnerable to poor health outcomes. DACIL aims to co-create a wearable that monitors disease progression and provides personalized lifestyle advice. The project brings together an interdisciplinary team from medicine, computer science, health sciences, behavioral sciences, and science and technology studies, alongside technology partners and patient representatives. Beyond design thinking, the process incorporates reflexive monitoring in action, ensuring continuous reflection on goals, expectations, and patient perspectives throughout the development.
By analysing this case, I explore how participatory and reflexive practices can address socio-technical challenges in designing AI-driven health technologies for marginalized populations.

11:07
Explainable AI visions by older adults and how they co-create AI in care and daily practices

ABSTRACT. Artificial Intelligence (AI) can improve personalized care and ageing in place. However, responsible embedding is difficult without knowledge about users and their practices. Designers and researchers rarely involve older adults actively, which results in designs that are less well suited to the lives and practices of older adults. Several studies have shown how design can be shaped by ageism, especially in current AI developments. Design and development processes could benefit from involving older adults to overcome ageism and possible biases incorporated into AI systems. In my previous work I explored the needs and notions regarding explainable AI (XAI): what is explainability in the context of care and according to older adults? This research showed that it is not such a simple topic. The required level of explainability differs between individuals and across contexts, ranging from knowing the algorithm to practical knowledge. Although care stakeholders argue that older adults probably do not need explainability, older adults themselves show interest in XAI. Therefore, I currently aim to actively involve older adults to overcome ageism in the design and implementation of AI systems. Co-creation methods provide an environment for open collaboration and enhance the generation of new perspectives from actual end-users. I therefore applied this approach to explore possible AI use in the lives of older adults. My current research assists older adults in investigating the question: “Where do you visualize and position AI in your life while ageing in an increasingly digital society?” This question guides a creative session in which older adults receive an introduction to AI, after which they create personal scenarios and discuss AI in their future lives, including care and daily practices.

11:29
Reflecting on diabetes care through AI: Lessons from Leefmaatje

ABSTRACT. Chatbots in healthcare have been critiqued in terms of techno-solutionism, as they reframe complex problems as easily solvable by AI (Sharon & Stiffels, 2024; Sharon, 2025). For example, claims about “empathic” chatbots are criticized for hollowing out the meaning of empathy, while also creating an orphan problem: they distract from the underlying issue that current healthcare lacks the time and attention needed for genuine empathy to exist (Sharon, 2025). In this presentation we want to discuss our findings as embedded ethicists in the development of a chatbot for patients with diabetes mellitus type 1. For over 30 days, a group of patients was given the opportunity to interact with a chatbot called Leefmaatje, which was designed to answer questions about managing one’s disease in daily life. Observations within this project shed light on current diabetes care practices, and may actually reveal another orphan problem than the absence of empathy. First, current clinical encounters are to a great extent scripted by the numbers resulting from glucose monitoring, rather than by discussing real-life experiences. These consultations cannot address questions as they occur throughout patients’ daily life, and often assume certain skills and knowledge. Moreover, experiences with Leefmaatje steer away from the assumption that patients want an empathetic chatbot or doctor: the chatbot was greatly appreciated for its neutral, non-judgmental responses to everyday questions. The so-called orphan problem in diabetes care might therefore actually be the suboptimal organization of current diabetes care. This is for instance reflected in the lack of agenda-setting in current clinical consultations, causing a disproportionate focus on numbers and very little attention to navigating daily life and the questions that arise for patients within that context.
We conclude that engaging and experimenting with this chatbot ultimately raises broader questions about how diabetes care can be most effectively organized.

11:51
In between use and non-use: evaluating implementation of digital home-based health screening in the lives of everyday publics

ABSTRACT. Background In recent years, we have seen an increase in the development of health technologies (Pols, 2012). Developers often have assumptions about prospective users and non-users, which become materialized in technologies (van Lente, 2012). Yet, research has shown that people have creative ways of using technologies that do not always match developers’ expectations (Oudshoorn, 2019). Scholars such as Wyatt (2003) and Weiner and Will (2018) showed that use/non-use are dynamic practices. Much remains unknown about people’s considerations in deciding whether or not to use technologies.

We studied why and how people do (not) use digital home-based screening technologies. Our study is embedded in the Check@Home project, which is developing a digital home-based screening for the early detection of chronic diseases. People have to perform digital self-tests, interpret the results and seek out care.

Methods Interviews with people who have 1) not used the screening, 2) used the digital home-based tests but did not seek out care, and 3) used the complete screening programme.

Results While Check@Home aims for the detection of disease in individuals and promotes lifestyle changes, respondents used the screening to assure themselves of their health and to contribute to scientific development. Respondents saw performing health tasks as a ‘civic duty’ to reduce the workload of health professionals and reduce costs for society. Check@Home strives towards a technology that is accessible to so-called “care avoiders”. Yet, people who did not use the screening stopped participation at different phases of the screening process due to frustration, e.g. because they had trouble executing the self-tests; anxiety, e.g. about the outcome of the tests; or distrust, e.g. regarding the reliability of self-testing.

Conclusion Our results confirm research on how use/non-use is shaped by personal values, e.g. good health. We add that for citizens who are not (yet) diagnosed, use/non-use is also shaped by societal expectations and values, e.g. the responsibility to be good citizens.

10:45-12:15 Session 12C: T31: Navigating the Digital–Science Nexus: Science and Technology Studies, Emerging Digital Technologies, and the Future of Knowledge
10:45
The future of genAI in science: a backcasting exercise

ABSTRACT. The increasing use of generative AI (genAI) in scientific research has transformative effects on the science system, creating epistemological shifts, challenging existing modes of quality control, changing collaborative dynamics and shifting research funding priorities. GenAI brings potential benefits in terms of efficiency, research quality and democratization, but also poses serious risks for scientific integrity, reliability, equity, social and epistemic diversity, ownership, and autonomy. While policy makers and science studies scholars pay significant attention to the potential opportunities and risks of genAI in the present, we signal the need for a long-term perspective, taking into account the disruptive and transformative effects genAI may have on the scientific system in the coming 10-15 years. The research question of this project is: how can co-created future visions about the development and use of genAI in science provide guidance to strategic decision-making by policy makers and managers in the (Dutch) science system? To address this question we use a backcasting methodology (Quist and Vergragt, 2006). Our first step was to formulate desirable future visions. Based on expert interviews, a literature study, observations at relevant meetings and an interactive stakeholder workshop, we have developed three visions for the role of genAI in science in 2040: ‘Accelerating Science’ (driven by the values of innovation and efficiency), ‘Shared Science’ (openness and inclusion), and ‘Cautious Science’ (scrupulousness and independence). Next steps in our project, to be conducted before the STS NL conference, are (1) analyzing the current use of genAI in science, bottlenecks and policy interventions; (2) identifying creative solutions to bridge the gap between the present and desirable futures; and (3) prioritizing policy interventions in co-creation with stakeholders.
In our presentation we will cover the results of the backcasting exercise, and evaluate its potential to facilitate future-proof science policy in relation to the use of genAI.

11:15
The AI-Question in Academia: Exploring the Practices and Perceptions of Researchers using AI-driven Software

ABSTRACT. The use of artificial intelligence in knowledge production processes and in academia has been illuminated from different disciplinary perspectives, focusing e.g. on authorship or academic integrity (e.g. Barros et al. 2023; Watermeyer et al. 2023). With the introduction of OpenAI's ChatGPT and the associated API access, the integration of LLMs into software tools for literature reviews or paper writing was simplified, impacting the ways knowledge is produced. Drawing on new materialism (Barad 2008) and situated knowledges (Haraway 1988) to investigate the knowledge produced by human researchers and their AI-driven tools, I aim in this early-stage paper to present insights from an interview study with researchers from differing fields and career stages. In this study, I use card-sorting techniques to explore use practices, justifications for and against AI usage, as well as the underlying conceptions about the integration of artificial intelligence tools into knowledge production processes. Conceptualising the intra-actions between researchers and their tools as part of a decision space within knowledge production processes, I aim to make sense of the differences and similarities between employing other computational technologies within the research process and using genAI. I also explore the perceived implications for research methods, for ethical and transparent research, and for the understanding of what constitutes scientific knowledge. Following Vu's (2018) approach of ‘thinking with’ as analysing knowledge and practice as co-produced by multiplicities of human/material, I aim to attune to the multiple knowledges within these assemblages of researchers, digital technologies, and knowledge sources to understand what might be necessary for accountable research practices with AI-driven software.

(To note: As of submitting this abstract, interviews for this study are still being conducted and planned.)

11:45
Navigating Quantum Infrastructures: An Ecosystem-Level Approach to Responsible Research and Innovation in the Netherlands

ABSTRACT. As quantum information and communication technologies (QICT) become deeply integrated into complex information and communication technology (ICT) systems and critical information infrastructures (CII), they are reshaping the digital-science nexus, redefining the co-evolution of scientific knowledge production, digital transformation, and governance. This integration foregrounds the socio-technical nature of quantum innovation, in which technical capabilities, institutional arrangements, and normative assumptions are mutually constituted. In the Netherlands, national initiatives such as Quantum Delta NL position quantum computing, communication, and networking as strategic enablers of cloud infrastructures, hybrid quantum-classical data centres, cybersecurity architectures, and algorithmically enabled systems that underpin essential public services and economic functions. These developments embed quantum technologies within heterogeneous innovation ecosystems involving research institutions, industry actors, government agencies, and infrastructure operators, thereby extending their societal relevance. Against this backdrop, this paper addresses the following research question: how can emerging use cases of QICT within Dutch critical information infrastructures be designed and governed in alignment with ecosystem-level responsible research and innovation (RRI) principles, given the evolving dynamics of the digital-science nexus? By focusing on CII as a site of socio-technical integration, the paper builds on Stahl's (2021) ecosystem approach to responsibility by extending core RRI dimensions (anticipatory governance, responsibility distribution and inclusion, and reflexive innovation and responsiveness) in quantum technology development from project-focused interventions to distributed governance arrangements across actors, infrastructures, and institutions.
The aim is to explore how ecosystem-level RRI can guide the responsible embedding of QICT in national critical infrastructures and overall digital transformation in the Netherlands. Methodologically, the research question will be explored through a document analysis of Dutch quantum strategies, policy programmes, and infrastructure initiatives, complemented by a focused case analysis of selected QICT use cases in critical infrastructures, such as secure quantum communication networks and quantum-ready cloud services.

10:45-12:15 Session 12D: T6: Research Culture(s) in transition: Uncertainty, Reform, and the Politics of Change in Academic Research: Shaping norms and governance via reform
10:45
The impact agenda. Investigating practices and politics of producing science with impact

ABSTRACT. Sophie van der Does, Lotte Krabbenborg, Noelle Aarts

Since the 1970s, most governments in the Western world have increasingly emphasized the benefits to society of their financial investment in academic science (Mowery and Sampat, 2005). One, and perhaps the most visible, instance of this development has been the top-down implementation of a so-called ‘impact agenda’. The impact agenda refers to a set of policy measures that seeks to bureaucratically assess the social, cultural and economic impact of scientific research in society (Kidd and Chubb, 2021). However, the implementation of this impact agenda does not come without consequences, as incentivizing societal impact places a double expectation on academic research cultures, now charged with producing both scientific and societal impact. Empirical studies point out that this double expectation produces tensions; research aimed at societal impact and ‘traditional’ academic research are seen as two different categories of practice with different values assigned to them (Bandola-Gill, 2019; D’Este et al., 2018). For example, incentive structures lead most academics to publish in high-impact journals, often with an eye on achieving academic reputation in a given knowledge domain (D’Este et al., 2018).

This PhD research examines how tensions between producing science with impact, which is seen as problem-driven and assumes interactions between stakeholders and scientists, and ‘traditional’ science, i.e. science as intellectually driven and ‘inward-oriented’, are dealt with in practice. Empirically, we look at processes of producing science with impact in different settings in which actors attempt to make societal impact. We do so from a social practice perspective (Shove, Pantzar et al., 2012). In other words, this research looks at the ways in which societal impact, as it is expected and stimulated, materializes in practices. We do so by studying the ways researchers, managers, stakeholders and other actors translate and realize a fuzzy policy concept like ‘societal impact’ in their daily work. In doing so, we explore how changes in the governance of science through the introduction of the impact agenda affect academic research cultures downstream, and how emerging tensions are dealt with in practice.

Bandola-Gill, J. (2019). Between relevance and excellence? Research impact agenda and the production of policy knowledge. Science and Public Policy, 46(6), 895-905. https://doi.org/10.1093/scipol/scz037

D’Este, P., Ramos-Vielba, I., Woolley, R., & Amara, N. (2018). How do researchers generate scientific and societal impacts? Toward an analytical and operational framework. Science and Public Policy, 45(6), 752-763. https://doi.org/10.1093/scipol/scy023

Kidd, I. J., Chubb, J., & Forstenzer, J. (2021). Epistemic corruption and the research impact agenda. Theory and Research in Education, 19(2), 148-167. https://doi.org/10.1177/14778785211029516

Mowery, D. C., & Sampat, B. N. (2005). Universities in national innovation systems. In J. Fagerberg, D. C. Mowery, & R. R. Nelson (Eds.), The Oxford Handbook of Innovation. Oxford: Oxford University Press.

Sapolsky, R. M. (2017). Behave: The biology of humans at our best and worst. Penguin Books.

Shove, E., Pantzar, M., & Watson, M. (2012). The Dynamics of Social Practice: Everyday Life and How it Changes. SAGE Publications Ltd. https://doi.org/10.4135/9781446250655

11:15
Archives and AI - opportunities and challenges for research data curation in a changing landscape of academic research

ABSTRACT. This paper looks into technology-induced changes of research culture, in particular data cultures, and their impact on research data archives. Research data archives, as part of research infrastructures, have increasingly become an object of study. [1,2,3] The current wave of technological innovation - generative AI - which so profoundly shakes the landscape of scholarly communication and the inner workings of knowledge production in academia, not surprisingly also affects the processes of digital preservation and the services provided by archives. This paper combines a reflection on the impact of new technology on the function of digital research archives with a practice lens and a focus on ongoing policy making. We begin by demonstrating how archives in the cultural heritage domain embrace these new technologies while at the same time trying to confine and guide their impact by shaping new policies [4]. Concretely, we look into the DANS Data Station of Archaeology. This archive is the central access point for all archaeological data in the Netherlands. It contains over 165,000 datasets of varied provenance, from academic research to certified commercial archaeological organizations to citizen scientists. The archive has innovated its services by participating in international projects and networks such as ARIADNE [5], PARTHENOS [6], and SEADDA [7], and is well embedded in new trends in digital preservation. [8] The core of the paper concerns the question of how trust is to be protected in the changing landscape of research data production. Trustworthiness is the very essence of the relationship between an archive and the (scientific) communities it serves. We ask: how can and should new technologies such as AI-supported metadata enrichment be implemented? What policy should an archive apply when it comes to the preservation of data which might be created by humans and machines?
What are the implications for the professional portfolio of data curators themselves?

[1] Kaltenbrunner, W. (2015). Reflexive inertia: reinventing scholarship through digital practices. PhD thesis, Leiden University. https://hdl.handle.net/1887/33061
[2] Borgman, C., Scharnhorst, A., & Golshan, M. S. (2019). Digital data archives as knowledge infrastructures: Mediating data sharing and reuse. Journal of the Association for Information Science and Technology, 70(8), 888-904. https://doi.org/10.1002/asi.24172; preprint: https://arxiv.org/abs/1802.02689
[3] Scharnhorst, A., Tykhonov, V., Indarto, E., & Doorn, P. K. (2023). Measuring research data archives. Paper presented at the 86th Annual Meeting of the Association for Information Science & Technology, Oct. 29-31, 2023, London, United Kingdom. https://doi.org/10.5281/zenodo.10555758
[4] Damen, D. (2025). Getting started with AI policy for your organisation. Blog post, Digital Erfgoed Nederland. https://www.den.nl/kennis-en-inspiratie/maak-een-ai-beleid-voor-jouw-organisatie. Accessed Jan 6, 2026.
[5] ARIADNE is an international research infrastructure (https://www.ariadne-research-infrastructure.eu/what-is-the-ariadne-ri/) providing data, tools and other support for the arts, humanities, cultural heritage and in particular archaeology.
[6] PARTHENOS was a European-funded project (2015-2019, 10.3030/654119) which aimed at strengthening the cohesion of research in the broad sector of Linguistic Studies, Humanities, Cultural Heritage, History, Archaeology and related fields through a thematic cluster of European Research Infrastructures. See http://www.parthenos-project.eu
[7] SEADDA was a COST Action with the aim to Save European Archaeology from the Digital Dark Age. https://www.seadda.eu
[8] Johansson, M., Tykhonov, V., Alexandersson, S., Ferguson, K., Hanlon, J., Hollander, H., Touber, J., Scharnhorst, A., & Osborne, N. (2025). Archiving for the Future Past - Multimodality and AI - Challenges and Opportunities. Paper presented at the DARIAH Annual Event "The Past". Book of Abstracts: https://annualevent.dariah.eu/wp-content/uploads/2025/09/DARIAH-Annual-Event-2025-Book-of-Abstracts_10.5281-zenodo.16411471-.pdf

11:45
Evaluating relevance in research assessment: an ethnographic inquiry of academic whiteness

ABSTRACT. The practice of research assessment is one of the ways in which the contemporary western(ized) university is made to give an account of itself. In this paper I reflect on this practice as a performance of 'academic whiteness'. Based on ethnographic research carried out during the Dutch research assessment of law in 2017 and of philosophy in 2019, I zoom in on how the issue of 'relevance' (or impact) is grappled with in research assessment. Shifting to a decolonial perspective, I argue that by tapping into repertoires of relevance, academic whiteness tends to systematically delocalize itself. Repertoires of relevance involve the affective calibration of a relation between 'science' and 'society' - the thing which science is required to be relevant for. Under conditions of academic capitalism, this relating is predominantly shaped by market logics, although this is paradoxical as science would lose status if it were to be fully reduced to it. Thus, relevance becomes flattened out as virtually any practice of mobilizing knowledge for capital, while at the same time a counter-narrative of scientific autonomy and independence is deployed to legitimate this. Science gets affectively performed as neutral, objective, and pure, negating how epistemic practices that have come to be called scientific have always come from particular epistemic locations. Analyzing the ways in which science is assumed to be relevant to society, while answerability with regards to who and what it is relevant for remains systematically displaced, I argue that society gets mobilized as an empty signifier. Thus, as repertoires of relevance perform both science and society as demarcated, neutral, and implicitly white spaces, research assessment allows the university to be continued as an institute of coloniality.

12:15
Assetisation mechanisms in hybrid professions - Evidence from Project Management

ABSTRACT. Across many contemporary organizational fields, authority, coordination, and value creation are no longer primarily anchored in state licensing or legally enforced professional monopolies. Instead, they are increasingly produced through sociotechnical arrangements that render practice-based knowledge portable, examinable, and continuously updatable. Standards, certifications, metrics, and software platforms have become central organizing devices, yet existing theories struggle to explain how such arrangements stabilize authority and endure over time in the absence of legal jurisdiction.

This paper develops assetisation as a mid-range theory to address this puzzle. We conceptualize assetisation as the sociotechnical process through which heterogeneous and often tacit know-how is transformed into durable, value-bearing artifacts—standards, credentials, calculative devices, and platforms—whose authority depends on their portability, calculability, exclusivity, and updatability. Crucially, assetisation shifts analytical attention from the production of knowledge to its maintenance, highlighting the institutional work required to sustain knowledge as an asset over time.

The paper further argues that assetisation is coordinated by a triadic institutional engine composed of academic journals, professional bodies, and vendors or consultancies. Through their institutional coupling, this engine governs the durability of knowledge assets and substitutes functionally for legal jurisdiction. Rather than being unintended side effects, characteristic field-level dynamics emerge as structural outcomes of this coupling, including credential substitution for licensing, non-monotonic effects of standardization on interoperability, and performativity-induced blind spots that privilege calculable forms of value.

We develop these arguments through a theory-building analysis of project management, a field characterized by mature standards, extensive credential markets, strong platformization, and the absence of state licensing. The analysis demonstrates how authority is stabilized through asset governance rather than law and how sociotechnical infrastructures shape what counts as competence and value in practice.

The paper contributes to organization studies and science and technology studies by reframing professional authority as a problem of asset governance and by offering a transferable framework for analyzing how contemporary organizational fields are governed through standardized and platformized knowledge assets.

10:45-12:15 Session 12F: T21: Orbiting and Hovering – Critical Remote Sensing and Volumetric Regimes
10:45
Towards a FAIR-compliant Harmonised AI-based Automatic Metadata for Climate Research

ABSTRACT. In the petabyte era, climate research deals with large and extremely large datasets on a daily basis. Filling in the metadata accompanying climate datasets is challenging in many cases: it can be time-consuming, often leads to incomplete results and is very error-prone. Arguably, most researchers fill in only the minimal set of metadata required to publish their data (i.e. software, publication), mostly due to time constraints. The metadata fields are also not filled in consistently. For the institution, for example, sometimes an abbreviation and at other times the full name is used. There are multiple lower/upper-case issues. Moreover, users do not always choose the same names for the same variables they are describing. In multiple cases there are FAIR compliance gaps (findable, accessible, interoperable, reusable).

In this talk, we present the idea of automatic, AI-based, FAIR-compliant metadata for climate research in order to deal with the aforementioned challenges. Based on an interdisciplinary collaboration within the Leibniz Science Campus “Digital Transformation of Research” (DiTraRe), we created a work plan connecting researchers from the climate domain with computer science experts and infrastructure providers (RADAR). Within this framework, we aim to develop a scalable infrastructure that leverages natural language processing (NLP), knowledge graphs, and large language models (LLMs) to support the harmonisation and semantic alignment of metadata in climate research repositories. Our output will be a curated, machine-actionable metadata set that can support both the integration of scientific data and downstream AI research. We aim to deliver not only technical tools but also sustainable resources for the community, including an openly accessible metadata set and methods for its continuous extension and reuse.

11:15
Seeing like a Satellite? The Role of Space-based Valuations in Monitoring GHG Emissions

ABSTRACT. Earth Observation technologies comprise a central part of information-based infrastructures owned by public and – in the context of the emerging ‘New Space Economy’ – private entities. Promising to care for a distant future, they find multiple climate-related applications, including the anticipation of climate hazards and the sensing of GHG emissions. The contribution seeks to answer how space-based visualising technologies for monitoring and modelling greenhouse gases enact “scopic valuations” (Dobeson 2016). Specifically, the contribution asks how satellite-derived sensing data for climate mitigation assemble a shared reality, thereby allowing for a global form of coordination and creating novel possibilities for accountability.

Hyperspectral imaging sensors are capable of measuring atmospheric GHG concentrations and tracking anthropogenic contributions to climate change. Thereby, they hold the promise to measure global GHG emissions more accurately than previous measurement methods. Space-based valuations are increasingly integrated into voluntary or mandatory GHG reporting practices, e.g., national emissions estimations under the ‘Global Stocktake’ or cap and trade markets. Hence, space-based climate monitoring tools can be conceptualised as “monitoring procedures and sensors” (Callon 1998, 257) that create evidence for tangible processes – negative climate change-inducing externalities – within economic practices (ibid. 244, 257).

Drawing on conference ethnography, the contribution traces negotiations on how satellites’ measurements refabricate the governance of GHG emissions as economic objects in the context of reporting practices. The insights of my conference ethnography also shed light on the implementation of space-based GHG monitoring infrastructure as a site of political struggle over space assets. Within this ongoing assetisation process, the reconfiguration of the public sector – enrolling the state as an investor or entrepreneur – becomes decisive (Birch/Muniesa 2020, 19). Business models of private companies are salient in high-resolution monitoring practices, such as facility-level methane leakages. Other companies nest their business model into open satellite-derived climate data that stems from public, scientifically oriented satellite missions, such as low-resolution area flux mappers.

11:45
How the adoption of remote sensing technologies challenges asset management in municipal organisations: a case study of urban bridges and quay walls

ABSTRACT. Many infrastructure assets in the Netherlands are beyond their technical lifespan and are at risk of failure. This also applies to urban bridges and quay walls (UBQs) in historic cities. These UBQs exhibit signs of deterioration, and they require regular monitoring, inspection, and assessment. To reduce costs and nuisance while ensuring safety, municipalities are moving from traditional, labour-intensive inspections towards data-driven approaches. As part of this transition, municipalities are interested in using space-borne Interferometric Synthetic Aperture Radar (InSAR) technology. InSAR is a remote sensing technology that enables cost-effective monitoring of many assets at once and tracking long-term deformations at city-scale.

Despite these benefits, adopting InSAR into municipal UBQ asset management appears challenging. This study aims to identify and discuss the challenges that limit the adoption of remote sensing technologies in municipal organisations. To this end, we drew on literature on innovation adoption in public organisations and conducted a qualitative case study examining how two large Dutch municipalities are adopting InSAR into their UBQ asset management. While both municipalities have an innovation program on UBQs, they differ in organisation size and resources for asset management. We conducted semi-structured interviews with asset management practitioners and analysed municipal documents to identify the technological and organisational challenges that limit the adoption of remote sensing.

Our findings show that both municipalities mostly face organisational challenges, such as (re)formalising monitoring methods and UBQ assessment standards. Furthermore, municipalities are challenged by continuously changing technological limitations and advancements, which affect the uncertainties in their UBQ safety assessments. This research provides insights into the innovation adoption challenges faced by municipal organisations when implementing InSAR into their UBQ asset management, and aims to motivate further research on the adoption of remote sensing technologies by public organisations.

10:45-12:15 Session 12G: T32: Histories in Times of Global Shifts: Roundtable

A roundtable conversation with researchers active in the fields of history, memory studies, transdisciplinary STS, and related disciplines. Our goal is to generate a debate rich in both content and connection between different approaches, as we work towards understanding how such mobilisations take place across disciplines, and towards developing conceptualisations that allow these approaches to speak to one another.

Organizers: 

Evelien de Hoop (Vrije Universiteit Amsterdam)

Andreas Weber (TU Twente)

Sjamme van de Voort (Vrije Universiteit Amsterdam)

Efi Nakopoulou (TU Twente)

12:15-13:15 Lunch Break & Poster Session
14:45-15:00 Coffee Break
15:00-16:30 Session 14A: T3: Ethics and Technology in Practice
15:00
Outsourcing the Ethics of Care: Human-Robot-Healthcare Interaction

ABSTRACT. Healthcare environments face persistent challenges balancing staffing shortages with the provision of high-quality patient care. A promising solution is the use of social robots, which have the potential to support the transformation of healthcare delivery. They are already being deployed across various care settings, including long-term residential facilities and paediatric wards, to assist with operational tasks, diagnostics, and direct patient interaction. Alongside these promising developments, however, come important challenges related to trust, implementation, and adaptation. For instance, healthcare practitioners need ongoing resources, training, and ethical guidance to effectively integrate, coordinate, and troubleshoot technological systems in everyday care. Moreover, by adding robots to the system of caregiving, not only are functional aspects of care outsourced to the technology; the social and ethical aspects of “good care” are affected as well. In this paper, we introduce three social robots currently used in healthcare. We then explore the research question of how a combined lens of mediation theory and Joan Tronto’s ethics of care can be used to frame and analyse the use and roles of such robots. We examine how these frameworks can help us understand what constitutes “good robot care”, how this understanding can better position the role of social robots within the care system, and how they can reshape the future care experience. Through this ethical reflection, we discuss how the functions and design of these robots can be envisioned and tailored to specific care tasks and environments, ensuring that technological innovation genuinely supports, rather than undermines, the ethical and relational aspects of care in practice.

15:30
Ethical Readiness for AI in Systemic Design Practice: Making Systemic Design Tools Adaptive, Inclusive, and Participatory Infrastructures

ABSTRACT. Systemic design approaches are widely used to address complex global challenges such as climate change. Central to these approaches are methods and tools for participatory mapping, modelling, and collective sense-making. In design-oriented Science and Technology Studies, such tools are understood as non-neutral mediators that shape how participation, knowledge production, and decision-making unfold (Sevaldson, 2011; Latour, 2002; Perera et al., 2025). As artificial intelligence is increasingly explored and debated across design research and practice, its integration into participatory and system-oriented design approaches, such as systemic design, raises questions about whether these practices are ethically ready to incorporate AI into their methodological repertoires.

This contribution draws on a participatory design workshop exploring Living System Maps, an approach that reimagines systemic design tools as adaptive, inclusive, and participatory infrastructures, while examining the promise of AI to make such practices more accessible and supportive of collective sense-making (Perera et al., 2025). Participants worked with a pre-framed food systems map and engaged in reflective and speculative activities that explored the role of AI in collective sense-making through multiple imagined roles, framing AI as a supportive actor to enhance participation and inclusiveness.

In this paper, we present a retrospective analysis of the ethical reflections that emerged during the workshop using the Ethical Readiness Check (Dorrestijn & Eggink, 2018). Applied as an analytical lens at the level of practice, it helps articulate ethical concerns related to system framing, boundary-setting, shifting agency and responsibility, and potential unintended effects. These concerns appeared in fragmented, participant-specific ways rather than being embedded as a shared, repeatable practice of ethical reflection.

The study proposes that ethical readiness should be understood as a capacity of systemic design practice rather than a property of AI technologies alone. By linking experiential design work with structured ethical reflection, it can extend the CTA toolbox for AI-supported systemic design and can offer practical insights for embedding ethics amid complexity in uncertain times.

16:00
Making the invisible visible: A reflection method to help model developers and users uncover justice-related design choices in energy models

ABSTRACT. Energy models play a crucial role in shaping energy transitions, yet are often treated as objective and value-neutral tools. Increasingly, scholars and practitioners argue that justice should be more explicitly linked to energy modelling, not only because of the intrinsic importance of justice, but also in response to growing contestation of top-down, techno-economic decision-making. So far, most efforts have focused on implementations of energy justice within the model logic, such as equity metrics, or on including a variety of stakeholders through participatory modelling. However, such design choices embed assumptions about what justice entails in the given context, such as: who is recognized as a stakeholder, whose knowledge is represented, and whose interests are prioritized. Therefore, it is essential that model developers and users engage reflexively with normative assumptions embedded within the modelling process. However, few concrete methods exist to support such reflection in practice. In this study, we ask: How and to what extent does a method for joint reflection between model developers and users help identify assumptions and choices in the modelling process that have potential justice implications? We develop and test a reflection method that combines structured dialogue, creative exercises, and critical questioning, in the form of two workshops with developers and users of two policy-relevant energy models in the Netherlands: Energy Transition Model and Hestia. Our findings reveal six places where (in)justice reveals itself in energy models, namely in (1) model scoping and its consequences, (2) normative assumptions in models, (3) ability of models to steer policy choices, (4) modeller errors and uncertainties, (5) model complexity and barriers to understanding, and (6) potential for model misuse. We conclude with implications for integrating reflexivity and justice awareness in energy modelling practice.

15:00-16:30 Session 14B: OT: Approaches to Knowing I
15:00
Enacting Futures Through Valuation: Sociomaterial Practices of Prediction in Public Investment

ABSTRACT. How do predictive artefacts and evaluation regimes make certain futures tangible while rendering others impossible to articulate? We discuss a conceptual framework for understanding prediction as a sociomaterial practice in the governance of major public investments. These projects—transport, energy, digital infrastructure—are justified through formal evaluation devices such as cost–benefit analyses, risk registers and performance indicators. While presented as neutral tools, these devices enact specific worlds: they stabilise some value configurations and foreclose others, shaping what counts as legitimate knowledge and acceptable futures.

Building on Science and Technology Studies and valuation research, the research integrates four strands. First, modes of existence and non-existence conceptualise technical, legal, political and moral regimes as distinct ways of establishing validity, and show how futures become practically ‘non-existent’ through procedural moves and omissions (Latour, 2013; Valkenburg, 2020). Second, a typology of value modes—technical, economic, organisational, political, legal, environmental, moral and representational—captures the plurality of logics mobilised in project governance (Zerjav, 2021). Third, valuation studies frame evaluation as sequences of tests and demonstrations with emergent causal powers: once stabilised, they lock in trajectories and exclude alternatives (Muniesa, 2011; Muniesa, 2017; Elder-Vass, 2022). Finally, recent work on futures and inference highlights the inferential labour behind scenarios and counterfactuals, suggesting that making futures tangible requires explicit chains of reasoning from documentary traces to plausible pathways (Navarrete et al., 2020).

We argue that predictive knowledge in public investment is not a passive forecast but an active sociomaterial arrangement. Evaluation devices and procedures do not merely compare options; they enact futures by configuring value modes, legitimising some imaginaries and silencing others. This conceptual synthesis advances STS debates on prediction and valuation by showing how epistemic agency is distributed across artefacts, practices and institutional logics, and by proposing a vocabulary for researching futures as enacted rather than anticipated.

15:30
Identifying future control points in the economy: The startup complexity index

ABSTRACT. In this time of geopolitical and economic uncertainty, a number of challenges and crises – including the COVID-19 pandemic, the unravelling energy transition, the Russian war in Ukraine, instability in the Middle East, and US-China trade tensions – have exposed economic vulnerabilities and global dependencies between countries and continents that are becoming more of a liability (Damen, 2022). In the EU and various other countries there are fears of losing ground in key technological areas such as battery production, artificial intelligence, cybersecurity, clean energy technologies, and medicines (Edler et al., 2024; Kroll, 2024). As a result, countries are trying to preserve their own and their firms’ ability to act strategically and autonomously, and to produce and use technological knowledge in a time of intensifying geopolitical and geo-economic competition. Identifying current positions of control (TNO, 2024) is quickly becoming a necessity amid these geopolitical and geo-economic concerns. This study goes one step further and aims to identify potential future control points in the economy that assure future competitiveness (Draghi, 2024). To find future control points we use company data from Dealroom. This database includes innovative startups and scale-ups which are developing products using new technologies. As these startups are already commercializing their innovations, this data is more closely related to future economic activity than, for instance, publication or patent data. We analyze this data building on economic complexity methods as proposed by Hidalgo and Hausmann (2009) and refined by Tacchella et al. (2012). These indicators show whether outputs are relatively simple or complex to produce by analyzing how many countries are able to produce these outputs (ubiquity) and which other outputs these countries produce (diversification).
The findings show the countries with the most complex startup economies as evidenced by the highest score on the startup complexity index. Analyzing the countries that are strong in the most complex startup sectors points us towards potential future control points. These control points are likely to emerge in clusters of complex startups with unique and hard to replicate activities.
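The ubiquity/diversification logic described in this abstract can be sketched as follows. This is a minimal illustration of the Hidalgo–Hausmann “method of reflections” on a hypothetical binary country × startup-sector matrix; the countries, sectors, and values are illustrative assumptions, not the authors’ Dealroom data or actual pipeline.

```python
# Toy binary matrix M: rows are countries A-C, columns are startup
# sectors s1-s4 (1 = the country hosts startups in that sector).
M = [
    [1, 1, 1, 1],  # country A: highly diversified
    [1, 1, 0, 0],  # country B
    [1, 0, 0, 0],  # country C: active only in the most ubiquitous sector
]

n_c, n_p = len(M), len(M[0])

# Zeroth-order measures: diversification (per country) and ubiquity (per sector).
diversification = [sum(row) for row in M]
ubiquity = [sum(M[c][p] for c in range(n_c)) for p in range(n_p)]

def reflect(kc, kp):
    """One reflection step: average the other side's previous-order
    measure over the links in M (Hidalgo & Hausmann 2009)."""
    kc_next = [sum(M[c][p] * kp[p] for p in range(n_p)) / diversification[c]
               for c in range(n_c)]
    kp_next = [sum(M[c][p] * kc[c] for c in range(n_c)) / ubiquity[p]
               for p in range(n_p)]
    return kc_next, kp_next

# Iterate a few reflections; higher orders refine the complexity ranking.
kc, kp = [float(k) for k in diversification], [float(k) for k in ubiquity]
for _ in range(4):
    kc, kp = reflect(kc, kp)

print(diversification)  # [4, 2, 1]
print(ubiquity)         # [3, 2, 1, 1]
```

In this sketch, sectors produced only by the diversified country A would score as the most complex; at scale, clusters of such sectors are what the abstract identifies as candidate future control points.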

16:00
CDA as a theoretical lens for studying contemporary STS related issues: discursive disputes around sustainability

ABSTRACT. This paper discusses potential contributions of Critical Discourse Analysis (CDA) to expanding the STS research agenda by exploring ideological effects and power disputes which constitute contemporary discourses on sustainability, as found in two exemplars of widely used Brazilian and Dutch secondary school Physics textbooks. Sustainability can be understood as one of the urgent and cross-cutting issues of contemporary society, encompassing social, environmental, economic, and cultural dimensions that call for effective and collaborative solutions in times of environmental crisis, particularly with regard to the maintenance of social justice, democracy, and the contributions of environmental education. STS perspectives have emphasised that science and technology are not neutral social practices, but historically situated and permeated by diverse interests and values, whether cultural, environmental, economic, or political. In educational contexts, this perspective aims at fostering students’ abilities to understand, question, and propose sociopolitical actions concerning sustainability, which involve responsible participation and a commitment to social justice. CDA understands discourse as both constitutive of and constituted by social practices, while also contributing to the maintenance or transformation of power structures. By articulating conjunctural and textual analyses, it allows investigation of the ways through which meanings of sustainability are produced, naturalised, or legitimised in discourse; CDA can thus shed light on how specific conceptions of human-environment relationships may become hegemonic or marginalised in different sociopolitical contexts.
Discursive disputes around sustainability – in society as well as in specific genres such as textbooks – involve tensions between competing notions of preservation and natural resource exploitation, interpretations of Education for Sustainable Development, developmentalist growth-centred discourses, concepts such as de-growth, and negotiations between local and global framings. A preliminary analysis shows that, in the Netherlands, sustainability is addressed cross-disciplinarily, grounded in urban daily-life contexts, and focused on energy efficiency and emission reduction. In Brazil, debates emphasise environmental and economic dimensions, referencing the Amazon region, electric vehicles, and entrepreneurship. While Brazilian textbooks make more explicit intertextual connections, with both historical and media texts, Dutch textbooks tend to articulate conceptual approaches related to different natural science disciplines. In both contexts, growth-oriented development models are naturalized; considerations of historical and Indigenous perspectives are incipient; and sustainability is framed in terms of society’s consumption demands and conceptual discussions of energy-related physics topics, with limited democratic engagement.

15:00-16:30 Session 14C: T11: Rural-Urban Knowledge Infrastructures for Transformation
15:00
On amplifying, facilitating and building alliances: democratic infrastructures for transformation

ABSTRACT. The knowledge embedded in everyday experiences and bottom-up practices is often overlooked, or seems to operate at too small a scale to address the all-encompassing challenges posed by interlocking eco-social crises. Democratic theories of prefiguration and commoning respond to this lacuna and are well-equipped to study how alternative practices, including artistic, community-based and protest practices, embody “the kind of radical change that they aspire to bring about on a much grander scale in the future” (Sande 2019, 227). However, how such alternative world-building ‘aligns’ (Von Redecker 2021) to realise durable political change within existing democratic systems often remains unclear. I propose that the concept of ‘democratic infrastructures’ offers a useful tool to understand how seemingly small-scale practices of ‘everyday living’ connect with larger institutional, political or cultural transformation. Firstly, democratic infrastructures make visible how different temporalities of political action are connected: for instance, how moments of radical protest or refusal, can later lead to the building of durable alternative spaces, practices and institutions. Secondly, democratic infrastructures connect different sites of political action: progressive institutional policy might for instance be driven by bottom-up (counter-institutional) initiatives, which are in turn further enabled by a change in policy. Based on this spatial and relational understanding of democracy, I wish to contribute to an understanding of transformative change that solidly recognises the connections between informal and self-organised initiatives, and institutional change. To further concretise this conceptual account, I focus on two figures: the amplifier, and the facilitator. These are people (e.g. community development workers, speakers for the living or government advisors) who make connections between different worlds, norms, or systems. 
They both – in different ways – build alliances for transformation between different sites and temporalities of change. Finally, I ask: what would it mean for academics to act as amplifiers or facilitators of transformative change?

15:30
‘Commons-literacies’ for activating rurban futures: Lessons learned on cultivating a new (knowledge) practice for commoning at the rural edges of Amsterdam

ABSTRACT. The current situation of land politics for urban farming in Amsterdam upholds a regime that stimulates competition and which is highly vulnerable to the financialization and value-extraction of the land. Alternative rurban futures require modes of governing space and ownership in ways that break away from dominant capitalist practices. Commoning is one such alternative mode that is capable of displacing financial interests so that other social-ecological values have a chance of flourishing. However, for this purpose ‘prospective commoners’, who have been socialized within a mainly capitalist paradigm, need to foster and learn new vocabularies and practices that articulate the alternative ecologies, economies and ontologies that constitute the commons. We refer to these alternative vocabularies and practices as a ‘commons-literacy’. Within this paper presentation, we examine: what is a commons-literacy and how can we further develop it? For this we draw on the experience of Voedselpark Amsterdam in making land available for ecological urban agriculture and building communities capable of governing the land as a commons. We examine how we can cultivate a commons-literacy through conversations, workshops, research and collectively setting up governance structures. We will critically reflect on where the lack of a shared commons-literacy leads to frictions and how Voedselpark Amsterdam has developed new creative vocabularies that allow participants to articulate alternative modes of valuing the land and organizing finance. This, we show, is an embodied and experiential process. However, while it is necessarily rooted in local communities and circumstances, as a set of (knowledge) practices it remains in continuous exchange with other communities and circumstances.

16:00
Beyond niche innovations and scaling: Transformative change as struggle

ABSTRACT. Despite widespread acknowledgement that contemporary social-ecological crises demand profound change, dominant sustainability frameworks continue to rely on the same modernist logics – growth, progress, and innovation – that have contributed to these crises. While recent scholarship and policy debates increasingly invoke ‘transformative change’, much of the sustainability transitions literature remains shaped by modernist assumptions, particularly through its emphasis on niches, innovation, disruption, and scaling. Although these concepts have generated valuable insights, they risk depoliticising transformation by obscuring structural constraints and struggles over imaginaries. This paper addresses the question of how transformative change can move beyond modernist framings. We address this question by drawing on social movement scholarship and presenting materials from two case studies: schoolyard greening initiatives in the Basque city of Vitoria-Gasteiz and an initiative to create a food park in Amsterdam. Our findings show that when these initiatives are framed as niche innovations that need to be scaled, they become constrained by modernist logics, leading to lock-in. Conversely, when they are understood as part of broader struggles – for transformative sustainability education and for commoning, respectively – they reveal leverage points to contribute to transformative change. Based on our analysis, we propose to conceptualize Transformative Change as struggle, foregrounding conflict and contestation over worldviews.

15:00-16:30 Session 14D: T7: Making Science Better?
15:00
Exploring Scientific Bubbles: Lessons from other domains

ABSTRACT. Science is sometimes described as having “bubble-like” tendencies, whereby researchers and others create and perpetuate “hype-like” trends which can result in vast amounts of research into specific approaches untethered from concrete results. This costs resources. But why does this happen in science, a supposedly rational enterprise? In this paper we examine the relevance of the notion of a “scientific bubble”, a concept related to “hype”, to identify and explain such patterns. To do so we build on some initial applications of the economic bubble concept to science (e.g. Pedersen and Hendricks, 2014) by taking a systematic approach, first exploring an analogy between the structure and behaviour of economic bubbles and certain patterns in science, and then examining causes or indicators that might transfer (or have analogues) between the contexts.

Economic bubbles are commonly considered cases in which asset prices greatly exceed intrinsic valuations (Long et al. 1990). As an ‘asset’ – the principal variable in economic bubble theory – we consider any scientific good which can generate new knowledge, e.g. scientific methods, approaches, hypotheses, new technologies. ‘Prices’ represent the expectations scientific researchers have about the payoffs these goods might bring, e.g. new discoveries (epistemic value) or increased status / rewards. These prices can then be speculated on and in turn overvalued.

With an analogy in place we then consider how causes of economic bubbles might translate to science: e.g. manipulation (misconduct), too much money chasing too few resources, normative conformity, herding (rational, irrational), hype, overpromising, emotional contagion, and others. We map the traits we identify to key cases (historical, current, potential) to assess the extent to which the analogy holds up, and to thereby draw inferences for the identification of causes, failures, and features for early identification of scientific bubbles and to explore differences between the concept of a bubble, and that of a hype cycle.


Indicative references and sources

• Gaillard, Mody, and Halffman (2023). "Overpromising in science and technology: An evaluative conceptualization." TATuP – Journal for Technology Assessment in Theory and Practice, 32(3), 60–65.
• Kang, D., Danziger, R. S., Rehman, J., et al. (2025). Limited diffusion of scientific knowledge forecasts collapse. Nature Human Behaviour, 9, 268–276.
• Long et al. (1990). "Positive Feedback Investment Strategies and Destabilizing Rational Speculation." The Journal of Finance, 45(2), 379–395.
• Mirowski, P. (2012). The modern commercialization of science is a passel of Ponzi schemes. Social Epistemology: A Journal of Knowledge, Culture and Policy, 26(4), 285–310.
• Pedersen, D. B., & Hendricks, V. F. (2014). Science bubbles. Philosophy & Technology, 27, 503–518.
• Sheeks, M. (2023). The Myth of the Good Epistemic Bubble. Episteme, 20(3), 685–700.
• Wible, J. R. (1998). The economics of science: Methodology and epistemology as if economics really mattered. Routledge Frontiers of Political Economy, vol. 13. London: Routledge, Chapters 5–6.

15:30
Editorial Structure and Peer Reviewer Selection: A Comparative Study of PLOS Biology and PLOS ONE

ABSTRACT. In scholarly communication, peer reviewers are expected to filter out poor-quality research and improve papers through advice. Therefore, the selection of reviewers is crucial, as it fundamentally affects this quality assurance mechanism. Editors are primarily responsible for selecting reviewers. Existing literature has discussed editors’ perspectives on reviewers’ roles and editors’ practices in reviewer selection. However, much current research approaches this topic from a self-reported perspective, and, drawing on the concept of attitude–behaviour inconsistency, what editors claim about their reviewer selection may not reflect their actual practice. To fill this gap, this study examines existing journal metadata quantitatively to understand whom editors select as reviewers.

However, there is variation in the category of editors. Some journals rely heavily on in-house editors (i.e. staff editors), while others rely more heavily on external editors (usually called “academic editors”). How do differences in editorial categories affect the composition of peer reviewers? Using metadata from more than 30,000 articles published in PLOS Biology and PLOS ONE—as PLOS Biology has a strong internal editorial team, whereas PLOS ONE relies more on external editors—we will compare reviewer-panel characteristics across these journals. In addition, using social network analysis and scientometrics, collaboration and epistemic networks among reviewers, authors and editors will be constructed to assess how prior relational proximity differs between the two editorial models. The findings from this research will be compared with academic ideals regarding peer reviewer selection, thereby highlighting any inconsistencies and potential flaws that could be improved.

16:00
Navigating the science system: research integrity and academic survival strategies

ABSTRACT. Institutional approaches to research integrity often frame it as an issue of information deficit that can be solved through documentation. This solution-oriented approach makes it possible to identify and deal with specific cases of misconduct, but it does not consider the rationale behind them. Whereas significant research on scientific misconduct has focused on motivations and systemic incentives, in our approach we sought to explore the relation between misconduct and various forms of uncertainty. We propose to conceptualize QRPs as attempts by researchers to reconcile epistemic and social forms of uncertainty inherent to a knowledge production system embedded in a particular economic and social system. Our analysis is based on empirical material from 30 focus group interviews with 147 researchers of different levels of seniority and stakeholders from the four main areas of research (humanities, social science, natural science incl. technical science, and medical science incl. biomedicine), carried out in 8 European countries between 2019 and 2020 (organized in the context of the Standard Operating Procedures for Research Integrity EU Horizon project). The coding process made use of an explorative approach, in which the themes of QRPs and social uncertainty in academic careers emerged through an inductive coding procedure. In the researchers’ accounts, misconduct was often described as part of a spectrum of shared practices that span various degrees of acceptability within a community. We have grouped these practices into several overarching “families”: cutting corners, grey data practices, RI as box ticking, avoiding trouble, and writing practices. We argue that conceptualizing QRPs as efforts to reconcile epistemic and social forms of uncertainty allows us to reconsider actions for discouraging them that go beyond normative and policing approaches.
Further, recommendations for collective change in the endeavors of knowledge production are provided which stem from the researchers’ own accounts during the interviews and existing literature.

15:00-16:30 Session 14E: T15: Health and Care Shifts in Times of AI
15:00
A Case Study of the Development and Implementation of AI in Healthcare, or the Love of Technology

ABSTRACT. AI-based decision aids are frequently promoted by developers, tech companies and health care organizations as tools to improve shared decision-making, reduce overtreatment, and enhance patient autonomy. Yet many such tools struggle to become meaningfully embedded in clinical practice. This paper presents a work-in-progress case study of an AI-supported decision aid developed in the Netherlands to support patients with prostate cancer by providing personalized information on treatment options and predicted side effects. Despite high expectations, the implementation of the tool was ultimately discontinued.

Inspired by Latour’s "Aramis, or the Love of Technology" (1992), we examine how entangled social, organizational, and technological factors shape trust in AI-based decision aids, and how these dynamics help explain their implementation trajectories. The study is based on 15 semi-structured interviews with AI developers, clinicians, health scientists, employees of a decision-aid company, student assistants, and patient representatives, who were involved in the project. Through reflexive thematic analysis, the study traces how different actors understood the purpose of the tool, its place within clinical consultations and their roles in its development and use. In doing so, it shifts attention away from technical performance towards the relations, negotiations, and frictions through which trust in the tool was built, contested, and ultimately withdrawn.

Preliminary findings highlight tensions in understandings of medical and technological expertise, diverging expectations about the tool’s role in consultations, and shifting doctor-patient relationships. These were compounded by practical and structural factors, including regulatory changes, complex multi-stakeholder arrangements, questions about the tool’s clinical usefulness, and economic dependencies with an external company. The paper contributes to debates on trust, knowledge politics, and AI implementation by showing how discontinuation often materializes from frictions across socio-technical arrangements rather than a single point of failure.

15:30
Digital health, AI, and the shifting of (health)care values: Corporate-driven efficiency unquestioned

ABSTRACT. Artificial Intelligence (AI) is increasingly being introduced in the healthcare field through promissory discourses that present it as a potential solution to structural challenges such as rising costs, workforce shortages, and an increasingly ageing population. While efficiency has long played a role in healthcare governance, particularly within regimes of regulated competition and technological intervention, this study argues that contemporary AI-driven digitalisation facilitates a shift toward what can be described as corporate-driven efficiency. Rather than just optimising existing practices, the introduction of AI enables corporate actors to enter healthcare and rearticulate efficiency through regimes of data collection, measurability, and standardization. Building on scholarship on sphere transgressions (Sharon, 2021; Sharon & Gellert, 2024; Walzer, 1983) and historical analyses of efficiency and automation (Alexander, 2008, 2009), this research develops an account of how AI acts as a vehicle for value migration across societal domains. Values originating in technical, managerial, and market spheres increasingly enter healthcare, where they interact with – and often displace – sphere-specific commitments such as relationality and patient-centred care. This research argues that these value shifts represent normatively and politically charged crossings between different spheres. Drawing on empirical cases discussed in the literature on AI-supported homecare (Neves et al., 2024), medical chatbots (Sharon, 2025), and digital mental health technologies (Stein & Prost, 2024), the paper illustrates how corporate-driven efficiency becomes an operationalised value in concrete healthcare practices. These studies show how efficiency-oriented choices prioritise standardisation and measurability, while also reconfiguring care relations, professional roles, and emotional engagement.
By conceptualising these developments as sphere transgressions rather than just the consequences of technical innovations, the paper contributes to STS debates on AI, health, and care by foregrounding how corporate actors reshape values, authority, and what counts as legitimate healthcare practice.

16:00
Care alignment: Holding together care and scale in the development of AI-based health data infrastructures

ABSTRACT. The promissory discourse surrounding AI-based health data infrastructures rests on scalability, on the expectation that data, tools, and practices can be standardized and implemented across diverse settings. In STS, scalability has often been contrasted with care, understood as situated, experimental practices through which specific relations are sustained (Law, 2014; Mol, 2008; de la Bellacasa, 2017). While scale is associated with abstraction and uniformity, care foregrounds attentiveness to difference, fragility, and contingency.

In this presentation, we start from the premise that AI-based health data infrastructures are shaped by the simultaneous demands of care and scale. Their development requires interoperable and standardized data, while also attending to the specificities of diseases, patients, research communities, etc. To examine how these demands are navigated in practice, we study IDEA4RC, a Horizon Europe–funded project developing a data ecosystem for rare cancers.

Building on work on scale (Tsing, 2012; Seaver, 2021), we argue that scalability does not displace care but is dependent on ongoing efforts to negotiate, translate, and partially align multiple forms of care. Based on a thematic analysis of project documents, 30 semi-structured interviews, three co-creation workshops, and over 200 hours of participant observation, we introduce the concept of care alignment. Care alignment refers to processes whereby heterogeneous ideals, needs, and attachments are negotiated so that multiple objects of care and modes of caring can be successfully orchestrated. The relationship between care and scale in AI-based health data infrastructures is thus marked not only by persistent tensions, but also by practical strategies of bringing and holding together different elements. Scalability is pursued by aligning data accumulation efforts with the management of unwanted data collection excesses, by promoting standard data access approaches while allowing flexible local requirements, and by fostering collaboration while retaining the lures of competitive advantages and reputational gains.

16:30-16:45 Coffee Break
16:45-18:15 Session 15A: OT: Approaches to Knowing II
16:45
Engaging Quantum Futures: Expectations as a Bridge Between Science and Society

ABSTRACT. Contemporary science communication practices emphasise not only the dissemination of research results but also engagement with the public. In the context of emerging technologies, communications of impacts, predictions, and expectations routinely accompany research results. To clarify this phenomenon, we draw on Science and Technology Studies, particularly the concept of expectation, which is central to how emerging technologies are communicated. Quantum technology is a suitable case because, despite uncertain impacts, growing attention suggests ongoing hype. We conducted a qualitative content analysis of highly cited papers on quantum technologies (2007-2023) to examine how expectations are articulated. Our findings suggest two dominant modes: use case and future projection. The use case frames quantum technology as a solution to (societal) problems. This problem-solving narrative provides a shared reference point for the broader public to engage with. While the tendency to use this narrative can be attributed to the medialisation of science, current science evaluation practice, which places a strong emphasis on ‘impact’, also plays a role. The future projections articulate versions of the future in which quantum technology is mature and adopted, along with the anticipated pathways. In contrast to the use cases, future projection is aimed more towards fellow scientists. This orientation is reflected in the use of more technical terminology such as ‘quantum algorithm’, ‘quantum memory’, and ‘quantum repeater’. Theoretically, we argue that expectations illuminate the mechanism of medialisation that permeates scientific practice. Expectation, in the form of use case and future projection, articulates tangible future states, which enables engagement both from inside the scientific sphere and eventually from the public through the media.
In this sense, expectation serves as a connective element between the scientific sphere and the public sphere. It makes the scientific discoveries relatable to the public.

17:15
Same artwork, different values: how art appraisers navigate uncertainty and information asymmetries in the art market

ABSTRACT. Art market data is a key tool employed by art appraisers to assess the value of art for formal financial moments such as auction, insurance, inheritance, or donation. The art market is characterised by information asymmetry: auction sales data is collated and available through databases (such as ArtPrice or Artnet), while dealers' prices remain largely opaque. Art appraisers navigate this uncertainty by using art market data to simulate otherwise inaccessible or unknowable data points. As a result, multipliers have emerged as a norm to determine value for different purposes: auction value (1.0), inheritance (0.8–1.0), insurance (2.0–3.0), or donation (3.0–4.0). This phenomenon has arisen within professional practice, through a bottom-up adaptation to prevailing market conditions, to navigate a partially opaque market and the institutionalised value flexibility of art.

From an ANT-informed perspective, these multipliers operate as key mediators, translating partial market data into actionable and institutionally recognised assessments. Rather than simply reflecting market transactions, multipliers encode professional judgment, institutional norms, and market conventions, making explicit the socially constructed flexibility of art’s value. The emergence of multipliers illustrates that valuation is not a neutral reflection of economic reality but a performative practice that stabilises value under conditions of uncertainty.

This observation emerged from 14 months of field work apprenticing as an art appraiser and over 50 interviews with experts in the field of art appraisal. By centring the multiplier as both a practical and theoretical device, this research reveals how value is enacted through the coordinated work of tools, conventions, and institutional frameworks, highlighting the socially constructed and context-dependent nature of knowledge in valuing high-value unique goods, like art. The systematic use of multipliers is significant as it demonstrates how valuation practices structure market transactions as well as institutional expectations and standards of economic legitimacy.

17:45
Participatory and Spatial Approaches within Urban Infrastructuring

ABSTRACT. Infrastructuring refers to the continuous process of infrastructure development and emphasizes the dynamic relationship between the various aspects that shape infrastructures in specific contexts and over time. While there is considerable work on the theoretical conceptualization of infrastructures and infrastructuring in various fields, we lack a comprehensive overview of the processes and practices of co-developing urban infrastructures through collaborative knowledge production and geospatial approaches. In this presentation, we address the questions of what trends and directions can be identified in research on the sociotechnical processes and relational system of urban infrastructuring, and how methods of participation and knowledge building are conceptualized and realized in infrastructuring processes. We obtain our evidence from a scoping review, analysed qualitatively and quantitatively. We elicit what kinds of knowledge co-occur with different techniques of infrastructuring. Our analysis identified three points where urban infrastructuring needs more attention. First, there is limited longitudinal research on infrastructuring processes. Second, while there is innovative empirical work in the Majority World, there is scope for theory building that takes into account the local context as well as interdisciplinary approaches. Third, more work on prototyping can help create links between the conceptual and empirical as well as the processual characteristics of infrastructuring.

16:45-18:15 Session 15B: T7: Making Science Better?
16:45
Co-creating Research Assessment Reform

ABSTRACT. As research performing organizations across the globe are trying to reform the ways they reward the performance of researchers, what criteria should they use? Starting from the perspective that a strategy for reform of research assessment is more likely to be successful if it is supported by the community of researchers that will be assessed, we describe a community-based co-creation approach to define and measure research quality that respects the epistemic diversity of research. Communities of researchers working with different sources of data uphold different standards for what they regard as high quality research: they value different goals of research activities, attach different weights to various aspects of research, describe them in different terms, and set different norms for what counts as good practices. With this variation in mind, we will organize a series of deliberative conversations in seven communities of researchers in the Social Sciences and Humanities, each working with a different source of data: self-reports in surveys, personal interviews of individuals and (focus) groups, observations by researchers of behavior through equipment, participant observation by researchers, official registers, news and social media, and synthetic data. The goal of these meetings is to derive consensus-based lists of transparency indicators that data communities agree should be measured in evaluations of research quality. To broaden the basis of support for the transparency indicators, we include researchers in the conversations who are working with the same source of data in different disciplines. The conversations are part of the development of Research Transparency Check (Bekkers et al., 2025). Modeled after the development of a tool in biomedicine (Wilkinson et al., 2024), we take five steps: 1) From various surveys of researchers in the Social and Behavioral Sciences (Bekkers, 2025a, 2025b), we construct a long list of aspects of research that researchers may find important to make transparent; 2) We assess the feasibility and likely impact of measuring good practices with rule-based checks (i.e., without the use of Large Language Models); 3) We conduct a Delphi-study to identify which aspects are supported by consensus in the data community; 4) We select checks that will be included in software screening research reports; 5) We evaluate the accuracy of the tool and user experiences with the data communities.

References
Bekkers, R. (2025a). Data Practices in the Social and Behavioral Sciences in the Netherlands. https://osf.io/pbkun/
Bekkers, R. (2025b). Transparency Priorities. Survey: https://vuamsterdam.eu.qualtrics.com/jfe/form/SV_cCjVtArMsVIQzwW; Results: https://osf.io/z3tr9/files/gckfh
Bekkers, R., Lakens, D., DeBruine, L., Mesquida Caldenty, C. & Littel, M. (2025). Research Transparency Check. TDCC-SSH Challenge grant. Project: https://osf.io/z3tr9
Wilkinson, J., Heal, C., Antoniou, G.A., et al. (2024). Protocol for the development of a tool (INSPECT-SR) to identify problematic randomised controlled trials in systematic reviews of health interventions. BMJ Open. https://doi.org/10.1136/bmjopen-2024-084164

17:15
What makes a conference high quality?

ABSTRACT. What constitutes “high-quality” research and who decides? This proposal focuses on sites that have been little studied from this perspective: scientific conferences. With the exception of recent studies, particularly those of a historical nature, these venues have been surprisingly under-analyzed in the field of STS. While the emphasis has mainly been on their role in creating and maintaining scientific communities, the economic dimension underlying the organization of large-scale events (up to 20,000 participants in the American Chemical Society's Annual Meetings, for example) has been neglected. My proposal is to consider conferences as socio-technical agencements that constantly organize considerable collective work requiring individuals—who devote time and resources to their organization and participation—but also learned societies or companies, publishing infrastructures, etc., to ensure their continuity and development. This approach invites us to enter the worlds of science and, above all, the economics of scientific publishing, based on the organizational work, the everyday practices of professionals within these worlds, the infrastructural and technological components—in short, the processes and engineering through which this publishing machinery operates. Callon and colleagues give a procedural and relational dimension to the manufacture and qualification of goods and invite us to pay particular attention to the forms of coordination that arise between several groups of actors. The empirical contribution is based on participant observations of several conferences in materials sciences and nanosciences that are considered both "legitimate" and "illegitimate". In my talk, I will describe their physical settings, the elements and devices that contribute to the framing of transactions (scripts, business models, etc.) 
and the agencements that enable people to engage in global trading of communications or articles, but also exclude others by exploiting vulnerabilities.

17:45
FAIR Data Practices for Qualitative Research in Transdisciplinarity

ABSTRACT. In recent years, open research data (ORD) practices have gained traction, as evidenced by funding calls and journals establishing them as a requirement. ORD intends to 'make science better' by increasing transparency and reproducibility, and by making datasets reusable and citable. However, ORD practices are often based on quantitative data cultures, and are rarely applied to qualitative data. Given their growing importance, it is important to rethink ORD practices for qualitative data, taking into account the practical, epistemological, and ethical challenges of sharing such data. This need comes to the fore when conducting transdisciplinary (Td) research, where new forms of engagement between science and society co-produce problem framings and project outputs. Sharing interview or workshop data from Td projects could allow for improved learning between Td processes and increase engagement between science and society. In the FAIRqual project we address this issue by asking the questions: What options are there to share this data according to FAIR principles? How can the ethical issues of research participant protection be balanced against the benefits of sharing qualitative data? Who processes and stores data from Td research, and for whom? During a workshop with Td researchers, we identified key considerations and questions relating to open qualitative data in Td research. In a second step, we explored these in expert interviews with experienced Td researchers and open science experts. Based on our insights, our aim is to provide guidance on opening up qualitative data and encourage reflection on current data management in Td research. By doing this, we do not view ORD as a measure of 'better science', but rather as a tool to start conversations about data documentation and data governance in Td projects. This has the potential to strengthen both research integrity and the partnership between science and society.

16:45-18:15 Session 15C: T15: Health and Care Shifts in Times of AI
16:45
Tracing imaginaries of healthcare AI and robotics through public policy, engineering research, and nursing practice

ABSTRACT. AI is increasingly promoted as a solution to the challenges in health and care – from workforce shortages to demographic change, and demands for efficiency and personalization. This paper draws on a multi-sited, multi-method study to critically examine how such promises are imagined, negotiated, and contested across different social arenas, and how they contribute to ongoing shifts in health and care in times of AI. It presents a comparative analysis of imaginaries of AI-enabled healthcare robotics in Germany, spanning public policy, engineering research, and healthcare practice. Empirically, the paper draws on qualitative analyses of German policy documents, ethnographic research in a healthcare robotics initiative, and interviews and focus groups with nursing professionals. The analysis shows that policy and engineering actors often imagine robots as assistive technologies that promise relief and “good work” for nurses, yet these imaginaries often obscure the wide-ranging changes implied in the use of these technologies and overlook the situated realities of nursing care work. In contrast, nursing professionals engage with these imaginaries critically. While open to the potential of AI and robotics, they emphasize that such technologies must be embedded in broader structural changes to healthcare systems. They use these AI-related discussions to voice long-standing concerns around workload, professional autonomy, and the relational nature of care. By tracing tensions between promissory AI narratives, design practices, and lived healthcare realities, the paper contributes to STS debates on the politics of expertise and future-making in healthcare. It highlights how imaginaries shape not only innovation and governance but also redistribute burdens and expectations within healthcare work.
Methodologically, it demonstrates the value of including marginalized perspectives – like those of nurses – in studies of AI and robotics, and argues for more context-sensitive, care-oriented approaches to the imagining and governing of healthcare technologies.

17:07
Cultivating narrative literacy: how fictional stories structure sense-making of AI for health and care

ABSTRACT. Artificial Intelligence (AI) for personalized dietary and lifestyle advice promises to promote better consumption habits, disease prevention, and autonomy in making healthcare decisions. At the same time, AI for health and care could raise concerns about privacy, unequal access, changes in doctor-patient relationships, and increasing individual responsibility for health and disease.

While a growing body of research on public and stakeholder perceptions of AI in healthcare focuses on individual attitudes, beliefs, and concerns, we seek to understand how stakeholders make sense of emerging technology by drawing on culturally embedded fictional stories. We ask: How do stakeholders imagine the future of AI for personalized dietary and lifestyle advice? Which narratives do stakeholders invoke to make sense of this emerging technology?

We supervised a six-week interdisciplinary student project and conducted more than 20 semi-structured interviews to capture the narrative sense-making practices of stakeholders who had an interest in or could become affected by AI for personalized dietary and lifestyle advice with a particular focus on human hydration. Through a narrative analysis we found that fictional stories from a variety of sources, ranging from classic poems to science-fiction movies, structured stakeholders’ imagination. Moreover, we noticed that stakeholders creatively adapted these stories to align them with their lived experience.

We discuss the implications of these findings for the responsible development of AI. We emphasize the relevance of narrative literacy – the capacity to understand how narratives are composed, the tacit social knowledge and cultural norms they (re)produce, and the effects they generate in the world – for hermeneutic technology assessment in transdisciplinary projects.

17:29
“Digitaal als het kan”: Framing Value and Necessity of Digital Innovations in Dutch Mental Healthcare

ABSTRACT. Mental healthcare systems across Europe struggle with growing demand and limited resources. Digital innovations like eHealth platforms and devices driven by Artificial Intelligence (AI) are being prioritised as effective solutions for aligning capacity with demand. In the Dutch context, scalability of digital innovations for mental healthcare is being promoted as urgent and necessary for achieving accessible and affordable care. However, recent scholarship raises concerns that the present ‘scalability zeitgeist’ might lead us to sidestep complex systemic challenges and questions the premise that scaling is inherently good (Hanna & Park, 2020; Pfotenhauer et al., 2022; Tsing, 2012). Prioritising scale may, for instance, lead companies to design products with only one type of user in mind, with harmful consequences for marginalised groups. There are thus growing efforts to scale innovations in healthcare responsibly, for example by balancing standardization and customization in AI innovation (Lukkien et al., 2025). In a broader pursuit of responsible scaling, this paper investigates how the added value and necessity of digital innovations have been framed by mental health professionals, technology developers, and policy and regulatory actors in the Dutch mental healthcare context from 2005 to 2025. I analyse three types of sources: a) articles in major Dutch professional mental health journals; b) news and opinion pieces in national newspapers; and c) policy and monitoring reports. Through a framing analysis, I identify implicit and explicit assumptions, promises, and paradigms that underpin digitalisation and scaling ambitions in mental healthcare, as well as what becomes less visible or omitted when these innovations are framed as solutions to mental healthcare challenges.
This analysis contributes to understanding what scaling framings have been prioritised over time, which potential alternative scaling framings may have been less visible, and how underlying paradigms of care and policy priorities encourage or discourage scaling.

References:
Hanna, A., & Park, T. M. (2020). Against Scale: Provocations and Resistances to Scale Thinking. arXiv. https://doi.org/10.48550/ARXIV.2010.08850
Lukkien, D. R. M., Nap, H. H., Peine, A., Minkman, M. M. N., Moors, E. H. M., & Boon, W. P. C. (2025). Responsible scaling of artificial intelligence in healthcare: Standardization meets customization. Ethics and Information Technology, 27(3), 34. https://doi.org/10.1007/s10676-025-09842-5
Pfotenhauer, S., Laurent, B., Papageorgiou, K., & Stilgoe, J. (2022). The politics of scaling. Social Studies of Science, 52(1), 3–34. https://doi.org/10.1177/03063127211048945
Tsing, A. L. (2012). On Nonscalability. Common Knowledge, 18(3), 505–524. https://doi.org/10.1215/0961754X-1630424

17:51
Scripts-in-the-making: the continuous rescripting of AI-assisted digital self-monitoring during the technology development process

ABSTRACT. This paper presentation focuses on a research initiative aiming to create a Digital Twin platform where citizens, using AI models, can compare their health data to big data references. The ambition of the technology promotors is to improve early detection and prevention of cardiovascular disease. As various STS scholars have argued, technologies-in-the-making are not just material objects. Instead, they embody various visions, values and promises concerning, in our case, ‘good health’, ‘patient empowerment’ and ‘individual responsibility’ that ultimately also shape decision-making processes regarding, for example, investments and who can or should use the new technologies and why (Rip & Talma, 1998; Aykut et al., 2019).

With the help of participatory observations of consortium meetings and formal and informal conversations with technology developers, we studied, over 2.5 years, the various ‘scripts-in-the-making’ of the Digital Twin platform. With this, we attempted to increase understanding of how broader contexts, such as the presence of clinical guidelines, business strategies, reimbursement policies, as well as academic research cultures, encourage certain forms of use of digital twins and discourage or constrain others, with a particular focus on which forms of patient empowerment are (not) stimulated and why.

Preliminary findings reveal emerging “scripts-in-the-making” that position patients not only as users but also as active data collectors and managers, reflecting a drive to improve algorithmic accuracy and create big data research opportunities for scientists. Yet, these ambitions are tempered by concerns over accessibility, privacy compliance, data governance and medical guidelines and certifications. This presentation will dive into how these concerns shape the various scripts-in-the-making, including the imagined end-users.

Rip, A., & Talma, A. (1998). Antagonistic Patterns and New Technologies. In Getting New Technologies Together (pp. 299–323). https://doi.org/10.1515/9783110810721.299

Aykut, S., Demortain, D., & Benbouzid, B. (2019). The Politics of Anticipatory Expertise: Plurality and contestation of Futures Knowledge in Governance — Introduction to the special issue. Science & Technology Studies, 32(4), 2–12. https://doi.org/10.23987/sts.87369

16:45-18:15 Session 15D: T3: Ethics and Technology in Practice
16:45
Ethics in and through InSilico Health

ABSTRACT. The rise of InSilico Health (ISH) technologies, particularly Virtual Human Twins (VHT), signals a potential shift in biomedical research and healthcare innovation (Marques et al., 2024). By simulating biological processes, these technologies aim to advance precision medicine and improve clinical trials, although their effectiveness remains uncertain to date (Leo et al., 2022; Pappalardo et al., 2019; Rousseau et al., 2024; Samei, 2025). Alongside these developments, ethical frameworks such as Ethical, Legal, and Social Implications (ELSI), Responsible Research and Innovation (RRI), and Technology Assessment (TA) seek to guide their responsible development and use, yet their integration into daily innovation practices and their influence on decision-making and technology development remain uncertain (Elhadj et al., 2024; de Jong, 2025).

This presentation draws on early insights from an ongoing PhD project that uses an empirical ethics approach to investigate how ethics is framed, negotiated, and operationalized in and through interdisciplinary human and non-human actors involved in the R&D of ISH. Rather than treating ethics as an external or merely formal requirement, this study approaches ethics as an active and dynamic component of research practice, one that is negotiated, contextually shaped, and influenced by broader social, institutional and power dynamics, while also shaping decision-making and the development of ISH technologies.

The presentation introduces the project's key research questions and conceptual considerations and invites discussion on the role of ethics, the distribution of ethical responsibilities, and on the practices through which ethical reflection becomes integrated among interdisciplinary actors.

17:07
Values in Diagnostic Cancer Technologies: Co-constitution, Multiplicity, and Ethical Reflection in PREDI-Lynch

ABSTRACT. In the development and deployment of diagnostic cancer tests, values assigned to aspects such as quality of life or diagnostic utility are typically treated as predefined and stable entities (Ferrante di Ruffano et al., 2023; Minhinnick et al., 2025). However, insights from valuation studies show that values are not inherent to health innovation technologies (Boenink & Kudina, 2020; Hoven & Manders-Huits, 2020). Rather, they are co-constituted, multiple, and dynamic (Hees et al., 2023). I operationalize this conceptualization to answer the following research question: How are values assigned, negotiated, and contested by stakeholders involved in PREDI-Lynch in the development of diagnostic precision technologies for colorectal, endometrial, and urothelial cancers?

Based on a thematic analysis of data collected through in-depth interviews with PREDI-Lynch stakeholders, I discuss preliminary insights into the nature of values in diagnostic cancer technologies. These insights highlight the multiplicity of values and reveal tensions related to the values attributed to different diagnostic tests, cancer indications, and stakeholders. Through this, I show how these tensions and trade-offs remain subtle and gradually surface as stakeholders continue to engage with the technology and with one another.

By demonstrating this, I argue that emphasizing the co-constitution and multiplicity of values and technology helps make value tensions visible early on in technological health innovation projects. Furthermore, I argue that linking these insights to subsequent co-creation activities with participating stakeholders may stimulate practical ethical reflection, which will be the next planned step of this study.

With this presentation, I aim to collectively reflect on the consequences of these practical reflections and the conditions under which their impact on subsequent developments and decision-making within technology-driven innovation projects like PREDI-Lynch becomes noticeable.

17:29
A Practical Dialogue Approach to Emotional-Moral Reflection on Risks and Technologies

ABSTRACT. A global transition is underway from animal testing to animal-free risk assessment strategies. Although public concerns about animal welfare were key in initiating this transition, current debates are dominated by a science-policy focus on technological barriers, while public engagement in these discussions is limited (NCad, 2024), and questioning the need for testing – human use of risky chemicals – is omitted. The public’s perceptions of risk are often construed by experts as irrational and emotional (Roeser, 2010), further marginalizing citizen voices. Nevertheless, anger over involuntary exposure to risky chemicals – whether in humans or animals – reflects a felt value judgment that is rational and deserves serious consideration. To address the lack of public engagement and the neglected emotional dimension in these discussions, we have developed a dialogue session format inspired by Sabine Roeser’s work. Roeser (2010) posited a public deliberation approach that takes emotions as a starting point for debates about acceptable risks, and as a source of critical reflection. Moving beyond the framing of an ‘emotional’ public with irrational concerns, she argues that emotions can highlight legitimate evaluative aspects of technological risks, such as justice, fairness and autonomy. Building on Roeser’s theory, we have built a dialogue tool that puts collective emotional-moral reflection into practice, and with this submission we hope to beta-test our prototype in this conference track. Our approach makes emotional-moral reflection more accessible and helps to foreground ethical reflection rather than focusing only on descriptive facts. Values underlying risks and technology become more tangible and explicit rather than abstract cognitive concepts. In other words, the tool does justice to the way we encounter our own values in day-to-day life and enables conversation from there.
Morality has a social dimension that requires deliberation with others as well as individual reflection, and this dialogue format can be useful to practice emotional-moral reflection with laypersons and experts, hence putting ethical reflection into practice.

References
NCad. (2024). Evaluatie van het NCad advies “Transitie naar proefdiervrij onderzoek.”
Roeser, S. (2010). Risk, Technology, and Moral Emotions. Moral Emotions and Intuitions, 1–207. https://doi.org/10.1057/9780230302457/COVER

17:51
When the Heart becomes Technology: A Practice-Oriented Ethical Exploration Using Phenomenology and Public Intuition
PRESENTER: Heleen Groote

ABSTRACT. Emerging biomedical technologies raise ethical questions that extend beyond technical performance. The development of a Total Artificial Heart (TAH), proposed as a response to the growing demand for donor hearts, is one such example. I argue that while expert discussions predominantly focus on safety, functionality, and biocompatibility, ethical reflection on lived experience, identity, and embodiment often remains underrepresented in innovation practice. Exploring these themes is of great importance, as lived experience has gained more recognition in recent years than ever before.

Using the Holland Hybrid Heart project as a case study, this contribution combines philosophical analysis with public engagement. The theoretical framework is inspired by Jean-Luc Nancy’s phenomenological reflection L’Intrus, which describes the experience of living with a life-preserving yet foreign heart. Although Nancy reflects on a donor heart rather than an artificial one, his notions of dependency and vulnerability offer a practice-oriented, rather than purely biomedical-theoretical, lens for understanding public intuitions surrounding a TAH.

To collect public intuitions about a TAH, I organise focus group discussions with members of the general public. These discussions are expected to reveal ethically relevant concerns regarding identity, bodily integrity and changing attitudes towards organ donation – subjects that are often marginalized in expert discussions. Rather than treating these responses as irrational or emotional barriers, the study aims to reframe them as ethically informative signals that can guide design choices, communication strategies, and implementation pathways.

By translating phenomenological insights and public engagement into a structured reflection, this paper contributes to the practical turn in ethics and technology studies. I expect to gain insight into how ethical reflection can be embedded as a normal and actionable component of biomedical innovation.

17:00-18:30 Session 16: T11: Rural-Urban Knowledge Infrastructures for Transformation (Walkshop 2)

Tetem (Stroinksbleekweg 16, 7523 ZL Enschede)

17:00
Artistic Fringe - Walkshop 2

ABSTRACT. From housing and energy cooperatives to community gardens, artistic practices and action groups, it seems that alternative rural-urban futures are already in the making. Such initiatives make space for and embody alternative values and relations in the here-and-now, while creating imaginaries of what such futures could look like. But what kinds of infrastructures allow for embedding local initiatives in processes of wider societal transformation?

We propose to use the STS-NL conference itself as a knowledge infrastructure to further explore and bring together alternative spaces, practices and imaginaries at the University of Twente campus and in the cultural ecosystem of Enschede. Through two workshops, using artistic and walking methodologies, we ground our academic work in past and present practices, spaces and imaginaries of commoning in the local landscape.

Walkshop 2 explores arts-based methods for rurban transformation with artists and makers and seeks to strengthen connections between cultural organisations in Enschede (such as Tetem, Concordia, Rijksmuseum Twenthe) and the University of Twente. It builds on the Planting Seeds project, in which we rethink and prototype libraries for biodiversity justice, in collaboration with Creative Coding Utrecht (CCU) and Tetem. We invite CCU to use a data-walking methodology and associated tools to explore and attune to human-nature relations as experienced around Tetem and the city centre of Enschede. This approach builds on ongoing collaborations with the RUrban Futures Collective.

The RUrban Futures Collective is a transdisciplinary collective of researchers at the University of Twente who explore the conditions for just, biodiverse and democratic futures by addressing the interactions between rural-urban spaces, practices and imaginaries. Using collaborative, engaged and arts-based methods, we seek to democratise conversations about biodiversity, climate, and futures.