Digital trust in a high-risk environment: Navigating digital trust in the Russian anti-war movement abroad
ABSTRACT. In response to Russia's full-scale invasion of Ukraine in 2022, Russian nationals residing abroad mobilised to voice their opposition to the war, while some Russians decided to leave the country to be able to protest more freely. The Map of Peace, an independent project monitoring anti-war groups in and outside Russia, has identified 216 groups worldwide, with 16 in Russia. However, since not all groups or activists are willing to share their data publicly, this number is only a preliminary estimate.
In the digital age, individuals leave personal digital traces that can be accessed and analysed by both private companies and governments. This also applies to protest movements, where digital tools play a key role in organising and mobilising (Rossi & Artieri, 2014). Trust is a crucial factor in activism, as it enhances individuals’ perceptions that participation is both safe and meaningful (Benson & Rochon, 2004). Trust in the digital sphere, where an increasing share of communication and coordination takes place, is therefore one of the crucial factors in the success of protest movements.
In authoritarian contexts, digital tools serve not only as means for communication and coordination but also as instruments through which opposition groups can effectively undermine state information controls. These tools facilitate the exposure of human rights violations and challenge the constructed image that regimes seek to project to both domestic and international audiences (Keck & Sikkink, 2014). In light of the increasing significance of online media and social networks for oppositional and dissent activities, authoritarian regimes have incorporated digital surveillance tools and techniques into their surveillance assemblages.
Research indicates that authoritarian regimes increasingly utilise digital technologies to enhance their capacity to identify, monitor, and suppress dissent and protest activities with greater speed, efficiency, and cost-effectiveness (Liu, 2023; Lynch, 2011). By leveraging digital tools, such regimes are able to implement a range of tactics, including the surveillance of local activists and political dissenters, the identification of individuals participating in protests, and the manipulation of information dissemination (MacKinnon, 2011). The perceived threat of an omnipresent security agent, who collects intimate details about one’s life, fosters an environment of self-censorship and induces conformity among the population (Richards, 2013). Moreover, this environment can produce a chilling effect on opposition movements, even in the absence of physical violence, thereby compelling conformity (Göbel, 2013; Xu, 2021). So, both digital and traditional forms of surveillance can deter mobilisation in authoritarian contexts by fostering fear of repression (Dimitrov & Sassoon, 2014), thus weakening the movement’s ability to function (Earl et al., 2022).
Authoritarian regimes not only target domestic opposition but also possess the capacity to extend their surveillance and repressive mechanisms beyond their national borders (Moss, 2016). This transnational repression is facilitated through the strategic use of an authoritarian surveillance assemblage, enhanced by digital surveillance technologies (Michaelsen, 2016). In their toolkit of transnational repression tactics, these regimes have increasingly adopted digital tools to identify and monitor dissent networks, track their activities, hack into and deface social media accounts and websites, engage in phishing to acquire confidential information, and issue both private and public threats, among other repressive tactics (Michaelsen, 2016; Moss, 2016). Research indicates that, while digital platforms can empower immigrant communities from authoritarian states to mobilise and articulate dissent, they also provide authoritarian regimes with the means to surveil, infiltrate, and punish activists outside the state’s borders (Dalmasso et al., 2017). Consequently, authoritarian states employ a range of digital tools, alongside legal-administrative mechanisms and the use of family members as proxies, to exert extraterritorial control. These practices not only diminish trust among activists but also induce a chilling effect on dissent and activism (Dalmasso et al., 2017).
The perception of being (potentially) under surveillance can have a complex impact on oppositional activists, including those expressing their voice from outside an authoritarian regime. This perception may contribute to a deterioration of trust, subsequently diminishing their visibility and the intensity of their activism. As a result, these activists are often forced to balance privacy and visibility. This paper aims to investigate the formation of trust in the digital sphere among Russian anti-war activists. The study is based on qualitative data obtained through semi-structured interviews with 64 Russian anti-war activists residing outside of Russia (fieldwork February–March 2024).
Since the onset of the anti-war movement abroad, activists have relied on digital tools to communicate, both locally and internationally. While digital platforms are typically perceived as crucial for communication, coordination, and increasing visibility, many Russian expatriates viewed their use as inherently high-risk. Members of anti-regime communities believed that using digital platforms could pose risks not only to themselves but also to their family members, friends, and colleagues living in Russia. According to the interviewees, speaking out online can have various consequences for themselves, including the opening of a criminal case (for defaming the Russian Army, extremism, etc.), tracking and scapegoating, permanent exile, and denial of access to consular services. For their family members, they mentioned problems at work (especially if those relatives work in the public sector), searches, and police harassment.
In a climate of heightened threats, activists must navigate complex strategic decisions regarding the disclosure of their identities and the degree of trust they extend to other members of their communities. Some interviewees intentionally disclosed personal information, such as their real names and photographs, to establish credibility and reinforce trust within activist networks. Conversely, the majority opted for pseudonyms and refrained from sharing personal images, citing safety concerns for both themselves and their families in Russia among the main reasons for doing so. The former group primarily consists of individuals who are more deeply involved in activism, including community organisers and those with activist experience prior to emigration, who publicly articulate their views or who possess stable legal status in their host country. In contrast, the latter group, mainly consisting of occasional participants, those who migrated after the full-scale invasion, and individuals lacking stable legal status, predominantly adheres to a more cautious approach to activism in the digital sphere.
The majority of the interviewees developed their precautionary measures based on common sense and their understanding of the digital repression techniques employed by the Russian state, rather than relying solely on digital security guidelines or training. While the interviewees acknowledged the usefulness of such resources, most had implemented their security measures before engaging with instructional materials or participating in training sessions. This indicates a proactive approach to digital security, reflecting an awareness of the risks associated with their activities.
At the community level, the majority of the anti-war activist groups implemented verification mechanisms to screen new members, thereby mitigating risks and fostering trust among participants. These mechanisms included practices such as online doxing and in-person vetting during protests or meetings. Within inter-community cooperation, the establishment of trust relied primarily on vouching by long-standing members, which served as a key method for admitting new members to chats. Additionally, activists exhibited differing levels of trust towards various digital platforms and communication tools, owing to considerations about the protection of personal data, contacts, and communications. As a result, applications such as Signal, encrypted email services, and secret chats were regarded as relatively secure, particularly when discussing forthcoming plans that had not yet been publicly disclosed or when communicating with activists within Russia. Conversely, platforms like WhatsApp and Russian-owned services, such as VKontakte and Yandex, were largely avoided due to concerns over privacy and security. In this regard, Telegram stood out: on the one hand, it was one of the most used apps for communication among activists, mainly perceived as convenient and flexible; on the other, it was widely perceived as possibly infiltrated by Russian security agents.
This paper thus demonstrates that the proliferation of digital communication technologies has not only expanded the repressive capabilities of the state but has also significantly enhanced the toolkit available to anti-regime activists. While digitally enabled repression tends to have a limited deterrent effect on public advocacy by individuals residing outside of Russia, it can still erode trust and compel dissenting actors to exercise caution in their activities. Furthermore, the research findings indicate that individuals with stable legal status and established social networks in their host countries experience comparatively lower levels of anxiety about their activities in the digital sphere. In contrast, those in precarious legal situations exhibit heightened concern for their own safety as well as for that of their relatives and friends remaining in Russia.
Experimentally Identifying Motivated Belief Updating on Politicized Topics in Germany: The Role of Critical Thinking
ABSTRACT. False information has become pervasive in digital communication, shaping public opinion and political discourse worldwide. Disinformation campaigns highlight the challenge of distinguishing truth from falsehood in our digital age, emphasizing the need for improved digital literacy and critical evaluation skills. Research indicates that individuals with higher critical thinking skills are better at differentiating fake from real news, while motivated reasoning may lead individuals to attribute greater credibility to false information that aligns with their preexisting beliefs. To investigate how motivated reasoning and critical thinking interact to shape individuals’ assessments of political information and subsequent belief updating, we conducted an online experiment with 933 participants. Participants first completed measures of critical thinking (including the Cognitive Reflection Test, Critical Thinking Disposition Scale, and Need for Cognition Scale) and provided their political preferences. Utilizing an adapted version of Thaler’s (2024) motivated reasoning paradigm, participants responded to numerical questions on current German political topics. After providing their median belief, they received either true or false information that was consistent or inconsistent with their political preferences. Participants then assessed the information's veracity and had the opportunity to update their initial belief.
Preliminary findings indicate that although participants do not assign higher veracity assessments to preference-consistent information, they subsequently adjust their beliefs in the direction of their political preferences. Higher CRT scores are associated with more accurate assessments of true and false information; however, no significant link was found between critical thinking measures and biased belief updating. This suggests that credibility assessment and belief updating may operate as distinct processes. These insights are crucial for developing strategies to combat false information online, as they help explain why individuals may accurately assess the veracity of information yet still behave in ways that reflect their preexisting political biases.
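To make the analytic logic more concrete, below is a minimal sketch, not the authors' actual analysis, of how directional belief updating and its relation to CRT scores could be estimated from trial-level data; the file name and column names are illustrative assumptions.

```python
# Minimal sketch (hypothetical data layout, not the authors' code): quantifying
# belief updating in the direction of one's political preference.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per participant x question, with prior and
# posterior beliefs, a +1/-1 preference direction, and treatment indicators.
df = pd.read_csv("responses.csv")

# Signed update: positive when the revised belief moves toward the answer
# favoured by the participant's political preference.
df["update_toward_preference"] = (
    df["belief_post"] - df["belief_prior"]
) * df["preference_direction"]

# Is updating toward one's preference larger after preference-consistent
# messages, and is any such bias moderated by CRT scores?
model = smf.ols(
    "update_toward_preference ~ consistent * crt_score + info_true", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
print(model.summary())
```

Under this sketch, a positive coefficient on the consistency term with no significant interaction with CRT scores would mirror the pattern reported above: biased updating that is not attenuated by critical thinking.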
Becoming Platform: Trust, Masculinity, and Infrastructure
ABSTRACT. [TRACK E]
Nicola Bozzi, University of Greenwich
My contribution is a theoretical exploration of trust as a rhetorical tool in the cultural and political shift towards platformisation (Poell, Nieborg & van Dijck, 2019), focusing in particular on the evolving relationship between digital platforms, framed as infrastructures for the performance of the self (Bozzi, 2024), and the rise of powerful, hypermasculine figures like Donald Trump, Elon Musk, and Joe Rogan, who notably leverage the concept of (dis)trust towards “mainstream” and “legacy” media in favour of sprawling alt-tech platforms like Truth Social, Rumble, or even Musk’s own X (formerly known as Twitter).
Context:
Given the recent election of Donald Trump as 47th President of the USA, and the seismic impact this event has had at many levels, the role of “trust” in institutions, the opinions of experts (e.g. journalists, health professionals, etc.), and media sources once considered authoritative has become more central than ever. More specifically, from a media and cultural studies perspective, there is an urgent need to analyse and contextualise Trump’s disruptive communicative style, as well as the communicative infrastructures and platforms that facilitated it. In this respect, the roles of tech mogul and X owner Elon Musk and the influential podcaster and comedian Joe Rogan are especially relevant. Both figures in fact have controversial relationships with “trusted” media sources, and have been carving out an area of influence based to a significant extent on their antagonism towards them. While they have been discrediting mainstream and legacy media as “fake news” and framing their own controversies as witch hunts, these figures have also more or less directly contributed to the cultural momentum of alt-tech platforms (Truth Social, Rumble) or the negotiation of platforming boundaries within more mainstream platforms (e.g. Rogan with Spotify, Musk with X and, indirectly, Meta’s new moderation policies).
Significantly, these high-profile figures, brought closer by their accentuated friction with both state institutions and established media during the Covid pandemic, have been able to leverage ideals espoused by early free software movements and later social media platforms, like transparency and “free speech”, to connote their vested political and economic interests as inherently universal and non-political, juxtaposing them to the perceived totalitarianism of content moderation, DEI initiatives, and generally (as Musk called it) the “woke mind virus”. With this premise, each of them has been able to consolidate their influence in part through a range of platforming practices – “platforming” particular individuals and thus widening the Overton window, negotiating cross-platform boundaries, and framing infrastructural discourse (most notably through direct use of social media platforms and podcasting). Given the historical link between extremist movements and the infrastructural needs satisfied by alt-tech platforms (Donovan, Lewis & Friedberg, 2019, 50), the prominent status of these men makes this topic especially urgent.
Methods / Case study:
Using the popular Joe Rogan Experience podcast as the main case study, I argue that this format has facilitated broader cultural shifts that call for urgent critical inquiry from media and communications scholars. Firstly, the personification of complex social and cultural issues like “free speech” into battles between individuals and the establishment (e.g. Elon Musk vs liberal Twitter (Ferrari Braun, 2023); Donald Trump vs the US political establishment). Secondly, the rise of “platforming” as both a discursive metaphor and a business model, offered as the only solution for a masculinity in crisis against “woke” and “cancel culture” (Ng, 2022).
The presentation will thus combine a media and cultural studies approach with a broader-picture sociological reading that draws from platform studies and Internet studies, alongside a speculative theoretical approach. More specifically, I will analyse the convergence of figures like Rogan, Trump, and Musk around the socio-technical figure of the “platform” by discussing relevant material like JRE-themed reaction videos, memes, and AI-generated digital art, organised in a timeline and discussed in the context of relevant socio-political events. The goal of the presentation is to demonstrate how their cultural positioning of a masculinity under threat, the protection of US tech-exceptionalism, and Silicon Valley business models increasingly rest on the personification of platform-capitalist values and on fostering distrust towards state institutions, legacy media, and accountability towards diversity.
Research questions:
How does the performance of masculinity enacted by these figures participate in the shaping of trust in institutions and media?
How instrumental has platformisation been to the political power shift that has shaken the US, and how is the increasingly palpable politicisation of tech being framed for the public?
Is it possible to remain critical of platformisation while participating in, and striving to become, platforms?
Theoretical contribution:
As a result, I propose the concept of “becoming platform” as a key conceptual scaffolding for reading the current cultural momentum of these figures. Positioning the “becoming platform” of Rogan, Musk, or Trump (each of whom has come to embody a platform of sorts – respectively: JRE, X, Truth Social) in the context of the dangerous emergence of a “platformed personality capitalism” founded on “personality as infrastructure” (Rosamond, 2023), I discuss the identity politics of these powerful men and the way they function as discursive catalysts of platformisation as an urgent cultural and political issue.
Fueled by trickle-down identification with these charismatic, disruptive leaders, the “becoming platform” mantra is however paradoxical, provocative, and unattainable. In other words, it is a Ponzi scheme involving multiple tiers tumbling down to the ordinary, passive social media users. Actual billionaire platform owners like Elon Musk and Mark Zuckerberg sit on top, establishing the more material infrastructural support at a political-institutional and technological level; figures like Joe Rogan, Jordan Peterson, and even Andrew Tate (with their “informational” or “educational” platforms) negotiate cultural boundaries within and beyond the grid of mainstream and “alternative” media, challenging trust in traditional experts and intellectuals; building on the positioning of masculinity as an endangered pillar of society secured by the aforementioned layer, smaller masculinity gurus like Wes Watson turn street credibility into life-coaching credibility through what Joshua Citarella calls “proof of body” (2022), a form of influence based on body image as evidence of success that is currently challenging common beliefs about nutrition online. At the bottom, crowds of smaller-scale clones aspire to ascend the ranks and “become platform” themselves, mostly remaining stuck at influencer level or, worse, as passive users waiting to be bored, radicalised, and/or impoverished by another crypto scam rug-pull. Unlike the “accidental megastructure” described in Benjamin Bratton’s influential book (2015), the “stack” of platformed masculinity is short, tipped in favour of “old-fashioned” neoliberal capitalism, and often driven by reactionary identity politics.
Keywords: platformisation, Joe Rogan, Elon Musk, Donald Trump, social media, masculinity
Bibliography
Bozzi, Nicola. 2024. “Meta’s Artistic Turn: AR Face Filters, Platform Art, and the Actually Existing Metaverse.” Information, Communication & Society, November, 1–20.
Bratton, Benjamin H. 2015. The Stack: On Software and Sovereignty. Cambridge, MA: MIT Press.
Citarella, Joshua. 2022. “Raw Eggs, Pink Pills, and Embodied Identity: Online Communities Create Their Own Proof in a Vacuum of Truth.” Document Journal.
Donovan, Joan, Becca Lewis, and Brian Friedberg. 2019. “Parallel Ports: Sociotechnical Change from the Alt-Right to Alt-Tech.” In Post-Digital Cultures of the Far Right: Online Actions and Offline Consequences in Europe and the US, edited by Maik Fielitz and Nick Thurston, 49–66. Bielefeld: transcript Verlag. https://doi.org/10.1515/9783839446706-004
Ferrari Braun, Agustin. 2023. “The Elon Musk Experience: Celebrity Management in Financialised Capitalism.” Celebrity Studies 14 (4): 602–19.
Ng, Eve. 2022. Cancel Culture: A Critical Analysis. London: Palgrave Macmillan.
Poell, Thomas, David Nieborg, and José van Dijck. 2019. “Platformisation.” Internet Policy Review 8 (4). https://doi.org/10.14763/2019.4.1425
Rosamond, Emily. 2023. “YouTube Personalities as Infrastructure: Assets, Attention Choreographies and Cohortification Processes.” Distinktion: Journal of Social Theory 24 (2).
ABSTRACT. The COVID-19 pandemic served as a "sociological litmus test" that forced individuals and communities to articulate their positions toward scientific authorities and health institutions. This study examines the yoga community as a microcosm that illustrates broader societal tensions surrounding trust in expertise.
Pre-pandemic, yoga maintained an ambiguous coexistence with mainstream science—characterized by vague reciprocal acknowledgment, a blurred separation of powers, and mutual yet largely unspoken suspicion. This ambiguity allowed practitioners to engage with alternative frameworks without directly challenging medical institutions. The pandemic, however, made this ambivalence untenable, triggering what we term the "Great Yoga Split."
On social media platforms, yoga communities polarized as many influencers embraced conspiratorial narratives while others aligned with scientific authorities. What had previously been a comfortable continuum divided into opposing camps we characterize as "conspiritualist yoga" and "scientistic yoga." This binary choice between defending dominant institutions or accepting radical skepticism mirrored tensions experienced across society—from civil society organizations to professional associations, from trade unions to leisure communities, and even within families and friend circles.
Through digital ethnography focused on Instagram, we examine how this split evolved post-pandemic, identifying four distinct trajectories that reflect broader patterns of trust reconfiguration in digital environments. While focusing on a specific community, our findings reveal common challenges in negotiating expertise in times of crisis and the digital mediation of trust in contemporary society.
CO-CONSUMING DISTRUST: ANALYSING AN ALTERNATIVE GENRE OF TECHNOLOGY CRITICISM THROUGH AMAZON’S BOOK RECOMMENDATION NETWORKS
ABSTRACT. This paper presents original research into an underexplored genre of technology criticism that combines elements of conspiratorial thinking, anti-globalist narratives, and critiques of technological overreach. To surface key texts in this genre, the paper employs a novel digital bibliometric method, which repurposes book recommendations from Amazon.com's book marketplace as a research tool to analyse reactionary “narratives of distrust” surrounding “data colonialism.”
This research reveals how this genre of literature reframes the critique of data colonialism through a distinctly reactionary lens, positioning Big Tech as a central player in a broader globalist agenda. Books such as Dark Aeon: Transhumanism and the War Against Humanity (Allen, 2023), Indoctrinated Brain: How to Successfully Fend Off the Global Attack on Your Mental Freedom (Johnson, 2023), PsyWar: Enforcing the New World Order (Smith, 2022), and The Great Reset: And the War for the World (Jones, 2022) share a profound skepticism toward technological power and its capacity to redefine human existence in the era of artificial intelligence. Published by authors and publishers with ties to the MAGA movement, these texts offer a counterintuitive perspective on the relationship between MAGA ideology and Big Tech, contrasting the movement’s alignment with figures like Elon Musk with a deep-seated suspicion of technology’s influence.
These texts collectively develop a conspiratorial counter-narrative to the optimistic accounts of globalization and digital transformation long promoted by Silicon Valley. A recurring theme within this genre is the critique of "transhumanism," which these texts frame as a technocratic agenda to transcend human limitations. They argue that transhumanism is not a neutral or benevolent project but a manifestation of oligarchic ambitions, driven by a desire for "eternal life," as articulated by former Trump strategist Steve Bannon: “A transhumanist is somebody who sees Homo sapien here and Homo sapien plus on the other side of what they call the singularity... And why are they going to do it? No. 1, when you get to know them and see where they’re spending the money, it’s because they want eternal life.” (Bannon, 2025)
While critical discussions about transhumanism are not new (Geraci, 2012; Fuller, 2011), these texts blend legitimate technology criticism with a conspiratorial framework. The paper engages with the complexity of this dynamic, acknowledging the difficulty in drawing a clear boundary between theory and conspiracy theory (cf. Masco & Wedeen, 2023; Beckman & Di Leo, 2023). From a theoretical perspective, these texts may be situated within Andrew Feenberg’s (1999) “substantivist” tradition of technology criticism, echoing Heidegger’s (1977) assertion that “the essence of technology is nothing technological.” However, the paper contends that this tradition is reinterpreted through a “paranoid” lens, combining Richard Hofstadter’s (1965) “paranoid style” of American political thought with Fredric Jameson’s (1991) concept of postmodern “high-tech paranoia.”
Using natural language processing (NLP) techniques, the research identifies patterns in the rhetoric of these texts that reveal how they co-opt and distort critiques of Big Tech to fit their conspiratorial frameworks, constructing a dystopian vision of Big Tech’s control over global consciousness. Methodologically, the paper draws inspiration from Michel Callon's scientometric techniques (Callon et al., 1983) to identify emerging thematic clusters within the discourse network of technology criticism. The research introduces “co-consumption analysis,” a novel methodological approach that repurposes Amazon’s “also bought” book recommendations. Through custom-built web scraping tools, the method collects data to construct networks of book titles, which it then analyses visually (using network visualisation software) to understand how these texts are relationally positioned within specific topical (viz. ideological) clusters and to identify key texts for closer analysis. Preliminary findings indicate that both the “Best Seller in Communication and Media Studies” and “Best Seller in Cybernetics” categories prominently feature works from this genre, suggesting significant mainstream engagement with these reactionary ideas. Using Amazon metadata in this way can be understood as a workaround to adapt digital methods to a post-API environment (Perriam et al., 2020).
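As a rough illustration of the co-consumption approach described above, the following sketch shows how scraped “also bought” lists could be turned into a weighted network for visual analysis; the titles, data structure, and export format are placeholder assumptions rather than the paper's actual pipeline.

```python
# Minimal sketch: building a co-consumption network from already-scraped
# "also bought" lists (placeholder titles, not real Amazon data).
import networkx as nx

also_bought = {
    "Seed Title A": ["Title B", "Title C"],
    "Title B": ["Seed Title A", "Title D"],
    "Title C": ["Title D"],
}

G = nx.Graph()
for source, recommended in also_bought.items():
    for target in recommended:
        # Weight edges by how often two titles co-occur across recommendation lists.
        if G.has_edge(source, target):
            G[source][target]["weight"] += 1
        else:
            G.add_edge(source, target, weight=1)

# Degree centrality as a rough proxy for a title's position in the genre cluster;
# the graph can then be exported for visual analysis (e.g., in Gephi).
central = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(central[:5])
nx.write_gexf(G, "co_consumption_network.gexf")
```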
The dual approach of this paper—as both an empirical study of cultural texts and a methodological contribution—provides valuable insights into how reactionary narratives of distrust gain visibility within digital platforms. By examining how Amazon's algorithms facilitate the circulation of these texts, the research not only highlights a significant cultural phenomenon but also offers a replicable method for future studies into how digital marketplaces shape public discourse.
Ultimately, this research invites scholars to broaden their understanding of technology criticism by including genres that, while often marginal within academic discourse, wield considerable influence within popular culture. This study underscores the need to critically engage with how digital tools and platforms can amplify not only mainstream ideas but also fringe and conspiratorial perspectives, contributing to the broader discussions of power, control, and resistance to "The Tyranny of Big Tech" (Hawley, 2021).
References
Allen, J. (2023). Dark Aeon: Transhumanism and the War Against Humanity. Antelope Hill Publishing.
Bannon, S. (2025, January 31). On Broligarchs vs. Populism. The New York Times.
Beckman, F., & Di Leo, J. R. (2023). Conspiracy and Theory. University of Minnesota Press.
Callon, M., Law, J., & Rip, A. (1983). Mapping the Dynamics of Science and Technology. Springer.
Couldry, N., & Mejias, U. A. (2020). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
Feenberg, A. (1999). Questioning Technology. Routledge.
Fuller, S. (2011). Humanity 2.0: What It Means to Be Human Past, Present and Future. Palgrave Macmillan.
Geraci, R. M. (2012). Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. Oxford University Press.
Hawley, J. (2021). The Tyranny of Big Tech. Regnery Publishing.
Hofstadter, R. (1965). The Paranoid Style in American Politics and Other Essays. Harvard University Press.
Jameson, F. (1991). Postmodernism, or, The Cultural Logic of Late Capitalism. Duke University Press.
Johnson, M. (2023). Indoctrinated Brain: How to Successfully Fend Off the Global Attack on Your Mental Freedom. Patriot Press.
Jones, A. (2022). The Great Reset: And the War for the World. Skyhorse Publishing.
Livingston, M. (2015). The World Turned Upside Down: A History of Conspiracy Theories. Palgrave Macmillan.
Masco, J., & Wedeen, L. (2023). Conspiratorial States: Theory, Politics, and Practice. Duke University Press.
Perriam, J., Birkbak, A., & Freeman, A. (2019). Digital methods in a post-API environment. International Journal of Social Research Methodology, 23(3), 277–290.
Smith, R. (2022). PsyWar: Enforcing the New World Order. Liberty Bell Publishing.
Trust in Oppositional Media in Russia after 2022: Digital Accessibility, High Quality Content, but Slightly Declining Trust
ABSTRACT. Right after the beginning of the Russo-Ukrainian war, the Kremlin introduced measures of so-called 'war censorship' in Russia. As a result, the websites of prominent oppositional media outlets were blocked by Roskomnadzor, and many journalists fled the country fearing criminal prosecution. Continuing to work from exile, oppositional media rely on digital infrastructure to remain accessible to their audience in Russia. This presentation argues that even though digital infrastructure allows for dismantling barriers established by authoritarian control, the geographical and political distance created by oppositional media's exiled position can affect trust in them. As previous studies have shown, working from exile altered oppositional media reporting by enhancing political positioning when covering events back in Russia (Dovbysh & Rodina, 2022). Based on an online survey, I find that oppositional media in general retain the trust of their audience primarily because of qualities of their content such as objectivity and thoroughness of analysis. However, there is rising distrust due to perceived low ‘social concern’ (Meyer, 1988). The respondents criticise oppositional media for neglecting the interests of Russians, engaging in emotional manipulation, and spreading propaganda.
Relationships in the Age of AI: A Review on the Opportunities and Risks of Synthetic Relationships to Reduce Loneliness
ABSTRACT. Humans possess an intrinsic need for meaningful relationships that foster a sense of belonging and purpose, essential to psychological well-being (Baumeister & Leary, 1995). However, the modern era reveals a paradox: on the one hand, digital information and communication technologies, such as social media, enable unprecedented connectivity. On the other hand, more individuals than ever suffer from loneliness (Doyle & Link, 2024). The technological affordances for forming relationships between humans and machines have sparked discussions that synthetic relationships (SRs) could help combat loneliness (e.g., Aggeler, 2024; Pazzanese, 2024; Sahota, 2024). A particularly fitting example is Replika, the self-proclaimed “AI companion that cares” (Luka Inc., n.d.), to which millions of users have already formed attachments in search of comfort (De Freitas et al., 2024). On the other hand, Sewell Setzer III died by suicide, and the role played by an AI companion from Character.AI is widely debated (Roose, 2024b). However, what are the broader implications of entrusting a crisis of social relationships to AI? What can be derived about the opportunities and risks of SRs for the individual's well-being and the fabric of society?
Synthetic Relationships as a New Countermeasure for Loneliness?
Indeed, loneliness is pervasive across age, gender, education, and cultures (Barjaková et al., 2023; Hutten et al., 2022; Lim et al., 2020), and it is recognised as a global crisis (e.g., Haseltine, 2024; Johnson, 2023). Since loneliness stems from the subjective perception of unmet social needs, high-quality relationships are necessary to bridge the gap towards an experience of connectedness (Hawkley & Cacioppo, 2010). The core challenge for severely lonely individuals to overcome is establishing trust in society and reducing their hypersensitivity to negative social cues. However, the current landscape of interventions against loneliness suffers from limitations. Cognitive-behavioural therapy, group therapy, and animal therapy, while proven effective (Käll et al., 2020; Masi et al., 2011; Nimer & Lundahl, 2007), suffer from scalability and accessibility issues (Deutsche Welle, 2021; German Bundestag, 2022; Yue et al., 2022). Large-scale social-connection projects (e.g., Casabianca & Nurminen, 2022; Jopling, 2015) and social prescriptions (Liebmann et al., 2022) are more scalable, but evidence of their effectiveness is limited (Liebmann et al., 2022; Lim et al., 2020; Masi et al., 2011). At the same time, these interventions only indirectly address loneliness by addressing the ability to form relationships or providing context for forming connections. SRs could be the high-quality relationships being sought and address loneliness directly and personally (Ventura et al., 2025).
The emergence of increasingly sophisticated AI companions has enabled the formation of SRs — defined as “continuing associations between humans and AI tools that interact with one another wherein the AI tool(s) influence(s) humans’ thoughts, feelings, and/or actions” (Starke et al., 2024, p. 1). While digital communication technology in the past has not successfully reduced loneliness (Bonsaksen et al., 2023; L. Zhang et al., 2022), new hope rests on these recent advances in generative artificial intelligence (AI). AI-based Large Language Models (LLMs) can produce human-like text outputs (Köbis & Mossink, 2021), hold lasting conversations (Brandtzaeg & Følstad, 2017), and tailor output toward human users (Clabaugh & Matarić, 2018).
Opportunities and Risks of Synthetic Relationships for Loneliness
From our narrative review of the opportunities and risks of SRs (Ventura et al., 2025), based on psychological relationship science and practical evidence, one conclusion is inevitable: the potential of conversational AI in the form of SRs is enormous, and it is up to us as a society to decide how we want to shape it (Ventura et al., 2025). Interest in SRs in society is growing (e.g., Metz, 2020; Roose, 2024a; Williams, 2025), but they have not yet gone mainstream. So, there is still time to introduce design guidelines and regulations for SRs that contribute to improving our societies. If we are not proactive, we leave it to companies to set standards for the design of SRs. Drawing a parallel to social media as a significant technology influencing interpersonal social structures, a lack of regulation of SRs may produce a comparably dangerous technology.
Key impact factors of SRs can be derived from their technical features. As a digital service, they can be accessed 24/7 from anywhere. With a digital device and growing LLM language support (Petrić Howe, 2024; Ronen, 2024; K. Zhang et al., 2024), they can be accessible to most of humanity. Their customisability and controllability (e.g., Luka Inc., n.d.; OpenAI, 2023) allow users to adapt their experience to their needs. Regardless of customisability, AI companions also adapt automatically to their human partners (Karami et al., 2016). Finally, benefits are particularly evident in the unrestricted co-growth possible with a lifelong companion that may never leave the user's side.
Considered through the lens of psychological relationship science (Finkel et al., 2017; Perlman et al., 2018), these technical features enable AI companions to fulfil the role of meaningful interpersonal relationships. AI companions can be uniquely responsive (Reis & Shaver, 1988), especially regarding previous interactions (Karami et al., 2016; Zhou et al., 2020). Intimacy (Reis & Shaver, 1988) can develop quickly when the user desires it (Skjuve et al., 2022). Relationship insecurity (Mikulincer et al., 2021; Murray et al., 2006) can be effectively reduced (Altman & Taylor, 1973) without fear of relationship repercussions (Skjuve et al., 2021; Xie & Pentina, 2022; Zhou et al., 2020). Similarly, unlimited social support can be requested (Ta et al., 2020), fostering the perception of a safe haven and secure base behaviour (Bandura, 1986). Thus, an effective balance between vulnerability and relationship security can be established (Murray et al., 2006). Based on these elements and in line with Bandura's social learning theory (Bandura, 1986), important social skills and social cognitions can be practised and developed in the context of the SR. In turn, these learnings can also be applied in interpersonal relationships. Thus, the SR can serve as an addition to, rather than a substitute for, a social network (Kahn & Antonucci, 1980), reducing loneliness in the short term and promoting social belonging in the long term.
Minor deviations from these rosy prospects for SRs can contribute to a more divided society than ever before. LLM sycophancy (Wei et al., 2023) can lead human partners to venture less outside their comfort zone, with the AI reinforcing users’ short-term opinions, doubts, and emotions instead of their long-term benefits. Thus, maladaptive social behaviour is promoted, which may lead not only to social disconnection and loneliness but also to inappropriate communication habits or unreciprocal behaviour. A close and intimate SR can also lead to highly addictive behaviour. Current users of AI companions (Marriott & Pitardi, 2024; Skjuve et al., 2021) and corresponding research (Gabriel et al., 2024; Laestadius et al., 2022) already warn of such addiction. Companies will also have tremendous power over AI features, companion design, and AI content (Gabriel et al., 2024). This is especially dangerous since such digital products can quickly establish market monopolies and lead to lock-in effects, making switching SRs unfeasible (Sherry, 2016). This position of power harbours the risk of ideological manipulation and thus a threat to democracy; moreover, updates and changes to AI companions can be experienced as ‘update sickness’, where users may have the impression of losing a trusted friend from one day to the next.
Implications for Trust in AI – Towards Calibrated Anthropomorphism for Calibrated Trust
The rise of SRs presents a unique challenge for trust in AI. While AI companionship holds promise as a scalable intervention against loneliness, ensuring appropriate trust calibration is crucial (Lee & See, 2004; Wischnewski et al., 2023). Users must neither overtrust nor undertrust AI companions—trust must align with their actual capabilities, ethical constraints, and limitations.
One fundamental issue in trust calibration is calibrating anthropomorphism—the extent to which AI companions appear and behave like humans (Epley et al., 2007). Anthropomorphism is necessary for trust formation, as people engage more deeply with AI when it exhibits human-like responsiveness, personality, and social cues (Pentina et al., 2023; Roesler et al., 2021). However, if AI is too human-like, users may overtrust, forming emotional dependencies or mistaking AI responses for genuine human empathy. Conversely, if AI lacks sufficient human qualities, users may undertrust, dismissing SRs as inauthentic and ineffective.
This trust paradox poses a key design challenge: both too much and too little anthropomorphism can miscalibrate trust. AI companions should be human-like enough to facilitate engagement but distinctly artificial to prevent unrealistic expectations. Potential solutions include:
• Intentional non-human design: AI companions could adopt fictional, robotic, or stylized non-human avatars rather than realistic human forms.
• Transparent interaction cues: AI should constantly signal its artificial nature through appearance, interaction style, and controlled emotional expressions.
• Adaptive trust safeguards: AI should dynamically adjust its responses based on user engagement, ensuring it fosters trust without dependency.
As AI companionship becomes more sophisticated, calibrating both anthropomorphism and trust must be a regulatory priority. Ethical design and oversight should ensure AI companions strike the right balance—engaging but not deceptive, supportive but not substitutive. If these challenges remain unaddressed, SRs may create misplaced trust with unforeseen psychological and societal consequences.
Do people trust GenAI in Journalism? Testing the effects of human vs AI task performance on the perceived trustworthiness of news articles [Extended Abstract]
ABSTRACT. Recent research points towards negative effects of AI disclosures on the perceived trustworthiness of news. However, much of this research tests relatively generic AI disclosures that suggest complete automation and do not accurately reflect the nuanced applications that journalists actually use generative AI for. To better understand the relative importance of different use cases for people's trustworthiness judgements, we conducted a conjoint experiment with a representative sample of 742 Dutch respondents in January 2025. Preliminary results suggest that on the whole, all forms of AI disclosures - regardless of whether they pertain to tasks during news gathering, production, or verification - decrease readers' trust in the labeled news. These effects are consistent across three different news topics. However, we also conduct moderation analyses that point to important individual-level differences depending on readers' political position, knowledge of AI in journalism, and general attitudes towards algorithms. By the time of the Amsterdam Trust summit, we will further enrich these initial insights with the results of cluster analyses to detect distinct preference profiles among particular user groups.
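For readers unfamiliar with conjoint analysis, the sketch below illustrates the kind of estimate such a design typically yields (average marginal component effects via OLS with respondent-clustered standard errors); the file and variable names are assumptions for illustration, not the authors' codebook.

```python
# Minimal AMCE sketch for a conjoint experiment (hypothetical variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per rated news-article profile per respondent.
df = pd.read_csv("conjoint_long.csv")

# With fully randomised attributes, coefficients on the disclosure dummies
# approximate AMCEs relative to the omitted baseline (e.g., no AI disclosure).
model = smf.ols("trust_rating ~ C(ai_disclosure) + C(topic)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
print(model.summary())
```

Interacting the disclosure dummies with respondent-level moderators (political position, AI knowledge, attitudes towards algorithms) would correspond to the moderation analyses mentioned above.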
Trust in the digital society: Perspectives from the linguistics-driven, interdisciplinary Next-Generation Fakespeak project
ABSTRACT. Fake news and (other types of) disinformation, understood as misleading or false information that can cause harm and where there is often an intention to deceive, are not new phenomena. However, the technological infrastructure developed over the last decades allows content of dubious veracity to be spread unfiltered to millions of people in the blink of an eye. On social media, fake news has been found to spread ten times as quickly as genuine news (Krekó 2021), and sensationalist content that evokes strong negative emotions tends to be favoured.
The recent development within generative AI has been extremely rapid, with the launch of OpenAI’s ChatGPT in November 2022 as a major milestone. Generative AI opens up possibilities, but is also a Pandora’s box. Specifically, the advancement of large language models (LLMs) amplifies challenges in the context of online disinformation (Goldstein et al. 2023). The capability of LLMs to produce synthetic content at scale threatens to significantly increase the volume of disinformation at a quality that makes it indistinguishable from authentic human-created content. As the volume of online information increases, discerning what is true from what is not becomes increasingly difficult. Indeed, recent research suggests that AI-powered disinformation is more difficult to detect than disinformation made by humans (Zhou et al. 2023).
Against this background, the EU Commission, NATO and UN agencies, in addition to 1500 experts consulted by the World Economic Forum, perceive disinformation and fake news, and the subsequent erosion of trust in and legitimacy of newly elected governments, as one of the biggest threats to democracy in the near future (Willsher & O’Carroll 2024, WEF 2024). Indeed, studies have shown that generative AI was used to “sow doubt, smear opponents, or influence public debate” in 16 countries already in 2023 (Funk et al. 2023), and at the end of February 2025, the American Sunlight Project published a report about the deeply disturbing expansion of the so-called Pravda network – “a massive system of automated propaganda aggregation that spreads pro-Russia narratives globally” (ASP 2025). According to the report, the pro-Kremlin, anti-democratic disinformation campaigns orchestrated by this network have a growing reach, and it is likely that their content is flooding the training data of LLMs. Moreover, the influence operations are pervasive, increasingly sophisticated and amplified on various social media platforms, thereby “further eroding trust in credible sources” (ibid.) and democratic institutions. In an investigation conducted by NewsGuard of ten leading AI models, all ten models, including OpenAI’s ChatGPT, You.com’s Smart Assistant, Elon Musk’s Grok, Microsoft’s Copilot, as well as Meta AI, Google Gemini and Perplexity, repeated disinformation spread by the Pravda network. Since it was launched in April 2022, the network has grown to cover 49 countries and dozens of languages (forskning.no 2025).
Thus, research is sorely needed on, first, how to identify AI-generated disinformation in general, and Russian state-sponsored disinformation narratives in particular, and, second, how to mitigate harmful attempts at influencing public opinion. The Next-Generation Fakespeak project (NxtGenFake) grew out of these knowledge needs. In this paper, we will therefore present this new project, which is funded by the Research Council of Norway and based at the University of Oslo, and which will start up in October 2025. The project is interdisciplinary, involving media science and computer science in addition to linguistics. In the paper we will also present a pilot study in which we provide our initial answers to the following research questions: first, how can we extract pro-Kremlin disinformation narratives in English, explicitly and implicitly, from a selection of LLMs? Second, based on a small sample of disinformation narratives, what can we say, from a linguistics and media science perspective respectively, about their linguistic and discursive features? And third, from a computer science perspective: can such (defining) linguistic and discursive features contribute to the development of AI tools for disinformation identification and, if so, to what extent?
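To illustrate the kind of baseline the third research question implies, here is a purely hypothetical sketch of a classifier built on surface linguistic features; the corpus file, labels, and feature choices are assumptions and do not describe the NxtGenFake pipeline.

```python
# Illustrative baseline only: do surface linguistic features separate suspected
# disinformation narratives from reference texts? (Hypothetical data file.)
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("narratives.csv")  # assumed columns: "text", "label" (1 = disinformation)

# Word n-grams as crude stand-ins for the lexical and discursive cues the
# project aims to identify more systematically.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(clf, df["text"], df["label"], cv=5, scoring="f1")
print("Mean F1 across folds:", scores.mean())
```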
Providing answers to such research questions is urgent. Harmful networks such as the Pravda network are about to “flood LLMs with pro-Kremlin content”, thereby amplifying disinformation narratives “across a range of platforms, from AI chatbots to mainstream media”. We also need to ensure “greater transparency and oversight of AI training systems” (ASP 2025). The ultimate goal of the NxtGenFake project is to contribute towards the development of AI-powered tools for identification of and warning about suspicious textual contents, thereby enabling stakeholders within, i.a., the media and justice sectors, to take relevant measures against potential (foreign) (AI-powered) influence operations. Against this background, we will contribute to shedding light on one of the fundamental questions of the Amsterdam Trust summit 2025: how can we make sure that new digital infrastructures deliver on their promises, while keeping the best interest of their users and of society in mind? More specifically, we contribute to track E – Narratives of trust and distrust in popular culture, specifically ‘the rise and impact of fake news’ and ‘misinformation and disinformation in (social) media content’.
Trust of the Digital Citizen: Evidence from the Judiciary, News and the Workplace
ABSTRACT. As digital technologies increasingly mediate core societal institutions, trust in these systems becomes a crucial issue. This roundtable, proposed by the UvA’s Digital Citizenship Sector Plan, explores the trust that citizens in the digital era exhibit across three key domains: justice, media, and the workplace. Different contributions will illustrate the role of trust in these domains, opening a discussion on trust in relation to digital citizenship.
The first contribution examines shifts in news trust. The “post-truth era” and the rise of generative AI fuel uncertainty, skepticism and declining trust in news media. Even though citizens are exposed to fabricated information less often than commonly assumed (Acerbi et al., 2022), public concerns about exposure to false or misleading information are relatively high (Hameleers et al., 2023). Such heightened concerns can be further exacerbated by warnings about the possibilities and possible misuse of generative AI, which may have negative consequences for political participation and trust in democratic institutions (Ognyanova et al., 2020). Ironically, digital markers of inauthenticity (such as fact-checks, community notes, and platform warnings) are also reinterpreted by distrustful communities as markers of authenticity: evidence, in their eyes, that “they” are trying to hide “the truth”.
The second contribution narrows down to the organizational level and investigates how organizations need to build trust with their employees as they increasingly implement data-driven human resource management practices (Weibel et al., 2023). As these practices and their success require employees to share their data accurately, yet are also subject to ethical concerns and algorithmic biases (Edwards et al., 2022; Tursunbayeva et al., 2021), trust building has become a major focus for both researchers and practitioners, which this contribution will investigate and contrast.
The third contribution explores the effect of digitization on trust in the judicial process. Currently, the narrative of the impartial judge potentially creates false expectations among citizens, because it might not be humanly possible to be impartial (see, for example, Rachlinski and Wistrich, 2018; Peer and Gamliel, 2013; Van Aaken and Sarel, 2022). Not fulfilling this expectation could affect the trust people have in the judiciary, yet trust in (European) judiciaries is an under-researched area (Popelier et al., 2022). Some scholars present arguments for how digitizing the judicial process can make it more just, when designed fairly (see, for example, Reiling, 2009; Zalnieriute et al., 2021; Javed and Li, 2025), while others highlight the risk that it may create new forms of unfairness (see, for example, Javed and Li, 2025). What, then, is the potential effect of a digitized judicial process on citizens’ trust in the judiciary?
By bringing together these perspectives, this roundtable situates trust as a core issue of digital citizenship—the rights, responsibilities, and vulnerabilities of individuals navigating AI-driven institutions at the micro, meso and macro level, offering a platform for interdisciplinary dialogue on how trust is challenged and fostered in an era where digital infrastructures increasingly govern societal decision-making.
Vaccine coverage decline in Brazil: an analysis through the concept of technological authority
ABSTRACT. Despite a troubled start (Needell, 1987), vaccination in Brazil has been consolidated since the creation of the National Immunization Program [Programa Nacional de Imunizações] (PNI) (Carvalho, 2024; Domingues et al., 2020; Nascimento, 2011; Risi, 1984). Vaccination coverage rates during the 21st century, until 2015, exceeded the recommended 95% (Domingues et al., 2020). However, from 2016 onwards, vaccination coverage rates declined dramatically (AVAAZ & SBIm - Sociedade Brasileira de Imunizações, 2019; Barata et al., 2023; Donalisio et al., 2023; Sato, 2018; Sato et al., 2023). The paper focuses on the measles, polio, and hepatitis B vaccines, as they are administered throughout Brazil and have a history of successful campaigns. They show similar trends of decline in their vaccine coverage and have started to worry Brazilian specialists in public health. Following the emergence of this crisis, several consequences have already been identified: a measles outbreak in 2019 (Sato et al., 2023); the imminent risk of reintroduction of the wild poliomyelitis virus (Donalisio et al., 2023); and an increase in mortality from hepatitis B (Sousa et al., 2023). Drawing on discussions from venues such as the World Health Organization, we adopt the concept of vaccine hesitancy (MacDonald, 2015) to address the diverse processes that influence persons to forgo vaccination for themselves and/or the children under their care. Would the decline in vaccination coverage then be a symptom of a crisis of trust in vaccines or in the Brazilian public health system?
The paper suggests the concept of technological authority as a contribution to the vaccine hesitancy debate. The sociological operationalization of a concept derived from discussions in a field other than public health can help deepen the understanding of the vaccine hesitancy phenomenon, especially regarding the aspects connected to or influenced by the dissemination of fake news and misinformation online.
The technological authority concept is based on an approach that sees technologies, especially communication technologies, as central to Westernized and Western societies, long before the emergence of digital technologies, even though digital technologies have allowed and enhanced other articulations (Jameson, 1993) between people and communication technologies. Here, we will focus on only one of these articulations. The aim is to build a critical and historical approach to vaccination (Moulin, 2003). To consolidate the concept of technological authority, we previously compared different ways in which technologies created by human beings and endowed with a communicative character can embody authority. Our grounded hypothesis is that this concept's operationalization can allow public health researchers to better understand one facet of a complex, multifaceted, and multi-mediated phenomenon: vaccine hesitancy. Thus, we are not exhausting all the analytical possibilities here or linking Brazil's vaccination rate decline exclusively to fake news transmitted online.
Authorities, in a nutshell, are voices that cannot be ignored, since they refer to a deduction from an ontological system that justifies them beforehand (Arendt, 1968). Bhabha (2004, p. 112) convergently states that “(…) its source (…) must be immediately, even intuitively, apparent (...) and held in common.” However, having authority is not necessarily synonymous with being able to exercise coercion (Clastres, 2020). So, authority does not refer to who can or cannot command but to something that cannot be ignored and whose legitimacy is validated intuitively. The written word has occupied, and still occupies, a role of authority (Severi, 2020) in Western societies, and it has been exported worldwide through colonization processes (Bhabha, 2004; Dussel, 1995).
After discussing the role of technologies, mainly communicational ones, in Western and Westernized societies, we shed light on how digital technologies emerge and establish themselves as authorities in a context profoundly characterized by the sharing of meanings through a market-driven logic (Burrell & Fourcade, 2021; Canclini, 2001; Crary, 1999; Horkheimer & Adorno, 2007; Martin-Barbero, 2010; Pedersen et al., 2021; Pimentel Junior, 2010; Simmel, 2004). Just as in the context of analog media, these technologies are intermediaries between private companies and their users/consumers, whom they access directly. Digital devices, of course, do not inherit technological authority from the written/printed word, as we argue; still, they contend with it for the role of a cornerstone in constructing meaning or interpretations of the world. The increasing centrality of digital devices in the lives of Western and Westernized human beings as cornerstones for interpreting the world and sharing meanings is such that Burrell and Fourcade (2021) state that there is an algorithmic second opinion haunting every contemporary professional, from physicians and attorneys to teachers and professors. These multifaceted and complex dynamics are addressed through the concept of technological authority.
The paper's methodology aims at a theory-building rather than a theory-testing contribution (Luker, 2008). It draws on different datasets: the Brazilian Health Ministry database (DATASUS), relevant reports (AVAAZ & SBIm - Sociedade Brasileira de Imunizações, 2019; Barata et al., 2023), and academic literature (Donalisio et al., 2023; Frugoli et al., 2021; Galhardi et al., 2022; Sato, 2018; Sato et al., 2023). The peer-reviewed papers and academic reports that provide the data we worked on were selected because they allowed us to test the concept of technological authority in the context of Brazil's vaccination coverage and its challenges. They also provide an empirical background that allows us to expose possible applications of the technological authority concept in public health studies in the 21st century. These secondary data enabled us to analyze three main variables: vaccination coverage, investment in vaccination campaigns, and the role of fake news in vaccine hesitancy.
To delve deeper into the issue, we focus on the vaccines that prevent measles, polio, and hepatitis B. We use different datasets to discuss the role and influence of digital media, traditional media, and/or other authority figures on vaccination decisions and to determine when and what kind of fake news regarding vaccines has appeared in Brazil since the 2010s.
Comparing vaccination coverage with investment in vaccination campaigns, we could not establish a correlation between the amount invested in vaccination campaigns and vaccination coverage. Strong evidence of the relevance of technological authority for the analysis is that the vaccination campaigns that led Brazil to surpass 95% vaccination coverage relied on traditional media (Carvalho, 2024; Rocha, 2003). Nevertheless, since 2016 these campaigns have not been enough to sustain those rates. A 2019 report by AVAAZ and SBIm found that the proportion of people who believe misinformation about vaccines is higher among those who use social networks as a source of information, which corroborates accounts of a decline in the authority of traditional media (Fujita et al., 2022).
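Purely as an illustration of the kind of comparison described above (the paper works with DATASUS and report data, not the figures below), a Pearson correlation between annual campaign investment and vaccination coverage could be computed as in this minimal sketch; all values are hypothetical placeholders.

```python
# Minimal sketch (hypothetical figures, not DATASUS data): does annual campaign
# investment track vaccination coverage? A weak or non-significant correlation
# would be consistent with the paper's claim that spending alone does not
# explain coverage.
import pandas as pd
from scipy.stats import pearsonr

data = pd.DataFrame({
    "year":       [2014, 2015, 2016, 2017, 2018, 2019, 2020],
    "investment": [120, 135, 110, 140, 150, 145, 160],          # hypothetical, millions BRL
    "coverage":   [96.1, 95.4, 84.4, 83.9, 88.5, 86.7, 75.8],   # hypothetical %
})

r, p = pearsonr(data["investment"], data["coverage"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```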
We grounded the hypothesis that the design (Flusser, 2007) of virtual environments mediated by certain technologies can influence vaccination decisions, at least for some groups, as the AVAAZ and SBIm (2019) report highlights and studies such as Tripodi (2022) support. Therefore, the influence of fake news suggested by the works consulted (AVAAZ & SBIm - Sociedade Brasileira de Imunizações, 2019; Barata et al., 2023; Galhardi et al., 2022) is only supported if we consider that there is a cultural possibility for this in Brazil. The existence of a technological authority that is embodied differently in different devices, whether by their logic or by their design, helps us to compose this broader analytical framework.
Also, about half of the fake news about vaccines circulating in Brazilian online environments comes from publications translated from English. This could mean, among other things, that the US is the epicenter of vaccine misinformation spread worldwide, or that English-language publications play a central role in online environments: once produced, they are easily translated and spread to people who do not know the language, allowing the same fake news to keep capturing attention and generating profit. By highlighting the foreign origin of approximately half of the fake news about vaccines analyzed by AVAAZ and SBIm (2019), we show that Brazil is not an isolated case but part of a bigger picture to be understood (Allington et al., 2023; Al-Uqdah et al., 2022; Al-Zaman, 2021; Nah et al., 2023).
In such a context, simply demanding that health professionals raise awareness among the people they serve (Domingues et al. 2020) or that the government invest in advertising campaigns not only through traditional media systems but also online (Fujita et al. 2022) is insufficient. Moreover, it is difficult to demand that health professionals educate the population when they sometimes lack consolidated knowledge about vaccines, and it is even possible to identify vaccine hesitancy among these professionals (Souza et al., 2015). Concerning TV campaigns, Rocha (2003) had already argued that, by connecting vaccination only to the idea of care, as they often do, such campaigns do not educate about the importance of vaccines but probably do the opposite, a de-education. A documented way to combat misinformation is to enhance the understanding of how online algorithms work and distribute information to users (Chung & Wihbey, 2024). Thus, further studies and actions are needed to address how this enhancement could operate in contexts such as Brazil's. Thereby, the paper aims to provide a novel theoretical framework that employs the concept of technological authority to explore contemporary dynamics such as Brazil's declining vaccination coverage.
The Doppelganger Operation: Russian Disinformation Tactics and Their Global Impact on Trust and Democracy
ABSTRACT. Trust in media and democratic institutions is increasingly under threat due to sophisticated information warfare tactics employed by state and non-state actors. This study analyzes the Doppelganger operation, a large-scale Russian disinformation campaign aimed at manipulating public perception, discrediting Western leadership, and influencing electoral processes. By leveraging artificial intelligence, fake media outlets, social media bots, and diplomatic amplification, the campaign systematically disseminated pro-Kremlin narratives to erode trust in legitimate news sources and democratic governance. Using a hybrid methodology combining qualitative case study analysis and quantitative machine learning techniques, this research examines how Russian propaganda aligns with pre-established "temniki" guidelines, revealing a structured approach to narrative manipulation. Our findings demonstrate the systematic targeting of key democratic institutions, particularly in the European Union, with narratives centred on fearmongering, moral erosion, economic collapse, and authoritarian overreach.
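The abstract does not detail its machine learning pipeline. Purely as an illustration of how alignment with predefined narrative guidelines could be quantified, the sketch below scores invented article snippets against short "temniki"-style theme descriptions using TF-IDF and cosine similarity; it is not the authors' method.

```python
# Illustrative only: score how closely (invented) article texts align with
# predefined narrative themes using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

themes = {
    "fearmongering": "war escalation threat danger fear insecurity",
    "economic_collapse": "inflation energy prices poverty economic collapse",
    "authoritarian_overreach": "censorship bans surveillance government overreach",
}
articles = [
    "Energy prices soar as sanctions push households into poverty",
    "New platform rules amount to censorship and government overreach",
]

# Fit one vocabulary over theme descriptions and articles, then compare rows.
matrix = TfidfVectorizer().fit_transform(list(themes.values()) + articles)
theme_vecs, article_vecs = matrix[: len(themes)], matrix[len(themes):]

for article, row in zip(articles, cosine_similarity(article_vecs, theme_vecs)):
    best_theme, score = max(zip(themes, row), key=lambda kv: kv[1])
    print(f"{article[:45]:45s} -> {best_theme} ({score:.2f})")
```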
Trust in the Digital Age: News Influencers, Political Misinformation, and Audience Perceptions
ABSTRACT. The rise of social media news influencers has disrupted traditional political communication by enabling these non-traditional actors to assume roles once held by journalists and campaign surrogates. While previous research has explored the presence of political content within specific influencer subgroups, little is known about the supply-side mechanisms that enable influencers to generate and sustain the trust of their audiences. This study addresses this gap by analyzing social media content from the fifty most prominent U.S.-focused news influencers and traditional media accounts across Instagram, TikTok, and X/Twitter during the 2024 U.S. Election. We examine whether the concepts of competency and warmth distinguish influencer communications and influence audience tolerance for misinformation. Building on this analysis, we will conduct a survey experiment to assess whether trust in influencers is differentially affected by the promotion of falsehoods. We hypothesize that influencers who emphasize warmth over competency maintain trust even when spreading misinformation, unlike creators reliant on high-competency frames. Collectively, this research integrates real-world data and experimental insights to advance understanding of the communication strategies shaping trust in digital political discourse.
Hacking the Past: Cybernostalgia and the Role of Digital Narratives in Ukraine’s Security
ABSTRACT. Despite growing volumes of available information, trust in digital media has been steadily declining (Suarez, 2024). The increasing reliance on short, engaging content has shifted individual decision-making processes, turning political messaging into a tool for manipulation rather than informed discourse. What once served as a means for data collection and voter outreach has now been classified as a potential security threat (Hamilton, 2025). Among the many tactics used to influence public sentiment, nostalgia stands out as an especially potent force (Sedikides et al., 2008).
As Europe approaches a critical election cycle, several high-stakes issues—support for Ukraine in its war against Russia, economic instability, energy diversification, and political trust—hang in the balance. The outcome of these elections has far-reaching consequences for international security, as shifts in public opinion can reshape NATO and EU commitments. This paper explores how cybernostalgia, the digital revival of past political and societal narratives, is weaponized to sway voters, reinforce ideological divides, and challenge democratic stability.
By examining digital trends and their role in recent events, this study highlights the risks of nostalgia-driven campaigns in shaping public opinion. To map the online activity, hashtag research is paired with narrative mapping and focused analysis of specific users to capture the digital archetypes employed to influence audiences.
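The exact tooling is not specified in the abstract. As a minimal, hypothetical sketch of the hashtag-research step, the snippet below extracts hashtags from invented post texts and ranks them by frequency.

```python
# Minimal sketch with invented post texts: extract hashtags and rank by frequency.
import re
from collections import Counter

posts = [
    "Remember when life was simpler? #goodolddays #nostalgia",
    "They took our future away #nostalgia #elections",
    "Back then we were respected #goodolddays",
]

counts = Counter(tag.lower() for post in posts for tag in re.findall(r"#\w+", post))
for tag, n in counts.most_common():
    print(tag, n)
```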
The preliminary findings provide insight into this new tool of interference, its scope, range, and technological development. The results point towards techniques for identifying orchestrated cybernostalgia in political campaigns. As geopolitical tensions rise, ensuring electoral integrity is not just a national priority but a critical issue for global security. The findings therefore have direct implications for future support for Ukraine in its war against Russia and for European security.
This track explores how (AI-generated) mis- and disinformation undermines public trust, and how individuals and institutions perceive or respond to these threats.
Countering AI-Generated Visual Disinformation: Testing the Effectiveness of Different Label Sources on Climate Change and Immigration Imagery
ABSTRACT. The rise of generative artificial intelligence (AI) has introduced powerful tools for creating synthetic visual content, which is frequently used to construct and spread disinformation on social media. Research suggests that AI-generated visual disinformation can be deceptive, as it may be perceived as authentic information (e.g., Hameleers et al., 2023; Shen et al., 2021), influence attitudes toward politicians (Dobber et al., 2021), and lead to a loss of trust in visuals (Weikmann et al., 2024) and news on social media (Vaccari & Chadwick, 2020). However, while the negative consequences of visual disinformation are well-researched, less is known about effective strategies to mitigate its impact. Specifically, it remains unclear which interventions successfully reduce the adoption of false beliefs, how they affect credibility perceptions of manipulated images, and who citizens trust to lead such efforts. To address these questions, we conduct a survey-embedded experiment among a diverse sample of Dutch participants (N = 1,100). We compare the effectiveness of currently existing platform interventions, namely (a) AI-generated labels, (b) fact-checking labels (still used in the EU but discontinued by Meta in the US as of January 2025), and (c) user-based community notes, against (d) a control group without intervention. We test the effect of these sources on correcting belief in the false claim, perceived credibility of the visual, and perceptions of the credibility of the label source. Additionally, we investigate whether such effects are moderated by baseline trust in fact-checking and AI attitudes. The pretest for this experiment is currently running, and data collection is set to start on March 24, 2025.
An attributional approach to organizational misinformation and trust: the role of intentions and consequences
ABSTRACT. Introduction
While misinformation research mostly focuses on the political sphere, a prominent source of falsehoods circulating in society is often neglected (Ruokolainen et al., 2023): legitimate organizations. Examples of organizational misinformation—i.e., false or misleading information disseminated by an organization—abound. Some organizations, such as those in the oil industry, are known for deceiving the public through coordinated disinformation campaigns (Reed et al., 2021). Yet organizations also misinform their stakeholders accidentally (Ruokolainen et al., 2023): Air Canada recently found itself in an awkward predicament when its chatbot inadvertently provided a customer with faulty refund information (Belanger, 2024), information for which the airline was eventually held accountable.
Examples of misinformation spread by organizations differ considerably in the degree to which the organization intended the dissemination and in the severity of the consequences. Coordinated disinformation campaigns by the oil industry involve a high degree of intent and very severe consequences, as they actively undermine the scientific consensus on the harmful environmental consequences for humanity (Reed et al., 2021). Conversely, a chatbot that provides faulty information about refund policies may merely result in some inconvenient additional administrative work, appearing trivial in comparison. While research on corporate deception shows that the dissemination of false or misleading information by organizations undermines trust in the organization (Brugman et al., 2024; Markowitz et al., 2021), little is known about how this is affected by the intentions and consequences of organizational misinformation.
Drawing from attribution theories (Hewstone, 2004), we examine how the dissemination of false or misleading information by an organization affects trust in that organization, and to what extent this relation depends on intentionality and consequences.
Organizational misinformation and trust
Organizations are expected to be responsible societal actors as they wield considerable power in society (Holmström, 2020; van der Meer & Jonkman, 2021). Stakeholders often depend on organizations’ decisions and information about these decisions (Meyer & Choo, 2024). When an organization informs inaccurately, i.e., when an organization spreads false or misleading information, we speak of organizational misinformation. This may negatively affect stakeholders.
While misinformation is sometimes used as an umbrella term for false or misleading information, misinformation research typically distinguishes misinformation from disinformation based on the source’s underlying intent (Lecheler & Egelhofer, 2022). Organizational misinformation then refers to false or misleading information unintentionally spread by an organization while organizational disinformation refers to the intentional dissemination of false or misleading information (Lecheler & Egelhofer, 2022).
The spread of organizational mis- or disinformation likely affects organizational trust, as these practices violate the expectation that organizations inform their stakeholders accurately (Meyer & Choo, 2024). Trust, generally understood as a form of complexity reduction, provides people with expectations of how others will behave (Valentini & Kruckeberg, 2011). To operate effectively, stakeholders must be able to place trust in organizations (Holmström, 2020). When an organization fails to inform accurately, this is a violation of such an expectation and, consequently, of trust (Holmström, 2020; Meyer & Choo, 2024).
Previous research shows that the dissemination of false information by an organization negatively affects people’s trust in that organization (Brugman et al., 2024; Markowitz et al., 2021). Yet studies on organizational deception (Chen & Cheng, 2020; Markowitz et al., 2021; Meyer & Choo, 2024) overlook the inadvertent dissemination of false or misleading information by organizations (Ruokolainen et al., 2023), which is—arguably—likely more prevalent in an organizational context compared to blatant deception. Therefore we ask the following research question.
RQ1: How does the spread of false or misleading information by an organization affect organizational trust?
Attributions of trust: the role of intentions and consequences
Attribution theories suggest that people's attributions of responsibility for a negative outcome affect their evaluations of trust (Hewstone, 2004). Thus, when an organization provides false or misleading information to its stakeholders, trust in that organization will be negatively affected, and this will depend on 1) the underlying intent and 2) the severity of the consequences.
Firstly, if organizational communication causes negative outcomes, attributions of responsibility and evaluations of trust depend on the underlying intention, i.e., the difference between disinformation and misinformation. One of the most prominent examples of organizational disinformation is the set of large-scale campaigns by oil companies to undermine the scientific consensus on climate change (Reed et al., 2021). Intentional deceit is also found in the sphere of corporate washing, where organizations pretend to be more environmentally or socially conscious than they actually are (Brugman et al., 2024). Yet organizations also inadvertently provide people with false or misleading information (misinformation). Apart from providing outdated or unintelligible information (Ruokolainen et al., 2023), the implementation of fallible generative artificial intelligence creates new risks for organizations to inadvertently mislead their stakeholders. Specifically, we expect that organizational disinformation results in lower levels of trust compared to organizational misinformation. Lastly, it must be noted that information provided by an organization does not have to be false for negative outcomes to occur (e.g., Jungherr, 2024; Martin, 2024). An organization can provide objectively true information while stakeholders are still inconveniently affected, for instance when stakeholders acquire misleading information elsewhere. Here, we expect that trust in the organization decreases less than when an organization disseminates false or misleading information to its stakeholders. We therefore hypothesize that:
H1: (a) Organizational disinformation results in lower levels of organizational trust compared to organizational misinformation, which (b) also leads to lower levels of organizational trust compared to when an organization spreads no false or misleading information.
Secondly, the evaluation of trust depends on the severity of the negative consequences (Claeys et al., 2010). There are, of course, cases where misinformation causes considerable harm to society, just as there are cases in which misinformation is largely inconsequential (Acerbi et al., 2022; Ecker et al., 2024; Levy, 2022). The severe health-related consequences of pro-tobacco campaigns stand in stark contrast to a small organization presenting unintelligible or outdated information to its stakeholders, which may merely result in irritation. We expect that when severe negative consequences are present, organizational trust will be lower than when the negative consequences are limited. Moreover, cases in which there is serious harm but organizations misinformed people unwittingly, such as the Thalidomide crisis (Dally, 1998), suggest that when negative consequences are severe, the intention becomes secondary, as people's trust in the organization likely reaches a floor level. Indeed, attribution theories consider the outcome most decisive in the attribution process (Hewstone, 2004). Concretely, we therefore expect that the severity of consequences interacts with the intention to deceive, leading to the following hypotheses:
H2: The presence of severe (vs limited) negative consequences of communication by organizations leads to lower levels of trust.
H2.1: When negative consequences are limited, (a) organizational disinformation leads to lower levels of organizational trust compared to organizational misinformation, which (b) also leads to lower levels of organizational trust compared to when an organization spreads no false or misleading information.
H2.2: When negative consequences are severe, there is no effect of different information types on trust.
Trustworthiness differs from trust as trust has a relational aspect (a willingness to accept vulnerability) whereas trustworthiness is a disposition (Colquitt et al., 2007). The underlying intentions and resulting consequences of misinformation also likely affect different aspects of trustworthiness: integrity, benevolence, and competence.
RQ2: How do intentions and consequences of organizational misinformation affect perceived integrity, benevolence, and competence?
There is, lastly, evidence that organizational misconduct, such as greenwashing, has a spillover effect to nascent organizations (Wang et al., 2019).
RQ3: In organizational communication, how do intentionality and consequences of false or misleading information affect the broader organizational field?
Methods
To test our hypotheses, we preregister and conduct a 3 (disinformation vs misinformation vs no false/misleading information) by 2 (severe vs limited negative consequences) between-subjects online survey experiment. We aim to draw a sample of 600 Dutch respondents, representative of the Dutch population in terms of age, gender, and educational level. Data collection starts toward the end of March, and the manuscript will be finalized before June 1st, 2025. Results will be presented at the conference.
After a brief survey, respondents will be randomly assigned to one of six conditions in the form of a short news article about a fictitious bank, a case in close experiential proximity to most people's life worlds. The story gives voice to a negatively affected customer who experienced either severe or limited negative consequences because the bank spread false information intentionally, unintentionally, or not at all (control group).
For the dependent variables, we measure trust and trustworthiness. For exploratory purposes, we measure (perceived) motivation, locus of control, and responsibility for the negative outcome.
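For illustration only (simulated data, not the authors' preregistered analysis plan), a two-way ANOVA of the kind sketched below could probe the hypothesized interaction between information type and consequence severity on trust; the variable names and effect sizes are invented.

```python
# Sketch with simulated data: two-way ANOVA for a 3 (information type) x 2
# (consequence severity) between-subjects design; effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
info_type = np.repeat(["disinformation", "misinformation", "control"], 200)
severity = np.tile(np.repeat(["severe", "limited"], 100), 3)

# Trust scores roughly following H1/H2: lower trust for intentional deceit
# and for severe consequences (purely illustrative).
trust = (5.0
         - 1.0 * (info_type == "disinformation")
         - 0.5 * (info_type == "misinformation")
         - 0.8 * (severity == "severe")
         + rng.normal(0, 1, size=info_type.size))

df = pd.DataFrame({"info_type": info_type, "severity": severity, "trust": trust})
model = smf.ols("trust ~ C(info_type) * C(severity)", data=df).fit()
print(anova_lm(model, typ=2))
```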
Discussion
Theoretically, this paper contributes in the following ways. We draw attention to legitimate organizations as potential sources of misinformation. By drawing from misinformation studies, we add to debates on corporate deceit by highlighting the fact that organizations may also inadvertently mislead their customers. And while intentions and consequences are prominent in definitions and debates within misinformation studies, their effects on trust are rarely empirically assessed.
References
Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information? Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-87
Belanger, A. (2024, February 16). Air Canada must honor refund policy invented by airline’s chatbot. Ars Technica. https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
Brugman, B. C., van Huijstee, D., & Droog, E. (2024). Debunking the corporate paint shop: Examining the effects of misleading corporate social responsibility claims on social media. New Media & Society, 14614448241288482. https://doi.org/10.1177/14614448241288482
Chen, Z. F., & Cheng, Y. (2020). Consumer response to fake news about brands on social media: The effects of self-efficacy, media trust, and persuasion knowledge on brand trust. Journal of Product & Brand Management, 29(2), 188–198. https://doi.org/10.1108/JPBM-12-2018-2145
Claeys, A.-S., Cauberghe, V., & Vyncke, P. (2010). Restoring reputations in times of crisis: An experimental study of the Situational Crisis Communication Theory and the moderating effects of locus of control. Public Relations Review, 36(3), 256–262. https://doi.org/10.1016/j.pubrev.2010.05.004
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. The Journal of Applied Psychology, 92(4), 909–927. https://doi.org/10.1037/0021-9010.92.4.909
Dally, A. (1998). Thalidomide: Was the tragedy preventable? The Lancet, 351(9110), 1197–1199. https://doi.org/10.1016/S0140-6736(97)09038-7
Ecker, U., Roozenbeek, J., van der Linden, S., Tay, L. Q., Cook, J., Oreskes, N., & Lewandowsky, S. (2024). Misinformation poses a bigger threat to democracy than you might think. Nature, 630(8015), 29–32. https://doi.org/10.1038/d41586-024-01587-3
Hewstone, M. (2004). Causal attribution: From cognitive processes to collective beliefs (Transferred to digital print). Blackwell.
Holmström, S. (2020). Society’s Megatrends and Business Legitimacy: Transformations of the Legitimizing Business Paradigm. In J. D. Rendtorff (Ed.), Handbook of Business Legitimacy: Responsibility, Ethics and Society (pp. 345–370). Springer International Publishing. https://doi.org/10.1007/978-3-030-14622-1_19
Jungherr, A. (2024). Foundational questions for the regulation of digital disinformation. Journal of Media Law, 0(0), 1–10. https://doi.org/10.1080/17577632.2024.2362484
Lecheler, S., & Egelhofer, J. L. (2022). Disinformation, Misinformation, and Fake News: Understanding the Supply Side. In Knowledge Resistance in High-Choice Information Environments. Routledge.
Levy, N. (2022). Conspiracy Theories as Serious Play. Philosophical Topics, 50(2), 1–20.
Markowitz, D. M., Kouchaki, M., Hancock, J. T., & Gino, F. (2021). The Deception Spiral: Corporate Obfuscation Leads to Perceptions of Immorality and Cheating Behavior. Journal of Language and Social Psychology, 40(2), 277–296. https://doi.org/10.1177/0261927X20949594
Martin, B. (2024). What’s Wrong with Misinformation? Science & Technology Studies. https://doi.org/10.23987/sts.144333
Meyer, M., & Choo, C. W. (2024). Harming by Deceit: Epistemic Malevolence and Organizational Wrongdoing. Journal of Business Ethics, 189(3), 439–452. https://doi.org/10.1007/s10551-023-05370-8
Reed, G., Hendlin, Y., Desikan, A., MacKinney, T., Berman, E., & Goldman, G. T. (2021). The disinformation playbook: How industry manipulates the science-policy process—and how to restore scientific integrity. Journal of Public Health Policy, 42(4), 622–634. https://doi.org/10.1057/s41271-021-00318-6
Ruokolainen, H., Widén, G., & Eskola, E.-L. (2023). How and why does official information become misinformation? A typology of official misinformation. Library & Information Science Research, 45(2), 101237. https://doi.org/10.1016/j.lisr.2023.101237
Valentini, C., & Kruckeberg, D. (2011). Public relations and trust in contemporary global society: A Luhmannian perspective of the role of public relations in enhancing trust among social systems. Central European Journal of Communication, 1, 91–107.
van der Meer, T. G. L. A., & Jonkman, J. G. F. (2021). Politicization of corporations and their environment: Corporations’ social license to operate in a polarized and mediatized society. Public Relations Review, 47(1), 101988. https://doi.org/10.1016/j.pubrev.2020.101988
Wang, H., Ma, B., & Bai, R. (2019). The spillover effect of greenwashing behaviours: An experimental approach. Marketing Intelligence & Planning, 38(3), 283–295. https://doi.org/10.1108/MIP-01-2019-0006
The Misuse of Scholarly Communication Principles in Health Misinformation
ABSTRACT. Health misinformation poses significant challenges to public health, scientific discourse, and trust in science. This paper explores how health misinformation misuses scholarly communication principles, focusing on tactics that exploit academic and pseudoscientific justifications. The study involves qualitative and quantitative analysis of social media posts related to HPV vaccine hesitancy, COVID-19 vaccine hesitancy, and claims that vaccination causes autism. The findings highlight two main types of misuse: referencing articles from unreliable sources and manipulating research claims. The paper discusses the limitations of detecting such misuses and proposes steps to mitigate these risks, emphasizing the importance of promoting digital media literacy and transparency in scientific communication.
From Facts to (Dis)trust? Investigating Cross-Lagged Effects Between News Media Trust and Exposure to Fact-Checks
ABSTRACT. Fact-checking has become a crucial tool in combating misinformation, yet concerns persist regarding potential unintended consequences, particularly its impact on media trust. This study examines potential spillover effects by investigating whether exposure to fact-checks and media trust influence each other over time, positively or negatively. Using longitudinal panel data from Flanders (Belgium) and employing a cross-lagged panel model with random intercepts (RI-CLPM), we assess reciprocal effects across three waves during the six months leading up to the 2024 Belgian federal election. The findings indicate no significant cross-lagged relationship in either direction, suggesting that fact-checking neither erodes nor enhances media trust over time. Given the increasing scrutiny of fact-checking by political actors and technology platforms, these results provide important insights by challenging assumptions about negative spillover effects of fact-checking.
Exploring Misinformation Threat Perceptions and Trust Dynamics: Insights from Focus Group Discussions
ABSTRACT. With emerging technology and a changing media environment, citizens are facing an increasing number of challenges finding and identifying trustworthy information. In public discourse, misinformation specifically has been highlighted as a pressing threat to individuals and society (Ecker et al., 2024; Global Risks Report 2025, 2025). Others, however, label the misinformation discourse as a moral panic (Jungherr & Schroeder, 2021) and suggest that alarmist narratives around the threat of misinformation – leading to insecurities among the public – might explain declines in trust and other negative outcomes (Hoes et al., 2024; Jungherr & Rauchfleisch, 2024). Public perceptions of misinformation and its potential threats could thus be shaped in multiple ways and, in turn, reflect in citizens’ interactions with and trust in their information environments and democracy. Still, there is a lack of academic research on whether, how, and why citizens themselves perceive misinformation as threatening, and what role their information environment and technology play in their assessments.
It is crucial to consider the perspective of citizens on the threats of misinformation as this subjective, emotional dimension could drive other interrelated perceptions and developments regarding confidence and trust in the information environment and democracy. Indeed, as one factor, (perceived) misinformation exposure has been linked to declining trust in media (Altay et al., 2024; Hameleers et al., 2022; Stubenvoll et al., 2021), albeit only making up small parts of people’s media diet (Acerbi et al., 2022). However, there is also evidence for a spillover effect of interventions such as fact-checking and warnings of misinformation threats, which negatively impact trust in credible information and democracy (Hoes et al., 2024; Van Der Meer et al., 2023), showing that a strong public emphasis on these threats could be harmful as well (Hameleers, 2023). If people hold disproportionally high threat perceptions regardless of their – likely rather low – exposure to misinformation, this could perpetuate insecurities navigating their information environment and skepticism towards all kinds of information and media. In contrast, underestimations of the threat and feeling personally unaffected could indicate a lack of critical media use and increased susceptibility to misinformation (Martínez-Costa et al., 2023). In this study, we aim to unravel these misinformation threat perceptions by investigating the following research question: What type(s) of threats do people associate with misinformation and why?
Previous work has revealed initial indicators of how citizens' perceptions can diverge from academic and public assessments of the role and impact of misinformation. First, there seem to be significant discrepancies between academic views on what constitutes misinformation and the perceptions and understandings of citizens. Not only do the latter tend to be broader than scholarly definitions (Nielsen & Graves, 2017); they also vary between individuals (Kyriakidou et al., 2023). Therefore, in the context of this study, we understand misinformation as an umbrella term for all types of messages that could be considered false, inaccurate, or misleading (Hadlington et al., 2023; Wu et al., 2019) from a citizen perspective. Consequently, different understandings of what the concept entails may also lead to diverging assessments of its prevalence and, in turn, of the perceived intensity of the threat of misinformation (Rogers, 2020).
Second, citizens might have different ideas of what is at stake and where the threat stems from. Research has shown that the perceived subjects of the threat are often others instead of oneself (Altay & Acerbi, 2023; Hadlington et al., 2023), indicating a third-person effect regarding the perceived susceptibility to misinformation threats (van der Meer et al., 2023). Partisan biases, populist attitudes, and hostile media perceptions could impact what citizens identify as the origin of misinformation threats (Hameleers & Brosius, 2022; Schulz et al., 2020). Beyond that, however, we know little about how people arrive at their understandings of the nature of the threat and who or what is believed to be jeopardized by misinformation.
To explore citizens’ threat perceptions of misinformation in detail, we investigate what threats people associate with misinformation and how these perceptions are formed. While a few studies have provided first qualitative insights into misinformation perceptions (Hadlington et al., 2023; Kyriakidou et al., 2023; Nielsen & Graves, 2017), this study extends this literature by addressing the debated topic of potential misinformation threats from a citizen perspective. We zoom in on perceived consequences of misinformation, origins and sources associated with misinformation threats, as well as the signals and cues citizens rely on for their assessments of the threats. Using a qualitative, inductive approach, we aim to unravel social and mediated construction and amplification processes of the threats associated with misinformation (Binder et al., 2015; Kasperson et al., 1988).
In this preregistered study, we aim to conduct six focus groups with German citizens taking place in March, shortly after the 2025 German federal election (expected N = 30). Data collection is expected to be completed by March 16. To facilitate a common language and shared understanding of the information environment and media technologies, participants will be grouped based on age. We will hold two focus groups for each of the age groups of 16-34 years old, 35-54 years old, and 55 and older. Within the groups, participants will vary in sociodemographic background, political attitudes, as well as levels of trust in media, science, and politics to include a variety of perspectives. We will apply reflective thematic analysis based on Braun and Clarke (2006) to generate themes and key factors relevant for answering our research question.
This study aims at unraveling citizens’ threat perceptions of misinformation to gain a better understanding of the relationship between misinformation and developments connected to the erosion of trust in media and destabilization of democracy (Bennett & Livingston, 2018). Using focus groups, this study is one of the first inductive approaches to researching misinformation (threat) perceptions and allows for rich, detailed accounts of people’s perspectives on the evolving media landscape and society in the light of misinformation. Our findings will aid future research investigating trust dynamics and the role of misinformation by extending the knowledge on perceptions of misinformation threats, media and technology, and trust among citizens and can contribute to the development of interventions targeting disproportionate threat perceptions to restore trust in credible information, science, and institutions.
References:
Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information? Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-87
Altay, S., & Acerbi, A. (2023). People believe misinformation is a threat because they assume others are gullible. New Media & Society. https://doi.org/10.1177/14614448231153379
Altay, S., Lyons, B. A., & Modirrousta-Galian, A. (2024). Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence. Mass Communication and Society, 1–25. https://doi.org/10.1080/15205436.2024.2382776
Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. https://doi.org/10.1177/0267323118760317
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Ecker, U., Roozenbeek, J., Van Der Linden, S., Tay, L. Q., Cook, J., Oreskes, N., & Lewandowsky, S. (2024). Misinformation poses a bigger threat to democracy than you might think. Nature, 630(8015), 29–32. https://doi.org/10.1038/d41586-024-01587-3
Global Risks Report 2025. (2025). World Economic Forum. https://www.weforum.org/publications/global-risks-report-2025/digest/
Hadlington, L., Harkin, L. J., Kuss, D., Newman, K., & Ryding, F. C. (2023). Perceptions of fake news, misinformation, and disinformation amid the COVID-19 pandemic: A qualitative exploration. Psychology of Popular Media, 12(1), 40–49. https://doi.org/10.1037/ppm0000387
Hameleers, M. (2023). The (Un)Intended Consequences of Emphasizing the Threats of Mis- and Disinformation. Media and Communication, 11(2). https://doi.org/10.17645/mac.v11i2.6301
Hameleers, M., & Brosius, A. (2022). You Are Wrong Because I Am Right! The Perceived Causes and Ideological Biases of Misinformation Beliefs. International Journal of Public Opinion Research, 34(1), edab028. https://doi.org/10.1093/ijpor/edab028
Hameleers, M., Brosius, A., & De Vreese, C. H. (2022). Whom to trust? Media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. European Journal of Communication, 37(3), 237–268. https://doi.org/10.1177/02673231211072667
Hoes, E., Aitken, B., Zhang, J., Gackowski, T., & Wojcieszak, M. (2024). Prominent misinformation interventions reduce misperceptions but increase scepticism. Nature Human Behaviour, 8(8), 1545–1553. https://doi.org/10.1038/s41562-024-01884-x
Jungherr, A., & Rauchfleisch, A. (2024). Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States. Political Behavior. https://doi.org/10.1007/s11109-024-09911-3
Jungherr, A., & Schroeder, R. (2021). Disinformation and the Structural Transformations of the Public Arena: Addressing the Actual Challenges to Democracy. Social Media + Society, 7(1), 2056305121988928. https://doi.org/10.1177/2056305121988928
Kyriakidou, M., Morani, M., Cushion, S., & Hughes, C. (2023). Audience understandings of disinformation: Navigating news media through a prism of pragmatic scepticism. Journalism, 24(11), 2379–2396. https://doi.org/10.1177/14648849221114244
Martínez-Costa, M.-P., López-Pan, F., Buslón, N., & Salaverría, R. (2023). Nobody-fools-me perception: Influence of Age and Education on Overconfidence About Spotting Disinformation. Journalism Practice, 17(10), 2084–2102. https://doi.org/10.1080/17512786.2022.2135128
Nielsen, R., & Graves, L. (2017). “News you don’t believe”: Audience perspectives on fake news. Reuters Institute for the Study of Journalism. https://ora.ox.ac.uk/objects/uuid:6eff4d14-bc72-404d-b78a-4c2573459ab8
Rogers, R. (2020). Research note: The scale of Facebook’s problem depends upon how ‘fake news’ is classified. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-43
Schulz, A., Wirth, W., & Müller, P. (2020). We Are the People and You Are Fake News: A Social Identity Approach to Populist Citizens’ False Consensus and Hostile Media Perceptions. Communication Research, 47(2), 201–226. https://doi.org/10.1177/0093650218794854
Stubenvoll, M., Heiss, R., & Matthes, J. (2021). Media Trust Under Threat: Antecedents and Consequences of Misinformation Perceptions on Social Media. International Journal of Communication, 15(0), Article 0.
van der Meer, T. G. L. A., Brosius, A., & Hameleers, M. (2023). The Role of Media Use and Misinformation Perceptions in Optimistic Bias and Third-person Perceptions in Times of High Media Dependency: Evidence from Four Countries in the First Stage of the COVID-19 Pandemic. Mass Communication and Society, 26(3), 438–462. https://doi.org/10.1080/15205436.2022.2039202
Van Der Meer, T. G. L. A., Hameleers, M., & Ohme, J. (2023). Can Fighting Misinformation Have a Negative Spillover Effect? How Warnings for the Threat of Misinformation Can Decrease General News Credibility. Journalism Studies, 24(6), 803–823. https://doi.org/10.1080/1461670x.2023.2187652
Wu, L., Morstatter, F., Carley, K. M., & Liu, H. (2019). Misinformation in Social Media: Definition, Manipulation, and Detection. SIGKDD Explor. Newsl., 21(2), 80–90. https://doi.org/10.1145/3373464.3373475
Generative Listening: How Institutions Can Use AI to Build Trust with Constituents
ABSTRACT. The existing public engagement paradigm has run its course. The institutional logic of public engagement, or centralized decision-making augmented by public input, is too far removed from the cultural logics guiding most every other aspect of public and private life. Expectations born of digital culture that one should have access to personalized and real-time interactions with institutions, combined with the belief that institutions should reflect the values of those they represent, make the workings of most of our democratic institutions seem fundamentally broken. But enhanced access to and ease of data collection, coupled with the availability of AI tools for analysis, can shift standard procedures of governance towards what I call generative listening: an institutional commitment to collaborate with constituents on shaping and making decisions through the critical collection, analysis, and use of data. This means thinking beyond consultation. It means making sense of a variety of data sources, making analysis available to constituents, and diversifying where and how decisions get made. Paradoxically, the very technologies that have depersonalized most aspects of public life now have the potential to humanize the institutions that facilitate it. This paper advances the theory of generative listening through a case study of a project called Real Talk for Change in Boston, MA.
Integrating Communication Science into Explainable AI (XAI): Strategies for Deploying Trustworthy AI Models
ABSTRACT. As Artificial Intelligence (AI) systems are increasingly applied in various industries, the need for trustworthy AI grows. Explainable Artificial Intelligence (XAI) enhances transparency by providing faithful and understandable explanations of AI decision-making. However, many XAI explanations remain too technical for non-experts, limiting their real-world applicability. This paper explores how insights from Human-Machine Communication (HMC) and Communication Science can help bridge this gap by addressing users’ expectations, cognitive processing, and evaluation of AI explanations. By integrating these perspectives, XAI researchers can move beyond technical accuracy to design explanations that are accessible, interpretable, and aligned with user needs. This study provides practical recommendations and future research directions, advocating for a human-centred approach to AI transparency.
Self- vs. Meta-Perceptions of AI: A Cross-Cultural Analysis of AI-Sentiment and Mind Perceptions
ABSTRACT. With AI progressively integrating into society and people's lives, we must understand users' perceptions of these technologies, as they build the foundation for meaningful interactions, adoption, and trust (Andrews et al., 2023; Grimes et al., 2021). Understanding how individuals view AI can provide insights into societal narratives surrounding such emerging technologies, which is crucial for informing AI development and guiding policy decisions that ensure responsible AI integration into society. However, as with nearly every part of life, personal perceptions are not formed in isolation but within a broader societal context. Thus, this study investigates the relationship between self-perceptions (i.e., how individuals personally describe AI) and meta-perceptions (i.e., how they presume others perceive AI). By analyzing the sentiment of adjectives used to describe AI, we explore whether people project their attitudes onto others.
Specifically, we aim to test whether the false consensus effect, the cognitive bias by which individuals overestimate the extent to which others share their beliefs, attitudes, and perceptions (Ross et al., 1977), leads people to assume that their own view of AI is more widely held than it actually is. If so, this bias could amplify polarized views on AI. Cultural differences may also influence the strength of the false consensus effect, with collectivist cultures tending to exhibit a weaker effect than individualistic cultures (Choi & Cha, 2019).
While the primary focus of this study is on the self- vs. meta-perception dynamic, we also consider geographical context as a relevant factor in shaping AI narratives. Our study examines how geographical context shapes AI narratives and contributes to a more nuanced understanding of AI's societal role. Recognizing international differences in AI perception can inform more effective communication strategies and policy decisions regarding AI development and implementation, ultimately influencing trust. The fact that AI's visibility, applications, and potential vary significantly across global regions, depending on political and infrastructural circumstances (Cazzaniga et al., 2024; Mamad & Chichi, 2024), underscores the need to consider international perspectives. Recent research indicates that diverse (media) portrayals of AI and perceived (human-like) capabilities influence people's reactions across the globe, from trust (e.g., Gillespie et al., 2023; Scantamburlo et al., 2024) to acceptance and fear (e.g., Babiker et al., 2024; Dong et al., 2024; Vu & Lim, 2022). Likewise, it suggests regional variations, with Western countries often exhibiting skepticism toward AI compared to more optimistic views in Eastern nations, such as India (e.g., Globig et al., 2024).
Liu et al. (2024) discussed that, specifically in Eastern cultures, people tend to ascribe a soul (or mind) to non-human beings ranging from animals to AI, which shapes how people react to and interact with AI. In this vein, the authors provide evidence that Chinese respondents react more emotionally to pragmatic conversational agents, particularly when embodied, than respondents from the USA. Similarly, Barnes and colleagues (2024) elaborate on how cultural identity shapes AI acceptance, with collectivist societies being more likely to integrate AI into their self-concept. In contrast, individualistic cultures tend to perceive AI as an external entity, potentially influencing how mind perception is attributed to artificial agents.
Even though AI lacks lifelike cognition or self-awareness, people may still attribute a theory of mind (Premack & Woodruff, 1978), meaning the ability to recognize others’ mental states, to AI or other artificial agents, which is used to make sense of observed behaviour (e.g., Krämer, 2008; Krämer et al., 2012; Waytz et al., 2010). This process occurs along two key dimensions: agency (e.g., planning, self-control, memory) and experience (e.g., pain, pleasure, fear) (Epley & Waytz, 2010; Gray et al., 2007). Such attributions shape how people engage with artificial agents, influencing trust, social interactions, and even moral considerations (Ladak et al., 2025; Lee et al., 2020; Shank et al., 2021). While these perceived mental capacities do not imply true sentience, people may nevertheless extend their reasoning about mind perception to broader assumptions about AI’s potential for conscious awareness. This study examines whether stronger attributions of agency and experience are linked to greater belief in AI’s current or future sentience. This allows us to explore whether mind perception is a cognitive bridge to conceptualizing AI as a potentially sentient entity.
Methods
To expand existing research, the present study examines public perceptions of AI across five diverse countries - the United Kingdom, United States, India, South Africa, and Australia - as an initial step toward a larger comparative project involving up to 20 nations with varying presence of AI in daily life and infrastructures for it. By analyzing how AI is perceived across diverse populations, this research contributes to discussions on AI’s societal role and its acceptance. As the data will be collected by the end of March 2025, we are, to date, unable to report results. However, we plan to conduct and analyse the survey described below and present the results accordingly.
Sample
To determine the sample size per country, we performed an a priori power analysis using the ‘pwr’ package (Champely et al., 2018) in R (version 4.2.1). Intending to detect small to moderate effects with 90% power and an alpha level of 0.05 at the correlational level, we set a minimum effect size of interest of r = |.15|. These considerations led us to a sample size of at least 462 participants per country. This also allows us to detect differences between the five countries down to an effect size of f = |.082|. Given the resources available, we can increase the individual sample sizes to n = 500 participants per country (UK, USA, India, South Africa, Australia), leading to a total targeted sample size of N = 2,500 participants. A sample balanced on age, gender, and educational status will be recruited via the German panel provider Bilendi.
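As a cross-check of the reported figure (the study itself uses R's 'pwr' package), the Fisher z normal approximation below reproduces the required per-country sample size to within about one participant of the 462 stated above.

```python
# Correlation power analysis via the Fisher z approximation; yields ~463,
# within one participant of the 462 reported with R's 'pwr' package.
from math import atanh, ceil
from scipy.stats import norm

r, alpha, power = 0.15, 0.05, 0.90
z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
z_beta = norm.ppf(power)

n = ((z_alpha + z_beta) / atanh(r)) ** 2 + 3
print(ceil(n))
```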
Measures and Procedure
After giving informed consent, participants will be asked to provide demographic information. Then, they will be asked to type in their personal description of AI using at least 10 words. Once they confirm their description, they will be prompted to provide at least one and up to three adjectives they associate with AI by completing the sentence, “I think AI is…”. Participants' mind perceptions will be measured using established dimensions of agency (self-control, morality, memory) and experience (hunger, fear, pain, pleasure) (Gray et al., 2007). In addition, we will assess their belief in AI's sentience by asking whether they think AI currently possesses a conscious mind and whether they expect it to develop one in the next five years. These items will be rated on a scale from 0 (‘completely disagree’) to 6 (‘completely agree’). This enables us to examine whether higher mind attributions correspond with stronger beliefs in AI's sentience, potentially revealing how conceptualizations of AI's cognitive abilities influence public expectations about its future evolution.
Analysis Plan
To analyze the data, we will employ natural language processing to extract the sentiments’ valence and intensity from participants’ adjective-based descriptions of AI. The analysis will focus on three key aspects. First, we will compare self-perceptions and meta-perceptions to determine whether individuals systematically assume that others share their views, i.e., the false consensus effect. Second, we will examine how mind perception relates to sentiment, predicting that stronger attributions of agency and experience will correspond with more emotionally charged AI descriptions. Third, we will investigate whether perceptions of AI’s agency and experience predict beliefs about its potential sentience. Specifically, we hypothesize that individuals who ascribe higher mind perception to AI will be more likely to believe that AI either already possesses or will develop sentience in the near future. Finally, we will explore cross-national differences in AI sentiment and mind perception, expecting that variations in mind attributions will influence the valence of the views about AI.
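The abstract does not name a specific sentiment tool. As a hedged sketch of the valence-scoring step, the snippet below scores invented self- and meta-perception adjectives with NLTK's VADER and compares their mean compound valence; the adjective lists are illustrative only.

```python
# Illustrative sketch (invented adjectives; the study's NLP pipeline is not
# specified): score adjectives with VADER and compare the mean valence of
# self- vs. meta-perceptions of AI.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

self_adjectives = ["helpful", "innovative", "creepy"]    # "I think AI is..."
meta_adjectives = ["dangerous", "overhyped", "useful"]   # presumed views of others

def mean_valence(words):
    return sum(sia.polarity_scores(w)["compound"] for w in words) / len(words)

print("self:", round(mean_valence(self_adjectives), 2))
print("meta:", round(mean_valence(meta_adjectives), 2))
```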
In sum, these findings will provide insights for ethical AI development, helping to align technological advancements with public expectations. Additionally, the results may inform policy-making, ensuring that regulations reflect societal attitudes toward AI and help understand how cultural and psychological factors shape public trust and acceptance of AI. Furthermore, this research has implications for AI communication strategies, aiding in developing narratives that foster trust and engagement across different cultural contexts.
References
Andrews, R. W., Lilly, J. M., Srivastava, D., & Feigh, K. M. (2023). The role of shared mental models in human-AI teams: A theoretical review. Theoretical Issues in Ergonomics Science, 24(2), 129–175.
Babiker, A., Alshakhsi, S., Al-Thani, D., Montag, C., & Ali, R. (2024). Attitude towards AI: Potential influence of conspiracy belief, XAI experience and locus of control. International Journal of Human–Computer Interaction, 1–13.
Barnes, A. J., Zhang, Y., & Valenzuela, A. (2024). AI and culture: Culturally dependent responses to AI systems. Current Opinion in Psychology, 101838.
Cazzaniga, M., Jaumotte, M. F., Li, L., Melina, M. G., Panton, A. J., Pizzinelli, C., Rockall, E. J., & Tavares, M. M. M. (2024). Gen-AI: Artificial intelligence and the future of work. International Monetary Fund.
Champely, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Anandkumar, A., Ford, C., Volcic, R., De Rosario, H., & De Rosario, M. H. (2018). Package ‘pwr’. R Package Version, 1(2).
Choi, I., & Cha, O. (2019). Cross-cultural examination of the false consensus effect. Frontiers in Psychology, 10, 2747.
Dong, M., Conway, J. R., Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2024). Fears about artificial intelligence across 20 countries and six domains of application. American Psychologist.
Epley, N., & Waytz, A. (2010). Mind perception. Handbook of Social Psychology, 1(5), 498–541.
Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in artificial intelligence: A global study. The University of Queensland and KPMG Australia, 10.
Globig, L. K., Xu, R., Rathje, S., & Van Bavel, J. J. (2024). Perceived (Mis)alignment in Generative Artificial Intelligence Varies Across Cultures. Preprint. DOI, 10.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619–619.
Grimes, G. M., Schuetzler, R. M., & Giboney, J. S. (2021). Mental models and expectation violations in conversational AI interactions. Decision Support Systems, 144, 113515. https://doi.org/10.1016/j.dss.2021.113515
Ladak, A., Wilks, M., Loughnan, S., & Anthis, J. R. (2025). Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences. arXiv Preprint arXiv:2502.18683.
Lee, S., Lee, N., & Sah, Y. J. (2020). Perceiving a Mind in a Chatbot: Effect of Mind Perception and Social Cues on Co-presence, Closeness, and Intention to Use. International Journal of Human–Computer Interaction, 36(10), 930–940. https://doi.org/10.1080/10447318.2019.1699748
Liu, Z., Li, H., Chen, A., Zhang, R., & Lee, Y.-C. (2024). Understanding public perceptions of AI conversational agents: A cross-cultural analysis. 1–17.
Mamad, M., & Chichi, O. (2024). Towards a Human-Centred Artificial Intelligence in the Age of Industry 5.0: A Cross-Country Analysis. 2024 11th International Conference on Wireless Networks and Mobile Communications (WINCOM), 1–6. https://doi.org/10.1109/WINCOM62286.2024.10655038
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.
Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13(3), 279–301.
Scantamburlo, T., Cortés, A., Foffano, F., Barrué, C., Distefano, V., Pham, L., & Fabris, A. (2024). Artificial intelligence across europe: A study on awareness, attitude and trust. IEEE Transactions on Artificial Intelligence.
Shank, D. B., North, M., Arnold, C., & Gamez, P. (2021). Can mind perception explain virtuous character judgments of artificial intelligence?
Vu, H. T., & Lim, J. (2022). Effects of country and individual factors on public acceptance of artificial intelligence and robotics technologies: A multilevel SEM analysis of 28-country survey data. Behaviour & Information Technology, 41(7), 1515–1528.
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388.
What Can Experiences of Too Much Trust in Technology Teach Us About Implementation of Future Technologies?
ABSTRACT. Introduction
Over the last decade, successive governments in Denmark have shared the vision of the country as a Digital Frontrunner, and it has been a common ambition to digitalize public institutions at a fast pace. Throughout this period, trust in digital technologies has been high, and policy makers and policy papers concerning digitalization have expressed high expectations of the benefits and positive outcomes of public digitalization (Vestergaard 2022). At the same time, one of the central aims has been to maintain and build citizens’ trust in the public institutions and the digital technologies introduced (see e.g. Government 2018). However, the high level of trust in digital technologies on the part of policy makers and public authorities does not always combine well with the aim of maintaining and building citizens’ trust in the institutions and the digital technologies implemented. An example can be found in the digitalization of the Danish tax authorities and the introduction and implementation of a new automatic decision-making system for carrying out public property assessments. The system failed to realize its primary aim: to rebuild citizens’ (institutional) trust in the property assessment system and the tax authorities by implementing a trustworthy system. Contrary to this aim of building institutional trust, the implementation of the new system became a breach of trust on the part of the tax authorities and policymakers and resulted in a crisis of citizens’ trust in the system and in the tax authorities as an institution (Ingvorsen et al. 2024). An analysis of the case suggests that too much and too confident trust in digital technology on the part of the policymakers contributed to the tax authorities breaching citizens’ trust and disappointing their expectations of the public authorities (Vestergaard & Pedersen 2025).
Using the tax authorities’ automatization of the public property assessments as a case study, I want to investigate whether and to what extent such experiences of failed public digitalization projects and disappointed expectations can teach policymakers and those responsible for implementation valuable lessons about the introduction of future (digital) technologies such as artificial intelligence. The main question addressed is: what, if anything, can we learn for the future implementation of digital technologies from this case and cases like it?
My aims are (1) to present a case study of the new digital and automatic public property assessment system, analyzing the case in light of different relations of trust and focusing on, on the one hand, the citizens’ institutional trust in the tax authorities and, on the other, policymakers’ and public authorities’ trust in digital technology; (2) on the basis of the case study and analysis, to investigate trust in digital technology from a temporal perspective, differentiating between trust in existing technologies, which it is possible to interact with and thus test for trustworthiness in the present, and trust in future digital technologies, which are not yet constructed and thus cannot be tested or interacted with; and (3) to discuss what policymakers can learn from experiences such as the failure of the new public property assessment system when implementing, or planning to implement, new digital technologies. I will raise the question of what experiences with the introduction of automatization in public institutions can potentially teach us about the introduction of artificial intelligence technologies in public institutions.
The Case Study
Trust has been characterized as a combination of risk taking and optimism regarding future outcomes. The truster takes a risk and becomes vulnerable to breaches of trust when trusting, while at the same time expecting a positive outcome on the part of the trustee (Rousseau et al. 1998). This understanding of trust implies that one can trust too much, taking too high a risk. This holds for interpersonal trust and, as illustrated in the case study, also for trust in digital technologies.
The case study uses qualitative content analysis (Schreier 2014) as its method, and the data analyzed consist of central documents and policy papers related to the system and the decision-making process. The analysis shows that since its initiation in 2013 – after the old system for public property assessments had been strongly criticized (Rigsrevisionen 2013) – the primary aim of the project has been to build public trust in the property assessments. However, this aim is far from realized. The new system and the assessments it has generated, especially the temporary 2022 assessments published from 2023 onwards, have faced widespread criticism, including from the Danish Ombudsman, who pointed out that public trust may have been undermined (Ombudsmanden 2024). A recent survey indicates widespread mistrust among Danish citizens towards the new property assessments and suggests that they may have contributed to undermining citizens’ institutional trust in the tax authorities in general (Ingvorsen et al. 2024). In the analyzed policy papers and documents, five criteria for building trust are identified: the property assessments should be sufficiently precise relative to the market value of the properties; they should be uniform, securing fairness and equality before the law; they should be transparent and hence understandable for the citizens; there should be a fair and effective procedure for complaints; and there should be concise communication between the tax authorities and the citizens. It is shown that the new system has failed on all five criteria. Further, the analysis suggests that a main reason behind the failure of the property assessments has been a high level of trust in a fully automatic decision-making system that was expected to guarantee objectivity and deliver increased precision, uniformity, and transparency as positive outcomes. This high level of trust in a digital technology may have been a central factor contributing to the policy- and decision-makers disregarding the risk that the system would not be able to carry out its task. Thus, too much trust in technology, it is suggested, contributed to undermining another relation of trust: the citizens’ institutional trust in the tax authorities (Vestergaard & Pedersen 2025).
Trust in Future Technologies
On the basis of the case study, I address a temporal dimension of trust in technology, differentiating between trust in presently existing technologies, on the one hand, and trust in future technologies on the other. The high level of trust in technology – in the form of high and confident expectations of positive outcomes such as objectivity, precision, fairness, and transparency from fully automatizing public property assessments – can be roughly divided into three phases.
The first phase was the period when the project was initiated by policy makers in 2013 against the background of criticism of the existing property assessment system. At that time, the concurrent national digitization strategy (The Government et al. 2011) and the policy papers concerning its implementation (Agency of Digital Government 2014) had the introduction of automatic case processing in the public administration as a main policy aim. The expectations were high. In the elaboration of the digital strategy, it reads: “Automated case processing means digitally supported case processing steps, including final decisions that can be made without the involvement of active discretion of a case worker. Automated case management can lead to both administrative resource savings, faster case processing, greater transparency and predictability in the authorities' case processing and thus greater legal certainty …” (Agency of Digital Government 2014, 4). In the second phase, the experts advising the policy makers on a new system held high expectations of the positive outcomes of scaling up a small prototype to a full system (Engberg et al. 2014). The third phase came when the system was constructed and the decision makers in the tax authorities (and possibly also the legislators) expected a positive outcome from using the system and published the temporary 2022 assessments in 2023 without any humans in the loop checking their quality (Ombudsmanden 2024).
Learning from Experience
Currently, a prominent ambition in Danish digitalization policymaking is to introduce artificial intelligence technologies, and this time around the expected benefits again include increased efficiency, faster case processing, and time and labor saved for public employees (Government 2024). I will discuss the conditions under which earlier experiences of unfulfilled positive expectations of future technologies can inform us and help calibrate current expectations of the outcomes of future technology. More specifically, I discuss how disappointed expectations of fully automatizing human decisions, exemplified in the case study on public property assessments, could inform the current (high) expectations of positive outcomes from introducing artificial intelligence technologies in public institutions and administration.
References
Agency of Digital Government (2014). Afsluttende rapport om initiativ 11.3, analyse af digitaliseringsklar lovgivning. Copenhagen: Agency of Digital Government.
Engberg, P., Gronø, L., Hansen, P. L., & Leth-Petersen, S. (2014). Forbedring af ejendomsvurderingen. Resultater og anbefalinger fra regeringens eksterne ekspertudvalg. Copenhagen: Skatteministeriet.
Government (2018). World-class Digital Service. Copenhagen: Ministry of Finance.
Government (2024). Strategisk indsats for kunstig intelligens. Copenhagen: Digitaliseringsministeriet.
Government, KL & Regions (2011). Den digitale vej til fremtidens velfærd. Copenhagen: Ministry of Finance.
Ingvorsen, E. S., Hecklen, A., Ørskov, O., Ussing, J., & Knudsen, J. (2024). ”Trods milliardindsprøjtninger og storstilet redningsplan: Nu tegner 'usædvanlig' måling et dystert billede”. DR Nyheder, 07.09.2024. Accessed 14.03.2025: https://www.dr.dk/nyheder/penge/trods-milliardindsproejtninger-og-storstilet-redningsplan-nu-tegner-usaedvanlig
Ombudsmanden. (2024). Foreløbige ejendomsvurderinger for 2022. Copenhagen: Folketingets Ombudsmand.
Rigsrevisionen. (2013). Beretning til Statsrevisorerne om den offentlige ejendomsvurdering. Copenhagen: Rigsrevisionen.
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Introduction to Special Topic Forum: Not so Different after All: A Cross-Discipline View of Trust. The Academy of Management Review, 23(3), 393–404.
Schreier, M. (2014). Qualitative content analysis. In U. Flick (Ed.), The SAGE Handbook of Qualitative Data Analysis (pp. 170–183). Newbury Park, CA: Sage.
Vestergaard, M. (2022). The need for speed – technological acceleration and inevitabilism in recent Danish digitalization policy papers. SATS - Northern European Journal of Philosophy, 22(1), 27-48.
Vestergaard, M. & Pedersen, E. O. (2025). Af hensyn til automatisering. Tillid og mistillid i forbindelse med digitaliseringen af det offentlige ejendomsvurderingssystem i Danmark. Nordisk Administrativt Tidskrift. (accepted / in press).
Control-Based Trust in AI Governance: Copyright Law's Role Within the EU AI Act’s Institutional Design
ABSTRACT. Extended abstract
This paper examines the role of copyright law as a control-based trust mechanism within the governance set-up of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (EU AI Act). Specifically, it addresses how copyright law provisions interact with the institutional actors established by the AI Act to address power inequalities among authors, users, and AI developers and to create institutional trust. Article 53 of the EU AI Act requires providers of general-purpose AI models (GPAIMs) to comply with the copyright law provisions of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market (CDSM Directive), particularly regarding text and data mining exceptions.
We explore three interconnected research questions:
(1) How do provisions of the CDSM Directive address power asymmetries between AI developers and copyright holders within the AI governance ecosystem?
(2) In what ways might the provisions of the CDSM Directive facilitate “control-based trust” as conceptualized by Sydow and Windeler (2004)?
(3) What dynamics emerge between the regulatory actors responsible for, and those overseeing, broader AI governance under the institutional design of the EU AI Act?
Our methodology combines a preliminary review of the trust literature, a doctrinal analysis of the legal provisions of the EU AI Act and the CDSM Directive, and an institutional analysis of AI governance, examining both the legal texts and the relationships between institutional bodies such as the AI Office and the European AI Board and the various stakeholders involved in AI training and deployment.
This paper contributes to understanding how hard-law approaches within AI governance may establish conditions for trust in AI systems and among stakeholders. In particular, it examines how the EU AI Act’s requirement that GPAIM providers comply with copyright law creates accountability mechanisms that may establish conditions for trust-building between copyright holders and AI developers.
Theoretical background
AI Governance – a definition
‘AI governance’ has been conceptualised by Wirtz et al. (2024) with society at the centre. They define it as the management of AI development and deployment through established rules and guidelines within an evolving ecosystem. Such an approach seeks to address the real and potential impacts of AI on society by defining structures for technological, informational, political, legal, social, economic, and ethical action. Dafoe (2024) defines it as follows: “AI governance refers (1) descriptively to the policies, norms, laws, and institutions that shape how AI is built and deployed, and (2) normatively to the aspiration that these promote good decisions (effective, safe, inclusive, legitimate, adaptive)". For Dafoe (2024), the field of AI governance studies how humanity can best navigate the transition to advanced AI systems. An AI governance approach varies based on the field of application and risk-benefit assessments, utilizing both hard-law and soft-law tools to either restrict or encourage AI use (Villasenor, 2020).
The ecosystem of AI governance consists of members with distinct roles, interests, and actions that collectively influence governance processes (Wirtz et al., 2024). AI affects, and is governed by, multiple actors rather than a single entity, including government, industry, civil society, companies, and academia, all of which play crucial roles nationally and globally. Together, they form an interconnected network with shared interests whose members both collaborate and compete, resembling a biological ecosystem (Moore, 1993). The combined capabilities of these actors create value and outcomes unattainable by any individual entity (Adner, 2006).
AI Governance, Trust and Control
Trust enables cooperation among diverse stakeholders in AI governance (drawing on Braithwaite and Levi, 1998). In governance contexts, trust represents the willingness to accept vulnerability when confronted with risk and uncertainty (Mayer et al., 1995). Trust shares a complex relationship with ‘control’ in the trust literature. Some scholars argue that trust and control function as substitutes, with greater trust reducing the need for formal control mechanisms (Sako, 1998). Others argue that they are complementary, with each supporting and reinforcing the other. Across the literature, trust, control, knowledge, and power exhibit varied relationships in different contexts.
In organisational contexts, Das and Teng (2001) distinguish between control-undermining and trust-enhancing forms of control, while Sydow and Windeler (2004) identify bidirectional relationships like “control-based trust” and “trust-based control”. Control may actively generate trust—termed “control-based trust” by Sydow and Windeler (2004)—when “control measures applied show that the actions, procedures, or results do occur as expected – that is, when the trust given turns out to be justified”. However, control measures that are not implemented in an appropriate manner can directly undermine trust. Conversely, trust may enable control—"trust-based control” (Sydow and Windeler, 2004)—by opening additional control possibilities, particularly social control options. Das and Teng (2001) further argue that competence and goodwill trust enhance all control modes in alliances, while acknowledging that sufficient trust may render control unnecessary in certain contexts (Sydow, 2006).
Trust in AI Systems
Recent scholarship on AI governance reveals a complex interplay between trust, control, power, regulation, and institutional design. The EU AI Act has emerged as a focal point for this analysis.
Laux, Wachter, and Mittelstadt (2024) critique the EU AI Act for conflating “trustworthiness” with “acceptability of risks” and for treating trustworthiness as binary rather than as an ongoing process. Researchers have examined trust through various lenses. Gillis, Laux, and Mittelstadt (2024) explore its interpersonal, institutional, and epistemic dimensions, noting that transparency sometimes decreases rather than increases trust. Lahusen, Maggetti, and Slavkovik (2024) propose “watchful trust”, balancing trust with necessary vigilance in complex systems.
Tamò-Larrieux et al. (2024) take a more pragmatic approach, identifying sixteen factors affecting trust in AI and demonstrating that AI governance can directly influence only six of them: legislative measures, permissible AI tasks, automation levels, competence standards, transparency requirements, and power dynamics between users and providers. They suggest that regulators should focus strategically rather than attempt to address all aspects of trust indiscriminately. Human oversight, as discussed by Laux (2024) and Durán and Pozzi (2025), also plays a significant role in AI governance.
The literature gap – trust within actors in AI governance
While substantial research examines trust in AI systems themselves, Zhang (2024) notes that further research is needed into institutional trust in the actors behind AI systems within contemporary political and economic contexts. Research by Zhang and Dafoe (2019) suggests that public trust varies significantly across different AI actors, but the existing literature focuses primarily on non-regulatory actors such as university researchers and technology companies. This gap presents an opportunity to examine the institutional dynamics between regulatory actors within hard-law AI governance mechanisms like the EU AI Act, moving beyond principles of trustworthy AI toward understanding the complex interrelationships between trust, oversight, and governance in AI regulatory ecosystems.
Power dynamics, control-based trust and AI governance
Power dynamics fundamentally shape trust relationships in AI governance, as highlighted by Tamò-Larrieux et al. (2024). The inherent information and power asymmetries between AI providers and users create significant governance challenges, with large technology companies wielding disproportionate influence in framing debates and setting standards that serve their interests. Such power asymmetries directly impact user trust in AI systems, as ordinary consumers struggle to question or understand corporate actions and motivations (Van Dijck et al., 2018; Nowotny, 2021).
The EU AI Act addresses these power imbalances through a multi-layered governance approach that establishes several key institutional actors: the AI Office (Article 64), the European AI Board (Article 66), a Scientific Panel (Article 68), and an Advisory Forum (Article 67). Together, these institutions form an AI governance ecosystem.
Creating Control-Based Trust through law
We argue that the EU AI Act creates what Sydow and Windeler (2004) describe as “control-based trust” by providing constraints within which different actors must engage with AI systems despite inherent uncertainties. The relationship between law and trust is complex, with some scholars arguing that legal structures reduce uncertainty to enable trust relationships, while others suggest formal controls can trigger a “trust paradox” (Long & Sitkin, 2018).
The literature reveals mixed findings on whether hard-law approaches to governance complement or substitute for trust. While Ribstein (2001) argues that law substitutes for trust because “the shadow of coercion” impedes trust development, Hill and O'Hara (2006) propose that regulation may reduce uncertainty to a level where trust relationships become possible. Tamò-Larrieux et al. (2024) suggest that regulation should aim to optimize trust by minimizing both under-trust and over-trust in technologies. Copyright law functions as a specific control mechanism within AI governance, as GPAIMs depend heavily on copyrighted materials for training and the AI Act requires GPAIM providers to comply with applicable copyright laws in EU member states (Geiger & Iaia, 2024).
Our Contribution
We integrate organisational trust theory with hard-law approaches to AI governance, drawing on Das and Teng’s (2001) control types and their relationship to trust and on Sydow and Windeler’s (2004) concept of “control-based trust”. We argue that copyright law functions as a social control mechanism that may generate trust through appropriate constraints.
Our approach recognises that trust in AI governance emerges not merely from the technical reliability of AI systems, but from institutional arrangements that appropriately distribute power, knowledge, and control among the actors involved in AI deployment. By focusing on institutional trust in regulatory actors, we address Zhang’s (2024) identified research gap, exploring how copyright law might facilitate trustworthy relationships between regulators, AI developers, and creative communities within the EU AI Act’s institutional design.
Bibliography:
Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, 84(4), 98-107.
Bachmann, R. (2001). Trust, power and control in trans-organizational relations. Organization Studies, 22(2), 337-365.
Braithwaite, J., & Levi, M. (Eds.). (1998). Trust and governance. Russell Sage Foundation.
Bullock, J. B. (2023). Introduction and overview. In J. B. Bullock, Y. C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press.
Clegg, S. R., & Hardy, C. (1996). Organizations, organization and organizing. In S. R. Clegg, C. Hardy, & W. R. Nord (Eds.), Handbook of organization studies (pp. 1-28). Sage.
Dafoe, A. (2024). AI governance: Overview and theoretical lenses. In J. B. Bullock, Y. C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.2
Das, T. K., & Teng, B. S. (2001). Trust, control, and risk in strategic alliances: An integrated framework. Organization Studies, 22(2), 251-283.
Durán, J. M., & Pozzi, G. (2025). Trust and transparency in artificial intelligence: Epistemic and normative dimensions. AI & Society, forthcoming.
Geiger, C., & Iaia, V. (2024). Towards an independent EU regulator for copyright issues of generative AI: What role for the AI Office (but more importantly: What's next)? International Review of Intellectual Property and Competition Law, 55(1), 1-18.
Gillis, R., Laux, J., & Mittelstadt, B. (2024). Trust and trustworthiness in artificial intelligence. In Handbook on Public Policy and Artificial Intelligence. Edward Elgar Publishing. https://doi.org/10.4337/9781803922171.00021
Gutierrez, C. I., Marchant, G. E., & Tournas, L. (2020). Lessons for artificial intelligence from historical uses of soft law governance. Jurimetrics, 61(1), 1–18. https://papers.ssrn.com/abstract=3775271
Hult, D. (2018). Creating trust by means of legislation – a conceptual analysis and critical discussion. The Theory and Practice of Legislation, 6(1), 1–23. https://doi.org/10.1080/20508840.2018.1434934
Klijn, E.-H., Edelenbos, J., & Steijn, B. (2010). Trust in governance networks: Its impacts on outcomes. Administration & Society, 42(2), 193–221. https://doi.org/10.1177/0095399710362716
Lahusen, C., Maggetti, M., & Slavkovik, M. (2024). Trust, trustworthiness and AI governance. Scientific Reports, 14, 20752. https://doi.org/10.1038/s41598-024-71761-0
Laux, J. (2024). Institutionalised distrust and human oversight of artificial intelligence: Towards a democratic design of AI governance under the European Union AI Act. AI & Society, 39(1), 2853-2866.
Laux, J., Wachter, S., & Mittelstadt, B. (2024). Trustworthy artificial intelligence and the European Union AI Act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance, 18, 3–32. https://doi.org/10.1111/rego.12512
Long, C. P., & Sitkin, S. B. (2018). Control–trust dynamics in organizations: Identifying shared perspectives and charting conceptual fault lines. Academy of Management Annals, 12(2), 725-751.
Luhmann, N. (1979). Trust and power. John Wiley & Sons.
Marchant, G. (2019). “Soft law” governance of artificial intelligence. AI Pulse.
Marchant, G., Abbott, K., & Allenby, B. R. (2013). Innovative governance models for emerging technologies. Edward Elgar Publishing. https://doi.org/10.4337/9781782545644.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.
Moore, J. F. (1993). Predators and prey: A new ecology of competition. Harvard Business Review, 71(3), 75-86.
Möllering, G. (2006). Trust: Reason, routine, reflexivity. Emerald Group Publishing.
Nowotny, H. (2021). In AI we trust: Power, illusion and control of predictive algorithms. John Wiley & Sons.
Paul, R. (2023). European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market. Regulation & Governance, 14(1), 1-22.
Ribstein, L. E. (2001). Law v. trust. Boston University Law Review, 81, 553-590.
Sako, M. (1998). Does trust improve business performance? In C. Lane & R. Bachmann (Eds.), Trust within and between organizations (pp. 88-117). Oxford University Press.
Shapiro, S. P. (1987). The social control of impersonal trust. American Journal of Sociology, 93(3), 623-658.
Starke, G., & Ienca, M. (2024). Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Cambridge Quarterly of Healthcare Ethics, 33(3), 360–369. doi:10.1017/S0963180122000445
Sydow, J. (1998). Understanding the constitution of interorganizational trust. In C. Lane & R. Bachmann (Eds.), Trust within and between organizations (pp. 31-63). Oxford University Press.
Sydow, J. (2006). How can systems trust systems? A structuration perspective on trust-building in inter-organizational relations. In R. Bachmann & A. Zaheer (Eds.), Handbook of trust research (pp. 377-392). Edward Elgar.
Sydow, J., & Windeler, A. (2003). Knowledge, trust, and control: Managing tensions and contradictions in a regional network of service firms. International Studies of Management & Organization, 33(2), 69-99.
Tamò-Larrieux, A., Mayer, S., & Zihlmann, Z. (2024). Can law establish trust in artificial intelligence? Regulation & Governance, 18(3), 781-804.
Thaler, R. H. (2000). From homo economicus to homo sapiens. Journal of Economic Perspectives, 14(1), 133-141.
van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001
Villasenor, J. (2020). Soft law as a complement to AI regulation. Brookings.
Wirtz, B. W., & Müller, W. M. (2019). An integrated artificial intelligence framework for public management. Public Management Review, 21(7), 1076-1100.
Wirtz, B. W., Langer, P. F., & Weyerer, J. C. (2024). An ecosystem framework of AI governance. In J. B. Bullock, Y. C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press.
Young, L. C., & Wilkinson, I. F. (1989). The role of trust and co-operation in marketing channels: A preliminary study. European Journal of Marketing, 23(2), 109-122.
Zhang, B. (2024). Public opinion toward artificial intelligence. In J. B. Bullock, Y. C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.36
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Center for the Governance of AI, Future of Humanity Institute, University of Oxford.
(En)Trusting our Privacy to PETs? Consumer trust frames in Privacy-Preserving Computation
ABSTRACT. Privacy-enhancing technologies (PETs) integrated into Privacy-Preserving Computation (PPC) promise robust protection of personal data and are marketed as ‘trust technologies’. Yet, consumer trust in PETs is shaped not only by technical assurances but by complex social, institutional, and ethical dynamics. This qualitative study examines how consumers perceive PETs through inductive thematic analysis of focus group discussions centred around Google's PET-enabled services (busyness indicator, password checkup, and Gboard predictive typing). Four distinct consumer trust frames emerged: (1) Comforting illusion of trust: users implicitly trust PETs as automatic safeguards; (2) Delegated trust: trust placed primarily in regulatory frameworks rather than the technology itself; (3) Disillusioned trust: scepticism and resignation resulting from complexity, opacity, and privacy fatigue; and (4) Pragmatic trust: selective and context-dependent trust based on usability, brand reputation, and convenience. The findings underscore that signalling best practice alone does not guarantee trust, and that critical ethical questions persist around consumer agency, privacy stratification, and corporate accountability. In sum, there is a need to enhance trustworthy signalling of PET performance and strengthen accountability mechanisms, thereby supporting genuine consumer privacy.
Digital Trust and Activism: Women of Sacrifice Zones Resisting Necropolitics in Chile
ABSTRACT.
In the context of increasing distrust in institutions and the growing role of digital platforms in shaping public discourse, this study examines how women activists from Mujeres de Zonas de Sacrificio en Resistencia (MUZOSAR) in Chile use digital media to construct trust and counter dominant narratives of environmental injustice. Sacrifice zones—areas subjected to environmental degradation due to industrial activities—are paradigmatic sites of necropolitics (Mbembe, 2019), where state and corporate actors determine which lives are expendable in the name of economic progress. Quintero-Puchuncaví, Chile’s most emblematic sacrifice zone, has suffered decades of pollution, health crises, and social marginalization (Ponce, 2020; Bolados, 2023), leading to widespread institutional mistrust.
This study draws on digital ethnography and digital discourse analysis (Jiang, 2019; Weitin, 2019) to examine how MUZOSAR activists leverage social media to reshape trust relations in the digital sphere. Through an analysis of online narratives—Facebook and Instagram posts, press articles, and activist communications—the research explores how these women mobilize an ethics of care (Tronto, 1994; Brugère, 2022) to resist necropolitical governance and reclaim agency over environmental decision-making. Their activism disrupts dominant media representations, which often erase or delegitimize grassroots struggles (Bolados & Sánchez, 2017), and instead constructs alternative knowledge frameworks that foster collective responsibility and transnational solidarity (Cusicanqui, 2018; Arancibia, 2023).
However, the digital sphere presents contradictions in trust dynamics. On the one hand, MUZOSAR’s online presence increases visibility and legitimacy, enabling activists to challenge official narratives and mobilize support (Tufekci, 2017; Castells, 2012). On the other hand, digital platforms reinforce algorithmic marginalization, expose activists to online harassment, and facilitate the proliferation of disinformation campaigns that undermine their credibility (Zuboff, 2023; Della Porta, 2024). These challenges illustrate broader tensions in the digital society, where trust in expertise and science is eroded while untrustworthy actors gain influence (Klein, 2015; Alkon & Agyeman, 2011).
By situating the struggles of Chilean women environmental activists within the global debate on trust in digital infrastructures, this study highlights the dual role of digital media as both a tool of empowerment and a space of epistemic violence. It contributes to feminist political ecology (Bolados et al., 2017; Svampa, 2014) and critical studies on digital trust by demonstrating how grassroots movements engage with and contest digital platforms to forge new trust relations. Ultimately, the case of MUZOSAR underscores the need for a critical reassessment of trust, legitimacy, and power in digital activism, particularly in contexts of environmental injustice and socio-political marginalization.
REFERENCES
Alkon, A. H., & Agyeman, J. (2011). Cultivating food justice: Race, class, and sustainability. MIT Press.
Arancibia, L. (2023). Memoria, resistencia e imaginación contra la crisis ecológica y la lucha por el agua. En J. Morales, A. D’Atri & K. Muñoz (Eds.), Conflictos ambientales y extractivistas en América Latina. Abordajes diversos desde los imaginarios sociales (pp. 71-92). UPAEP - USC.
Bolados, P. (2023). Resistance of women from “sacrifice zones” to extractivism in Chile. In S. Neas, A. Ward & B. Bowman (Eds.), Routledge Handbook of Latin America and the Environment. Routledge.
Bolados, P., & Sánchez Cuevas, A. (2017). Una ecología política feminista en construcción: El caso de las "Mujeres de zonas de sacrificio en resistencia", Región de Valparaíso, Chile. Psicoperspectivas, 16(2), 33-42. https://doi.org/10.5027/psicoperspectivas-vol16-issue2-fulltext-977
Brugère, F. (2022). La ética del cuidado. Ediciones Metales Pesados.
Castells, M. (2012). Redes de indignación y esperanza: Los movimientos sociales en la era de Internet. Alianza Editorial.
Cusicanqui, S. R. (2018). Ch’ixinakax utxiwa: Una reflexión sobre prácticas y discursos descolonizadores. Unrast Verlag.
Della Porta, D. (2024). Methodological pluralism in social movement studies: Why and how. In L. Cox, A. Szolucha, A. Arribas & S. Chattopadhyay (Eds.), Handbook of Research Methods and Applications for Social Movements (pp. 115–126). Edward Elgar Publishing. https://doi.org/10.4337/9781803922027.00015
Jiang, L. (2019). Digital discourse analysis: A survey of research themes, frameworks, and methods. Language and Linguistics Compass, 13(11), e12343. https://doi.org/10.1111/lnc3.12343
Klein, N. (2015). This changes everything: Capitalism vs. the climate. Simon & Schuster.
Mbembe, A. (2019). Necropolitics. Duke University Press.
Ponce, C. (2020). El Chernóbil chileno: Movilización anti-extractivista en la zona de sacrificio de Quintero-Puchuncaví. Revista Inclusiones, 478-493.
Svampa, M. (2014). ¿El desarrollo en cuestión? Algunas coordenadas del debate latinoamericano. En Saltar la barrera: Crisis socioambiental, resistencias populares y construcción de alternativas latinoamericanas al neoliberalismo (pp. 21-38).
Tronto, J. C. (1994). Moral boundaries: A political argument for an ethic of care. Routledge.
Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. Yale University Press.
Weitin, T. (2019). Digital discourse analysis: A conceptual framework. Journal of Digital Humanities, 6(1), 25-43.
Zuboff, S. (2023). The age of surveillance capitalism. In Social theory re-wired (pp. 203-213). Routledge.
Defining and Designing Disinformation: A Qualitative Exploration of Dutch Youths’ Perceptions and Experiences of Disinformation
ABSTRACT. In an age where (AI-generated) misinformation and disinformation are regarded as the most pressing short-term risks to society—and among the greatest long-term threats ahead (WEF, 2024)—youth are at the forefront of this unpredictable digital environment. As frequent consumers of news through digital formats such as social media and search engines, young people benefit from social and emotional connections online, including access to peer support (Popat & Tarrant, 2023). However, these same platforms expose them to harmful content, including disinformation—intentionally misleading information designed to serve an ulterior motive (Wardle, 2018).
Despite their high engagement with digital media, youth often lack awareness of the mechanisms behind these platforms, making them particularly susceptible to spreading political misinformation online (Duvekot et al., 2024). Social pressures from peers further drive their engagement with disinformation (Duffy et al., 2020; Herrero-Diz et al., 2020), a factor often overlooked in mainstream misinformation research. Within this landscape, Dutch youth represent a unique case. While they have historically been characterised by high media trust (Ohme et al., 2022), this trust is now in decline (Nelissen et al., 2022), and their media literacy levels remain below the basic threshold necessary for identifying disinformation (Krepel et al., 2024). Additionally, as first-time voters, they are particularly vulnerable to political misinformation.
Given these challenges, this research aims to deepen our understanding of how youth encounter, respond to, and perceive misinformation, shedding light on their concerns and vulnerabilities as they navigate an increasingly complex and unprecedented digital landscape.
Against this backdrop, we focus on Dutch youth aged 16-21. In this study, we conduct qualitative in-depth group interviews, which form the first part of a larger mixed-method project that will also involve a survey experiment; the interviews will inform that experiment, the second part of the overarching project. In the interviews (expected N = 20, data collection currently ongoing), we explore youths’ information environments in relation to their encounters with misinformation, letting them reflect on and narrate their own experiences. These interviews will enable us to paint a broad picture of the ways in which youths come across misinformation and how these encounters affect them. In the second part of the group interviews, participants are asked to perform a task that involves creating misinformation using Generative AI. After the task, we prompt a discussion about individual participants’ feelings while executing the task, as well as their feelings about the output they have created. With this exercise, we are able to tap into both the defining attributes of the kinds of misinformation youths encounter and how they relate to the content in terms of the emotions and thoughts that emerge during and after the creative process.
Preliminary findings indicate a strong third-person effect, with respondents perceiving others—particularly younger and older generations—as more susceptible to misinformation than themselves. Significant differences emerge across education levels, both in terms of the types of misinformation individuals are exposed to and their levels of media literacy. Additionally, misinformation manifests differently for younger adolescents; rather than being predominantly political, it also concerns topics such as sports, science, consumer products, and local events. These findings highlight varying concerns about misinformation across age groups, with particular worry about its impact on both younger and older generations.
By centring youths’ experiences, feelings, and encounters with misinformation, we aim to deepen existing understanding of the extent to which the challenge presented by misinformation affects Dutch youths, and of the role of new technologies such as Generative AI in potentially transforming this challenge further. The results can inform existing and developing media literacy training materials and can help tailor interventions to the experiences, perceptions, and concerns of today’s youth.
Truth is Stranger than Fiction: Knowledge, Belonging and Discursive Existence of Conspiracy Theorists on YouTube
ABSTRACT.
Conspiracy theories have steadily become entrenched in contemporary discourses, often framed through stereotypes that depict theorists as hostile, uneducated, and antithetical to democratic values. This research critically examines and challenges this prevailing myth of a "golden age" of conspiracy theorization and its archetypical agents. Through a rigorous examination of four prominent YouTube channels identified as "truther" platforms, I illustrate how these producers articulate distinctive epistemic and existential positions characterized by radical skepticism towards perceived illusions or deceptions of societal reality. In this research, I sought to discover how the authors of certain YouTube conspiracy channels know and express their perception of, attachment to, and animosity towards a world that they deem illusory, deceptive, and surreal. By employing video-ethnographic methods and transforming the YouTube videos they upload daily – which generally consist of incessant speech and lively gestures – into texts, I strove to bring this seemingly contradictory situation to light through critical discourse analysis. Rather than contending with the stereotypical image of conspiracy theorists circulated in society, I use the concept of fringe belonging, which I have developed, to illuminate the situation, conceptual horizon, and in-group rituals of these people.
Employing an innovative methodological approach – interpreting YouTube content as ethnographic discursive artifacts – I uncover a nuanced triadic model of reality operative within these communities. This model prioritizes an exalted inner self, a circumscribed immediate vicinity, and a broader "fake" societal sphere, thus redefining traditional paradigms of belonging and knowledge. Contrary to dominant portrayals, these truthers' narratives reveal complex and contradictory forms of attachment, animosity, and epistemic autonomy, reframing conspiracist discourse as both a critique of and resistance to conventional epistemic, social, and governmental norms. By providing a critical avenue for contesting the existing literature on conspiracy theories, this study challenges conventional scholarly depictions of conspiracy theories and conspiracy theorists. Instead, it underscores conspiracy theorization as an intricate and inadmissible sociocultural practice that critically reflects the precarious, unpredictable, and uncertain political-epistemological regimes and practices of contemporary societies. Ultimately, this digital ethnographic analysis contributes to a broader understanding of marginalized knowledge communities, emphasizing the urgent need for nuanced scholarship attentive to these liminal, yet profoundly telling, social positions.
From Trust to Seduction: The Gig-economy of War and its Advertising Campaigns
ABSTRACT. On July 7, 2024, the advertisement of the recruitment account “Privet Bot” (Ptivet means Hi in Russian) inviting the residents of European countries to join the fight against Ukraine’s Western allies was posted on the “Grey Zone”, now banned but then-time largest pro-Russian Telegram channel associated with the Wagner mercenary group .
Several video clips of different industrial fires were followed by a text describing fire accidents allegedly occurring in Romanian cities and attributing them first to the activity of some mysterious “Romanian friends” who, the ad claims, “said privet to us”, and then to the readers themselves: “You are the ones who boldly said PRIVET to us…”. In this elusive language, saying “privet” is a euphemism for conducting pro-Russian sabotage.
Gig-Economy of War
Over the last year, several investigations have been published by Ukrainian and European media about people in Ukraine and in Europe whom Russia recruited through Telegram and other online platforms to commit espionage, sabotage, arson, vandalism, and murder. The recruitment announcements are usually scattered among job offers, housing tips, and internet scams in channels frequented by refugee groups, or in channels related to gambling or drugs, where people who are in need, in a vulnerable situation, in despair, or in search of quick money can be easily found.
The business is organized in such a way that there is a long chain of middlemen, which makes it difficult to attribute the recruitment to specific actors, and the people recruited don't really know who they are working for or what operations they are involved in. Or, rather, they have the option of not being aware of whom they are working for and what they are doing.
In the case of recruitment networks, this kind of plausible deniability of individual involvement in war operations is enabled by the architecture of a labor process standard to what we call the gig economy: an economic system in which people earn income by providing on-demand work, services, or goods and are paid for each task. It is the latest manifestation of the automation of the labor process, which implies the dissolution of the production process into small tasks that can be performed by interchangeable workers who do not see the complete cycle of production and do not control the final product. It also implies the dissolution of the working collective and its substitution with a mass of atomized individuals coordinated through digital platforms.
When the gig economy converges with war, war operations can be implemented as a set of discrete tasks carried out by random, atomized contractors. This kind of war operation can be performed in a foreign country without any military violation of the border, and even without an organized group or at least a vaguely formed community of supporters of a certain military agenda. The basic resource needed (at least to start the operation) is a certain number of precarious individuals.
So, unlike the classical proxy war model, where operations are outsourced to organized paramilitary units, in this model of outsourced war the proxy is an atomized and automated individual who can participate in a military operation without ever making a decision to participate in it. The war is marketed as “just a gig”, as a profitable deal. The very structure of this kind of employment offers you the option of being willfully ignorant of what you are contributing to.
Images of the Future
But even in the case of more explicit recruitment, as on the openly militaristic Grey Zone channel, the message is still constructed according to the logic of plausible deniability, which perfectly matches the elusive aesthetics of advertising.
Let’s look once again at the Privet Bot ad. It is structured as a fake. According to the Ukrinform investigation, the captions of the images were all false: the videos used as proof of pro-Russian sabotage in Romania in fact depicted earlier accidents of different origins and in different places.
The menacing collectivity the ad refers to is also fake, because there is no collective subject behind those unrelated fires. Yet, even if this collectivity doesn’t exist, YOU, the reader, are addressed as if you already belong to it; you are addressed as an agent of violence.
“The collective realm is imaginary in advertising, but its virtual consumption suffices to ensure serial conditioning”, wrote Jean Baudrillard in The System of Objects, describing the advertising strategy of solicitation based on what he calls “the presumption of collectivity”. This means that ads typically strive to provoke our desire for some product by referring to an imaginary collectivity of people who already desire it.
For advertising to work, it doesn’t matter that the image of the community it ascribes you to is fictional and has no referent in reality. Quite the contrary: the task of the advertisement is to evoke a desire, which, as we know, thrives when there is a lack. That is why the advertising sign doesn’t refer to what exists; it draws attention to what is absent: “The image creates a void, indicates an absence, and it is in this respect that it is 'evocative'”, Baudrillard claimed. Advertising shows you something fictional and tries to evoke your desire to make it real in the future. Its images don’t represent what is already there; they strive to shape what will come. They are images of the future.
When we call something a fake, we imply the concept of “fact” as its counterpart. A fact, we assume, is something finished, something belonging to the past, and a fake is a falsified past. We still tend to perceive information as a domain of facts and, therefore, a domain of the past. Yet the temporal structure of the present-day information space, heavily shaped by AdTech and MarTech, seems to be starting to shift. What we call fakes are symptoms of this shift. Although shaped as facts, fakes have the structure of advertisements: they play with the aesthetics of factuality only to sell us fiction as an already defined reality. They are aimed not at our knowledge of the past but at our imagination of the future.
Memetic Violence
“Images circulate more easily than words. When interpretation is too hard, when making an argument takes too long, little images are ready stand-ins”, wrote Jodi Dean reflecting on the viral circulation of images on the internet. Images can convey emotions or desires without being precise about them.
The more elusive the advertisement image is, the more people can project on it their own desires. The more open the meme is to different meanings, the more spreadable it is.
It is not by chance that one of the most circulated memes in history is the so-called “Disaster Girl” meme, featuring a smiling girl who appears to have just burned down someone’s house. While the reasons behind your problems might be complex and hard to grasp, the Disaster Girl meme can convey how you feel about them, while also offering an emotional release by acknowledging your dark side.
It is not by chance that the Telegram channels recruiting people to commit arson use a variation of the Disaster Girl meme to advertise the job. The job they advertise is exactly the reproduction of this meme, yet in reality. The invitation is to embrace your rage and get paid for it.
Yet the most important part of the job is to produce an image of the harm you inflict. The recruited firebugs say that their curators make it clear that the content is the priority. For the car arson gigs in Ukraine, for example, they sometimes suggest that contractors take an already burned-out car, burn it again, and film it. The material damage is not the main aim of the sabotage. Be it a car arson, a vandalized monument, or pro-Russian graffiti, the main aim is to create media content. The job is to produce an image of violence, while the violence becomes an image-production tool. In this creepy loop, the production of images and the production of violence coincide. Content creators are violence-makers.
Ernest Dichter, director of the Institute for Motivational Research, said that the task of the ad is to give permission to the desires that you would otherwise tend to feel awkward about. “The problem confronting us now is how to allow the average American to feel moral even when he is flirting, even when he is spending money, even when he is buying a second or third car”, he wrote back in the fifties. The way to give permission is, again, through the presumption of collectivity: if you can imagine other people doing this, then you are more ready to do it too.
Atomized gig-workers are hired to scatter signs of violence across the media field, to stage the presence of a violent collectivity in a given society and thereby give permission to the desire for violence, encouraging a passage à l’acte, a passage to action. Everyone can invest their own meanings and aspirations into this spectacle of the violent collectivity; everyone is invited to fantasize about being part of a preferred imaginary group. The aim of this campaign is to incline you to unleash your anger so that it can be used as a military resource. It seduces you with images of violence and the loom of collective action, and eventually sells you the war. As soon as there is a sufficient number of angry customers in a given society, there is the potential for a new booming market for the war industry.
Fostering Trust: Disinformation Interventions in the Dutch (Alternative) Media and Activist Landscape
ABSTRACT. Public trust in mainstream media and governance has been steadily declining in recent years. Scholars of post-truth studies often attribute this trend to the rise of disinformation and conspiracy theories, which proliferate through digital platforms. In response, mainstream institutions have implemented disinformation interventions designed to curb the spread of misleading narratives and discourage public engagement with controversial knowledge claims. However, alongside these efforts, alternative media have gained unprecedented popularity, with a growing number of individuals consuming alternative knowledge daily. This study examines the (unintended) consequences of disinformation interventions through qualitative content analysis and interviews with both mainstream and alternative actors. By investigating the factors that contribute to the success or failure of these interventions, we aim to provide a deeper understanding of their effectiveness and broader societal impact.
AI-generated photo-based images: their ontological status and interpretation
ABSTRACT. In a recent paper in Science, Epstein, Hertzmann, et al. announce that “generative AI is not the harbinger of art’s demise, but rather is a new medium with its own distinct affordances” (Epstein, Hertzmann, et al., 2023, p. 1110). The authors recall that throughout the history of art, several stages emerged when new technologies seemed to threaten some, if not all, artistic practices. For instance, the invention of photography as a new technology for mechanically recording light values and producing depictions of scenes in front of the camera was perceived by many as bringing about the end of painting and drawing. Especially in Western art and culture, where realistic portrayal of scenes, relying on perspectival visual representation, was important at that time, the automatic and mind-independent nature of photography was easily viewed as “superior” to handmade, mind-dependent images in terms of realistic depiction. However, as Epstein, Hertzmann, et al. also remind us, photography did not by any means replace painting and drawing. Although portraiture did indeed become largely a photographic genre from that time onwards, photography also liberated painting from realism. Many view developments in painting, such as Impressionism, for instance, as the most welcome effect of this liberation.
Building on this analogy, Epstein, Hertzmann, et al. argue that artists and audiences need not perceive generative AI as a threat to current art practices. While it will inevitably bring about changes, artists will reformulate current practices by using generative AI as a new tool in their creative endeavours. Automatisms offered by generative AI should be seen as the rise of a new medium, providing new ways for creative artistic work done by humans. As Epstein, Hertzmann, et al. point out, one potentially misleading aspect of the perception and reception of works produced with generative AI lies in the term we use for it. The term “artificial intelligence” might misleadingly imply human-like intentions, agency, or even consciousness or self-awareness. Removing some misconceptions about generative AI may also foster its acceptance as a new technological tool in the hands of human artists.
The controversy surrounding the ontological status of AI-generated artworks often revolves around the question of authorship. Besides the human beings involved in the process, such as programmers and artists utilising the program, generative AI programs themselves have been suggested as possible candidates for being considered authors (see Elgammal, 2019, and Mazzone and Elgammal, 2019, for instance). However, others argue that, in the absence of consciousness and conscience, AI-generated artworks should be considered as artworks produced by human persons and mediated by a generative AI program, rather than being artworks produced by the AI program itself. In other words, the ontological status of artworks is derived from the connection between an artwork and consciousness and conscience (see Linson, 2016, for instance). Another sceptical argument about AI authorship is that, according to our current understanding, art is created by social agents. Therefore, until this understanding is changed, generative AI cannot be credited with the authorship of art (see Hertzmann, 2018).
In my paper I argue that the ontological status of AI-generated photo-based images, whether they are artworks or other photo-based images, is better understood in terms of their contextual interpretation rather than in terms of their connection to consciousness, conscience, or social agency. This also means that I do not consider artistic creativity and non-artistic image-making creativity to be fundamentally distinct from the point of view of ascribing authorship of such images to generative AI. (However, they are distinct in terms of interpreting them as artworks or non-artistic images.) For my arguments, I rely on the theory of pictorial illocutionary acts developed by Kjørup (1974, 1978) and Novitz (1975, 1977), as well as on the theory of photographic illocutionary acts proposed by Bátori (2015, 2018).
According to the theory of pictorial illocutionary acts (Kjørup, 1974, 1978; Novitz, 1975, 1977), the production and presentation of images themselves are to be understood and interpreted as pictorial locutionary acts, similar to verbal locutionary acts, such as uttering words and sentences. At the locutionary act level, only the literal semantic pictorial meaning of the image is interpreted. This meaning is based on our visual recognition abilities, such as object recognition, face recognition, recognising spatial relations, arrangements, and perspective. Currie (1995) refers to this pictorial semantic content as ‘natural’ pictorial meaning because it is not learned, unlike the learned symbolic semantic content of words and other morphological meaning units in natural languages. At the level of pictorial locutionary acts, contextual information is not utilised. It is only at the level of pictorial illocutionary acts that we interpret the image in the context of its presentation and use. For instance, at the pictorial locutionary act level we merely recognise the visual characteristics of the picture of a human head in the barber shop window, while at the pictorial illocutionary act level, we interpret it as a possible statement (pictorial proposition) about the skills of the barber or as a promise of getting a similarly skilful haircut in that barber shop. As Bátori (2015, 2018) further elaborates, photographic illocutionary acts constitute a specific type of pictorial illocutionary act in which the interpretation process at the illocutionary act level necessarily includes interpreting the images as indexical photographs, as opposed to non-indexical, hand-made images.
When interpreting photo-based artworks and other photo-based images produced using generative AI, the locutionary and illocutionary acts involve the following components. At the locutionary act level, audiences identify the literal semantic pictorial content of the images, utilising their visual recognition capacities. This process yields pictorial mental representations of the image content for the mental processing of the audiences. At the illocutionary act level, audiences utilise their contextual knowledge that the image they are considering is a generative AI rendering of a photo-based image or images. They also take into account that the rendering was created using a) the algorithms of the programmer and b) the ideas of the person (artist, creative professional, etc.) instructing the generative AI program. This means that the image as a whole will not be interpreted as an indexical depiction of a scene captured by the camera at the time of exposure, as the interpreter knows that the image has been altered. The role the original indexicality plays in the interpretation depends on the specific modifications and the extent of the interpreter's knowledge about them. However, in terms of their ontological status, AI-generated photo-based images will not be treated and interpreted as indexical photographs.
However, it is not clear whether this implies the emergence of a distinct genre in the process, as suggested by Epstein, Hertzmann, et al. Alternatively, there might simply be new technological means of producing composite images.
With regards to authorship, the interpretation at the illocutionary act level attributes authorship to the person (artist, creative professional, etc.) using the program, not to the programmer or the generative AI program. This is because the person utilising the generative AI program is the one who produces and presents the image (locutionary act) with the assistance of the generative AI program as an image manipulation tool. In the production and presentation process of the image, the programmer is attributed a role similar to that of camera and darkroom equipment constructors, or image manipulation software engineers. Meanwhile, the generative AI is regarded as a complex technical tool for rearranging parts of one or more indexical photographic images into a new, non-indexical image as a whole. Attributing authorship to generative AI is no more a part of the illocutionary act than attributing authorship to image manipulation software used to adjust the contrast or saturation of an image or even to rearrange an indexical photograph or photographs into a new composite, non-indexical image.
Furthermore, the differentiation between "traditional" (non-AI-generated) images and AI-generated ones draws a parallel to the contrast between handmade and mass-produced items (like shoes, tableware, etc.). In the instance of "traditional" production, the object's creator retains complete control over all the encompassing processes, whereas the designer of a mass-produced item only creates the distinctive facets of the product, without direct involvement in each stage of production.
Based on the interpretation process described at the illocutionary act level, it can be concluded that audiences come to have true beliefs about the nature of photo-based images produced using generative AI, as long as the image's nature is readable from it or deducible from the context. Audiences are not deceived in such cases. However, if the image's nature is neither deducible from the context nor readable from the image itself, they might be deceived into interpreting it as an indexical photograph of a scene captured by a camera. During my talk, I will present examples of both deceptive and non-deceptive photo-based images produced using generative AI.
Keywords: generative AI, photo-based images, pictorial and photographic illocutionary acts
References:
Bátori, Zsolt. ‘Photographic Manipulation and Photographic Deception.’ Aisthesis 11:2 (2018):35-47. doi: 10.13128/Aisthesis-23863
Bátori, Zsolt. ‘Photographic Deception.’ Proceedings of the European Society for Aesthetics 7 (2015):68-78.
Currie, Gregory. Image and mind: Film, philosophy and cognitive science. Cambridge: Cambridge University Press, 1995.
Elgammal, Ahmed. ‘AI Is Blurring the Definition of Artist: Advanced algorithms are using machine learning to create art autonomously.’ American Scientist 107:1 (2019):18-21. doi: 10.1511/2019.107.1.18
Epstein, Ziv, Hertzmann, Aaron, et al. ‘Art and the science of generative AI.’ Science 380 (2023):1110-1111. doi: 10.1126/science.adh4451
Hertzmann, Aaron. ‘Can Computers Create Art?’ Arts 7:2 (2018):18. doi: 10.3390/arts7020018
Kjørup, Søren. ‘George Inness and the Battle at Hastings, or Doing Things with Pictures.’ The Monist 58:2 (1974):216-235.
Kjørup, Søren. ‘Pictorial Speech Acts.’ Erkenntnis 12 (1978):55-71.
Linson, Adam. ‘Machine Art or Machine Artists?: Dennett, Danto, and the Expressive Stance.’ In V.C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Switzerland: Springer International Publishing, 2016, pp. 443-458. doi: 10.1007/978-3-319-26485-1_26
Mazzone, Marian, Elgammal, Ahmed. ‘Art, Creativity, and the Potential of Artificial Intelligence.’ Arts 8:1 (2019):26. doi: 10.3390/arts8010026
Novitz, David. ‘Picturing.’ Journal of Aesthetics and Art Criticism 34:2 (1975):145-155.
Novitz, David. Pictures and their Use in Communication: A Philosophical Essay, The Hague: Martinus Nijhoff, 1977.
(Dis)trust, generative AI cultural production regulation and the EU copyright system
ABSTRACT. Generative AI undoubtedly challenges the relationship between artistic and legal systems, particularly copyright law. Already built on increasingly unstable ground, EU copyright law continues to grapple with a technology that fails to map neatly onto long-established doctrines and principles. The result? A legal framework that overwhelmingly prioritizes economic and investment interests while failing to respond adequately to the concerns of authors, performers, and creators. Distrust in the future of cultural production continues to grow, fuelled by fears of authorial replacement, reputational risks, and the broader precarity of cultural labour.
This paper argues that any regulatory response must start with a fundamental question: how does copyright engage with creativity, and to what extent should non-personal data related to copyright works be subsumed within its scope? Drawing on communication theory, Niklas Luhmann’s systems theory, and Luhmann’s reflections on trust, this paper evaluates how to rebuild trust in a system that is increasingly investment-driven. It does so by examining the increasingly complex web of stakeholders (authors, exploiters, AI providers, platforms, and users) and considering how to reconcile the boundaries of the EU copyright system in a manner that supports socio-cultural dialogue.
This analysis builds a foundation for proposing initial steps toward such a shift: rethinking access and authorial remuneration and strengthening collective personality rights. Only through a more inclusive and holistic regulatory approach can we safeguard and foster all forms of creativity, human and AI-generated alike, in a manner consistent with cultural production.
Trust and Trustworthiness in AI: An Interdisciplinary Research Agenda
ABSTRACT. There have been increasing calls from governments and other regulatory bodies to make AI more trustworthy, and many ethical guidelines outline steps to increase human trust in AI (Fjeld et al., 2020; Jobin et al., 2019). The emerging consensus is that accomplishing this effectively requires interdisciplinary work that acknowledges the sociotechnical nature of trust in AI, an approach reflected in recent contributions (Inie et al., 2024; Jacovi et al., 2021; Toney et al., 2024). However, there is significant variation in how the concepts of trust and trustworthiness in AI are used across disciplines, which makes collaboration difficult.
To address this issue, our working group of computer scientists, philosophers, psychologists, archival scientists, and political scientists engaged in a systematic review of how the concepts of trust and trustworthiness are used in our respective disciplines, identified key commonalities in the concepts, and surveyed emerging debates and research on AI trust in each discipline. Our research (i) lays the groundwork for clear and unified conceptions of trust and trustworthiness in AI, and (ii) sets an interdisciplinary research agenda by uncovering theoretical gaps in these concepts and emergent debates. Here we share our main findings.
1. Interdisciplinary commonalities in definitions of trust and trustworthiness: setting the stage for a shared framework of AI trust
While a dizzying number of often competing characterizations of trust and trustworthiness have been offered, our collaboration has drawn out some relatively common points of interdisciplinary convergence. First, trust is most often considered at least in part a psychological attitude that an individual has towards another entity. To (mis)trust someone involves the trustor (the one doing the trusting) being in a certain psychological state, such as believing that the other party, the trustee (the one being trusted), is trustworthy or reliable (Goldberg, 2020). An additional condition is often added: that the trustor be vulnerable in some way to the trustee. For example, the trustor may experience negative consequences if the trustee does not do what the trustor believes they will do.
Trust is also generally taken to be distinct from trustworthiness. There is a common acknowledgement that we can mistakenly trust others, or that our trust can be unwarranted or inappropriate (Koehn, 1996; Petkovic, 2023). There is less agreement over what makes a trustee trustworthy. However, two general clusters emerge: both moral and epistemic competence are centrally involved in making an entity trustworthy. While these concepts are not always understood in the same way, so that a more thorough interdisciplinary characterization is still needed, roughly speaking, to possess epistemic competence is to possess a domain-specific body of knowledge or skill, together with the ability to apply that knowledge or skill to complete a task reliably and successfully. Moral competence is typically tied to moral agency: a morally competent agent possesses moral values that constrain their behaviour and is sensitive to moral reasons (Baier, 1986; Daukas, 2006; Levi & Stoker, 2000).
These interdisciplinary points of agreement provide the basis for developing a more sophisticated general framework for human trust in AI (section 2.1) and determining when AI is trustworthy (section 2.2).
2. Interdisciplinary research agenda for trust and trustworthiness in AI
2.1 Creating a more holistic model of human trust in AI
One emerging area of debate is whether trust in AI should be modelled on existing conceptions of interpersonal trust (trust between two people) or whether a new framework is required. A number of novel characterizations of trust that allow for trust in AI have been proposed (Coeckelbergh, 2012; Ferrario et al., 2020; Kelp & Simion, 2023; Nguyen, 2022; Taddeo, 2010; Tavani, 2015; von Eschenbach, 2021).
However, insufficient attention has been paid to the ways in which AI raises distinct problems for modelling trust, and so even these new models are inadequate. First, AI is largely a technology developed by private corporations, yet little attention is paid to the trust relation between consumers and corporations; AI is often treated instead as an autonomous, stand-alone entity. This highlights the need for interdisciplinary theorizing about individual trust in corporations, about which shockingly little has been written outside of management journals seeking strategies to foster trust in consumers, without consideration of whether and when such trust is warranted (Khamitov et al., 2024; Sihombing & Dinus, 2024).
Second, these corporations stand in relation to governments that are increasingly seeking to regulate AI, and so theories of institutional trust ought to be part of our understanding of AI trust. While these theories have been the subject of considerable theorizing in political science and philosophy (Alfano & Huijts, 2018; Bennett, 2024; Budnik, 2018; Hardin, 1996; Hawley, 2017; O’Neill, 2002; Potter, 2002; Townley & Garfield, 2013), they have yet to be integrated with more individualistic notions of human trust in AI.
Third, while there is considerable interdisciplinary research on the role of experts in establishing public trust (Goldman, 2001; Whyte & Crease, 2010; Camporesi et al., 2017; Cairney & Wellstead, 2020), there has been little consideration of how increased human expert-AI integration, or the establishment of independent human experts in AI, plays into establishing public trust in AI. This matters because laypeople's trust in AI differs in important ways from expert trust: experts are better positioned to directly assess the epistemic competence of AI in their domain. Moreover, AI is increasingly being integrated into many areas of human expert decision-making, including healthcare contexts (Asan et al., 2020; Ueda et al., 2023).
In short, we call for a more holistic understanding of the trust landscape in AI, one that integrates AI's corporate and institutional embeddedness and the role of human expertise.
2.2 Determining what makes AI trustworthy
The common assumption that trust in AI ought to be promoted is found in prominent ethics guidelines from countries (Jobin et al., 2019) and tech companies (Microsoft, 2018; IBM, 2018), as well as in academic publications (Bedué & Fritzsche, 2021; Gillath et al., 2021; Kok & Soh, 2020). However, the question of whether people do trust AI is distinct from whether people ought to trust AI: trust is appropriate if and only if AI is trustworthy. It is therefore critical to get clearer on what makes AI trustworthy.
Some argue that AI is never trustworthy because AI systems are not capable of exhibiting moral competence (Al, 2023; Alvarado, 2023; Budnik, 2025; Conradie & Nagel, 2024; Dorsch & Deroy, 2024; Hatherley, 2020; Nickel et al., 2010; Ryan, 2020). However, the development of a more holistic trust landscape (2.1) can inform this debate, as it allows a partial shift of trustworthiness from AI to regulators, developers, human experts, and corporations, who are capable of moral competence.
Moreover, while there is broad agreement that being trustworthy involves possessing moral and epistemic competence (section 1), there is no consensus on what each of these amounts to. As recent analysis has shown (Reinhardt, 2023), trustworthiness is an overloaded term in AI ethics codes, often treated as a dumping ground for all the ways we might want AI to be good. Such ambiguity makes it difficult to develop appropriate criteria for evaluating AI trustworthiness. While many have argued that concepts such as accountability, fairness, and transparency contribute to AI trustworthiness (Angerschmid et al., 2022; Knowles et al., 2022; Westover, 2024), an interdisciplinary understanding of AI trustworthiness in terms of moral and epistemic competence (section 1) provides clear, principled criteria for that evaluation. It also clarifies the task of assessing any candidate component of trustworthiness: for each concept (accountability, fairness, transparency, etc.), the task is to specify how it contributes to assessing or establishing the moral or epistemic competence of AI.
To add to this list of candidate components, data privacy has been vastly under-theorized in the context of AI trustworthiness (Reinhardt, 2023). Corporations that are not transparent about what data they collect and what purposes they will put it to may see users' trust in their AI products diminish (Microsoft, 2018). However, exactly how data privacy relates to trustworthiness remains to be established.
A principled distinction between actual human trust in AI and the conditions under which AI is truly trustworthy is crucial to developing empirical research that can uncover the ways in which human trust is currently manipulated. There is a rich body of research from psychology tracking the ways in which human trust can be manipulated by seemingly irrelevant factors such as aesthetic design and presentation (Elhamdadi et al., 2022; Skarlatidou et al., 2013). A holistic framework (section 1) provides a principled basis from which to determine when these factors are indeed irrelevant to trustworthiness. Understanding the ways in which human trust in AI can be manipulated is in turn central to developing useful AI literacy programs or regulatory controls that prevent such manipulations from occurring.