ATS 2025: AMSTERDAM TRUST SUMMIT
PROGRAM FOR THURSDAY, AUGUST 28TH


11:00-12:30 Session 2A: Methods session


Location: Grand Space
11:00
Digital anthropology methods in research on conspiracy theories and distrust

ABSTRACT. Belief in conspiracy theories is correlated with low trust in public institutions and may have deleterious impacts on Western democracies. This paper explores lessons learnt from the application of digital anthropology methods in research on the influence of conspiracist narratives of distrust, discontent, and dissent in Australia. The research is conducted in an applied setting, with the aim of producing a methodology and findings that can be leveraged to improve policy-making and national security outcomes.

If governments are to understand and mitigate malign influence, they require "…contextual knowledge, deep user knowledge and insights, and situational awareness," and "…nuanced cultural competence relevant to target audiences including …social structures, values, beliefs … rituals and symbols". These are capabilities that can be informed by anthropological methods. Demonstrating the capacity of applied anthropology to improve understanding of the influence of conspiracy theories, Baldino and Balnaves (2023) propose and pilot "investigative digital ethnography" of the Australian online conspiracist milieu. Their work demonstrates that investigative digital ethnography can provide valuable insights into how online communities access, understand, and interpret disinformation.

In practice, investigative digital anthropology presents unique challenges. In this paper the digital anthropologist reflects on key challenges as a means to productively critique, and thus inform, the development of digital anthropology methods.

Challenge 1: Thinking through the assumptions. Some state actors seed and feed anti-democratic or anti-government narratives amongst online publics as an element of broader grey-zone operations. Extremist groups, online influencers, and populist politicians also co-create and broadcast disinformation and conspiracy theories. In practice, disinformation is defined as false information that is intended to mislead, and some conspiracy theories have been described as strategic narrative. These definitions are useful in an applied context, but both rest on assumptions about the intentionality of people engaging with the narratives. Does the digital anthropologist risk crafting conspiracy theories of their own?

In an investigation of disinformation and strategic conspiracy narratives, the anthropologist accepts an implicit assumption about the intentionality of their research subjects. Does analysis of strategic narrative necessarily rest on an assumption that actors hold agency beyond that which can be demonstrated? The digital anthropologist has only partial access to incomplete data about the activity of the people who create and spread disinformation and conspiracy theories. Most online sharing of false information and conspiracy theories may be better described as organic than strategic. How does the digital anthropologist differentiate strategic communication? Do inferences about intentionality and agency risk clouding anthropological data collection and analysis?

I address these questions by considering the work of Madisson and Ventsel (2021), and highlight the unique analytical tools anthropology brings to bear on conceptualising strategic narrative.

Challenge 2: Which came first - the investigation or the narrative? Delimiting the parameters of an anthropological enquiry is often iterative, because the anthropologist is concerned with a question that arises from the disciplinary field and the applied context (i.e. an etic enquiry), and is also concerned with how the social or cultural phenomena referred to in the question are understood from an emic perspective. In this context the research question may frame referent social and cultural phenomena in ways that are subjective and thus vulnerable to idiosyncrasy. In research on the influence of conspiracy theories this raises the question: does the research enquiry provide the connective tissue that ultimately binds the diverse conspiracist discourses that the digital anthropologist seeks to describe?

In a traditional ethnography, immersion in the field site provides opportunities to challenge and balance the ethnographer. The ethnographer’s lines of enquiry are guided in part by their informants, and are limited by the realities of the field site (even in the case of ethnographies of online worlds such as Boellstorff's “Second Life”). But Caliandro (2018) explains the potentialities of “…following the circulation of an empirical object within a given online environment or across different online environments, and observing the specific social formations emerging around it from the interactions of digital devices and users.” When applying this method the digital anthropologist follows ideas, expressions, and users across cyberspace. In doing so they encounter an enormous volume of potential data, and innumerable potential informants, whose inclusion is not limited by association with defined communities.

It has been said that if you look hard enough on the internet you can find evidence to confirm anything. So how does the digital anthropologist know that the conspiracy theory narratives we study are socially and culturally related and relevant, beyond the appearance of relatedness and relevance that our enquiry lends them? Because of the limitless nature of the field site, and the sheer quantity of discursive communications on the internet, in practice it can be difficult to determine the degree to which it is the research question or the data that drives the observations of the digital anthropologist. I will provide an example of how, on one platform, I turned to readily accessible AI to help grapple with this problem, and consider the ways in which digital anthropologists might work with AI.
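One hedged illustration of the kind of "readily accessible AI" assistance gestured at above (the workflow, model name, and example posts below are assumptions for illustration, not the paper's actual procedure): embedding collected posts with an off-the-shelf sentence-embedding model and clustering them lets the researcher inspect whether apparent relatedness holds up independently of the framing of the research question.

```python
# Illustrative sketch (assumed workflow, not the paper's method): check whether
# collected posts cluster by semantic similarity, independent of the query that
# surfaced them. Requires: pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "The floods were engineered to push people into smart cities.",
    "Weather modification is being hidden from the public.",
    "My local council ignored residents again at last night's meeting.",
]  # placeholder examples, not real data

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, freely available model
embeddings = model.encode(posts)

# Pairwise similarity makes apparent 'relatedness' inspectable rather than assumed.
print(cosine_similarity(embeddings).round(2))

# Cluster without telling the algorithm what the research question is.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(embeddings)
print(labels)
```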

Challenge 3: Seeing the small things. The trouble with information in today's interconnected world is the sheer quantity of it. An enormous amount of information is at our fingertips on the internet, filtered and pre-interpreted by powerful search engines and AI. Such processing power is necessary to meaningfully engage with the contemporary digital information environment, but it also provides tools which risk leaving the digital anthropologist blind to the mundane and unexpected. The best anthropology observes and finds significance in the everyday, and the unexpected insights that arise from the everyday are often the most enlightening.

Miller and Horst (2021) propose that digital worlds are authentic, are material, and are dialectical. Online, where anonymity is enhanced (or at least self-representation is abstracted from a user’s physical presence), I argue that trust is built in mundane exchanges about the everyday, which negotiate authentic relationships, material experiences, and dialectical sense-making.

I share my experience of grounding the application of Caliandro’s method in a committed study of one delimited community, wherein attunement to the small things ultimately enriches the anthropologist’s insights into digital sociality.

Anthro-pilled. Ethnographic praxis is built on establishing dialogue with our interlocutors, and the necessary condition of productive dialogue is trust. Given the goal of strengthening the very societal structures that conspiracy theorists rage against, along with the challenges outlined above, building trustful dialogue with our research subjects may appear an impossible task. I close by reflecting on the importance of optimism, humility, and intellectual curiosity in the work of the digital anthropologist.

11:15
Modeling the cognitive process of accepting clinical decision support

ABSTRACT. This study examines the cognitive mechanisms underlying trust in AI-supported medical decision-making. Using evidence accumulation models (EAMs), a class of cognitive models of decision-making processes, two laboratory experiments analyzed participants' decisions based on clinical advice from either algorithms or humans. Results showed that participants required more evidence to trust AI advice, setting higher decision thresholds for AI than for human advice. This hesitancy aligns with previous findings on algorithm aversion but refines our understanding by identifying the specific cognitive processes involved.
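One way to picture the threshold finding is a toy drift-diffusion simulation. The sketch below is a minimal Python illustration, not the study's actual model; the drift rate, noise, and threshold values are assumptions chosen only to show how raising the decision threshold alone, with evidence quality held constant, produces slower and more cautious acceptance of advice.

```python
# Minimal drift-diffusion sketch (hypothetical parameters, not the study's model):
# the same evidence quality with a higher decision threshold yields slower,
# more conservative acceptance of advice.
import numpy as np

def simulate_trial(drift, threshold, noise=1.0, dt=0.002, max_t=4.0, rng=None):
    """Accumulate noisy evidence until it crosses +threshold (accept advice)
    or -threshold (reject), or time runs out. Returns (accepted, reaction_time)."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= threshold, t

rng = np.random.default_rng(0)
# Hypothetical thresholds: more evidence demanded before accepting AI advice.
for label, threshold in {"human advice": 1.0, "AI advice": 1.5}.items():
    trials = [simulate_trial(drift=0.8, threshold=threshold, rng=rng) for _ in range(500)]
    p_accept = np.mean([accepted for accepted, _ in trials])
    mean_rt = np.mean([rt for _, rt in trials])
    print(f"{label}: P(accept) = {p_accept:.2f}, mean decision time = {mean_rt:.2f}s")
```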

11:30
Tracking Trust Dynamics in Digital Food Innovations: A Trust Experience Evaluation Toolkit

ABSTRACT. Trust is increasingly recognized as a critical factor in the success of digital innovations, having been identified as an indispensable element for user acceptance especially as technologies become more complex (Kuen et al., 2023). In complex socio-technical systems such as the emerging digital food ecosystem, trust serves as both a prerequisite for adoption and a safeguard for ethical, sustainable practice (Guo, 2022). However, public trust in new digital technologies cannot be taken for granted, given past incidents of data breaches, misuse of personal data, and biased AI outcomes that have fostered skepticism among users. This challenge is particularly salient in the food sector, where data-driven solutions (e.g., for supply chain transparency or personalized nutrition choices) promise significant benefits but must convince stakeholders that they are reliable and "trustworthy by default" (Meier et al., 2022). The Digital Responsibility Goals for Food (DRG4FOOD) project addresses this challenge by providing a framework of seven Digital Responsibility Goals (DRGs) intended to guide the development of human-centric, trustworthy digital food systems (Twinds, 2025). This EU-funded Horizon Europe initiative aims to enable new levels of innovation in areas such as food safety, sustainability, personalized nutrition, reducing food waste, and ensuring fair conditions across the entire food chain through: (1) building and integrating a toolbox of enabling technologies aligned with DRGs; (2) running a structured funding program with open calls to engage external innovators; and (3) demonstrating solutions in real-world data driven food system scenarios. While the DRG4FOOD framework provides structured guidance to enable responsible digital innovation, an essential question remains: how can we assess whether such frameworks and practices genuinely enhance trustworthiness in newly developed digital services, and does increased trust lead to more active and meaningful use? Within this context, the Trust Experience Evaluation Toolkit (Trust Toolkit) is a set of methods developed to evaluate and cultivate trust in digital food innovations. This paper introduces a methodological framework to evaluate the impact of trust-enhancing measures by validating trust dynamics and identifying key trust drivers in pilot projects, and demonstrates how the findings feed back into refining the overall DRG4FOOD framework and toolbox.

11:45
Challenges in Enhancing High School Students' Ability to Identify Scientific Disinformation: Insights from Ecological Classroom Interventions

ABSTRACT. The pervasive spread of scientific disinformation undermines public trust in expertise and fuels anti-scientific behavior, particularly among younger generations who are frequent consumers of online content. This study investigates the effectiveness of three educational interventions—Civic Online Reasoning (COR), Cognitive Biases (CB), and Inoculation (INOC)—designed to enhance high school students' ability to discern scientifically valid information from disinformation. Conducted in Northern Italian high schools with a sample of 2,288 students, the study employed an ecological experimental design that mirrors students' natural digital environments. The interventions were delivered via pre-recorded video lectures in classroom settings, and students were assessed on their ability to evaluate the scientific validity of Instagram posts—both immediately after the intervention and at follow-up sessions one to four weeks later. Contrary to prior findings in controlled online settings, none of the interventions produced a significant overall improvement in students' ability to accurately distinguish between reliable and unreliable scientific information. While the COR intervention indirectly improved accuracy through increased adoption of lateral reading and click restraint strategies, the effect size was insufficient for a significant impact. Notably, the INOC intervention inadvertently heightened general skepticism, leading students to distrust both valid and invalid scientific content. These results highlight significant scalability challenges when translating interventions from controlled environments to real-world classrooms. Factors such as classroom distractions, variability in attention, and the passive nature of video lectures may have diminished the interventions' effectiveness. The study underscores the epistemic fragilities inherent in educational settings and suggests that interventions must be more interactive and contextually adapted to be effective. Our findings contribute to the philosophical discourse on distrust in science and anti-scientific behavior by revealing the limitations of one-size-fits-all educational strategies. They emphasize the need for nuanced approaches that account for social dynamics, cognitive biases, and the formation of trust. Combating anti-scientific behavior in youth populations requires integrated methods that engage students actively and consider the complexities of their information ecosystems. We advocate for future research to explore adaptive, participatory educational models that foster critical thinking and digital literacy skills more effectively. Such models should aim to balance ecological validity with structured engagement, potentially leading to more meaningful improvements in students' ability to navigate misinformation and reinforcing public trust in scientific expertise.

12:00
Examining Perceptions of Conversational Agents’ Political Ideology

ABSTRACT. Recent research on the political repercussions of LLMs largely examines their ideological bias, primarily through a researcher prompting various agents with political surveys (Rozado, 2024; Sullivan-Paul, 2023; King, 2023). But people’s perceptions of political information produced by LLM-powered conversational agents and its ideological biases (or lack thereof) can be influenced by how people perceive technologies such as artificial intelligence (AI), including their reliability, objectivity, and sophistication (or lack thereof). In this project, I ask what political attributions people bring to conversations with AI, and how these influence their perceptions of political ideology, partisanship, and bias in LLM-generated content. In particular, I investigate how this differs when participants see LLM-generated content that is congruent with or oppositional to their ideology, and how it varies across demographics, ideology, political interest, education, and digital literacy. Finally, I ask how such perceptions of LLMs’ ideological bias influence notions of trust, credibility, and reliance on LLMs as a source of political information.

11:00-12:30 Session 2B: Safeguards session I
Location: Red Space
11:00
Trustworthy AI: Opportunity or Obstacle

ABSTRACT. Trustworthy AI has emerged as a guiding principle in the development and deployment of artificial intelligence systems, particularly in communication, where ethical, regulatory, and societal concerns are paramount. This paper examines the differing approaches to Trustworthy AI in the European Union and the United States through a norm comparison analysis of standards and concepts (Rabel, 1958; Michaels, 2006). It explores the implications of these frameworks for innovation and societal dynamics by integrating macroeconomic and macropolitical theories.

At the core of this analysis are the contrasting regulatory paradigms shaping Trustworthy AI. The EU’s approach is characterized by stringent legal instruments such as the EU AI Act and the General Data Protection Regulation (GDPR), which emphasize principles like transparency, fairness, accountability, and traceability (Voigt & Von Dem Bussche, 2017). These standards reflect a strong commitment to safeguarding individual rights and promoting systemic accountability. This approach is rooted in economic and political theories such as ordoliberalism, which advocates for a robust legal framework to ensure fair competition, and Keynesian economics, which underscores state intervention to stabilize markets and address social uncertainties (Biebricher et al., 2022; Coddington, 2013). Although these regulations may slow innovation due to increased compliance requirements, they aim to align technological development with long-term societal and ethical goals. In contrast, the U.S. adopts an innovation-driven, less-regulated framework that prioritizes technological agility and market competitiveness (Carayannis et al., 2014). This model is characterized by industry-led initiatives and flexible guidelines, aligning with neoclassical economic theory, which advocates minimal government intervention except in cases of market failure (Amable, 2003). By reducing bureaucratic barriers, this lighter regulatory approach facilitates rapid innovation, enabling companies to develop and deploy cutting-edge technologies rapidly (Bessant & Tidd, 2018). It also reflects the liberal tradition in political economy, emphasizing individual rights and market freedom over extensive state control. Macroeconomic theories provide further insight into these divergent approaches. For example, the innovation dynamics framework developed by Austrian-German-American economist Joseph Schumpeter identifies innovation as the primary driver of economic growth (Schumpeter, 1912). The U.S. model supports this dynamic by fostering a flexible regulatory environment conducive to transformative technological breakthroughs. However, this approach raises concerns about the ethical and social implications of unregulated AI systems, particularly in communication, where unchecked advancements may exacerbate issues such as bias, misinformation, and a lack of accountability. The EU’s regulatory model promotes regulated innovation, embedding ethical considerations into technological development. While regulations may impose higher compliance costs and slow technological progress, they foster sustainable and ethically robust AI systems. This approach asserts that long-term societal benefits, such as increased consumer trust and a more equitable digital ecosystem, are best achieved through a framework that integrates ethics into innovation. By establishing global benchmarks for Trustworthy AI, the EU’s regulatory rigor can become a competitive advantage (Mason, 2018).

This study employs normative comparison to systematically analyze the frameworks needed for governing AI in both regions. It examines the legal instruments regulating AI deployment, including constitutional, criminal, and administrative law provisions. In the EU, the EU AI Act and GDPR constitute a comprehensive legal regime addressing accountability, data protection, and ethical practices. In contrast, the U.S. relies on a patchwork of sector-specific regulations, constitutional guarantees such as the First Amendment and broader civil rights provisions, including the Civil Rights Act and the Equal Protection Clause. By comparing these frameworks, the study underscores how foundational principles shape the operational dynamics of AI systems and influence the effectiveness of regulatory oversight. Macropolitical theories illuminate the socio-political dimensions of these regulatory models. The U.S. model exemplifies a market-driven approach that prioritizes entrepreneurial initiative and rapid technological progress, aligning with liberal perspectives that emphasize individual freedoms and minimal state intervention. In contrast, the EU’s regulatory paradigm reflects a socially oriented model focused on collective welfare, ethical standards, and the protection of individual rights (Schuh et al., 2024). These theoretical perspectives reveal how each region balances innovation and regulation. In the U.S., market freedom and technological agility foster a dynamic ecosystem that drives rapid innovation. However, the lack of comprehensive regulatory oversight presents ethical and social challenges, including insufficient accountability and potential risks to consumer rights. Conversely, the EU’s rigorous framework establishes a foundation for sustainable growth by ensuring technological advancements are implemented ethically and responsibly. While this approach may slow innovation and increase costs, it aims to build a trustworthy digital ecosystem that delivers long-term societal benefits. A central question is whether the EU’s high regulatory thresholds hinder domestic companies through compliance burdens or serve as a strategic advantage by establishing global benchmarks for Trustworthy AI (Floridi, 2019). This question is pivotal for understanding the broader impact of regulatory frameworks on innovation ecosystems and market dynamics (Schmidt, 2002). The study examines multiple dimensions of regulatory impact. First, it analyzes accountability and transparency standards and their effects on companies’ operational practices. Second, it evaluates punitive measures used to enforce compliance with AI norms. Third, it assesses the roles of administrative agencies and supervisory bodies in ensuring adherence to regulations, providing a comprehensive view of enforcement mechanisms. This multidimensional approach enhances understanding of legal frameworks and contextualizes their influence within broader socio-economic and political paradigms (Hall & Soskice, 2001).

The findings extend beyond regulatory policy. In the EU, the EU AI Act and related regulations present short-term challenges for companies, such as increased compliance costs and slower innovation cycles (Rennings & Rammer, 2011). However, these measures aim to establish a robust, trustworthy digital ecosystem that could serve as a global benchmark. Strong ethical and operational standards may strengthen consumer trust and contribute to a more resilient market. In contrast, the U.S.’s flexible regulatory environment reinforces its position as a technological leader by driving rapid innovation. Nevertheless, this approach raises questions about its long-term sustainability, given growing concerns over privacy, bias, and accountability. In conclusion, this analysis aims to contribute to the discourse on Trustworthy AI by presenting a comprehensive comparative study that integrates regulatory, economic, and normative perspectives (Yu et al., 2024). It illustrates how the U.S. model fosters rapid innovation through minimal regulation, while the EU’s approach prioritizes sustainable and ethically sound AI technologies. By examining the legal and institutional foundations driving these regional differences, the study offers a framework for understanding the impact of regulatory choices on innovation, market dynamics, and societal trust. Drawing on macroeconomic theories such as Schumpeterian innovation dynamics and neoclassical economic theory, as well as macropolitical perspectives, and employing normative comparison, this research establishes a foundation for future exploration of the relationship between regulation, innovation, and societal dynamics. These insights aim to guide policymakers, industry leaders, and researchers in balancing technological advancement with the imperative for ethical, transparent, and accountable AI systems in an interconnected world.

11:20
Human Oversight in AI Decision-Making: Balancing Autonomy and Accountability

ABSTRACT. In the era of rapid technological advancement, artificial intelligence (AI) has emerged as a powerful tool reshaping various sectors from healthcare to finance. However, the increasing autonomy of AI systems has raised significant concerns regarding trustworthiness and accountability. Documented cases, such as bias identified in AI-driven facial recognition technology (NIST, 2019) and discrimination detected in AI-based hiring tools (Amazon, 2018), have illustrated the critical need for transparent oversight mechanisms. This paper proposes a comprehensive framework for human oversight in AI decision-making, balancing technological autonomy with necessary accountability and ethical standards.

Context and Importance of Human Oversight: AI technologies are implemented in scenarios directly impacting human lives, including autonomous vehicles, medical diagnosis tools, and employment screening processes. For instance, Amazon ceased use of an AI hiring algorithm in 2018 due to identified gender bias against female applicants. The complexity and opacity of algorithms frequently obscure decision-making processes, which empirical research shows significantly reduces user trust and regulatory clarity (European Commission Report, 2020).

Review of Existing Oversight Mechanisms: Current regulatory frameworks differ across jurisdictions and frequently lack explicit requirements for continuous human oversight. The European Union's General Data Protection Regulation (GDPR, Article 22) mandates transparency in algorithmic decision-making affecting individuals, and the OECD’s AI Principles (2019) advocate for accountability and transparency. However, a 2021 European Commission report indicated a lack of uniform practical implementation across member states, highlighting ongoing gaps in oversight.

Proposed Framework for Human Oversight: This paper proposes a structured oversight approach with four tiers.
1. Pre-deployment Testing and Validation: Rigorous pre-deployment testing of AI algorithms to identify biases and ensure ethical compliance. Similar to the FDA's clinical trial requirements for medical software in the U.S., algorithms should undergo assessments for fairness, accuracy, and societal impact, conducted by interdisciplinary expert panels.
2. Real-time Monitoring: Implementation of real-time monitoring, with documented use in autonomous vehicles by the U.S. National Highway Traffic Safety Administration (NHTSA) requiring automated emergency stops ("circuit breakers") in hazardous scenarios (see the illustrative sketch after this list).
3. Post-decision Auditing: Routine audits similar to the financial audit standards required under the Sarbanes-Oxley Act (2002), ensuring compliance with ethical and regulatory standards. Audits and their outcomes should be publicly documented, aligning with transparency practices used by public regulatory authorities.
4. Feedback Mechanisms: Structured, documented systems for user feedback, modeled after platforms like AI incident databases currently maintained by the Partnership on AI, which collect and publicly document reported AI issues.
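As a purely illustrative sketch of the real-time monitoring tier (the function names, confidence threshold, and fallback policy below are assumptions introduced here, not part of the framework's specification), a "circuit breaker" can be read as a wrapper that escalates low-confidence AI decisions to a human reviewer and records each escalation for later auditing.

```python
# Illustrative sketch only: a "circuit breaker" wrapper that escalates
# low-confidence AI decisions to a human reviewer. All names and thresholds
# here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    source: str  # "model" or "human"

def with_circuit_breaker(
    model_decide: Callable[[dict], Decision],
    human_review: Callable[[dict], Decision],
    min_confidence: float = 0.85,
) -> Callable[[dict], Decision]:
    """Return a decision function that defers to human review when the model
    is not confident enough; every escalation is logged for later auditing."""
    def decide(case: dict) -> Decision:
        decision = model_decide(case)
        if decision.confidence < min_confidence:
            print(f"escalated case {case.get('id')} (confidence={decision.confidence:.2f})")
            return human_review(case)
        return decision
    return decide

# Toy usage with stand-in decision functions.
model = lambda case: Decision("approve", 0.62, "model")
human = lambda case: Decision("needs_more_information", 1.0, "human")
decide = with_circuit_breaker(model, human)
print(decide({"id": "A-001"}))
```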

Ethical and Social Alignment: The proposed framework ensures adherence to international ethical standards, such as those outlined by UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021). Integrating human oversight is specifically intended to reduce biases and discrimination, documented in existing AI implementations, thereby enhancing societal trust.

Policy Implications and International Governance: Adoption of this framework necessitates clear policy directives at national and international levels. Policymakers are encouraged to specify explicit oversight mechanisms for AI deployment, following examples such as the European Union’s proposed AI Act (2021), which sets distinct oversight requirements for high-risk AI applications.

Conclusion: As AI systems increasingly function autonomously, documented oversight mechanisms become essential to maintain public trust, ethical compliance, and accountability. The framework proposed herein is built upon existing documented regulatory and governance practices, providing a clear, structured pathway for responsible AI use.

References
1. Amazon Inc. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/
2. Artificial Intelligence Act. (n.d.). The EU Artificial Intelligence Act. Retrieved March 7, 2025, from https://artificialintelligenceact.eu/
3. European Commission. (2020). White Paper on Artificial Intelligence: A European approach to excellence and trust. COM(2020) 65 final. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
4. European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). COM(2021) 206 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
5. National Institute of Standards and Technology (NIST). (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic effects. U.S. Department of Commerce. https://doi.org/10.6028/NIST.IR.8280
6. Organisation for Economic Co-operation and Development (OECD). (2019). OECD Principles on Artificial Intelligence. OECD Publishing. https://oecd.ai/en/ai-principles
7. Partnership on AI. (2025). AI Incident Database. https://partnershiponai.org/workstream/ai-incidents-database/
8. UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO Digital Library. https://unesdoc.unesco.org/ark:/48223/pf0000380455
9. UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO General Conference, 41st session, Paris. https://unesdoc.unesco.org/ark:/48223/pf0000381137

11:40
Control-Based Trust in AI Governance: Copyright Law's Role Within the EU AI Act’s Institutional Design

ABSTRACT. This paper examines the role of copyright law as a control-based trust mechanism within the governance set-up of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (EU AI Act). Specifically, it addresses how copyright law provisions interact with the institutional actors established within the AI Act to account for power inequalities among authors, users, and AI developers and thereby create institutional trust. Article 53 of the EU AI Act requires providers of general-purpose AI models (GPAIMs) to comply with copyright law provisions in Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market (CDSM Directive), particularly regarding text and data mining exceptions. We explore three interconnected research questions: (1) How do provisions of the CDSM Directive address power asymmetries between AI developers and copyright holders within the AI governance ecosystem? (2) In what ways might the provisions of the CDSM Directive facilitate "control-based trust" as conceptualized by Sydow and Windeler (2004)? (3) What dynamics emerge between regulatory actors responsible for and those overseeing broader AI governance under the institutional design of the EU AI Act? Our methodology combines a preliminary review of the trust literature, a doctrinal analysis of the legal provisions of the EU AI Act and the CDSM Directive, and an institutional analysis of AI governance, examining both the legal texts and the relationships between institutional bodies such as the AI Office, the European AI Board, and various stakeholders involved in AI training and deployment. This paper contributes to understanding how hard-law approaches within AI governance may establish conditions for trust in AI systems and among stakeholders, in particular how the EU AI Act's requirement that GPAIM providers comply with copyright law creates accountability mechanisms that may support trust-building between copyright holders and AI developers.

Theoretical background

AI Governance – a definition. 'AI governance' has been conceptualised by Wirtz et al. (2024) by placing society at the centre. They define it as the management of AI development and deployment through established rules and guidelines within an evolving ecosystem. Such an approach seeks to address the real and potential impacts of AI on society by defining structures for technological, informational, political, legal, social, economic, and ethical action. Dafoe (2024) offers a further definition: "AI governance refers (1) descriptively to the policies, norms, laws, and institutions that shape how AI is built and deployed, and (2) normatively to the aspiration that these promote good decisions (effective, safe, inclusive, legitimate, adaptive)". For Dafoe (2024), the field of AI governance studies how humanity can best navigate the transition to advanced AI systems. An AI governance approach varies based on the field of application and risk-benefit assessments, utilizing both hard-law and soft-law tools to either restrict or encourage AI use (Villasenor, 2020). The ecosystem of AI governance consists of members with distinct roles, interests, and actions that collectively influence governance processes (Wirtz et al., 2024).
AI impacts, and is governed by, multiple actors rather than a single entity, including government, industry, civil society, companies, and academia, all playing crucial roles nationally and globally. Together, they form an interconnected network with shared interests that both collaborate and compete, resembling a biological ecosystem (Moore, 1993). The combined capabilities of these actors create value and outcomes unattainable by any individual entity (Adner, 2006).

AI Governance, Trust and Control. Trust enables cooperation among diverse stakeholders in AI governance (drawing on Braithwaite and Levi, 1998). Trust in governance contexts represents the willingness to accept vulnerability when confronted with risk and uncertainty (Mayer et al., 1995). Trust shares a complex relationship with 'control' in the trust literature. Some scholars argue that trust and control function as substitutes, where greater trust reduces the need for formal control mechanisms (Sako, 1998). Others argue that they are complementary elements, with each supporting and reinforcing the other. In the literature, trust, control, knowledge, and power exhibit varied relationships across different contexts. In organisational contexts, Das and Teng (2001) distinguish between control-undermining and trust-enhancing forms of control, while Sydow and Windeler (2004) identify bidirectional relationships such as "control-based trust" and "trust-based control". Control may actively generate trust, termed "control-based trust" by Sydow and Windeler (2004), when "control measures applied show that the actions, procedures, or results do occur as expected – that is, when the trust given turns out to be justified". However, control measures that are not implemented in an appropriate manner can directly undermine trust. Conversely, trust may enable control ("trust-based control"; Sydow and Windeler, 2004) by opening additional control possibilities, particularly social control options. Das and Teng (2001) further argue that competence and goodwill trust enhance all control modes in alliances, while acknowledging that sufficient trust may render control unnecessary in certain contexts (Sydow, 2006).

Trust in AI Systems. Recent scholarship on AI governance reveals a complex interplay between trust, control, power, regulation, and institutional design. The EU AI Act has emerged as a focal point for analysis. Laux, Wachter, and Mittelstadt (2023) critique the EU AI Act for conflating "trustworthiness" with "acceptability of risks" and treating trustworthiness as binary rather than as an ongoing process. Researchers have examined trust through various lenses. Gillis, Laux, and Mittelstadt (2024) explore interpersonal, institutional, and epistemic dimensions, noting that transparency sometimes decreases rather than increases trust. Lahusen, Maggetti, and Slavkovik (2024) propose "watchful trust", balancing trust with necessary vigilance in complex systems. Tamò-Larrieux et al. (2024) take a more pragmatic approach, identifying sixteen factors affecting trust in AI and demonstrating that AI governance can directly influence only six of them: legislative measures, permissible AI tasks, automation levels, competence standards, transparency requirements, and power dynamics between users and providers. This suggests that regulators should focus strategically rather than attempting to address all aspects of trust indiscriminately. Human oversight, as discussed by Laux (2024) and Durán and Pozzi (2025), also plays a significant role in AI governance.
The literature gap – trust within actors in AI governance. While substantial research examines trust in AI systems themselves, Zhang (2024) notes that further research is needed into institutional trust in the actors behind AI systems within contemporary political and economic contexts. Research by Zhang and Dafoe (2019) suggests that public trust varies significantly across different AI actors, but the existing literature focuses primarily on non-regulatory actors such as university researchers, technology companies, and so on. This gap presents an opportunity for examining the institutional dynamics between regulatory actors within hard-law AI governance mechanisms like the EU AI Act, moving beyond principles of trustworthy AI toward understanding the complex interrelationships between trust, oversight, and governance in AI regulatory ecosystems.

Power dynamics, control-based trust and AI governance. Power dynamics fundamentally shape trust relationships in AI governance, as highlighted by Tamò-Larrieux et al. (2024). The inherent information and power asymmetries between AI providers and users create significant governance challenges, with large technology companies wielding disproportionate influence in framing debates and setting standards that serve their interests. Such power asymmetries directly impact user trust in AI systems, as ordinary consumers struggle to question or understand corporate actions and motivations (Van Dijck et al., 2018; Nowotny, 2021). The EU AI Act addresses these power imbalances through a multi-layered governance approach that establishes several key institutional actors. These include the AI Office (Article 64), the European AI Board (Article 66), a Scientific Panel (Article 68), and an Advisory Forum (Article 67). Together, these institutions form an AI governance ecosystem.

Creating Control-Based Trust through law. We argue that the EU AI Act creates what Sydow and Windeler (2004) describe as "control-based trust" by providing constraints within which different actors must engage with AI systems despite inherent uncertainties. The relationship between law and trust is complex, with some scholars arguing that legal structures reduce uncertainty to enable trust relationships, while others suggest formal controls can trigger a "trust paradox" (Long & Sitkin, 2018). The literature reveals mixed findings about whether hard-law approaches in governance complement or substitute for trust. While Ribstein (2001) argues that law substitutes for trust because "the shadow of coercion" impedes trust development, Hill and O'Hara (2006) propose that regulation may reduce uncertainty to a level where trust relationships become possible. Tamò-Larrieux et al. (2024) suggest that regulation should aim to optimize trust by minimizing both under-trust and over-trust in technologies. Copyright law functions as a specific control mechanism within AI governance, as GPAIMs depend heavily on copyrighted materials for training, with the AI Act requiring GPAIM providers to comply with applicable copyright laws in EU member states (Geiger & Iaia, 2024).

Our Contribution. We integrate organisational trust theory with hard-law AI governance approaches, drawing on Das and Teng's (2001) control types and their relationship to trust, and Sydow and Windeler's (2004) concept of "control-based trust". We argue that copyright law functions as a social control mechanism that may generate trust through appropriate constraints.
Our approach recognises that trust in AI governance emerges not merely from technical reliability in AI systems, but from institutional arrangements that appropriately distribute power, knowledge, and control among actors involved in AI deployment. By focusing on institutional trust in regulatory actors we address Zhang’s (2024) identified research gap, exploring how copyright law might facilitate trustworthy relationships between regulators, AI developers, and creative communities within the EU AI Act’s institutional design.

Bibliography:
Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, 84(4), 98-107.
Bachmann, R. (2001). Trust, power and control in trans-organizational relations. Organization Studies, 22(2), 337-365.
Braithwaite, J., & Levi, M. (Eds.). (1998). Trust and governance. Russell Sage Foundation.
Bullock, J. B. (2023). Introduction and overview. In J. B. Bullock, Y. C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press.
Clegg, S. R., & Hardy, C. (1996). Organizations, organization and organizing. In S. R. Clegg, C. Hardy, & W. R. Nord (Eds.), Handbook of organization studies (pp. 1-28). Sage.
Dafoe, A. (2024). AI governance: Overview and theoretical lenses. In J. B. Bullock et al. (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.2
Das, T. K., & Teng, B. S. (2001). Trust, control, and risk in strategic alliances: An integrated framework. Organization Studies, 22(2), 251-283.
Durán, J. M., & Pozzi, G. (2025). Trust and transparency in artificial intelligence: Epistemic and normative dimensions. AI & Society, forthcoming.
Geiger, C., & Iaia, V. (2024). Towards an independent EU regulator for copyright issues of generative AI: What role for the AI Office (but more importantly: What's next)? International Review of Intellectual Property and Competition Law, 55(1), 1-18.
Gillis, R., Laux, J., & Mittelstadt, B. (2024). Trust and trustworthiness in artificial intelligence. In Handbook on Public Policy and Artificial Intelligence (Chapter 14). Edward Elgar Publishing. https://doi.org/10.4337/9781803922171.00021
Gutierrez, C. I., Marchant, G. E., & Tournas, L. (2020). Lessons for artificial intelligence from historical uses of soft law governance. Jurimetrics, 61(1), 1-18. https://papers.ssrn.com/abstract=3775271
Hult, D. (2018). Creating trust by means of legislation – a conceptual analysis and critical discussion. The Theory and Practice of Legislation, 6(1), 1-23. https://doi.org/10.1080/20508840.2018.1434934
Klijn, E.-H., Edelenbos, J., & Steijn, B. (2010). Trust in governance networks: Its impacts on outcomes. Administration & Society, 42(2), 193-221. https://doi.org/10.1177/0095399710362716
Lahusen, C., Maggetti, M., & Slavkovik, M. (2024). Trust, trustworthiness and AI governance. Scientific Reports, 14, 20752. https://doi.org/10.1038/s41598-024-71761-0
Laux, J. (2024). Institutionalised distrust and human oversight of artificial intelligence: Towards a democratic design of AI governance under the European Union AI Act. AI & Society, 39(1), 2853-2866.
Laux, J., Wachter, S., & Mittelstadt, B. (2024). Trustworthy artificial intelligence and the European Union AI Act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance, 18, 3-32. https://doi.org/10.1111/rego.12512
Long, C. P., & Sitkin, S. B. (2018). Control–trust dynamics in organizations: Identifying shared perspectives and charting conceptual fault lines. Academy of Management Annals, 12(2), 725-751.
Luhmann, N. (1979). Trust and power. John Wiley & Sons.
Marchant, G. (2019). "Soft law" governance of artificial intelligence. AI PULSE.
Marchant, G., Abbott, K., & Allenby, B. R. (2013). Innovative governance models for emerging technologies. Edward Elgar Publishing. https://doi.org/10.4337/9781782545644
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.
Moore, J. F. (1993). Predators and prey: A new ecology of competition. Harvard Business Review, 71(3), 75-86.
Möllering, G. (2006). Trust: Reason, routine, reflexivity. Emerald Group Publishing.
Nowotny, H. (2021). In AI we trust: Power, illusion and control of predictive algorithms. John Wiley & Sons.
Paul, R. (2023). European artificial intelligence "trusted throughout the world": Risk-based regulation and the fashioning of a competitive common AI market. Regulation & Governance, 14(1), 1-22.
Ribstein, L. E. (2001). Law v. trust. Boston University Law Review, 81, 553-590.
Sako, M. (1998). Does trust improve business performance? In C. Lane & R. Bachmann (Eds.), Trust within and between organizations (pp. 88-117). Oxford University Press.
Shapiro, S. P. (1987). The social control of impersonal trust. American Journal of Sociology, 93(3), 623-658.
Starke, G., & Ienca, M. (2024). Misplaced trust and distrust: How not to engage with medical artificial intelligence. Cambridge Quarterly of Healthcare Ethics, 33(3), 360-369. https://doi.org/10.1017/S0963180122000445
Sydow, J. (1998). Understanding the constitution of interorganizational trust. In C. Lane & R. Bachmann (Eds.), Trust within and between organizations (pp. 31-63). Oxford University Press.
Sydow, J. (2006). How can systems trust systems? A structuration perspective on trust-building in inter-organizational relations. In R. Bachmann & A. Zaheer (Eds.), Handbook of trust research (pp. 377-392). Edward Elgar.
Sydow, J., & Windeler, A. (2003). Knowledge, trust, and control: Managing tensions and contradictions in a regional network of service firms. International Studies of Management & Organization, 33(2), 69-99.
Tamò-Larrieux, A., Mayer, S., & Zihlmann, Z. (2024). Can law establish trust in artificial intelligence? Regulation & Governance, 18(3), 781-804.
Thaler, R. H. (2000). From homo economicus to homo sapiens. Journal of Economic Perspectives, 14(1), 133-141.
van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001
Villasenor, J. (2020). Soft law as a complement to AI regulation. Brookings.
Wirtz, B. W., & Müller, W. M. (2019). An integrated artificial intelligence framework for public management. Public Management Review, 21(7), 1076-1100.
Wirtz, B. W., Langer, P. F., & Weyerer, J. C. (2024). An ecosystem framework of AI governance. In J. B. Bullock, Y. C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press.
Young, L. C., & Wilkinson, I. F. (1989). The role of trust and co-operation in marketing channels: A preliminary study. European Journal of Marketing, 23(2), 109-122.
Zhang, B. (2024). Public opinion toward artificial intelligence. In J. B. Bullock et al. (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.36
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Center for the Governance of AI, Future of Humanity Institute, University of Oxford.

12:00
Trust as a Key Element in Regulating Digital Space

ABSTRACT. Navigating cyberspace relies on an essential yet often overlooked principle: trust. Users, regulators, and industry stakeholders must maintain mutual confidence in the digital ecosystem to ensure safety, reliability, and compliance. This paper examines the role of trust in regulating cyberspace, arguing that it is not merely a supplementary concept but a foundational element of governance. By analysing different relationships within the digital environment - including user-to-user, user-to-platform, and industry-to-regulator dynamics - this study demonstrates how trust shapes interactions and regulatory approaches. The discussion highlights that neither control nor contractual obligations can function effectively without an underlying layer of trust. The paper concludes that any regulatory framework for cyberspace must prioritise maintaining and reinforcing trust among all actors to support security, compliance, and innovation.

11:00-12:30 Session 2C: Theory session I - interpersonal and impersonal trust relations
Location: White Space
11:00
Trustless Strategic Communication

ABSTRACT. Introduction: The Necessity of a Paradigm Shift. In an era of misinformation, declining institutional trust, and heightened skepticism toward authority, the role of trust in strategic communication requires critical reassessment. Traditionally, trust has been both a means and an end in strategic communication, perceived as a foundational prerequisite for achieving behavioral outcomes (Lewis & Weigert, 1985). However, recent social and technological developments challenge the feasibility of trust-based strategies. While existing approaches attempt to mitigate distrust through transparency and trust-rebuilding efforts, these solutions are not always viable. Instead, this paper introduces the concept of "Trustless Strategic Communication," a framework that facilitates reliance and engagement even in the absence of traditional trust. Trustless Strategic Communication builds on two key foundations: trustless technology, particularly blockchain and cybersecurity protocols, and the philosophical distinction between trust and reliance (Goldberg, 2020). This paper synthesizes these models to propose an alternative communication paradigm. By examining both theoretical perspectives and empirical case studies, the study explores how strategic communication can operate effectively without necessitating trust, particularly in crisis communication, political discourse, and public health messaging.

Strategic Communication and Its Dependence on Trust. Strategic communication is often conceptualized as an umbrella term encompassing public relations, marketing, crisis communication, and political messaging (Hallahan et al., 2007). In traditional models, trust is viewed as an intermediary goal essential for influencing audience behavior (Zerfass et al., 2020). Numerous studies suggest that trust is the most significant predictor of long-term effectiveness in strategic communication (Argenti, Howell, & Beck, 2005). However, strategic communication does not always require trust as a prerequisite. In cases where trust is unattainable or irrelevant to decision-making, alternative approaches are needed. Trustless strategic communication provides an opportunity to achieve persuasion without reliance on perceived integrity, instead leveraging principles such as verification mechanisms, structural guarantees, and increased confidence through transparency (De Filippi, Mannan, & Reijers, 2020). The Limitations of Trust-Based Messaging. Trust-based strategic communication faces several inherent limitations, some of which have always existed and others that are amplified by contemporary societal shifts. In many situations, trust is simply unattainable. For instance, persuading skeptical audiences, particularly those who fundamentally distrust the communicator, can prove to be an insurmountable challenge. Additionally, efforts to rebuild trust after a crisis may be ineffective, as lingering skepticism often remains despite corrective actions. Even when trust is present, it does not always translate into the desired behavior. In some cases, factors such as incentives, convenience, or perceived risk mitigation play a more decisive role than trust in influencing decisions. Recent societal changes have further increased the necessity of finding alternatives to trust-based messaging. Rising cynicism and widespread distrust in centralized institutions have eroded the effectiveness of conventional strategic communication efforts (Eyal, Au, & Capotescu, 2024). The prevalence of para-crises, recurring misinformation, and corporate scandals has provided audiences with even more reasons to doubt institutional narratives. Moreover, audiences today increasingly demand tangible benefits over mere assurances of credibility, making efficiency and cost reduction critical considerations in persuasive communication. Technological innovations, particularly the rise of decentralized systems, reinforce the shift away from trust-dependent messaging. As individuals prioritize financial sovereignty and autonomy, communication strategies that rely on external validation become less effective (Dunnett et al., 2023). Given these challenges, it is crucial to develop a framework that enables effective communication without necessitating trust, leading to the emergence of trustless strategic communication.

Current Definitions of Trust: Theoretical and Empirical Foundations. Trust is an interdisciplinary concept with diverse definitions spanning sociology, psychology, communication studies, and organizational theory (Blöbaum, 2021). Despite definitional variations, two central components emerge consistently: competence (ability) and honesty (intention) (Mayer, Davis, & Schoorman, 1995). Trust is often viewed as a necessary element of cooperation and social cohesion. However, empirical studies indicate that trust is not always attainable or necessary for strategic action. A critical distinction must be made between trust and reliance (Goldberg, 2020). While trust entails vulnerability and risk, reliance does not necessarily involve conferring goodwill or integrity onto the trusted party. This differentiation is crucial for understanding trustless strategic communication, which aims to achieve behavioral outcomes through mechanisms that do not require belief in an actor's competence or benevolence.

Trustless Strategic Communication as a Solution. The inspiration for trustless strategic communication came from blockchain technology, which introduced trust-minimizing mechanisms to ensure secure transactions (De Filippi, Mannan, & Reijers, 2020). In cybersecurity, trustless delegation reduces reliance on intermediaries while maintaining functional security (Vella, 2022). Applying these principles to communication involves structuring messages and interactions in ways that minimize the need for trust. Trustless strategic communication is defined as: "A form of strategic communication in which the target audience is persuaded to act in a way that advances the organization's mission without attempting to increase trust, but rather by focusing on alternative persuasion methods." A primary format of trustless strategic communication relies on the reduction of vulnerability, where messages are designed to increase audience confidence without requiring risk-taking (Bolton, 2024). Instead of relying on trust, it employs binding commitments such as guarantees, contracts, or algorithmic enforcement to replace traditional assurances. Additionally, it leverages transparency as a verification mechanism rather than asking audiences to take statements at face value (a generic commit-and-verify sketch of this idea follows this paragraph). Alternative formats could persuade by emphasizing the risk of not collaborating. However, these methods entail ethical dilemmas and should only be considered in specific cases, such as promoting lifesaving behavior, as is common in social marketing campaigns. Importantly, this paradigm shift does not seek to eliminate trust-based strategies but rather to supplement them in cases where trust-building is impractical or infeasible. It could thus be employed instead of, or alongside, trust-based strategies. Furthermore, as already acknowledged with regard to trustless technology, the term trustless could be inaccurate, as trust is minimized, distributed, or even externalized, but it is likely to always play some part in the decision-making of any individual. Empirical Case Studies. To illustrate trustless strategic communication in action, this paper examines two case studies. The first focuses on GGD Rotterdam's vaccine communication strategy during the COVID-19 pandemic. This case study investigates how public health officials navigated widespread skepticism by supplementing traditional trust-building efforts with practical incentives, clear risk disclosures, and decentralized decision-making structures (Krastev et al., 2023). The second case study explores Israel's public diplomacy efforts during the Israeli-Gaza war. This analysis examines how communication strategies were employed to engage international audiences amid a highly polarized environment. In both cases, primary data sources include interviews with key figures managing the strategic communication efforts, providing insight into the application and effectiveness of trustless strategic communication principles.
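As a generic, hedged illustration of the "verification rather than trust" idea that trustless communication borrows from trustless technology (a standard commit-and-verify pattern, not a technique described in the paper), an organisation can publish a cryptographic hash of a document in advance so that audiences can later check that the released version is unaltered without having to take the publisher's word for it.

```python
# Generic commit-and-verify sketch (illustrative assumption, not the paper's method):
# publishing a hash in advance lets anyone verify that the later release matches it.
import hashlib

def commitment(document: bytes) -> str:
    """A hash published before release acts as a binding commitment."""
    return hashlib.sha256(document).hexdigest()

original = b"Full audit report, v1.0"
published_hash = commitment(original)          # shared publicly in advance

released = b"Full audit report, v1.0"          # document released later
print("verified:", commitment(released) == published_hash)  # True if unaltered
```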

Discussion and Conclusions The findings suggest that trustless strategic communication has potential applications across various fields, including public health, corporate crisis communication, and political messaging. While it is not a replacement for trust-promoting strategies, it serves as a viable alternative or complement in contexts where trust-building is impractical or unnecessary. The increasing need for trustless strategies is driven by societal and technological trends, including declining trust in institutions and the rise of decentralized systems. Although ethical concerns exist, particularly regarding manipulation risks, trustless communication can also reduce opportunities for misinformation by emphasizing verifiable claims. Future research should empirically identify existing practices that align with trustless principles across different industries. Health communication in particular could benefit, not only because of the mounting challenges of the post-Covid era but also because promoting healthy, expert-driven behavior is arguably the context in which pursuing reliance without trust is most justified. Additionally, comparative studies should examine the effectiveness of trustless versus trust-based communication strategies to determine optimal applications. Further investigation is needed to assess the best combinations of trustless and trust-based strategies to maximize persuasive impact. By establishing trustless strategic communication as a distinct theoretical framework, this study contributes to the ongoing discourse on trust, misinformation, and digital persuasion. It highlights the necessity of rethinking traditional communication paradigms to better navigate contemporary challenges in strategic messaging.

Reference list: Argenti, P. A., Howell, R. A., & Beck, K. A. (2005). The strategic communication imperative. MIT Sloan Management Review, 46(3), 83–89. Balomenos, K. (2023). Strategic communication as a mean for countering hybrid threats. In Handbook for Management of Threats: Security and Defense, Resilience and Optimal Strategies (pp. 371-390). Springer International Publishing. Barnoy, A., & Reich, Z. (2022). Trusting others: A Pareto distribution of source and message credibility among news reporters. Communication Research, 49(2), 196-220. Blöbaum, B. (2021). Trust and communication. Springer International Publishing. Bolton, M. L. (2024). Trust is not a virtue: Why we should not trust trust. Ergonomics in Design, 32(4), 4-11. De Filippi, P., Mannan, M., & Reijers, W. (2020). Blockchain as a confidence machine: The problem of trust & challenges of governance. Technology in Society, 62, 101284. Dunnett, K., Pal, S., Jadidi, Z., & Jurdak, R. (2023, May). A blockchain-based framework for scalable and trustless delegation of cyber threat intelligence. In 2023 IEEE International Conference on Blockchain and Cryptocurrency (ICBC) (pp. 1-9). IEEE. Eyal, G., Au, L., & Capotescu, C. (2024). Trust is a verb!: A critical reconstruction of the sociological theory of trust. Sociologica, 18(2), 169-191. Goldberg, S. C. (2020). Trust and reliance. In The Routledge Handbook of Trust and Philosophy (pp. 97-108). Routledge. Hallahan, K., Holtzhausen, D., van Ruler, B., Vercic, D., & Sriramesh, K. (2007). Defining strategic communication. International Journal of Strategic Communication, 1, 3–35. Jaakkola, E. (2020). Designing conceptual articles: Four approaches. AMS Review, 10(1), 18-26. Key, C. I. (2023). Strategic communications. In Academic Advising Administration: Essential Knowledge and Skills for the 21st Century (p. 230). Krastev, S., Krajden, O., Vang, Z. M., Juárez, F. P. G., Solomonova, E., Goldenberg, M. J., ... & Gold, I. (2023). Institutional trust is a distinct construct related to vaccine hesitancy and refusal. BMC Public Health, 23(1), 2481. Luhmann, N. (2000). Familiarity, confidence, trust: Problems and alternatives. In Trust: Making and Breaking Cooperative Relations (pp. 94-107). Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. Vella, H. (2022). The race for quantum-resistant cryptography [Quantum-cyber security]. Engineering & Technology, 17(1), 56-59. Zerfass, A., & Huck, S. (2007). Innovation, communication, and leadership: New developments in strategic communication. International Journal of Strategic Communication, 1(2), 107–122. Zerfass, A., Verčič, D., Nothhaft, H., & Werder, K. P. (2020). Strategic communication: Defining the field and its contribution to research and practice. In Future Directions of Strategic Communication (pp. 159-177). Routledge.

11:20
A Panoramic View of Trust in the Digital Society

ABSTRACT. The paper presents what I have coined a panoramic view of trust (Pedersen 2024) as a novel conceptualization that can help us approach how trust and distrust develop, change and fall apart in the modern digital society. The ubiquity of digital solutions run on networked computer systems changes the social environment of trust and requires us to consider trust as a holistic social phenomenon unfolding over a variety of social contexts that might seem independent but should be conceived as communicating vessels. The panoramic view of trust defines a maximal conceptual framework to comprehend the workings of trust and distrust, trustworthiness and untrustworthiness from a theoretical point of view. This conceptual framework, however, must be confronted with empirical case studies to explain trust and distrust in practice. To showcase what I mean by confronting the conceptual framework with case studies I will proceed as follows: delineate the core ideas of the panoramic view; briefly outline two case studies; and indicate a panoramic view analysis of the cases.

The Panoramic View of Trust The theoretical framework of the panoramic view of trust considers trust and distrust as poles of a spectrum that become apparent in individual attitudes towards others as well as in the trust relations between agents (Pedersen 2010). Trust and distrust are anchored both in individual attitudes and social relations and thus should be theoretically conceived both as a first-person experience and as social relations in settings that are conducive to trust/distrust between agents. Four basic trust relations are at play in all social settings: (1) Reciprocal interpersonal trust, in the sense that we meet each other with trust or distrust. This trust relation is what most philosophers study (e.g. Baier 1986). (2) Trust in oneself, which concerns the first-person perspective of reflection on oneself as a reasonable agent and assessor of one's own and others' actions and motives (e.g. Lehrer 1997). (3) Institutional trust, understood as a reciprocal relation between institutions, ranging from directors to street-level bureaucrats, and the users of institutions (see Jackson & Gau 2016 for an example of the reciprocal relation). I understand institutions broadly, as in Douglas (1987). (4) Trust in technology. There is a standing debate about whether we trust or rely on technology (e.g. Pettit 2009), and at the same time public discourse often refers to trust or distrust in technology. We need to investigate what can be meant by trust in technology. Evidently it is different from the three other relations because technology is not (yet) an intelligent agent but rather a device set up to solve an assignment or problem for the user. When applying the panoramic view of trust to a specific case, some trust relations will be highlighted while others fall into the background. For example, when discussing trust on a societal level, the relation concerning trust in oneself (2) is less important. Some very specific features of trust in technology (4) will be crucial for understanding social interactions mediated by computer technology, but less important if we zoom in on how interpersonal trust relations unfold between children and adults in a kindergarten free of digital screens. The social embeddedness of trust relations implies that we should view attitudes of trust as either habitual trust/distrust or reflexive trust/distrust (Pedersen 2017). We seldom commence our interactions with others, institutions or technology by reflectively asking whether we should trust or distrust them (Lagerspetz 1997, Györffy 2013). After a breach of trust, we may reconsider whether we were naïve or the other (person, institution, technology) was (un)trustworthy. Similarly, if we have been engulfed with distrust from the outset and experience that the other (person, institution, technology) was not only trustworthy but also aimed to develop the trust relation, we may begin to reflect on the appropriateness of our attitude of distrust. Our prima facie habitual attitude of trust/distrust is thus malleable when confronted with experience and reflected upon. Whether trust is beneficial depends on the trustworthiness of the interacting agents (persons, institutions, technologies) as well as the purpose of the collaboration. A gang of digital fraudsters may be trusting and trustworthy within their group, but we would not praise the ease with which they scam unsuspecting internet users.
The assessment of the normative laudability of relations of trust or distrust depends on their purpose and on the theoretical and societal point of view from which the analysis of the specific situation is carried out. A panoramic view of trust plays with the metaphor that, when we analyze trust, we zoom in and out on specific features of trust in a broad landscape. When we move from the theoretical conception to specific cases, we analyze how the four trust relations take on significance and are entangled in a particular social environment. Being aware that the analysis is made from a viewpoint within the whole landscape, the panoramic view also enables us to discuss when, and from which point of view, attitudes of trust or distrust are normatively justified. Zooming in on two cases may illuminate the idea.

Digital Property Assessment The Danish state has been a front runner in digitalization. After an internal report in 2011 put forth serious critique of the procedure the tax authorities used to evaluate private property, the Ministry of Taxation decided to solve the crisis of trust in property taxation by proposing a new digital system. It was assumed that the system could be ready in 2015 – but it is still not fully implemented in 2025, and the preliminary property assessments sent out to property owners since 2023 have been crammed with mistakes, ranging from the taxation of dead citizens and demands for payment of tax on neighboring properties to miscalculated repayments of surplus tax (Vestergaard & Pedersen 2025).

Analysis The high degree of trust in a fully automated digital assessment system as the solution to the problems with the old assessment system highlights a form of trust in technology within the taxation agency and among the responsible politicians. How such trust in a technological system impacts the trust relation between citizens and the tax authorities needs unpacking. In the early days, decision makers' trust was directed towards the hope that such a system would be more trustworthy and work more efficiently than the old system. We may ask whether the trust in the technology as objective and efficient included a distrust in humans and the potentially biased assessments of employees. The purpose was to rebuild institutional trust – citizens should be able to trust the property assessment system. As the system was rolled out, a crisis of institutional trust emerged as the large number of faulty taxation claims turned into a scandal recurringly discussed in the press. The relation of institutional trust between citizens and the taxation agency is affected by two separate relations of trust in technology – on the one hand citizens' increasing distrust of the automated property assessment system, and on the other the tax authorities' trust in the same system as the solution to the earlier crisis of trust.

Digital Photo Manipulation In 2024 the communication of the English royal family faced a crisis of trust. After undergoing surgery in January 2024, Princess Kate of Wales was not seen publicly, and rumors and conspiracy theories were building on social media. Not until British Mother's Day, March 10th 2024, did the official "The Prince and Princess of Wales" accounts on X and Instagram post news about Kate: a happy family photo. After quick scrutiny by social media users the photo was shown to be digitally altered, and the Associated Press issued a "photo kill" notice. On March 11, the official royal X account tweeted "Like many amateur photographers, I do occasionally experiment with editing. I wanted to express my apologies for any confusion the family photograph we shared yesterday caused…", giving the impression that it was Kate herself personally writing to the millions of followers. Rumors continued and, finally, on March 22, 2024, the official X and Instagram accounts released a video with Kate explaining that she is undergoing treatment for cancer.

Analysis The royal family as an institution should represent the best of the national values. Being caught manipulating a photo affects their trustworthiness. However, the question of digital editing and social media accounts opens an avenue of possible excuses – don't we all use digital filters when posting? Thus, the question of the trustworthiness of digital photos comes to the fore. Can we trust photographic technique in a digital age? Do users knowingly accept that celebrity photos are manipulated? What kind of manipulation is acceptable to whom? How are digital photos perceived as evidence? Should I trust myself, that I can see the manipulation, or should I place trust in the royal family as an institution? What roles do X and Instagram play as trustworthy or untrustworthy sources of news?

At the conference I further develop an analysis and comparison of the cases with the conceptual framework of the panoramic view of trust.

References:

Lehrer K. (1997) Self-Trust, Oxford, Oxford University Press

Baier A. (1986) “Trust and Antitrust” (231-260) Ethics 96.2

Douglas, M. (1987) How Institutions Think, London, Routledge & Kegan Paul

Györffy D. (2013) Institutional Trust and Economic Policy: Lessons from the History of the Euro, Budapest, Central European University Press

Jackson J. & Gau J. M. (2016) “Carving up Concepts? Differentiating between Trust and Legitimacy in Public Attitudes towards Legal Authority” (49-69), Interdisciplinary Perspectives on Trust: Towards Theoretical and Methodological Integration, ed. Shockley et al., Springer International Publishing AG

Lagerspetz O. (1997) Trust. The Tacit Demand, Dordrecht, Springer

Pedersen E. O. (2010) “A Two-Level Theory of Trust” (47-56) Balkan Journal of Philosophy 2.1

Pedersen E. O. (2017) ”An Outline of Interpersonal Trust and Distrust” (104-117), Anthropology & Philosophy. Dialogues on Trust and Hope, ed. Liisberg et al., Oxford, Berghahn

Pedersen E. O. (2024) ”A Panoramic View of Trust in the Time of Digital Automated Decision Making – Failings of Trust in the Post Office Scandal and the Tax Authorities” (29-47), Sats. Northern European Journal of Philosophy 25.1

Pettit P. (2009) “Trust, Reliance, and the Internet” (161-174), Information Technology and Moral Philosophy, ed. van den Hoven & Weckert, Cambridge, Cambridge University Press

Vestergaard M. & Pedersen E. O. (2025) ”Af hensyn til automatisering. Tillid og mistillid i forbindelse med digitaliseringen af det offentlige ejendomsvurderingssystem i Danmark” [For the sake of automation: Trust and distrust in connection with the digitalization of the public property assessment system in Denmark], Nordisk Administrativt Tidsskrift

11:40
Legitimacy of Algorithmic Decision-Making in the Age of the “Vanishing Trial”: A Scoping Review

ABSTRACT. Objective: This study seeks to understand the impact of artificial intelligence (AI) on perceptions of trust and legitimacy of the justice system, while considering a wide array of dispute resolution processes and roles. Method: We employ a scoping review methodology, which provides a systematic synthesis of existing knowledge in the field, and is particularly suitable for novel research contexts. Results: Our findings show that only a small portion of the vast literature on automated decision-making (ADM) focuses on the dispute resolution arena. Of the limited research that exists in this domain, a significant portion is dedicated to the judicial setting. Our review reveals that the focus on judicial ADM gives salience to algorithmic aversion, obfuscating the fact that perceptions towards the introduction of AI in this context actually vary across domain, process type, third party function and role, process stage, and various other contextual factors. Conclusions: The emphasis in current research on judicial ADM is problematic for two reasons. First, judicial decision-making has become a rare occurrence as most court cases settle and do not reach a judicial decision following a full-fledged formal process (the “vanishing trial” phenomenon). Second, as this paper shows, the focus on judicial ADM gives salience to algorithmic aversion, which does not characterize people’s perceptions of trust and legitimacy towards the use of AI in other contexts. We conclude with the need for further research based on a more expansive (and realistic) view of the dispute resolution landscape, one that would generate knowledge on the relevance of procedural theories to the automated setting.

11:55
User perceptions of online risk and internet safety: #SaferInternetDay as an affective public

ABSTRACT. Study Purpose and Rationale

Trust in digital society can depend on how safe users feel online: the safer they feel, the more trust they are likely to have in digital technologies and their widespread use. But being online can put users at many kinds of risk, from racism, sexism, and hate speech to cyberbullying, disinformation, violations of privacy and so forth (e.g., Castaño-Pulgarín et al., 2021; Pabian & Vandebosch, 2016). This study tracks user perceptions of online risk and internet safety over a 10-year period — 2013-2022 — in order to understand the kinds of risk users are concerned about, what users believe are the best ways of addressing them, and how perceptions of risk shape user relationships with digital society.

The empirical focus of our research is Safer Internet Day (SID), an annual campaign to “make the internet a safer and better place for all” (Safer Internet Day, n.d.). Initiated by the EU SafeBorders project in 2004, SID is now a global event marked by a series of actions by affiliated organizations in more than 200 countries and territories. Their aim is “to raise awareness of emerging online issues and current concerns” (Safer Internet Day, n.d.). SID is also an occasion for internet users to talk about the concerns they perceive with the use of the internet and digital technologies, which can undermine trust in digital society.

Our research draws on a mixed-methods (machine learning and discourse analysis) study of Twitter (now X) posts using SID-related hashtags published between 2013 and 2022 to answer the following research questions: RQ1. What are internet users’ main concerns related to internet safety? RQ2. How do users believe these concerns should be addressed?

Our theoretical framework draws upon the inter-related concepts of networked public and affective public. A networked public is “the imagined collective that emerges as a result of the intersection of people, technology, and practice” (boyd, 2011, p. 39). The concept draws attention to how the affordances of social networking platforms like Twitter bring together people from varying cultures and geographies while recognizing the specific ways in which people use such technologies to connect. Papacharissi adds that networked publics “are mobilized and connected, identified, and potentially disconnected through expressions of sentiment” (2016, p. 311) — and thus refers to them as affective publics. Both fear and trust are key sentiments driving connections in networked publics (Shahin, 2023).

Study Design

Data were collected with the help of Twitter’s educational API, which allowed for querying all historical data from the platform, using the Twarc Python library. We searched for the hashtags #saferinternetday or #SID<YY> or #SID<YYYY>, where <YY> and <YYYY> represented the respective year between 2013 and 2022 (e.g., #SID22 or #SID2022 for the year 2022). In order to maximize the collection of relevant posts, we ran our query for the day SID was observed in a particular year as well as the following week (e.g., 8-15 February 2022).
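To make the collection procedure concrete, the sketch below shows how such a query could be constructed and run with twarc's full-archive search client (Twarc2.search_all, following twarc's documented interface). It is an illustration rather than the authors' exact collection script: the bearer token is a placeholder, the 2022 window (8-15 February) is taken from the example above, and other years would substitute their own SID dates.

import datetime
from twarc import Twarc2

client = Twarc2(bearer_token="REPLACE_WITH_ACADEMIC_BEARER_TOKEN")  # placeholder credential

def sid_query(year: int) -> str:
    # Combine the generic hashtag with the two year-specific variants.
    yy = str(year)[-2:]
    return f"#saferinternetday OR #SID{yy} OR #SID{year}"

# Example window for 2022: SID day (8 February) plus the following week.
start = datetime.datetime(2022, 2, 8, tzinfo=datetime.timezone.utc)
end = datetime.datetime(2022, 2, 16, tzinfo=datetime.timezone.utc)

# search_all pages through the full archive; each page is a JSON response dict.
for page in client.search_all(query=sid_query(2022), start_time=start, end_time=end):
    for tweet in page.get("data", []):
        print(tweet["id"], tweet["text"][:80])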

The raw data set comprised 538,349 posts. After removing retweets and duplicate posts, we were left with 141,263 posts. Two coders manually annotated a random sample of 500 posts using a codebook of 22 categories, based on prior literature related to online risks. But most of the categories were rarely used, leaving us with a final classification based on 5 fairly broad categories. Another 1,000 tweets were manually annotated for model development using these 5 categories and the data were used to train multiple classifiers.

The BERT transformer neural network classifier (Micro F-1 = 0.75, Macro F-1 = 0.43, Samples = 0.75) outperformed other classifiers and was selected for analyzing the complete data set (Rothman, 2021). The General Awareness category, which included tweets simply drawing attention to SID without focusing on any particular issue, predominated in the data set, accounting for nearly two-thirds of all tweets. Other categories included Online Abuse, Privacy Concerns, Solutions, and Unrelated posts. Excluding Unrelated posts, we carried out a qualitative discourse analysis of nearly 100 purposively sampled posts from each year under each category to develop a more nuanced understanding of the kinds of online safety concerns and solutions users posted about and arrive at more precise answers to our research questions.
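The following sketch indicates, under stated assumptions, how such a BERT classifier could be set up with the Hugging Face transformers library and how micro, macro, and samples F1 scores of the kind reported above would be computed. The label names, threshold, and training setup are illustrative placeholders, not the authors' exact pipeline.

import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical label set mirroring the five broad categories described above.
LABELS = ["general_awareness", "online_abuse", "privacy", "solutions", "unrelated"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid output per label
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = (1 / (1 + np.exp(-logits))) > 0.5   # sigmoid + 0.5 threshold
    return {
        "micro_f1": f1_score(labels, preds, average="micro"),
        "macro_f1": f1_score(labels, preds, average="macro"),
        "samples_f1": f1_score(labels, preds, average="samples"),
    }

# train_ds / eval_ds would be tokenized datasets of the annotated tweets with
# multi-hot label vectors; they are omitted here, so the training call is shown
# only as a comment.
# trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"),
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()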

Analysis

The mixed-methods analysis led us to focus on four key themes.

Personal Stories of Abuse Sharing individual stories of online abuse was a common theme over the years, particularly focusing on various forms of abuse faced by children and young people and their effects. Such tweets brought out the affective, lived experiences of abuse to create an empathetic community of users who would be aware of the harms of online abuse. It was expected that it would be easier for people to share their stories once they came to know of other victims. Hashtags such as #DigitalMe-Safe were used to creatively initiate these discussions. Tweets also emphasized the need to involve young people in finding solutions for online abuse. For example, a tweet claimed that “#chatsafe guidelines were developed to empower #youngpeople with the skills and knowledge they need to keep themselves safe online when dealing with suicide-related content.”

Privacy and Online Scams Conversations on privacy revolved around issues such as protection against identity theft online, the rising number of online scams, and taking control of one's online reputation. Several tweets cautioned against sharing too much information online. As one tweet noted, "The internet is written in PEN not PENCIL think before you post." Privacy tips were focused on parents and educators. Many tweets also suggested ways in which ordinary internet users could protect themselves from fraud.

Role of Parents and Platforms Many tweets focused on the roles and responsibilities of different stakeholders of online safety, including parents of young children as well as the platforms themselves. Users, for instance, discussed the role that parents might play to increase awareness and sensitize young people about cyberbullying. Parents were also urged to closely monitor children’s online activities. For example, one tweet said “#SaferInternetDay today - a good reminder to find out what your kids are currently interested in online.” Importantly, tweets recognized that not just children but adults were also at risk online and could face various forms of abuse. There was emphasis on intergenerational learning as many tweets mentioned how parents could learn more about online safety by paying attention to the experiences of their children. Tweets also drew attention to the role of social media platforms in addressing online safety, especially on issues like countering violence.

Digital Activism and Anti-Bullying Movement Another “solution” to online abuse was to mobilize ordinary users of social media platforms to act against abuse, especially bystanders who might not face abuse themselves but witnessed it being meted out to others. As one tweet mentioned, “Do not keep quiet when you notice anyone being mean to another.” Many tweets promoted the “anti-bullying movement,” an online campaign in the form of short films created by victims of bullying. Several tweets mentioned the need for connecting and sharing respectfully online. Users noted the importance of the internet as a public space and highlighted attempts to bring together civil society groups, corporates, digital rights activists, and educational institutions to create more awareness on digital safety.

Conclusion Our study illustrates how users posting about online risks and internet safety constitute a networked and affective public, brought together by sociotechnical affordances as well as expressions of diverse sentiments. Mounting concern about issues ranging from cyberbullying to privacy scams motivated them to express themselves online. The practice was encouraged by social campaigns such as Safer Internet Day and technologically enabled by its hashtags, which allowed such users to connect with each other. These connections, in turn, led to the emergence of a networked public — an “imagined collective” (boyd, 2011) whose members cared about each other’s well-being and mobilized each other for collective action.

Papacharissi talks about “collaborative storytelling structures that discursively call affective publics into being” (2016, p. 316). We found such structures in the tweets we analyzed: personal stories of experience of abuse were abundant, but so was a recognition that such stories could help connect with other victims and encourage them to share their own stories in an environment of care. But stories don’t simply relate a problem; they also lead into attempts at their resolution. As our analysis reveals, expressions of concern about online risks and harms on Twitter were frequently accompanied by discussions of how to deal with them — and the different responsibilities of multiple actors, including platforms, parents, and bystanders, in this regard.

Greater awareness of online risks can undermine trust in digital society. But digital society, as a macrocosm of networked publics, is not just about digital technologies — it also comprises users and their practices of connectivity. Our study indicates that many users cognizant of the risks that digital media expose them to also use digital platforms to share their stories, build communities, act themselves, and mobilize others to mitigate these risks in order to make the internet a safer place to be.

References boyd, d. (2011). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self, pp. 39-58. New York: Routledge. Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608. Pabian, S., & Vandebosch, H. (2016). Short-term longitudinal relationships between adolescents’(cyber) bullying perpetration and bonding to school and teachers. International Journal of Behavioral Development, 40(2), 162-172. Papacharissi, Z. (2016). Affective publics and structures of storytelling: Sentiment, events and mediality. Information, Communication & Society, 19(3), 307-324. Rothman, D. (2021). Transformers for Natural Language Processing: Build innovative deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and more. Packt Publishing. Safer Internet Day. (n.d.). About Safer Internet Day. https://www.saferinternetday.org/about Shahin, S. (2023). Affective polarization of a protest and a counterprotest: Million MAGA March v. Million Moron March. American Behavioral Scientist, 67(6), 735-756.

12:10
Putting trust to the test: making sense of human-machine interactions on TikTok

ABSTRACT. People’s interaction with online content is increasingly facilitated by intelligent user interfaces and artificial agents. In this paper, I explore this shift by drawing on ethnographic fieldwork with users of the TikTok app. More specifically, I write about their interactions with the TikTok algorithm as a form of human-machine interaction and through the lens of trust. Alongside concrete ethnographic data, the paper lays out the multifaceted process in which participants negotiated trust in the TikTok algorithm as an interaction partner in their everyday pursuits of relaxation and entertainment. Understanding trust as something deeply relational, mediating the position that one takes towards another, the paper outlines the constitutive embodied and affective dimensions of trust. I show how participants dealt with feelings of their trust in the TikTok algorithm being put to the test, as well as how they negotiated their distance from and closeness to it accordingly. By doing so, the paper demonstrates how trust functions as a key mediator of meaningful human-machine interaction - shaping not just meaningful outcomes but also meaningful processes of interaction. From this angle, the paper closes with an argument for research on the foundational role of trust in human-machine interaction, specifically in ways that look beyond the cognitive processes of judging trust and broaden the scope towards the material and cultural contexts in which people trust others.

14:00-15:30 Session 3A: Trust Dynamics around Emerging Technologies: Trust by technology?

This track examines how and whether various technologies - from privacy tools to digital signatures - can build or restore trust in digital systems. The track takes a critical perspective, analyzing whether such technologies genuinely foster trust or merely project expectations.

Location: Grand Space
14:00
‘Must Fix Trust’: Privacy-enhancing technologies as reductive tool
PRESENTER: Tom Barbereau

ABSTRACT. Privacy-enhancing and other related digital technologies are marketed as increasing trust within the digital society. They are deployed as means to assert trust in digital transactions and interactions between actors. What this commentary argues is that digital trust is thereby reduced to the product of a technological iteration, insertion, or fix: more encryption ≈ more privacy ≈ more trust. However righteous it may be to foster privacy, the promise to uphold something as uncertain as trust by the use of mathematics and/or statistics alone is short-sighted. Lofty notions like 'data minimisation' and 'privacy-by-design' rest on deterministic assumptions. We suggest that in reductively appropriating the concept of trust and failing to meet expectations, the consequences of the technification of trust -- i.e., the making of trust a product of technê (alone) -- are paradoxical in that they actually undermine trust, by centralising power within tech companies.

14:20
Digital mediation of interpersonal trust beyond the interface: algorithms and data flows in online platforms

ABSTRACT. Introduction The present study shows the results of a methodological approach to study the digital mediation of interpersonal trust online by analyzing legal documents related to the use of algorithms in the peer-to-peer accommodation platform Airbnb. The study contributes to the understanding of digital mediation beyond the user interface by combining the analysis of publicly available software patents owned by the company with that of privacy policies, terms and conditions and community standards on the platform. Existing studies of technical mediation of trust on Airbnb focus on review mechanisms, verification systems, photos and profiles, “badges” as well as community guidelines and policies of the platform (Ert & Fleischer, 2019; Mao et al., 2020; Pumputis, 2023; Pumputis & Mieli, 2024). However, the company also uses automated systems and is working towards integrating the use of AI throughout the platform (Bloomberg Live, 2023; LinkedIn, 2024). Although they have since removed any mention of machine learning from the official communication material on the platform, in the past Airbnb admitted to using machine learning and predictive algorithms to mediate trust among users, particularly by assigning a secret “trustworthiness” score to each user (EPIC, 2020). Online platforms can successfully mediate communication between people who already know each other, fostering trust relations that can develop over time and that can exist both on- and off-line. The trust relation considered in this paper, however, is not a relation between people who already know each other, it is not disconnected from the transactional relation (in the case of Airbnb accommodation), and it does not evolve over time. Pedersen (2015 p.106) offers the concepts of prima facie trust and distrust as generic terms that cover “the immediate position of either trust or distrust that an individual agent expresses in the actual meeting with others”. Such reactions take place “without further deliberation and consideration as to whether the other is truly trustworthy or untrustworthy” (Pedersen, 2015 p.106). While the knowledge of a person that forms the basis for trust is both epistemic and affective (Potter, 2020), the digital system cannot provide a valid basis for affective knowledge but can apparently offer a high level of epistemic knowledge. Pumputis (2023), following Ert and Fleischer (2019), observes how on Airbnb the layout and structure of the platform offer initial information that is perceived as objective, forming an initial basis for perceived trustworthiness. Such perceived trustworthiness is mainly based on epistemic knowledge, but it is derived from an illusory objectivity offered by the platform. However, in the present paper I argue that what the digital system can provide is a basis to calculate risks, rather than to assess trustworthiness or establish trust.

Studying algorithms The study of algorithms is a relatively new field, but whether we can trust the conclusions made by software has been on the agenda of computer ethicists since the 1980s. The question then was "how much we should trust a computer's invisible calculations", given that such "calculations are too enormous for humans to examine" (Moor, 1985 p. 275). The question considered here is not only whether we should trust such calculations, but also how to study them when they are kept intentionally opaque in order to capitalize on the competitive advantage they offer. The methodological approach proposed here allows researchers to shed light on the software capabilities, the logic of these processes, and the vast array of data that can be processed through these algorithms. Kitchin (2017) identifies two key translation challenges when producing algorithms. First, a task or problem needs to be translated into a structured formula with an appropriate set of rules (pseudo-code). Then, this must be translated into source code that when compiled will perform the task or solve the problem. While algorithmic auditing is the main method available to study how algorithms are used, and perhaps the only method currently available to assess exactly how they are implemented (Goodman & Trehu, 2022; O'Neil, Sargeant & Appel, 2024), looking at the algorithm itself and the dataset it processes is not the only way to gain knowledge about them. Considering that auditing is unfortunately an unachievable method for most daily operations that take place across the globe through technological systems, other methods are necessary. Therefore, I propose to look at the information publicly available about algorithms and the data they process. This includes software patents, combined with policies, terms and conditions of use, community standards, press releases, as well as other relevant material such as interviews released by spokespeople. Algorithmic auditing requires access to the source code and allows us to understand the second translation challenge identified by Kitchin (2017), whereas patents and policies provide sufficient information to analyse the first translation challenge, that is, how a task or a problem is translated into a set of criteria and rules.

Research design The study employs qualitative content analysis. The analysis focuses mainly on two patents owned by Airbnb for trust-related algorithms. The following patent databases were searched: Google Patents Search, Scopus, USPTO (United States Patent and Trademark Office), WIPO (World Intellectual Property Organization). Search criteria were: "Airbnb"; "Assignee: Airbnb"; "Airbnb AND trust". Only patents assigned to or bought by Airbnb Inc. and related to trust or trustworthiness were selected for analysis. Two patents were selected:

1. Identity and Trustworthiness Verification Using Online and Offline Components (2020), Patent No.: US 10,805,315 B2
2. Determining Trustworthiness and Compatibility of a Person (2021), Patent No.: US 10,936,959 B2

Since owning a patent does not mean that the platform uses the algorithm, nor does it show how it is used, other data was collected to get insights into how the company may be using or intending to use the algorithms. Additional data includes the company's press and communication: press releases, talks and interviews with the company's CEO and founders, the platform's policies, terms and conditions, community standards, and other documentation offered on the website (e.g. "how-to" pages). Moreover, the content of the privacy policy and other documents was analysed to find out what personal data the company has access to according to their legal terms. The documents were coded and analysed manually with an inductive approach. The analysis departed from the question of how the task of establishing trust is translated into a set of rules and criteria (Kitchin, 2017).

Preliminary findings The accommodation platform Airbnb is used as a paradigmatic case of digital mediation of trust between strangers. Commentators have claimed that trust is the main currency of the platform (Botsman, 2012) and that without it there would be no transactions among users, thus no business. The company claims to be "built on trust" and "fuelled by trust" (Airbnb, 2017, 2019). Therefore the problem that Airbnb has set for itself is to ensure that users trust each other, and since this is a digital platform, that problem needs to be translated into code. In the software patents analysed here, the problem is translated into the following objectives: to verify "trustworthiness of a user of an online system" (Patent 1) and to obtain "a trustworthiness score of the person" (Patent 2). The term trustworthiness is used throughout the documents but it is not defined. Patent 2 offers an operative definition of the trustworthiness score as a predictor of a user being a "positive actor" in the environment: "the trustworthiness score can be based on personality and behavior traits that predict the likelihood of the person being a positive actor in an online or offline person-to-person interaction." The algorithms analysed in the present study insert themselves into the relationship between users by anticipating the need for epistemic knowledge in order to establish prima facie trust or distrust. They do so by providing a score of trustworthiness. However, such a score is not even visible to the end users but directly determines who will interact with whom and on what terms. Therefore, while the first time users interact they might perceive an apparent prima facie trust or distrust towards each other, it is in fact already a mediated impression. By the time users interact (or don't), the technical systems analysed here have already processed a large amount of information about each of the users and established whether they are worthy of trust by calculating the risk of them being a "bad actor" (Patent 2). Moreover, although it is difficult to know exactly how the system operates, we know that in a broad sense it determines which users will interact with each other, as the objective is to establish "compatibility" of users (Patent 2) or to exclude "untrustworthy" users from the platform (Patent 1). Therefore, one of the ways in which the problem of trust between strangers has been solved through code is to not make strangers meet at all. Additionally, those who do meet are indeed strangers to each other, but they are not strangers to the digital system, which has already collected and processed any information available online about them. Therefore, the problem of prima facie trust among strangers is solved by eliminating the need for prima facie trust or distrust at all and instead translating the task into a calculation of risk.

References Airbnb. (2017, December 1). Perfect strangers: How Airbnb is building trust between hosts and guests. Airbnb Newsroom. https://news.airbnb.com/perfect-strangers-how-airbnb-is-building-trust-between-hosts-and-guests/ Airbnb. (2019, November 6). In the business of trust. https://news.airbnb.com/in-the-business-of-trust/ Bloomberg Live. (2023). Airbnb CEO Chesky on leveraging AI [Video]. YouTube. https://www.youtube.com/watch?v=Cgt7we3vIH4 Botsman, R. (2012, June). The currency of the new economy is trust [Video]. TED. https://www.ted.com/talks/rachel_botsman_the_currency_of_the_new_economy_is_trust Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., de Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896. EPIC (Electronic Privacy Information Center). (2020, February 26). EPIC complaint FTC in re Airbnb: Complaint and request for investigation, injunction, and other relief [PDF]. https://epic.org/wp-content/uploads/privacy/ftc/airbnb/EPIC_FTC_Airbnb_Complaint_Feb2020.pdf Ert, E., & Fleischer, A. (2019). The evolution of trust in Airbnb: A case of home rental. Annals of Tourism Research, 75, 279-287. Goodman, E. P., & Trehu, J. (2022). Algorithmic auditing: Chasing AI accountability. Santa Clara High Tech. LJ, 39, 289. Keymolen, E. (2016). Trust on the line: A philosophical exploration of trust in the networked era. Wolf Legal Publishers, Nijmegen. Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. Liang, L. J., Choi, H. C., & Joppe, M. (2018). Exploring the relationship between satisfaction, trust and switching intention, repurchase intention in the context of Airbnb. International Journal of Hospitality Management, 69, 41-48. LinkedIn. (2024). Staff machine learning engineer, trust screenings [Job posting]. LinkedIn. https://www.linkedin.com/jobs/view/staff-machine-learning-engineer-trust-screenings-at-airbnb-3914059759/ Mao, Z. E., Jones, M. F., Li, M., Wei, W., & Lyu, J. (2020). Sleeping in a stranger's home: A trust formation model for Airbnb. Journal of Hospitality and Tourism Management, 42, 67-76. Moor, J. H. (2020). What is computer ethics? In The ethics of information technologies (pp. 15-24). Routledge. O'Neil, C., Sargeant, H., & Appel, J. (2024). Explainable fairness in regulatory algorithmic auditing. West Virginia Law Review, 127(1), 79. Pedersen, E. O., & Liisberg, S. (2015). Trust and Hope: An Introduction. In Anthropology and Philosophy: Dialogues on Trust and Hope (pp. 1-20). Berghahn Books. Potter, N. N. (2020). Interpersonal trust. In The Routledge Handbook of Trust and Philosophy (pp. 243-255). Routledge. Pumputis, A. (2023). Complexities of trust building through sociomaterial arrangements of peer-to-peer platforms. Current Issues in Tourism, 27(11), 1800-1813. Pumputis, A., & Mieli, M. (2024). From trust to trustworthiness: Formalising consumer behaviour with discourse on Airbnb platform. In Consumer Behaviour in Hospitality and Tourism (pp. 83-102). Routledge.

14:35
Image Authenticity in the Age of AI: Digital Signatures as a Defense Against Visual Disinformation

ABSTRACT. With the rise of generative AI technology, it becomes increasingly difficult to accurately identify real and fake images in the online news environment, which leads to a threat to democracy and skepticism about the authenticity of (real) images. By authenticity, we mean (online) content that remains fundamentally unaltered and whose provenance can be traced. We emphasize that content being authentic should not be confused with content being “truthful”, which is an orthogonal property. Accordingly, our approach is different from the one employed by fact-checkers, who assess a claim’s veracity. In this conceptual and solution-driven paper, we offer the conceptual pillars for a novel approach to the well-known labeling systems so far used in mis- and disinformation mitigation. Our contribution, thus, lies in offering digital signatures to verify the authenticity of images. In doing so, we combine the knowledge and expertise gathered in media and journalism studies and cryptography.
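As a rough illustration of the cryptographic building block involved (not the authors' proposed system or any specific provenance standard), the sketch below uses the Python cryptography library's Ed25519 primitives: a publisher signs a digest of the image bytes, and a verifier holding the publisher's public key can later confirm that the bytes are unaltered. The image bytes shown are placeholders.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held by the publisher (or camera)
public_key = private_key.public_key()        # distributed to verifiers

image_bytes = b"...raw image file contents..."          # placeholder bytes
digest = hashlib.sha256(image_bytes).digest()
signature = private_key.sign(digest)                    # attached as metadata

# Verification on the reader's side: any alteration of the bytes changes the
# digest, and the check raises InvalidSignature.
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("authentic: bytes unchanged since signing")
except InvalidSignature:
    print("altered or not signed by this key")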

14:55
Towards collective trust: “commons-centric” AI and security design for democratic participation

ABSTRACT. How does the introduction of intelligent computing affect the nature of collaboration, empathy, and solidarity in democratic practice? Specifically, how can AI change, obstruct, or support “collective trust” in communal decision making? While much hype circulates regarding the promise of AI to facilitate more efficient governance and “collective intelligence” at a global scale, democratic societies are simultaneously experiencing crises of trust and alienation, in part due to the growing presence of the “artificial” in shaping our realities (deepfakes, ChatGPT, etc.) (Pierson, Kerr, et al, 2023; Luka and Hutchinson, 2020) and the role of opaque Big Tech companies in mediating our access to communication and information/disinformation. This explains the prevalent use of trust, and trustworthiness, in almost all Western -- and not only Western (Beijing Academy of Artificial Intelligence, 2019) -- official discourses and policies concerning Artificial Intelligence (AI). US state-funded research looks to “foster trust in AI in the general public,” (National Institute of Standards and Technology (NIST), 2021) while the European Commission on AI (2020) calls for algorithms which are considered trustworthy “from the perspective of society as a whole.”

Policy and academic discourse has turned its attention to “public trust” in AI, yet this has mainly taken the form of shorter-term marketing research on “trustable AI,” which largely focuses on explaining AI in order to encourage adoption of the technology by individual consumers (NIST, 2021), and on increasing public acceptance that AI will be used “on them” in the realm of policing (Knowles and Richards, 2021). More recent regulations and statements also recognize the importance of protecting public privacy, the right to association, and workers' data-gathering in the face of AI (European Parliament and the Council of the European Union, 2021/2024; Élysée.fr, 2025). However, the power of AI to potentially aid in, fundamentally transform, or obstruct collective self-governance demands research that articulates how AI can change the very nature of trust and collaboration within democratic societies, wherein the user is the collective body.

New and re-emerging experiments with direct democracy (such as L'assemblée citoyenne de Paris, municipal digital democracy projects in Madrid and Barcelona, and networks of self-governing collectives) – often developing in tandem with global networked communications and intelligent computing (Preville, 2019) – require practices of decision making, sharing of authority, and security/access control that can help to conceptualize collective trust in design, and that can perhaps be further supported or developed through AI tools. Challenges arise for such groups as they seek to scale up across distances, work remotely, or automate decisions, requiring the use of algorithmic tools for a form of deliberative and collective decision-making that generally does not rely on binary or predictive logic (McKinney, 2024; Alnemr, 2020). This often leads to serious rifts within these groups as they attempt to comport themselves around digital tools, disrupting their dynamics of trust and processes of governance. As similar self-governance projects continue to advance, globalize, and digitize, these problems will only become more acute and a greater threat to progressive democratic practice. This research therefore seeks to articulate a framework for “collective trust” values in intelligent computing designs. This paper presents my early research into this topic by bringing together two bodies of literature: 1) existing theories of trust, commons-management, and ethics of care that can help us to conceptualize “collective trust” as it might apply to emerging technology like AI; and 2) a state-of-the-industry review of the design of trust and trustworthiness in digital security/access control tools and AI. Taken together, this allows me to outline a need for future research leading to the design of algorithmic tools and intelligent computing specifically to support “collective trust.”

Collective Trust and Collective Governance The first part of this paper outlines various theories of trust, particularly those that dominate discourse around tech development. Trust is typically discussed in terms of individual one-to-one relationships (marriage, friendship, caregiver, etc.). “Public trust” usually concerns the outsourcing of decision-making and responsibilities to institutions and representatives by individuals within a society. Research on “Trust and Computing” thus far tends to conceptualize the trust relationship as a dyadic one between an individual human user and technology (Simpson, 2014) or, importantly, discusses the need for regulation to help bolster public trust in technologically mediated public services (Bodó and Janssen, 2022). Many recent attempts to measure “trust” in algorithms draw on traditions of analytical philosophy and complexity theory which translate easily into mathematical models. Such theories see trust as a “mechanism for reducing complexity” (Luhmann, 1979), drawing on views of human society grounded in evolutionary psychology and game theory, which understand human behaviors as fundamentally driven by the mandate to survive most efficiently (Stanton and Jensen, 2021). Such models therefore seek to evaluate not just the reliability and security of an AI system, but the user's perception of that AI's trustworthiness, through quantifiable aspects of both qualities. Transparency is less important to achieve than predictability, thereby increasing user efficiency by taking less mental bandwidth. This model is more common in Anglophone thinking and fits more easily with existing tools modeled on the individual consumer, wherein the subject of the trust is the individual who is struggling to survive, reduce mental bandwidth, and meet her needs in a competitive environment. These ideas of trust in a technology do not address trust via a technology -- how the technology affects the nature of trust within a group, or could facilitate or damage the group's capacity for collectivity, collaboration, or solidarity.

“Collective trust” is ontologically different from private trust (individual-to-individual) and institutional trust. The term mainly arises in discussions of organizational trust, which argue that collective trust is necessary for collective action and is most possible when individuals identify strongly with the organization or group (Kramer et al., 1996). The question of how to conceptualize collective trust beyond the bounds of shared identity remains to be addressed.

Going beyond literature on trust, political theories of the General Will (Rousseau, 1762/1997), and the Multitude or swarm (Hardt and Negri, 2004), articulate an experience of a collective that “feels itself” as a united entity. More recent work in Feminist Ethics of Care echoes this idea that the human condition is inherently one of interdependence and vulnerability (Gilligan, 2011) and that care is not just one-to-one, but also collective (Tronto, 2005) and can be directed not only towards another individual, but towards common goods (such as natural resources, or the upholding of human rights). Such concepts are difficult to quantify but useful for discussion of public and collective (artificial) intelligence as it might govern a common good (Ostrom, 1990).

Digital Trust The second half of this paper outlines the political and social values and assumptions built into existing examples of “digital trust.” I focus here 1) on algorithms for digital security and access control (which are articulations of trust) and 2) on the interplay between “trust” and AI. Very few AI designs exist with the purpose of building or facilitating trust, but there is a proliferation of technical projects addressing the “trustability” of AI.

Digital security designs focus mainly on verifying the identity of an individual user (who may have been granted certain privileged access to information or authorship based on some kind of organizational trust in this individual.) This model is problematic for organizational structures that function more horizontally or in a decentralized fashion. Beyond these more dominant models, I analyze recent and emerging decentralized designs. These fall mainly into two categories: 1) those that advocate “trustlessness,” such as Zero Trust Network Architecture or Distributed Hash Tables (Blockchain, etc.) and 2) those that confer trust collectively, such as Consensus Algorithms, Secret Sharing, Multi-Party Access Control, and “Web-of-Trust” models.

The first category of technologies likely works against the goal of “collective trust,” furthering atomization and individuation by eradicating the need for trust (Bodó and Weigl, 2024). Blockchain technologies in particular are built on the delegation of verification and trust to an automated, mathematical process, rather than to a collective. This eradication of the need for community trust can work against the fostering of a collective identity or the implementation of collective values. Additionally, blockchain's mathematical proofs for validation (“proof of work” or “proof of stake” protocols) are economically undemocratic since they rely on the financial and material support of resources (amount of computing power, energy, and sometimes, amount of currency already invested in the system) as moderators for influence. Still further research is required in these areas to consider whether more sustainable, equitable, and collectively-minded forms of distributed verification are possible.

The second category of technologies provides more interesting models of conferring trust or unlocking access collectively, thereby solving some of the problems of bottlenecking of power that are contrary to more distributed or horizontal governance models. By breaking up a password across multiple users, these methods of access bear a close resemblance to directly democratic consensus models of decision making, wherein one (or a significant minority) “block” can stop a resolution from passing. These are interesting models but have not yet been implemented in accessible formats for civilian actors, nor tested in the field with democratic groups.
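To make the idea of collectively conferred access concrete, here is a minimal, self-contained sketch of Shamir secret sharing in Python (toy parameters, not a production implementation and not tied to any particular tool named above): a secret is split into n shares such that any k of them reconstruct it, so no single member can unlock access alone.

import random

PRIME = 2**127 - 1  # a prime large enough for a short integer secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field (Python 3.8+ for pow(-1))."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    secret = 123456789
    shares = make_shares(secret, k=3, n=5)
    assert reconstruct(shares[:3]) == secret   # any 3 of the 5 shares recover the secret
    assert reconstruct(shares[1:4]) == secret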

Most work on developing trust in AI has focused on “explainable” AI (XAI), which has been offered as a (military-led) solution to user “mistrust” in machine learning. In such cases, trust is built when the user believes that they understand why the machine makes certain decisions, how to correct it, and when it “fails” (Gunning, 2016). Trust, in this case, requires the impression of transparency (honesty), understanding, and some user agency to effect change in the decision-making process. XAI presents users with a human-understandable model that describes the AI.

“Fairness” in Machine Learning (ML) (especially important for decisions regarding criminal sentencing and parole, credit scoring, insurance risk, etc.) is something that users then trust in when the AI makes decisions that align with the user’s conception of “fairness” based on their understanding of its decision process. “Fairwashing” (Aïvodji et al, 2019) refers to the deliberate design of explainable AI models that present the ML decision-making process as “fair” in order to cover up unfair algorithmic processes or biased training data (for example, to benefit insurance companies that wish to justify charging higher premiums.) Fairwashing detection is technically difficult due to the “black boxed” nature of many ML algorithms, as well as the privacy restrictions on training data. Emerging research exists that detects “fairwashing” by comparing the outcomes of “fairwashed” XAI models and blackboxed models, by measuring statistical distance between the outcomes for “normal” and “sensitive” (discriminated-against) subpopulations. (Shamsabadi et al, 2022).
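As a toy illustration of comparing outcomes across subpopulations (a demographic-parity-style gap, not the Shamsabadi et al. detection method itself), the following sketch computes the difference in positive-outcome rates between a sensitive group and the rest; the data are invented.

import numpy as np

def outcome_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between the two groups."""
    rate_sensitive = y_pred[sensitive == 1].mean()
    rate_rest = y_pred[sensitive == 0].mean()
    return abs(rate_sensitive - rate_rest)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # model decisions (toy data)
sensitive = np.array([1, 1, 1, 1, 0, 0, 0, 0])    # hypothetical group membership
print(outcome_gap(y_pred, sensitive))             # 0.5 for this toy example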

Finally, deep evidential regression is a small but emerging approach to digital trust in machine learning in which the machine learns to “mistrust” itself. That is, the AI not only learns its task (for example, in the case of self-driving cars, identifying a body in a crosswalk); it also learns under what conditions its predictions are likely to be incorrect (Ackerman, 2020). This provides some hope that recent research articulating the presence of bias in training data (Sorbonne Université, 2024) could lead to efficient corrections in the AI.
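
The full deep evidential regression formulation predicts the parameters of an evidential distribution; as a simplified stand-in for the core idea that a network also learns when it is likely to be wrong, the sketch below trains a two-headed regressor that outputs a mean and a per-input variance with a Gaussian negative log-likelihood. All names and the toy dataset are assumptions for illustration, not the published method.

```python
import torch
import torch.nn as nn

class MeanVarianceNet(nn.Module):
    """Predicts a point estimate and a per-input variance (self-assessed uncertainty)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        mean, log_var = self.body(x).chunk(2, dim=-1)
        return mean, log_var.exp()  # variance must stay positive

# Noisy toy data: noise grows with |x|, so the model should learn to
# distrust its own predictions away from the origin.
x = torch.linspace(-3, 3, 1024).unsqueeze(-1)
y = x.sin() + torch.randn_like(x) * 0.1 * x.abs()

model = MeanVarianceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
nll = nn.GaussianNLLLoss()

for _ in range(500):
    optimizer.zero_grad()
    mean, var = model(x)
    loss = nll(mean, y, var)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    _, var = model(x)
    print("learned uncertainty near 0:", var[x.abs() < 0.5].mean().item())
    print("learned uncertainty near +/-3:", var[x.abs() > 2.5].mean().item())
```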

The problem of “trust” in AI and security design has arisen mainly as a question of accuracy -- or simply perceived accuracy -- in prediction (for AI), or authority of individual users (in security). Work exists at the security layer to share this authority across users in a manner that better supports a collective governance model. Much more research is needed to translate practices of collective trust into the design of AI.

15:15
Building Trust in Fair AI: Trusted Third-Party Computation for Measuring Discriminatory Impacts

ABSTRACT. [Short abstract] The increasing role of AI systems in various domains of society makes it essential to ensure that these systems are trustworthy and are developed and deployed fairly. This is particularly significant in high-stakes areas like recruitment and education, where AI-driven decisions can shape individuals’ opportunities and, if biased, deepen existing inequalities [1, 2, 3, 4, 5]. Research has well documented cases of AI systems exhibiting discriminatory behavior based on characteristics protected under EU non-discrimination law, such as gender, age, or ethnicity [6, 7, 8, 9]. Discriminatory biases can be found across nearly all AI applications [10]. For example, recent cases have shown that AI-driven recruitment tools, including chatbots, video interviews, and software scanning CVs and social media profiles, can replicate and reinforce existing discriminatory entry barriers to the labor market [11, 12].

In response to the growing importance of AI on the European market, the European Union introduced the AI Act (AIA) in July 2024, which establishes harmonized rules for AI [13]. The AI Act primarily regulates high-risk systems that may impact fundamental rights, including non-discrimination, and sets out several mandatory requirements for developers and deployers. However, achieving fair and non-discriminatory AI systems requires not only regulation; it also demands practical technical solutions to assess and mitigate discriminatory impacts in high-risk AI systems. These solutions must be built on a foundation of trust between the provider, the deployer, and the individuals affected by AI systems, especially when they involve processing sensitive personal data to prevent discriminatory impacts.

The extended abstract examines the challenges of monitoring high-risk AI systems for discriminatory impacts. Assessing such impacts often requires processing sensitive personal data, such as ethnicity or sexual orientation, which raises legal concerns under the General Data Protection Regulation (GDPR). The paper explores the limitations imposed by the GDPR and the AI Act on using sensitive data for fairness assessments, particularly in contexts where power imbalances exist, for example in recruitment or education. As a potential solution, the authors analyze a trusted third-party computation technique that enables the processing of sensitive data in compliance with the GDPR. By using this secure multi-party computation technique, organizations can measure fairness in AI outcomes without directly accessing individuals’ sensitive characteristics. However, implementing this solution presents several challenges, including selecting appropriate trusted third parties, determining monitoring frequency, and addressing concerns about underrepresented groups’ willingness to disclose sensitive information. Ongoing research seeks to address these challenges, strengthen trust in fair AI, and anticipate further regulatory steps.
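
To clarify the division of roles described above, here is a minimal illustrative sketch (not the authors’ protocol, and without the cryptographic machinery of real secure multi-party computation): the deployer sees only model outcomes, a trusted third party holds only the sensitive attribute, and the fairness statistic is computed by joining the two on pseudonymous IDs, so the deployer never handles the sensitive data. All names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrustedThirdParty:
    """Holds the sensitive attribute (e.g., ethnicity) keyed by pseudonymous ID."""
    sensitive: dict  # id -> group membership (1 = protected group, 0 = otherwise)

    def selection_rate_gap(self, outcomes: dict) -> float:
        """Receives only (id, hired-or-not) pairs from the deployer and returns an
        aggregate fairness statistic -- never the per-person sensitive data."""
        g1 = [o for i, o in outcomes.items() if self.sensitive.get(i) == 1]
        g0 = [o for i, o in outcomes.items() if self.sensitive.get(i) == 0]
        return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

# Deployer side: AI screening outcomes, with no sensitive attributes attached.
outcomes = {"a1": 1, "a2": 0, "a3": 1, "a4": 0, "a5": 1, "a6": 0}
# Third-party side: sensitive attributes collected with consent and never shared.
ttp = TrustedThirdParty(sensitive={"a1": 0, "a2": 1, "a3": 0, "a4": 1, "a5": 0, "a6": 1})

print(f"selection-rate gap: {ttp.selection_rate_gap(outcomes):.2f}")
```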

[Extended abstract] in pdf

14:00-15:30 Session 3B: Safeguards session II
Location: Red Space
14:00
Trust or Distrust in the Gaming Economy - Shared Practices and Missing Standards

ABSTRACT. Everyday socialities increasingly play out within and through the worlds of digital games. However, gaming is still often studied as a specialised field outside the “mainstream media landscape.” Moreover, there exists limited research on the privacy implications of gaming. In this paper, we discuss the question of trust in gaming platforms understood as a facet of today’s social media landscape. More specifically, we focus on two data-driven technologies of gaming platforms and interrogate their role in shaping and eroding trust in those platforms. The first is matchmaking, understood as a form of personalization in which gamers are sorted and grouped for fair play. The second is anti-cheat and anti-toxicity technology, forms of moderation that govern gamers’ interactions with games and with each other. Both types of technology establish trust in platforms as sites of playful interaction. Yet their reliance on datafication systems creates a potential friction in the trust relation that users can have with a given platform. Our argumentation on this duality is based on two things: first, an analysis of the data practices of 37 multiplayer online games (e.g., League of Legends), gaming distribution platforms (e.g., Steam), and online community sites (e.g., Discord); and second, a discussion of specific case studies and controversies that have surrounded some of the studied platforms, such as Riot Games’ launch of the Vanguard Anti-Cheat system. We will situate this discussion within broader debates about digital media governance, arguing that gaming provides a critical lens for understanding how trust is negotiated in data-intensive digital environments.

14:20
Endorsements as a trust mechanism in the context of AI

ABSTRACT. [The PDF has the same text but looks better]

For a good two thousand years, people have known that if they see a written text, they can trust that there is a human behind it as its author. For a good century, we have known that there are devices, like microphones and cameras with associated recorders, that can capture and reflect reality and present us with a reliable reproduction of it. We are used to thinking that what people say in such recordings can really be attributed to them. We are aware of movies and recorded speeches containing fabrications, for instance as parodies, but we are not yet used to a world in which images, texts, video, and audio show up that look genuine but are fabricated.

Therefore, AI-generated content can have destabilising effects on society. We generally form our judgements and decide our actions based on indirect reproductions of reality. Now that these reproductions can easily be manipulated or fabricated, trust in a broad sense could evaporate. We badly need new trust anchors, especially in the digital world.

The AI Act, which entered into force in 2024, is an ambitious effort to introduce a regulatory framework for AI in the European Union. The EU lawmakers note that AI systems that generate synthetic content ‘have a significant impact on the integrity and trust in the information ecosystem, raising new risks of misinformation and manipulation at scale’ (AI Act, recital 33). In reaction to such risks, the AI Act requires, roughly summarized, that AI providers mark AI-generated synthetic output (Article 50(2)).

Hence, the AI Act requires that AI-generated material contain a digital watermark: an embedded pattern that is spread throughout the material in a practically undetectable manner and that cannot be removed without seriously distorting the material itself. The Act’s approach makes sense intuitively.

However, the AI Act’s approach has drawbacks. First, watermarking techniques exist for video, images, and audio, but not really for text. Second, if a bad actor wants to fool people into believing that synthetic content is real, it will not mark its synthetic content. Suppose that a hostile nation intends to interfere with the elections of another country. The hostile nation can spread fake videos of a candidate who says racist things. This nation will not mark its deep fake as synthetic, for obvious reasons.

Third, another drawback of the AI Act’s approach is that it will be increasingly difficult to maintain a yes/no separation between what is AI-generated and what is not. Tech companies are embedding AI into almost all (new) functionality that they offer. This may lead to regulatory challenges. In many cases, it is not practical for regulation to distinguish AI-based and non-AI-based practices. Lastly, watermarking is hardly deployed in practice (Zhang et al., 2023).

This paper proposes an alternative approach to the transparency requirement. The AI Act focuses on making artificially generated content recognisable. Our focus is on making clear who endorses what. The ‘who’ may be an individual, a group of individuals, an organisation, or an institution. The endorsement can happen technically via digital signatures. Such a signature connects the content (the ‘what’) that has been signed to the signer (the ‘who’). Credibility of individuals and organisations (that sign) becomes a key factor.
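
A minimal sketch of the signing-and-verification flow described here, using Ed25519 from the widely used third-party Python cryptography package; the content bytes and the workflow are illustrative, not a proposed standard.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The endorser (an individual, institution, or media organisation) holds the
# private key; the public key is published so anyone can check endorsements.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"frame data of the documentary"  # stands in for the video bytes
signature = private_key.sign(content)       # the endorsement of exactly this content

# Anyone holding the public key can verify who endorsed this exact content.
try:
    public_key.verify(signature, content)
    print("endorsement valid: the content is exactly what the signer endorsed")
except InvalidSignature:
    print("no valid endorsement: content altered or never signed by this key")
```

Verification fails on any altered copy, which is what lets a signer credibly say “if it does not carry my signature, it is not mine.”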

We shall use endorsement as a general term, with its common meaning, generally without legal effect. Signatures, however, can have a legal role in various contexts, since they can bind legal entities such as individuals and organisations. For instance, when people use an AI tool to generate a contract and then all of them put their signature on it, they legally endorse its content. Is it still relevant to make explicit that the text is AI-generated? Hardly. What is relevant is the endorsement: the human signatures create legal effect and establish commitments and responsibilities, as described in the contract.

An investigative journalist who releases a video documentary on some (controversial) topic with his or her own (digital) signature on the video creates an endorsement. When the signature checks out, one can be sure that the documentary really comes from this journalist. This may be reason for some people to have confidence in what is shown. If the video contains malicious misrepresentations, the endorsement also enables others to hold the journalist accountable, possibly even in court. After all, the journalist cannot deny that he or she is behind the video.

Systematic signing of messages can have a stabilising effect on society. Suppose that a deepfake video appears online in which a famous politician makes negative remarks about a certain segment of the population (say Muslims or Jews). This is a realistic scenario that could even influence the outcome of an election. Should the politician who was imitated in the video try to prove it was not him or her by pointing to small unnatural signs in the video as evidence that it is fake? No. The politician should say: I endorse all my messages via a digital signature, and if this video does not carry a signature of mine, it is not authentic. End of debate.

The risk that deep fakes are used to influence elections is not hypothetical. To illustrate, a deepfake audio clip was distributed in the UK in 2023 in which Keir Starmer, then the leader of the opposition party, was heard abusing employees. In reality, this abuse had never happened (Sky News, 2023). And in Slovakia, a deepfake audio clip was spread online in which Michal Šimečka, the leader of the opposition, discussed how to cheat in the election. Again, this had never happened (Meaker, 2023).

This paper describes the solidifying and stabilising role that signatures can play in a world full of artificially generated content, of unclear origin, with a dubious status. Signatures can help to clarify who says what. In modern form, such signatures will be digital, based on cryptographic techniques. How they work will be sketched in the paper, in general terms. We elaborate in particular on how digital signatures can play a beneficial role with respect to some of the challenges that the AI Act aims to address. We propose that it should become common that signatures are used to endorse artefacts (text, images, video, audio), to claim ownership and responsibility, by robustly establishing who the originator or creator is and what the original version is. Not only individuals, but also institutions and media organisations can digitally sign their messages, giving their audience authenticity guarantees.

Hardening our IT infrastructure for a resilient democracy is badly needed, given the destabilising flood of AI-generated material of unclear origin.

Literature

1. Zhang, H., Edelman, B. L., Francati, D., Venturi, D., Ateniese, G., & Barak, B. (2023). Watermarks in the sand: Impossibility of strong watermarking for generative models. arXiv. https://arxiv.org/abs/2311.04378

2. Sky News. (2023, October 9). Deepfake audio of Sir Keir Starmer released on first day of Labour conference. Sky News. https://news.sky.com/story/labour-faces-political-attack-after-deepfake-audio-is-posted-of-sir-keir-starmer-12980181

3. Meaker, M. (2023, October 3). Slovakia’s election deepfakes show AI is a danger to democracy. WIRED. https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/

Legislation

1. AI Act (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending various regulations and directives (Artificial Intelligence Act). Official Journal of the European Union, L 1689, 1–60. http://data.europa.eu/eli/reg/2024/1689/oj

14:40
Extended Abstract: If Deceptive Patterns are the Problem, are Fair Patterns the Solution?

ABSTRACT. Researchers and legislators increasingly worry about deceptive patterns: common tricks on websites and in apps that make users do things they did not intend to do (previously: dark patterns). If these deceptive patterns are a problem, could “fair patterns” be the solution? We highlight several caveats to this approach. First, it is not obvious what it means for a design pattern to be fair. What is fair depends on the context; even within the same context, people disagree on what fairness means. Moreover, one fair design element does not guarantee a fair design. Combining these objections, it may be inappropriate to call a design pattern fair. Second, not all problems are adequately addressed by interventions at the design level. If all possible choices are unfair, design alone cannot make the situation fair. Societal problems must be solved at a societal scale, although design can contribute through incremental improvements. Progress in interface design does not need the concept of fairness: empirically informed solutions for specific problems appear more practical.

15:00
Surveillance Watermarking: Trading Privacy for AI Disclosure

ABSTRACT. Fears around AI-enabled disinformation and AI-generated creative content have led to widespread public interest in knowing when something we engage with has been created with AI and when it hasn’t been. A response has now been enshrined, without any significant pushback, in the EU AI Act, which states that providers of AI systems must “ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated”. Yet there is a critical issue here: disclosing AI generation discloses just about nothing, no more than a “software used here” label. AI can be used across an infinite spectrum of utilities and to infinite degrees, including in ways that negligibly affect the content in question or in ways that change content to be more truthful and representative of reality. “AI” holds no qualitative meaning whose disclosure could assist with identifying misinformation or with ascertaining levels of human authorship in creative content.

Rather, much more detailed information would need to be tracked across the production of content to make qualitative assessments about the nature of AI-assisted content. Yet is there a way to track the creation of all digital content to that level of detail that does not infringe upon the fundamental EU rights to privacy and data protection? This article outlines the broadly ineffective current-state “watermarking” mechanisms that track AI use in content creation. Currently, these primarily fall into three categories: human-visible watermarks, invisible machine-detectable watermarks, and provenance mechanisms that embed metadata within the file. As it stands, none of these methods is robust at ascertaining whether AI has been used at all, let alone at carrying detail granular enough for qualitative assessments of misinformation or authorship. This article outlines the significant hurdles to the efficacy of each of these methods, which depend on substantial further technological innovation, harmonised enforcement of international standards, and the international policing and removal of tools that fail to meet these watermarking standards; these dependencies may render the methods perpetually ineffective.
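
To illustrate why metadata-based provenance alone is fragile, here is a toy sketch (not C2PA or any real standard; file contents, field names, and tool labels are assumptions). A manifest recording the tools that touched the content is bound to it by a hash, but anyone who strips the manifest, or re-encodes the file, is left with ordinary content carrying no trace of the disclosure.

```python
import hashlib
import json

def make_manifest(content: bytes, tools_used):
    """Toy provenance record: which tools touched the content, bound to it by a hash."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),
        "tools_used": tools_used,
    })

def check_manifest(content: bytes, manifest):
    if manifest is None:
        return "no provenance information -- indistinguishable from unmarked content"
    record = json.loads(manifest)
    if record["sha256"] != hashlib.sha256(content).hexdigest():
        return "manifest does not match this content (edited or re-encoded)"
    return f"tools declared: {record['tools_used']}"

image = b"\x89PNG...pixel data"  # stands in for a media file
manifest = make_manifest(image, ["generative-model", "colour-grading"])

print(check_manifest(image, manifest))  # disclosure works only while the manifest stays attached
print(check_manifest(image, None))      # stripping the manifest erases the disclosure entirely
```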

In doing so, it analyses how much more information (above an “AI” label) would be required to achieve the ostensible intentions of AI watermarking – both in relation to misinformation and human authorship objectives. It analyses current front-running methods, such as the C2PA protocol, which are reliant on tracking all content, not only AI-assisted content, in order to respond to content which has had its AI-disclosing metadata removed. These methods may require near-ubiquitous protocol interoperability to be robust enough to trust for qualitative assessments of content.

Finally, it outlines critical tensions between detailed content-tracking methods that could provide enough metadata for qualitative assessments relating to misinformation or authorship and the fundamental rights to privacy and data protection. These tracking methods would need to track the content itself, not solely the technologies used in its production. Inherently, any provenance mechanism that seeks to robustly record all changes to content in order to ascertain its truthfulness would require a level of detail that might capture identifying information about the individual making the changes. Even simple awareness of detailed content-tracking at all stages of the creative process may have a chilling effect on creators’ freedom of expression. Critically, if there is no means of meaningfully tracking content that is not overly surveillant, then the AI Act’s mandate to mark AI content must lead to mechanisms that are either illegally invasive or functionally inadequate. Further, it entrenches greater reliance on techno-solutionism without clear consideration of the political ramifications of public versus private control when designing responses to societal challenges. This article therefore foundationally questions the efficacy of mandating a currently unproven techno-solution and outlines the prospective dangers that such a mandate poses to fundamental European rights. In turn, it directs attention towards the necessary safeguards that must be relied upon if we are to accept a world where all digital content production is recorded and tracked to alert us to the use of increasingly ubiquitous AI.

14:00-15:30 Session 3C: Theory session II - on trust around AI and platforms

medium density

Location: White Space
14:00
AI and the Medical Fiduciary

ABSTRACT. The introduction of AI technologies in healthcare invites us to reflect more deeply on the moral value that human agents bring to medical practice. Owing to the vast bioethics literature, it is arguable that we are already well aware of the moral values that the use of AI in the healthcare context should maintain and promote. However, we argue that greater attention to the character of the patient-physician relationship draws attention to the distinctive value of the manner in which moral values are implemented in medicine. This paper examines the moral value of physicians beyond their role as content experts by proposing a fiduciary characterisation of the physician-patient relationship. We argue for a conceptualisation of this bond according to which the physician performs her duty to act in the best interest of the patient by acting as the patient’s interest guardian. The fiduciary trust enacted in the physician-patient relationship is a means of realizing a desirable level of patient autonomy in medical practice. We contend that AI challenges the patient-physician relationship because it introduces the prospect of points in patient care at which no one is interest guardian for the patient. This brings about an (active) responsibility gap (Santoni De Sio & Mecacci, 2021), whereby a physician is either prevented or discouraged from fully performing her role as interest guardian. Thus, technologies in healthcare must be introduced such that AI supports the physicians’ ability to maintain active responsibility for patient wellbeing.

Despite the fact that legal systems typically diminish or even dismiss a fiduciary characterisation of the physician-patient relationship (Mehlman, 2015), the application of the fiduciary model to this relationship has a long tradition (Ludewigs et al., 2022). By definition, a fiduciary relationship is one in which one party has a duty to act in the other’s best interest, such that the fiduciary (Party A) holds a position of trust and confidence with respect to the beneficiary (Party B). It has been argued that the proliferation of information in the digital age, and the enhanced economic power of patients in the developed world especially, ought to urge us to move away from the doctrine of the medical fiduciary (Veatch, 2000). We suggest, however, that the fiduciary principle still presents the appropriate conceptual framework for analysing the physician-patient bond. The essential asymmetry of the medical relationship has not changed over time (Rich, 2005). What is needed is a conceptualisation of medical trusteeship which avoids the (justifiably decried) medical paternalism of the past, while nevertheless preserving the insight that physicians do, in fact, know better than their patients in key respects--and thus, that their moral duties flow accordingly. The appeal of patient-centric models of clinical decision-making is grounded in the value of patient autonomy and patient wellbeing. Yet these values are arguably better realised by the fiduciary characterisation of the physician-patient relationship than by patient co-worker models.

What is at stake in the physician-patient interaction is not simply that the patient requires her physician to adhere to certain standards of good practice and conduct. It will be shown that, in a significant sense, in submitting to medical care, a patient loses some of her capacity for choice and self-direction. Her commitment to the advancement of her own wellbeing requires her to accept treatment which she is not fully able to evaluate and control, because she does not understand how to, is not able to, and/or is diminished such that she cannot do this for herself (Mehlman, 2015). The patient is in fact reliant upon the physician to act in her stead where necessary in the promotion of her best interest.

Physicians should be held to higher standards than those present in other professions because they have special standing in a moral project in which a situation of critical vulnerability occurs. Medical care brings about an acute form of reliance for the patient, during which her ability to guard her own interests is reduced, and in which she may stand to lose her capacity for future health and autonomy. Physicians are uniquely poised to take advantage of their patients insofar as they are also possessed of the resources with which to help them. Consequently, the situation of reliance created in medical care gives rise to direct moral duties for the physician. Because the constitutive moral goal of medicine is patient wellbeing (Pellegrino, 1990), and the patient is placed so that she cannot adequately safeguard this for herself, the duty of the physician goes beyond exercising due care and becomes one of guardianship for the patient’s wellbeing.

It will be argued that fiduciary trust can only properly be directed at a human agent. In part, the physician’s fiduciary responsibility involves the making of individual, discretionary judgements about the best interest of the patient. Thus, a medical fiduciary’s forward-looking duty of care consists in promoting patient autonomy, beneficence and non-maleficence, within the constraints imposed by institutional justice. And, in part, the existence of the fiduciary relationship also serves as assurance to the patient that a human agent (the physician) has considered the patient’s position in a way that the patient might have considered it for herself, had she been able to. Patients are entitled to form the reasonable expectation that, in giving up of some of their agency, a competent caregiver will stand in their stead—that is to say, that there is at least one person upon whom active responsibility for their wellbeing will devolve. When the position of the patient is thus considered for and on behalf of the patient by the physician, the physician serves to preserve the autonomy of the patient in a critical private domain.

We contend that AI stands to undermine the fiduciary relationship because the use of AI technologies can introduce points in patient care at which no one acts as interest guardian for the patient. It has been made clear in the recent literature on responsibility gaps in AI that such gaps are not restricted to the context of blaming human agents for undesirable outcomes resulting from AI (mis)use: in fact, a fuller analysis of responsibility gaps has been extended to include gaps in what has been termed “active responsibility” (Santoni De Sio & Mecacci, 2021). This is a form of forward-looking responsibility that concerns the duty to achieve a certain goal, value, or norm in the future (hence, potentially preventing negative outcomes from occurring) by, for example, fully exercising one’s role. Overreliance on Clinical Decision Support Systems may, for instance, lead a physician to deploy her critical decision-making skills insufficiently, or prevent her from sufficiently exercising her moral agency in the context of a morally crucial decision. This is a significant problem because the fiduciary duty plays an essential part in how the medical profession enacts its moral character and fulfils its constitutive moral mandate.

The practical consequence of the distinctive moral value of the patient-physician relationship is that there are normative restrictions upon the introduction of AI in healthcare. These include placing limitations upon the roles which AI may assume in healthcare, requirements upon the knowledge of AI which healthcare professionals must possess, as well as restrictions upon the degree to which specific medical AI technologies must remain explainable. Further research must also be conducted to determine how medical AI can best be designed to enable physicians to maintain morally salient control of the sort that is compatible with the exercise of their duties of guardianship.

Properly conceptualised, the fiduciary characterisation better enhances patient autonomy and patient wellbeing than alternative characterisations. It spotlights the distinctive contribution of the physician to patient care beyond content expertise, which is essential to an analysis of the patient-physician relationship in the wake of AI technologies. It is thus imperative that the use of AI systems in healthcare supports physicians in retaining active responsibility.

References

Childress, J. F. (1990). The place of autonomy in bioethics. The Hastings Center Report, 20(1), 12–17. https://www.jstor.org/stable/3562967

Ludewigs, S., Narchi, J., Kiefer, L., & Winkler, E. C. (2022). Ethics of the fiduciary relationship between patient and physician: the case of informed consent. Journal of Medical Ethics, 51, 59-66. https://doi.org/10.1136/jme-2022-108539

Mehlman, M. J., Frankel, T., Rodwin, M. A., Seipp, D. J., Smith, D. G., & Miller, P. B. (2015). Why physicians are fiduciaries for their patients. Indiana Health Law Review, 12(1), 1–64.

Pellegrino, E. D. (1990). The medical profession as a moral community. Bulletin of the New York Academy of Medicine, 66(3), 222–223. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1809760/pdf/bullnyacadmed00014-0025.pdf

Rich, C. (2005). The doctor as double agent. Pain Medicine, 6(5), 393–395. https://academic.oup.com/painmedicine/article/6/5/393/1853657

Santoni De Sio, F., & Mecacci, G. (2021). Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philosophy & Technology, 34, 1057–1084. https://doi.org/10.1007/s13347-021-00450-x

Veatch, R. (2000). Doctor does not know best: Why in the new century physicians must stop trying to benefit patients. Journal of Medicine and Philosophy, 25(6), 701–721. https://academic.oup.com/jmp/article/25/6/701/1008953

14:15
Through faith we trust: A tripartite trust model for explaining AI mis- and disuse

ABSTRACT. Faith is trust’s most essential element but remains critically understudied across domains. Increasing evidence on the mis- and disuse of AI systems stresses that a more fine-grained understanding of trust, its antecedents, and its implications for AI acceptance and adoption is required. Building on rich philosophical and sociological foundations, we depart from monolithic notions of trust and develop a novel model of trusting beliefs, ranging from faith to trust and confidence, according to the dominance of objective evidence or compensating affect. While faith is an emotionally charged and resilient acceptance of limited, uncertain knowledge as true, confidence relies on substantial evidence and minimal affect. Trust bridges these poles, willingly accepting and building on uncertainty, containing profound feelings of faith and cognition. This tripartite framework offers a clearer understanding of how imbalances in trusting beliefs can drive the well-documented problems of AI mis- and disuse. Specifically, misguided faith or unfounded confidence can lead to misuse, while withholding trust due to excessive demands for certainty or negative affect can spur disuse. Each dimension – faith, trust, and confidence – therefore warrants distinct calibration approaches, of which we outline five. With our model, we aim to motivate future research to gain a more detailed understanding of trust’s impact on AI acceptance and its calibration.

14:30
Soft Biopolitics: TikTok’s Moderation Codes and The 1881 Ugly Law

ABSTRACT. In 2020, news coverage of TikTok’s “invisible censorship” revealed how internal moderators were being instructed to suppress “ugly” and “poor” content based on parameters from a leaked 2017 TikTok coding document (Appendix B). The 2017 TikTok moderation codes in many ways closely echo the “ugly laws,” a group of laws previously enacted in the United States that discriminated against marginalized communities. There is a lack of scholarly research comparing how these two documents promote discriminatory behavior. Focusing on the 1881 Chicago “ugly law” and the 2017 TikTok coding sheet, I use textual analysis to critically compare and assess the language in these documents in order to expose the continuation of an enduring form of discriminatory practice in the technological era. Both documents serve as iconographic examples of some of the larger societal exclusionary measures of their time. Building on the previous research of Cheney-Lippold (2011), I argue that content moderation is a form of soft biopolitics. I take an intersectional approach, accounting for the multiple layers of oppression at play, while examining biopolitics rooted in a digital age (Crenshaw, 1991). Additionally, I employ a sociotechnical lens to highlight how each document reflects shifting power dynamics in society in the context of the United States. This research provides a deeper understanding of the bias embedded in societal discourse and exposes systemic injustices in subversive modern forms.

14:50
Breaking Silencing Conspiracies with Technologies of Trust - An Academic Freedom Proposal

ABSTRACT. Imagine academic freedom as an onion (Kováts & Rónay, 2024). If one peels away its layers—legislation, jurisdiction, government policies, institutional autonomy, and the rights of universities and researchers—at its core lies the curious individual, who is often engaged with others rather than isolated. The stronger academic freedom is in a given country, the more robust the institutional guarantees are for the freedom to teach and inquire. Conversely, in countries where academic freedom is severely suppressed, scholars often continue to resist by reading, discussing, sharing, and even smuggling banned books, holding clandestine seminars, and perhaps publishing samizdat literature.

While government-led suppression of academic freedom often finds allies among complicit academics, university administrators, businesses, and government-organized NGOs (GONGOs), efforts to promote academic freedom are frequently hindered by conflicting interests, making cooperation difficult. When suppression is institutionalized through legislation and government action, alleviating oppression from abroad becomes even more challenging.

Education-related relief and advocacy are prominent in global aid efforts, and more than a handful of organizations supporting at-risk scholars have yielded significant results. However, if one were to map out the suppressed researchers and university lecturers worldwide—distinguishing those who receive foreign support from those who do not—a striking pattern emerges: scholars facing the harshest conditions receive the most help, primarily through rescue and relocation operations. While this is a perfectly justifiable response, remote support for scholars who remain in their home countries is comparatively limited. Likewise, major academic freedom research, policy discussions, and advocacy events tend to portray silenced scholars as anonymous figures, representing censorship and self-censorship without addressing their individual struggles.

(I exclude Ukraine-related academic support from this discussion due to its war-related nature, which makes comparisons difficult. However, I acknowledge that distinguishing between different types of war-related academic suppression has become increasingly complex in recent years.)

Scholars at Risk (2025/1), the leading organization supporting affected academics, identifies the following forms of remote support for at-risk scholars: “The universities created remote fellowships through which a SAR scholar can:
- receive a university email address;
- access library and database resources through this email address/university affiliation;
- publish work using the university’s affiliation;
- in some instances, connect with mentors or teach remote classes at the host university; and
- re-engage with academia after being pushed outside the university space, sometimes for years."

I chose the format of a position paper because, at this conference, I seek feedback and potential partners for an initiative I began developing in 2024: Mesh - Academia Without Borders (see references). This initiative aims to counter academic silencing by fostering international academic collaboration and solidarity.

Silencing and Trust: The Conceptual and Empirical Connection

In countries where academia is suppressed, key policy documents—such as those from the EU and EU-funded initiatives—rarely focus on the silenced themselves. A closer examination of silencing reveals crucial differences: being forcibly silenced in isolation is not the same as enduring silence while knowing that someone, somewhere, is paying close and sympathetic attention. This distinction is at the heart of the Mesh initiative.

Research on at-risk scholars further highlights the importance of supporting those living "under the radar." Spannagel et al. (2020) note that documenting concrete academic freedom abuses does not capture other forms of repression. Moreover, they stress (2022: 2-3) that legal protections for academic freedom often do not align with reality, stating that "close to one-third of countries with the worst performances on academic freedom (i.e., AFi scores below 0.4) have constitutional protections for academic freedom in place." Reports on at-risk scholars frequently use terms such as silence, silencing, chilling effect, suffocation, estrangement, chokehold, intimidation, and asphyxiation—all indicators of the profound psychological toll faced by these scholars.

Despite limited documentation, scholars who benefit from remote support initiatives emphasize their value not just for academic resources but for fostering a sense of attention and solidarity. A Turkish scholar, referring to support received through WLU’s Visiting Researcher-Scholars at Risk program (2024), captured this sentiment in an Inspireurope+ webinar:

"Feel alive academically. Hope. Belonging. Collaboration. Economic support."

Even in stable academic environments, scholars often experience stress, self-doubt, and fatigue. For those in oppressive conditions, the mere knowledge that someone values their work and is aware of their struggles can be a lifeline (Scholars at Risk, 2025/2).

Mesh: Proposed Actions

The following two Mesh initiatives are grounded in the notion of trust.

1. Mentoring

Mesh proposes a global network in which scholars from safe countries mentor at-risk scholars through secure communication channels. The primary goals are:
- producing academic content;
- fostering collegiality and attention for at-risk scholars;
- creating tangible improvements in scholars’ lives; and
- addressing structural inequalities in global academia.

Trust is essential—not only for building a working relationship between two strangers but also because reducing isolation can encourage further initiatives within the scholar’s local academic environment.

2. Video Testimonies

In 2024, Mesh began working with scholars who wished to share their experiences regarding academic freedom. We produced two types of videos: one set featuring anonymous scholars and another featuring those who chose to be identified. These videos (Mesh, 2025) represent an experimental phase, with ongoing efforts to develop new formats, including AI-generated animated content, to present anonymous and non-anonymous voices effectively.

Both the mentoring network and the video testimonies rely on different but complementary notions of trust. Mentoring fosters personal support and scholarly communication, making research less vulnerable to isolation. What anonymous video testimonies offer is nothing less than a break in the socio-cultural reproduction of silence, illustrating the observation of Eviatar Zerubavel’s (2006) illuminating small book, which aimed “to explore the structure and dynamics of conspiracies of silence”. By learning of each other’s previously invisible existence, silenced scholars may feel less alone, reducing the asphyxiating effects of repression.

I look forward to discussing the further ramifications of these ideas at the conference.

References

Kováts, G., & Rónay, Z. (2024). Az akadémiai szabadság tartalma és határai: a hagymamodell [The content and limits of academic freedom: The onion model]. Educatio, 33(3), 277-291. https://doi.org/10.1556/2063.33.2024.3.1

Mesh - Academia Without Borders: https://mesh-initiative.de/

Scholars at Risk (2025/1): Remote support: https://www.scholarsatrisk.org/remote-fellowships

Scholars at Risk (2025/2): Reflections: https://www.scholarsatrisk.org/category/news/network-reflections/

Spannagel, Janika et al. (2020). The Academic Freedom Index and Other New Indicators Relating to Academic Space: An Introduction. Users Working Paper, Series 2020:26. The Varieties of Democracy Institute, University of Gothenburg.

WLU (2024) https://www.wlu.ca/academics/research/research-services/assets/resources/visiting-researcher-scholars-at-risk.html

Zerubavel, E. (2006). The Elephant in the Room: Silence and Denial in Everyday Life. Oxford University Press.

16:00-17:30 Session 4A: Behaviour session I
Location: Grand Space
16:00
The gendered aspect of establishing digital trust in new-age relationships

ABSTRACT. The global proliferation of online dating platforms has led to an increasing number of Indians using dating sites and apps to find romantic partners. Over five years, the number of users in India has grown from 20 million in 2018 to 82.4 million in 2023, a notable 293% increase (Sharma, 2024). These new forms of dating, driven by advanced technologies, have changed the dynamics not only of social interaction but also of ‘trust,’ which has strangely become convenient and uncomplicated to build. In this post-modern world, where relationships are built in online spheres, the question of how easily trust is formed takes centre stage.

The number of people resorting to online dating apps surged during COVID-19, owing to the lack of options for finding prospective partners in offline spaces amidst social distancing (Yodovich et al., 2025). This led to a spike in otherwise hesitant women users joining such platforms (Kaur & Iyer, 2021), along with an increase in online harassment and abuse of women through rising cases of catfishing, online stalking, unsolicited explicit images, and blackmail (Rajan, 2025). While it is human nature to be skeptical when exploring and experiencing the unknown, when it comes to emerging technologies people somehow tend to overcome that skepticism and formulate trust differently. Dating apps make people trust strangers with their lives, experiences, struggles, and valuable assets. Women’s vulnerability to cybercrimes has increased due to the sharing of personal information; being emotional, they could readily be persuaded and victimised for personal or public advantage (Dutta, 2023). Reports indicate that negative stereotyping and misogyny are also a factor in women being disproportionately targeted by cyber threats (see UN Women & UNU, 2024). The vulnerabilities and risks that women experience differently in the digital realm make it pertinent to understand what makes a woman trust in the online sphere.

Trust that used to flow upwards to institutions is now flowing horizontally to other fellow human beings and bots (Botsman, 2017). However, the speed and seamlessness with which these online platforms operate enable people to take a trust leap (ibid). Trust as an important factor in online businesses has been studied adequately (e.g. Ba et al., 2003; Hoffman et al., 1999; Keat and Mohan, 2004; Kim and Benbasat, 2003; Lee et al., 2006; Lee and Turban, 2001). However, the human negotiations that trust involves, specifically on dating apps, are seldom discussed. With the literature describing cybercrime as a gendered phenomenon, it becomes even more relevant to understand trust from the perspectives of women. Drawing on 25 in-depth interviews with purposively sampled stakeholders, including women users, technologists, platform developers, and psychologists, the study examined how platform design and user behaviour transact with each other and enable women on online dating platforms to take a trust leap. Examining this transactional communication on online dating platforms and the responses it generates, the study found that several critical factors influence trust formation, including the platform design interface, digital courtship, technological mediation, societal stereotypes, and peer influence and recommendations.

First, platform design facilitates trust by including safety features, advanced user-verification processes, location tracking, and detailed profile setups that enable greater transparency. Women users reported higher interest in profiles with blue checkmarks, comprehensive information, and multiple pictures compared to those with only a single picture and limited information. Second, digital courtship: the period preceding an offline meeting, during which women and men actively interact and engage using digital platforms to build trust. User interaction and behaviour build or erode this trust, as women reward consistency and respectful communication over inappropriate and negative messages. In a culturally rich and family-oriented country like India, women users were found to be more inclined towards men who showed genuine concern for their family members and/or shared pictures of family. Third, in consonance with the first point, platform developers highlighted the importance of end-to-end encrypted messaging, masking phone numbers and names, agency to limit information, and real-time support for reporting unpleasant experiences as technological mediations for building trust in the online sphere. Fourth, layers of societal stereotypes, such as skin color, caste, marriageable age, and religion, especially within the Indian context, make women vulnerable to taking impulsive decisions. Some of the interviewed women, particularly those over 30, reported having succumbed to societal and family pressure and having met men hastily without much deliberation. Fifth, peer influence plays a crucial role in trust formation, as women users prefer apps recommended in their social circle. Further, they admitted to sharing profiles with their friends before swiping left or right, and especially before meeting someone offline.

Based on the findings, we propose a framework for understanding the process of trust formation among women users on dating platforms. We suggest the ‘Ladder of Trust,’ reflecting the various stages leading to trust formation, leap, recalibration, and establishment or decay. This trust ladder is a framework for technologists, corporates, programmers, and platform designers to understand what goes on when women formulate trust.

(Below is the Ladder of Trust, which will be presented diagrammatically in the paper.)

Online:
• First Connect
• Resistance
• Interaction
• Engagement
• Social Validation
• Digital Courtship (using multiple online platforms)

Online and Offline:
• First Trust Leap
• Trust Dwindling
• Trust Recalibration
• Re-Validation
• Trust Decay / Trust Established

Trust continues to be an element between two people; however, in the digital realm, given the pace at which technology leads us to take steps forward, it is also an element that exists within the environment provided by specific technologies. It is important to realize that there are various stages in the trust-formation process before an Indian woman takes the first trust leap and meets someone in the offline world. The same trust dwindles, alters, re-validates, and is then either eroded or established due to cultural and gendered intricacies. Platform design should account for all these perspectives and create the right culture around the technology (Botsman in Blandino, n.d.). It is indeed the responsibility of the platforms to be accountable in online as well as offline spaces. Hence, for this accountability, it is relevant to understand trust deeply at various levels through a gendered and cultural lens.

16:20
Trust in the Age of Algorithms: How Generation Z Canadians Navigate News Trust, Skepticism, and Selective Exposure

ABSTRACT. In an era where digital algorithms play a central role in curating the information people consume (Shi and Li, 2025; Swart, 2021), the question of news trust has become increasingly complex. Generation Z, the first cohort to grow up entirely immersed in a digital ecosystem, primarily encounters news incidentally through algorithmically driven feeds (Newman et al., 2023; Papakonstantinidis, 2018; Statistics Canada, 2020a, 2020b). This environment presents both opportunities and challenges: while it democratizes news production and allows for personalized news experiences (Duffy et al., 2018), it also fosters selective exposure, misinformation, and an increasing sense of distrust towards news, both traditional institutions and social media (Karlsen and Aalberg, 2023; Metzger et al., 2020; Thorson and Wells, 2015). This research explores how Gen Z Canadians navigate algorithmic influence and evaluate trust in news by prioritizing first-hand accounts, transparency, and their own verification strategies over institutional credibility. This study contributes to understanding contemporary trust dynamics by examining how digital-native audiences reshape traditional notions of credibility. These insights have critical implications for journalism, digital literacy, and public discourse.

16:40
Trust and Bureaucratic Communication: The Effects of Emotional Appeals on Public Trust in the European Commission

ABSTRACT. Large firms such as Amazon, Google, and Meta have frequently made global headlines for abusing market power and violating antitrust regulations. Within the European Union (EU), the European Commission (EC) has established itself as a regulatory powerhouse capable of counterbalancing the dominance of these firms (Bradford 2020). Antitrust cases involving major corporations often attract significant media attention and become highly politicized, triggering public debate. In response, the European Commission has actively sought to address the politicization of its most contentious competition policy cases (Escalante-Block 2024). However, research has shown that the European Commission predominantly relies on a technocratic communication style in its public statements, particularly in its press releases (Rauh 2023). This approach is characterized by complex language, specialized bureaucratic jargon, and a nominal style that obscures political action, potentially engaging individuals in cognitive information processing and decision-making. Recent scholarship, however, has demonstrated that bureaucratic communications incorporating emotional appeals can shape public perceptions of regulators and the regulatory measures they propose (Mazepus and Rimkutė, 2025). Yet, despite growing recognition of the role of emotion in bureaucratic communications, there remains a critical gap in our understanding of how emotional appeals embedded within highly technical regulatory communications influence public perceptions of regulators and support for their policies (Maor, Rimkutė, and Capelos 2025; Rimkutė 2025). Addressing this gap, this study examines antitrust regulation as a particularly salient and politicized case. Specifically, we address the following research question: What are the effects of emotional appeals (neutral, positive, negative) in bureaucratic communications on EU citizens’ trust in the European Commission and their support for the regulation of Big Tech? To address this question, we integrate the literature on bureaucratic communications (Carpenter 2010; Busuioc and Rimkutė 2020) with that on emotional appeals (Blasio and Selva 2020). One emerging area of public administration is affective governance and affective regulation (Richards 2007; Rimkutė 2025), which refers to the strategic management of emotions in public administration to shape interactions between regulators and citizens, ultimately influencing regulatory measures and citizens’ perceptions of the regulators themselves. While emotions are typically understood as individual psychological and physiological responses to stimuli (Stemmler 2004), they can also function as tools of communication and reputational information transfer in a broader societal context (Luo and Yang 2022; Maor, Rimkutė, and Capelos 2025). In regulatory governance, emotions go beyond personal experiences, playing a crucial role in regulator-citizen interactions and offering regulators a means to guide public sentiment and regulatory engagement (Rosas and Serrano-Puche 2018; Rimkutė 2025). Thus, regulatory governance and communication that engage both cognitive and affective information processing can foster greater citizen engagement and responsiveness to proposed regulations. By strategically managing emotions, governments can foster public trust, enhance citizens’ perceptions of legitimacy, and strengthen the reputation of regulators (Maor, Rimkutė, and Capelos 2025; Rimkutė, 2025). 
Existing literature highlights two primary roles of emotions in governance: (1) shaping how citizenship is experienced, defined, and enacted (Ahmed 2004), and (2) supporting governments in maintaining social order and reinforcing power structures (Fortier, 2010). Here, the concept of “affective regulation” highlights how regulators not only recognize and regulate personal emotional relationships but also guide how citizens perceive regulatory bodies, regulations, and others in public life (Johnson 2010; Jupp et al., 2017; Rimkutė 2025). Within this literature, studies have shown, for example, that emotional governance in China can be an effective strategy for improving public trust and satisfaction (Tong et al., 2024). Meanwhile, others have examined how emotions may be crucial in defining a ‘good citizen’ during times of crisis, such as the pandemic (Blasio and Selva 2020). Building on this scholarship, we hypothesize that positive emotional appeals within technocratic communication increase citizens’ trust in the European Commission and enhance their support for its role in antitrust regulation against Big Tech. Conversely, negative emotional appeals are expected to diminish trust and reduce support. To empirically assess the effects of emotional appeals in bureaucratic communication on citizens’ trust in the European Commission, we will conduct a pre-registered survey experiment using a between-subjects design. Participants will be randomly assigned to one of four experimental groups: a control group or one of three treatment groups featuring neutral, positive, or negative emotional appeals. The study will collect data from 3,000 participants, including nationally representative samples from France, Germany, and Ireland (N = 1,000 per country). The experimental manipulation consists of a hypothetical press release from the European Commission announcing an investigation into a fictional Big Tech company, SearchSphere, for potential antitrust violations. The key manipulation varies the comments attributed to the DG Competition Commissioner, randomly assigning statements that incorporate neutral, positive, or negative emotional appeals. Following the manipulation, we will measure trust in the European Commission as our outcome variable, assessed through perceived competence, integrity, and benevolence (Grimmelikhuijsen & Knies, 2017), as well as support for the Commission’s proposed regulatory measures against SearchSphere. This study contributes to ongoing debates on bureaucratic politics, communication, and affective politics in three key ways. First, it aims to demonstrate how emotional appeals can strengthen or weaken trust in technocratic institutions, even when embedded in a fact-based communication style. Second, it highlights the significance of emotional framing in bureaucratic discourse. Third, it provides valuable insights for EU policymakers and communication strategists on framing regulatory decisions to maximize public trust and engagement.
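
A small, purely illustrative sketch of the between-subjects assignment step described above (the participant IDs, trust scores, and seed are simulated placeholders, not study data): participants are split evenly across the control and the three emotional-appeal conditions, and treatment effects would later be estimated as differences in condition means on the trust scale.

```python
import random
import statistics

CONDITIONS = ["control", "neutral", "positive", "negative"]

def assign_balanced(participant_ids, seed=42):
    """Shuffle participants and assign them evenly across the four conditions."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

# 1,000 participants per country x 3 countries, as in the planned design.
participants = [f"p{i:04d}" for i in range(3000)]
assignment = assign_balanced(participants)

# Placeholder outcome: post-treatment trust in the Commission on a 1-7 scale.
rng = random.Random(0)
trust = {pid: rng.uniform(1, 7) for pid in participants}

# Treatment effects would be estimated as differences between condition means.
for condition in CONDITIONS:
    scores = [trust[p] for p, c in assignment.items() if c == condition]
    print(condition, round(statistics.mean(scores), 2), f"(n={len(scores)})")
```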

16:00-17:30 Session 4B: Safeguards session III
Location: Red Space
16:00
Trustworthy Signals in Data Governance: Reframing Benevolence as Seamfulness, Transparency with Mutual Vulnerability, and Consensus-Building

ABSTRACT. In the early days of computing, user interaction with a computer was relatively straightforward and the capabilities of computer systems were strictly limited. Increasingly, however, users find themselves interacting with a variety of separate systems oriented towards extracting ever greater volumes of fine-grained data, which is then analysed, combined, and reconfigured through complex information processing to produce ‘insight’ and ‘output’. While the user is generally not able to observe how their data is extracted and then used, transferred, stored, and disclosed, they are also increasingly aware of how vulnerable they are to weak data governance frameworks, wanton data collection and analysis, and poor data storage practices. As such, users are losing trust in many of these data-focused systems. The absence of trustworthiness in these systems diminishes the likelihood of the desired user interaction. Accordingly, one of the many challenges of designing such complex, hybrid systems is making them trustworthy.

Trust in such complex, hybrid systems is often limited because of their opacity and the fact that it is increasingly challenging for the user to understand the way in which data is being collected from them; how it is being stored, analysed, and represented; and how this data is then turned back onto the user through automated decision-making systems. Although we encounter such complex, hybrid systems in many domains, from business to entertainment, in the following paper we consider what design measures should be in place to ensure the trustworthiness of the macro flows of data that sit above these fragmented computer systems offering a plethora of services.

Much of the prior research into trustworthiness has involved developing models in the context of individual human interactions, primarily in sociology, psychology, and management. The most successfully validated model of this type was developed by Mayer, Davis, and Schoorman (1995). They consider that when a trustor assesses the trustworthiness of a trustee, three factors of the trustee are taken into account, namely benevolence, ability, and integrity. One interpretation of benevolence is that it represents an assessment by the trustor of the intention on the part of the recipient of trust (the trustee) to act in the best interests of the person giving trust (the trustor) (Mayer, Davis & Schoorman, 1995). Such assessments by the trustor will be based on cues or signals, of which the trustor may or may not be consciously aware. Attempts have been made to interpret ability and integrity in the context of human interactions with technology in general and digital technologies in particular. However, this has proved particularly challenging with respect to benevolence. While the user may (and often does) ascribe intentions to a particular piece of technology, such ascriptions often blind the user to the human hand behind the development, training, and deployment of that technology. Yet the user’s assessment that a complex hybrid system takes their interests and needs into account would seem to be a fundamental basis for considering such systems trustworthy. This is critically important in the context of how we approach data governance and the creation of policies, processes, and frameworks for ensuring that data is accurate, kept secure, and maintained to appropriate quality standards.
We argue that complex hybrid systems must be guided by data governance principles that actively demonstrate benevolence towards the individual by promoting inclusivity and participatory decision-making (Wang & Burdon, 2021). Such demonstrations cannot simply be performative and are likely to include both the use of participatory design approaches and the active engagement of the users of the system. Thus, this paper advances a new approach to trustworthiness in data governance, centring it on benevolence and conceptualising it through three interrelated properties that designers and implementers should consider when developing large, complex data systems: seamfulness, transparency with mutual vulnerability, and the provision of consensus-building mechanisms. It is further posited that the benevolent structuring of governance mechanisms allows for the emergence of value consensus, which in turn informs regulatory integrity (i.e., the creation of standards and regulation) and how ability is defined in briefs, technical specifications and project requirements (Wang, 2021). As such, benevolence in data governance frameworks is about starting with human participation—ensuring it by arguing against seamlessness in decision-making processes, and instead embracing seams in automated and AI processes that allow for autonomy and human oversight. Simultaneously, these seamful processes must be transparent, exposing not just the data flows (i.e., how data is extracted, used and disclosed), but the thinking behind why data is collected, how it is being used, and why. Moreover, there needs to be a clear forum in which different views and values can be placed in contest with each other, allowing genuine value consensus to form. Although all three of the properties identified above deserve detailed attention, the following sections provide a more extensive discussion of each concept, beginning with seamfulness.

Seamfulness as a Countervailing Approach to Seamlessness
If we turn our attention towards the increasingly datafied urban environment, smart city technologies and automated decision-making systems frequently prioritise seamless data integration to enhance efficiency and predictive capabilities. However, this pursuit of seamlessness often obscures the mechanisms by which data is collected, used, analysed, and disclosed, reducing the ability of affected individuals to contest decisions (Wang, 2021). Seamfulness, by contrast, deliberately introduces moments of friction within automated decision-making to allow human discretion, contestation, and self-determination (Chalmers & Galani, 2004). This could take the form of citizen panels on large technology projects that provide public deliberation on the direction and execution of these sensitive projects. Seamful data governance frameworks ensure that data subjects retain agency by revealing, and allowing interaction with, the underlying decision-making processes (Wang, 2021). Within the context of datafied cities, the notion of seamfulness aligns with the democratic ‘right to the city’ by fostering participatory governance and protecting individual selfhood against the encroachment of automated dataveillance (Lefebvre, 1968; Wang, 2021).

Transparency with Vulnerability
Transparency has long been advocated as a trust-enhancing mechanism in data governance.
However, transparency alone is insufficient when it is unidirectional, focusing solely on making systems legible to the public without reciprocal openness from governing institutions (Wang, 2021). This paper builds on prior work to argue that transparency must be accompanied by an explicit acknowledgment of vulnerability on the part of both trustors and trustees (Burdon & Wang, 2021). Mutual vulnerability fosters trust by demonstrating that power holders are equally exposed to the risks and failures of data governance decisions (Wang, 2021). Case studies of COVIDSafe in Australia reveal that transparency efforts that failed to incorporate vulnerability—such as the government’s reliance on moral compulsion rather than shared risk acknowledgment—were ineffective in securing public trust (Wang, 2021). To be genuinely benevolent, transparency mechanisms must move beyond compliance-oriented disclosure and engage in meaningful, reciprocal openness about data limitations, decision-making trade-offs, and institutional fallibility (Greenleaf, 2021; Wang, 2021). To expand on the issue of transparency further: in this context, benevolence requires a specific type of transparency, one that is about revealing your own hand and saying: this is the information I am after, this is why I need it, and I am willing to take on obligations because of my collection, use, storage and potential disclosure of your information.

Consensus-Building Mechanisms for Democratic Data Governance
Benevolence in data governance also requires active processes for reaching value consensus. These processes are likely to provide for the active participation of users and to allow them to investigate and, to some extent, experiment with the decision algorithms used by the system. Without these provisions, the use of automated decision systems (ADS) by complex hybrid systems risks perpetuating epistemic injustices in which certain stakeholders are excluded from shaping the norms that govern data use (Fricker, 2007). Indeed, even where stakeholders are not excluded from the shaping of norms, this inclusion needs to be demonstrable to users who have concerns or assume that they have been excluded. This paper advocates for the incorporation of deliberative consensus-building mechanisms within ADS governance to enable fairer and more democratic outcomes (Wang, 2021). Consensus-building mechanisms must be embedded at multiple levels, from smart contracts that incorporate consensus oracles allowing for multiple humans-in-the-loop, to policy-level participatory frameworks that ensure broad-based input into algorithmic decision-making (Wang, 2021). Smart city governance, for instance, requires more than technical interoperability standards; it must incorporate mechanisms that allow communities to articulate and contest the values that underpin automated decision processes (Wang, 2021; Wang, 2023). Without these consensus-building mechanisms, trust in ADS remains fragile and susceptible to breakdowns in legitimacy, particularly as the Overton window shifts and decision-making frameworks fail to keep pace with those value changes (Wang, 2021). Concern with consensus-building mechanisms suggests further study of participatory design and further investigation of techniques to enhance digital engagement. As data structures become increasingly automated and opaque, the need for data governance frameworks in which benevolence plays a key role becomes more urgent.
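As a purely illustrative aside (not part of the authors’ framework), the minimal sketch below shows one way the seamful, multiple-humans-in-the-loop consensus gate described above might look in code; the class name, the quorum size, and the automated recommendation stub are assumptions made for the example.

from dataclasses import dataclass, field

@dataclass
class SeamfulDecision:
    """A deliberately 'seamful' gate: an automated recommendation only takes
    effect once a quorum of human reviewers concurs, and any objection
    re-opens deliberation instead of being silently overridden."""
    subject_id: str
    automated_recommendation: str           # output of the upstream ADS (stubbed here)
    rationale: str                          # why the data was collected and used
    required_approvals: int = 2             # the seam: a human quorum, not a default pass-through
    approvals: dict = field(default_factory=dict)
    objections: dict = field(default_factory=dict)

    def review(self, reviewer: str, approve: bool, comment: str = "") -> None:
        # Record a human judgement; objections are kept so they can be contested later.
        (self.approvals if approve else self.objections)[reviewer] = comment

    def outcome(self) -> str:
        if self.objections:
            return "escalated"              # friction by design: disagreement triggers deliberation
        if len(self.approvals) >= self.required_approvals:
            return self.automated_recommendation
        return "pending"                    # not yet decided, rather than decided by default

decision = SeamfulDecision("resident-42", "deny-permit", "flood-risk model, v3")
decision.review("panel_member_a", True, "model inputs look current")
print(decision.outcome())                   # "pending" until the quorum is reached
decision.review("panel_member_b", True)
print(decision.outcome())                   # "deny-permit"

The point of the sketch is the "pending" and "escalated" states: the seam makes human concurrence, and the possibility of contestation, a structural precondition of the automated outcome rather than an afterthought.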
Seamfulness, transparency coupled with vulnerability, and consensus-building mechanisms collectively offer a reoriented approach to designing complex hybrid systems such as those supporting automated decision-making, ensuring that data governance structures remain accountable, contestable, and aligned with democratic principles. Such approaches provide the user with appropriate signals for assessing the trustworthiness of the system at hand. Thus, this framework provides a path towards more trustworthy digital societies by shifting the focus from technical compliance towards participatory and benevolent governance structures.

References

Burdon, M. & Wang, B.T. (2021). ‘Automating Trustworthiness in Digital Twins.’ International Journal of Law and Information Technology, 29(2), pp. 110-135.
Chalmers, M. & Galani, A. (2004). ‘Seamful Interweaving: Heterogeneity in the Theory and Design of Interactive Systems.’ Proceedings of the ACM Conference on Designing Interactive Systems, pp. 243-252.
Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
Greenleaf, G. (2021). ‘Phase III of Privacy Protection: Towards Greater Accountability.’ Journal of Information Privacy & Security, 17(1), pp. 1-20.
Lefebvre, H. (1968). Le Droit à la Ville. Anthropos.
Mayer, R.C., Davis, J.H., & Schoorman, F.D. (1995). ‘An Integrative Model of Organizational Trust.’ Academy of Management Review, 20(3), pp. 709-734.
Wang, B.T. (2021). Implementing COVIDSafe: Trust, Transparency, and the Challenges of Value Consensus in Digital Governance. Queensland University of Technology.
Wang, B.T. (2023). ‘An Updated Model of Trust and Trustworthiness for the Use of Digital Technologies and Artificial Intelligence in City Making.’ Proceedings of the Media Architecture Biennale 2023, Toronto, ON, Canada.
Wang, B.T. (2024). ‘Prompts and Large Language Models: A New Tool for Drafting, Reviewing and Interpreting Contracts?’ Law, Technology and Humans, 6(2).

16:20
Trust and the Principle of Purpose Limitation in EU Information Exchange in the Area of Freedom, Security and Justice

ABSTRACT. Purpose limitation, one of the fundamental principles of data protection, governs information-sharing between EU agencies (e.g. Europol, Eurojust, Frontex and OLAF) and between them and EU information systems (e.g. SIS, CIS, VIS and Eurodac) in the Area of Freedom, Security, and Justice (AFSJ). However, this nebulous concept and loosely defined legal principle leaves room for interpretation, which leads to its flawed implementation in practice. This raises concerns about the EU’s overall trustworthiness as a recipient of sensitive information and its ability to fulfill its duties as a trustee to its Member States and its citizens.

Franziska Boehm argues that the current EU legal framework allows for “broad derogations” from the principle of purpose limitation, which is “susceptible to abuse” in its current form. [1] For instance, in the Framework Decision on the protection of personal data in police and judicial cooperation in criminal matters (FDPJ), “[t]he purpose limitation principle is extended to a point where the authorities processing the data can decide about the change of the purpose. The initial aim of the purpose limitation principle, which is the protection of individual rights against the indiscriminate use of personal data, is therefore reversed”. [2] Another example is the excessively broad interpretation of the subsequent use of data under Article 8(1) of CIS Decision 2009/917, which allows Member States, as well as Europol and Eurojust, to use CIS (Customs Information System) data not only to achieve the aim of the CIS, but also for “administrative purposes or other purposes”. [3] With the interoperability of large-scale IT systems in the EU coming into force, the boundaries between different purposes, as defined by the different users that have access to the IT systems managed by eu-LISA, are about to become even more blurred.

At the same time, interorganisational and interstate trust in the social sciences (politics and organisational studies) is normally predicated on the assumption that information may be used only for the specific purpose for which it was shared. Trust in the EU as a recipient of sensitive information is therefore only as strong as the legal rules that constrain actors from sharing information outside of that originally prescribed purpose. The proposed paper builds upon the existing legal analysis of information-sharing between EU agencies in the AFSJ [4] to show how the flawed implementation of the purpose limitation principle in this context undermines the trustworthiness of the EU as a trustee to the EU Member States.

Let us explore precisely how the EU’s trustworthiness could be undermined by the flawed implementation of the purpose limitation principle. The analysis of the Union’s trustworthiness in this context relies on Russell Hardin’s conceptualisation of trust (rational choice theory), in which trust is predicated on the Union’s ability to encapsulate and then effectively serve the interests of its trustors – the EU Member States and the EU citizens. On the one hand, the loose interpretation of the purpose limitation principle ensures a quicker and more intensive exchange of information between EU agencies, thus bringing them closer together, united by their commitment to provide better protection against transnational security threats for EU Member States. [5] On the other hand, the EU Member States sharing the information might be less willing to do so if they fully understood that the information may be shared with other EU agencies without proper authorisation.

Thus, doubt as to the EU’s trustworthiness lingers: has the EU failed in its duties as a trustee to its Member States by not devising more stringent legal rules (or not implementing the law more faithfully) to constrain EU agencies in terms of purpose limitation? Or has the EU prioritised its own interest over that of its trustors in achieving greater integration through the more rapid advancement of the technological capabilities of EU information systems and the accumulation of more power through more information? Could that be said to be the Union’s own interest if the Member States are also benefiting from the improved efficiency of information-sharing within the AFSJ? The mere fact that the EU has allowed for such an ambiguous interpretation of its conduct might suffice to conclude that it is not a trustworthy actor. To remove any doubt, the EU must ensure the correct implementation of the purpose limitation principle through stricter legal rules that are more effectively enforced. Otherwise, the mere perception of untrustworthiness would be enough to discourage EU Member States from sharing information. The paper will offer some concrete examples by highlighting areas for improvement in the existing legal framework and contemplate their effects on trust.

[1] Franziska Boehm (2012), Information Sharing and Data Protection in the Area of Freedom, Security, and Justice, Berlin Heidelberg: Springer, pp. 133, 161.
[2] Franziska Boehm (2012), Information Sharing and Data Protection in the Area of Freedom, Security, and Justice, Berlin Heidelberg: Springer, p. 171.
[3] Franziska Boehm (2012), Information Sharing and Data Protection in the Area of Freedom, Security, and Justice, Berlin Heidelberg: Springer, p. 296.
[4] Franziska Boehm (2012), Information Sharing and Data Protection in the Area of Freedom, Security, and Justice, Berlin Heidelberg: Springer, pp. 321-370.
[5] Kartalova, Sofiya (2024), ‘Trust and the Exchange of EU Classified Information: The Example of Absolute Originator Control Impeding Joint Parliamentary Scrutiny at Europol’, German Law Journal, 25(1), pp. 70-93.

16:40
Defending Cyber Resilience in Democracies: Learning from Coordinated Vulnerability Disclosures

ABSTRACT. Global cyberattacks amidst rising geopolitical tensions have heightened governments’ focus on cybersecurity governance. However, research on how different governmental systems might affect the implementation of those strategies is limited. Using two case studies of Coordinated Vulnerability Disclosure (CVD) efforts, drawn from the 1.6 million disclosures made by the Dutch Institute for Vulnerability Disclosure between 2019 and 2024 and validated with third-party scanning data, this study examines response times to CVD notifications in autocratic and democratic regimes. Our findings suggest that democracies, though slower to enact policies, remediate vulnerabilities more effectively through strong public-private collaboration and trust-building. By contrast, autocratic regimes may formulate policies rapidly but face remediation challenges due to rigid governance structures. This finding challenges assumptions about governance speed and suggests that horizontal policymaking and flexible norms on CVD can enable democracies to reduce exploitation risks and maintain geopolitical competitiveness. Based on these outcomes, the authors propose a CVD triad as a framework for fostering mutual legitimacy, transparent processes, and structured participation grounded in social contract theory, building trust and managing digital security risks. By adopting horizontal policymaking and flexible norms on CVD, democratic governments can harness citizen and corporate engagement, reduce exploitation risks and level the geopolitical playing field with autocracies. This study contributes to understanding how Rousseau’s social contract theory can be used to identify, understand and overcome the differences between governmental systems, offering actionable insights for policymakers who seek to strengthen national cybersecurity within democratic frameworks by applying appropriate risk management strategies.

17:00
BEYOND DIGITAL PESSIMISM: HOW A FOCUS ON TRUST CAN ENHANCE EU DIGITAL LAW

ABSTRACT. Over the past decade, the European Union (EU) has enacted a series of high-profile regulations and directives aimed at governing the digital environment. From the General Data Protection Regulation (GDPR) in 2016 to the Artificial Intelligence Act (AI Act) in 2024, the EU Commission consistently cites trust as a foundation for consumer uptake of new technologies. In the EU Commission documents, the logic is clear: if the EU adopts laws to prevent harm and negative consequences of new technology, individuals will be more likely to embrace digital services. This, in turn, is expected to foster innovation and growth in the EU’s digital market.

On its face, this premise is difficult to dispute. Trust—the willingness to accept vulnerability to the actions of others—matters; it lowers perceived risks and encourages the adoption of new platforms and services. Yet current EU digital law is premised on avoiding negative events—such as data breaches, unlawful profiling, algorithmic discrimination, or manipulation. By emphasizing risk reduction, the Commission seems to assume that trust already exists and merely needs to be protected or maintained. This paper challenges that assumption. It argues that many consumers do not, in fact, trust companies’ digital practices because companies are not honest with them. As a result, consumers (to the extent that they have a meaningful choice) refuse to adopt certain services that feel intuitively scary, and they install blockers and VPNs to prevent tracking. Rather than strengthening trust as it intends, the law’s near-exclusive focus on preventing negative consequences overshadows the need to actively build and cultivate trust in the first place.

This chapter contends that while EU regulators are correct in identifying trust as the backbone of a flourishing digital market, their reliance on a risk-avoidance paradigm underestimates just how fragile consumer trust really is. To bolster genuine trust, lawmakers and businesses alike must move beyond defensive strategies and adopt affirmative duties of confidentiality, transparency, security, and loyalty. Only then can the EU achieve the sustainable, trust-based digital economy it envisions.

Our argument proceeds in two parts. In Part I, “The EU's trust preservation strategy,” we examine the European Union's current approach to maintaining consumer trust in digital products through three key mechanisms. We begin by analyzing the EU's data privacy express consent framework, which requires companies to obtain voluntary affirmative consent from users. While this approach improves upon basic notice-and-choice models, we demonstrate how it fails to address fundamental issues of user comprehension and meaningful choice, particularly given the overwhelming volume of policies users encounter. We then examine the EU’s risk mitigation framework, showing how its limitations prevent it from fully addressing trust concerns. Finally, we explore the EU's attempts to regulate manipulative practices, revealing how these well-intentioned regulations fall short of their trust-building goals.

In Part II, “From prevention to production: building trust through a principle-based approach,” we argue that merely preventing negative outcomes is insufficient to build meaningful consumer trust. Trust is not simply the absence of harm but a positive expectation of fair and ethical treatment in the face of one’s own vulnerabilities. It requires recognizing that individuals do not engage with digital services based solely on the elimination of risks or rational actor trade-offs, but also on the presence of affirmatively trustworthy practices. Trust has a normative dimension—it reflects the moral and social expectation that companies will act with integrity, honesty, fairness, and respect for users’ interests, rather than merely avoiding regulatory violations. Without this affirmative commitment, trust remains fragile, as consumers may perceive compliance as a legal necessity rather than a genuine commitment to their well-being. To strengthen trust in the EU digital ecosystem, we propose that European data protection law could be strengthened by incorporating additional principles. We examine four key principles: confidentiality reconceptualized as discretion, transparency understood as honesty, security reframed as protection, and the introduction of substantive loyalty obligations. For each principle, we analyze current EU legal frameworks, identify their limitations, and demonstrate how these concepts could enhance their effectiveness. We pay particular attention to loyalty, exploring its roots in U.S. law and demonstrating its potential application in European digital regulation. We conclude by offering a roadmap for incorporating these trust-building elements into EU digital law, creating a framework that not only prevents trust erosion but actively cultivates it.

16:00-17:30 Session 4C: Trust Dynamics around Emerging Technologies: Distrust and the Erosion of Trust

This track explores how AI and digital technologies are eroding public trust in foundational institutions - media, science, and the courts - by disrupting credibility, accountability, and authority.

Location: White Space
16:00
AI and the Erosion of Institutional Trust in Public Discourse

ABSTRACT. AI, and in particular generative AI, worsens pre-existing risks to public discourse (and, by extension, democratic dialogue) due to the unique way in which it changes the creation and flow of information. Combining AI’s ability to personalize content in content curation systems with generative AI’s unprecedented ability to produce human-like text, images, audio, and video merges (and therefore worsens) two pre-existing risks. This essay explores how (generative) AI exponentially amplifies these two risks, which society previously faced in isolation, by combining them. The first risk is legitimacy distortion: the outcome of a new scale of mis/disinformation is not so much believing wrong things as no longer trusting legitimate information. While human-made fake news and distrust of legitimate sources have long existed, generative AI worsens them because it (a) crosses mediums and (b) exponentially increases the amount of non-malicious false information. The breakdown of institutional trust through the dissemination of objectionable, false, or misleading information in different formats (text, audio, images, and video) leads, beyond the eventual belief of false information, to the disbelief of true information: people mistrust anything they read in text, hear in audio, or see on video. Generative AI poses a particular challenge to the trustworthiness of knowledge-producing institutions, such as professional journalism, because of the extent to which it reduces the cost of fabricating convincing yet inaccurate content of that type, such as investigative news articles. The dissemination of false content undermines public trust in legitimate sources that provide different (true) information, such as large news organizations. The proliferation of AI-generated content, including fabricated narratives and fake news, distorts public perception and undermines the credibility of authentic information sources such as traditional news outlets. As these false or deceptive stories become increasingly sophisticated and difficult to differentiate from authentic reporting, the public’s ability to distinguish between credible journalism and misinformation is compromised. Consequently, individuals become more distrustful of mainstream media outlets, fueling a climate of skepticism and uncertainty surrounding the veracity of (true) news content. The second risk is a new exacerbation of societal divides that combines two elements: an enhancement of radicalized factual narratives and a new, hidden dogmatization. The first element is driven by content personalization on platforms, extended by AI to the point where information caters to each individual – a worldview just for us. Because it is personalized, this magnifies the way pre-existing echo chambers and filter bubbles exacerbate societal divides by pulling people apart on the political spectrum. AI enables widespread factual disagreements that worsen polarization because people consume content through engagement-driven algorithms, which have incentives to display emotionally triggering content. Widespread disagreements thus extend more easily than before from normative views to facts, such as which political party won an election. The second element, new hidden dogmatization, is the progressive narrowing of the range of moral views held by others that people find acceptable or non-offensive.
AI content curation worsens judgment over moral disagreements because it shrinks echo chambers through the algorithmic curation of content tailored to individuals’ past content consumption behaviors and perceived beliefs. What is new is how the increased distance between viewpoints combines with this normative narrowing. This leads to a critique of the marketplace of ideas based on information as a credence good: the breakdown of true/false distinctions and institutional trust becomes fatal because information develops an adverse selection problem. That problem arises from the credence-good nature of information, where people cannot tell whether information is good as they consume it. Contrary to the marketplace-of-ideas ideal, in this institutional context sensationalized and polarizing content tends to be amplified more than other types of content, because AI-powered content curation systems prioritize engagement and click-through rates. Engagement-driven algorithms do more than merely recommend content among abundant alternatives; they shape information consumption behavior, nudging users toward content that triggers the strongest reactions (controversy, outrage, sensationalism) rather than balanced, high-quality suggestions. The psychology literature on cognitive biases and the spread of misinformation notes how confirmation bias and the illusory truth effect make individuals susceptible to false information that fits their prior beliefs. Doing away with the merged risk to public discourse (which is foundational to democratic dialogue) is unfeasible, but several measures can mitigate it. At the regulatory level, there are underrated transparency measures that would help. Watermarks for AI-generated content would mitigate misinformation, even if they do not resolve all forms of harm. Similarly, algorithmic impact assessments for high-risk uses of generative AI would provide transparency over processes that are difficult to predict. At the governance level, incentive alignment has room for improvement. Companies must be encouraged to produce AI detection systems that can identify and flag synthetic content and to prioritize the development of AI systems that promote diverse content consumption. Governments can align incentives for those purposes either through subsidies or through liability-inducing new obligations, such as information fiduciaries. Relatedly, at the cultural level, digital literacy that equips people to discern between credible and suspicious information in video, audio, and images is more pressing than before, as it already was for text.
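To make the curation mechanism concrete, the toy sketch below (ours, not the essay’s) contrasts a ranker that orders items purely by predicted engagement with one that subtracts a hypothetical viewpoint-repetition penalty of the kind the mitigation measures above point toward; the Item fields, the penalty weight, and the sample data are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float   # e.g. expected click-through or reaction rate
    viewpoint: str                # coarse label of the perspective expressed

def engagement_rank(items):
    """Rank purely by predicted engagement: the incentive described above."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)

def diversity_aware_rank(items, history_viewpoints, penalty=1.0):
    """Down-weight items whose viewpoint already dominates the user's recent diet."""
    def score(item):
        repetition = history_viewpoints.count(item.viewpoint) / max(len(history_viewpoints), 1)
        return item.predicted_engagement - penalty * repetition
    return sorted(items, key=score, reverse=True)

items = [
    Item("Measured policy analysis", 0.30, "centrist"),
    Item("Outrage-bait partisan clip", 0.90, "partisan_a"),
    Item("Opposing-view explainer", 0.40, "partisan_b"),
]
history = ["partisan_a", "partisan_a", "centrist"]   # what the user recently consumed

print([i.title for i in engagement_rank(items)])
print([i.title for i in diversity_aware_rank(items, history)])

With the engagement-only scorer the outrage-bait clip ranks first; the diversity-aware variant promotes the opposing-view explainer because the user’s recent diet is already dominated by the clip’s viewpoint. This is only a caricature of far more complex production systems, but it localises where the incentive problem sits: in the objective being maximised.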

16:20
Judicial AI: How Use of AI Decision-Making Tools May Degrade Trust in the Courts

ABSTRACT. Note: This Paper was produced in collaboration with the U.N. Special Rapporteur for Judicial Independence to inform her 2026 Report on the use of AI tools in global human rights courts.

Much attention has been given to lawyers’ increasing use of generative AI products in advocating for their clients, inviting public scrutiny where this use has fabricated legal precedents from whole cloth and compromised credibility. While this narrative has so far cast judges as the arbiters of reason, judges and judiciary workers are not immune to AI developers’ promises of increased efficiency and access to justice through these tools. This Paper builds on the small body of work examining judge’s reliance on AI in decision making contexts, including large language model (LLM)-powered products that attempt to summarize decades (and even centuries) of case law into outputs that may influence judges’ legal reasoning in concerning–and unethical–ways. In the United States in particular, this phenomenon may further erode the public’s declining trust in the judicial system by undermining judicial independence and, by extension, the rule of law.

This concern is not new, although most of the academic scrutiny has focused on legal advocates’ and caseworkers’ use of these tools rather than judges’. Presciently, in 2019 the French government banned the publication of all data analysis relying on public information to assess, analyze, compare, or predict how judges may make decisions. More recently, however, the widespread commercial roll-out of products like ChatGPT and more tailored legaltech makes this a pressing and timely research area.

The integration of AI tools into judicial systems promises to increase efficiency and access to justice by streamlining courts’ internal processes and tasks. Most judiciaries currently employing AI use it primarily for administrative or procedural tasks, like speech recognition and document classification. The rising trend, however, is for courts to adopt applications intended to assist judges’ decision-making processes. Unlike AI used for purely ancillary or administrative tasks, decision-support tools (hereinafter “decision-making AI”) can affect or influence judges’ adjudication process, including their legal reasoning and outcomes. In the case of predictive tools, which are also outcome-determinative, the extent to which they can affect judges’ decision-making risks undermining fundamental judicial values like judicial independence and the right to a fair trial.

Most existing literature on decision-making AI has centered on risk-assessment applications used by judges to evaluate criminal offenders’ risk of re-offending. Until now, there has been limited exploration of AI systems used by judges to predict the outcome of legal cases generally, in part because these tools and products are emergent. Decision-making AI tools aim to assist judges' decision-making by suggesting legal remedies based on analogous past decisions or estimating how other judges would decide a similar case. Considering the increasing availability of AI-based predictive functionalities, this Paper analyzes the risks these tools represent to core values that promote trust in judicial institutions.

We begin by distinguishing between the different types of AI applications used by the judiciaries and the degree of risk each type of AI system poses to fundamental judicial values. We differentiate between administrative or procedural tools and decision-making AI, which can, in turn, be separated into purely decision-support and outcome-determining tools. While the former may offer helpful assistance in organizing case information or doing initial legal research, the latter directly determines and shapes judges’ decision-making process. Considering that AI systems can have multiple functions, we draw the distinction based primarily on how these tools can be used in practice and their connection with the decision-making process, rather than their technical functionalities.

We then survey various predictive AI tools implemented by judiciaries worldwide, as well as general-purpose or legal-specific generative AI products that judges could also use for predictive purposes. We focus on their potential to influence judges to align with the opinions of the majority of their colleagues and to homogenize legal outcomes, which could undermine independent and case-by-case judicial judgment. We also analyze the opaqueness of decision-making AI and how this could impact parties' fair trial rights: their right to know about relevant case materials, the principle of equality of arms, the right to a reasoned judgment, and the presumption of innocence. Finally, we assess how biased training data and AI’s lack of reliability could affect the fairness of legal outcomes and the potential of predictive tools to affect parties' and judges’ personal data protection.

Ultimately, we argue that predictive tools should not be the basis of legal outcomes due to their negative effects on judicial values such as impartiality, transparency, and accountability. These tools can compromise the right to a fair trial by influencing or even replacing human judgment with automated predictions that may not fully account for the nuanced, context-dependent nature of legal decisions. Furthermore, the widespread use of decision-making AI and the public’s knowledge of such use can further diminish public trust and confidence in the judiciary–leaving the public with one less avenue of recourse when their rights are violated and emboldening political actors to contravene judicial authorities without consequence.

Related works:

Bell, F., Bennett Moses, L., Legg, M., Silove, J., & Zalnieriute, M. (2023). AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (SSRN Scholarly Paper 4162985). Social Science Research Network. https://papers.ssrn.com/abstract=4162985

Steponenaite, V. K., & Valcke, P. (2020). Judicial analytics on trial: An assessment of legal analytics in judicial systems in light of the right to a fair trial. Maastricht Journal of European and Comparative Law, 27(6), 759–773. https://doi.org/10.1177/1023263X20981472

16:40
Digital Innovations and Trust in Science: Paradoxes, Problems, and Possibilities

ABSTRACT. Digital innovations have markedly improved access to scientific knowledge, democratizing the scientific process from start to finish. Non-scientists can now follow scientific developments through specialized journalism, access published research through Open Access journals and shadow libraries, review experimental data on hosting platforms, and engage in global discussions about scientific findings on social media. Proponents of digital science anticipated that these advances would enhance public trust in science. Paradoxically, despite unprecedented openness, trust in science has declined to troubling levels globally. This paper examines this paradox through a three-part analysis: first, conceptualizing scientific trust and distrust across multiple dimensions; second, analyzing how digital innovations have inadvertently undermined scientific trust; and finally, exploring how existing and emerging technologies, including artificial intelligence, might be leveraged to rebuild trust in the scientific enterprise.

17:00
Learning to Distrust: Pedagogical Interactions in Anti-5G Movement Recruitment Processes

ABSTRACT. Despite normative perspectives that frame institutional distrust as a ‘bad’ thing in democratic societies, social movement scholars have noted that institutional distrust can help activists to target specific opponents and develop a sense of collective identity. While that work has shown what activists do with an existing sense of distrust, this article explores the formation of institutional distrust within a movement context. Using 18 months of digital ethnographic research on the anti-5G conspiracy movement, it argues that distrust formation is a key component of micromobilization in which new recruits internalize the movement’s culture of distrust through interactive pedagogical processes. These pedagogies help to shape a distrustful disposition in new recruits that sustains their participation by transforming movement perspectives and practices into a lifestyle. This research shows that distrust formation is a highly social process that has implications for how institutional distrust is understood, both in radical movement contexts and beyond.

17:20
Meta’s Reckless Cash Grab: AI-Driven Impersonation Scam Ads Exploiting Trust for Financial Fraud

ABSTRACT. The online fraud market has experienced significant growth worldwide, fueled by technological advancements such as instant payment systems, microtargeted advertising and generative Artificial Intelligence (AI) (Dhaliwal, 2025; Silverguard & SOS Golpes, 2024; Sumsub, 2024; Volkova, 2025). Indeed, recent scholarship shows that online scam ads take advantage of digital platforms’ lack of oversight and accountability (Anderson et al., 2024; Andrejevic et al., 2025; Leong et al., 2024). In violation of both local laws and platform policies, fraudsters are profiting from ads that purport to be served on behalf of the government, exposing vulnerabilities in Meta’s advertiser verification processes. In fact, scammers continually seek new high-profile public figures to impersonate, leveraging trust in these individuals to enhance the appeal of their schemes (Algarni et al., 2014). Against this backdrop, this study aims to examine how scammers leverage digital advertising infrastructures to undermine public trust in regulatory frameworks, governmental institutions and financial governance by impersonating public figures and serving microtargeted scam ads. Based on ads served on Instagram and Facebook, this study investigates how these platforms, functioning as technological trust mediators (Bodó, 2020), facilitate the spread of fraudulent financial narratives. As a case study, we examine a political crisis in Brazil in January 2025, when the government introduced measures requiring greater transparency in instant payment transactions to combat financial crimes. In response, right-wing parliamentarians and pundits launched a disinformation campaign claiming the government would impose new fees on instant financial transactions (UOL, 2025). Among them, congressman Nikolas Ferreira, elected with the highest number of votes in Brazil’s history in the 2022 elections, fueled public outrage with populist rhetoric, contributing to tensions that led to the government’s retreat from the measure. The Brazilian case is particularly critical, providing a key example of how trust is shaped in digital environments. This is partly due to the population’s evolving patterns of digital content consumption: according to 2024 data, news consumption in the country has increasingly shifted away from traditional media outlets toward online sources, including social media platforms (Newman et al., 2024). Additionally, like other Global South countries, Brazil faces significant challenges in enforcement, regulatory advancement, and researchers’ access to platform data (Santini et al., 2024). In this context, Brazil serves as a testing ground for the critical effects of an information environment with weak oversight, highlighting the commercial dynamics of disinformation, the role of platforms in amplifying this market, and the extent to which it impacts citizens’ lives. To collect advertising data from Meta’s platform ecosystem, we used the company’s Ad Library user interface (Meta, n.d.-a). In Brazil, this process is particularly challenging because systematic data collection is only available for ads about social issues, elections or politics, while other ads are only displayed while they are being served to users. Previous research shows that harmful advertisers do not identify themselves correctly, even when their ads fall into this category, in order to evade verification processes and public scrutiny (Santini et al., 2023).
Since researchers must find ways to archive harmful content while it is available, turning the process into a sort of Whac-A-Mole game, we developed a web scraper to extract data on non-political ads from the library during their active period. We identified scam ads about public policies aimed at financial inclusion between January 10 and 21, 2025, covering both the peak of these discussions and their aftermath. Our archive includes unique ad identifiers, the start date of ad delivery, the platforms where the ads were displayed, their various versions, the advertiser’s page name and unique identifier, the ad’s text content and other associated media, and the ad’s redirect page. We then conducted a systematic content analysis of each ad to identify the public policies, real or fabricated, exploited by fraudsters, the public figures depicted, and the brands they tried to associate themselves with. Moreover, we specifically examined ads in which advertisers impersonated official government channels. Finally, by identifying patterns such as audio generation errors, robotic speech, and noticeable pronunciation mistakes, as well as by retrieving the original material manipulated by the advertisers, we assessed whether the ad content had been generated or manipulated using generative AI tools. We identified 1,770 scam ads referring to financial public policies, boosted by 151 advertiser pages. The majority of the fraudulent ads promoted alleged entitlements to government payouts: 95.5% (1,656) of the ads falsely promised the release of funds in exchange for the prepayment of a service fee. Pages impersonating official Federal Government profiles, either by using institutional names or by displaying profile pictures associated with the federal administration, ran 718 ads (40.5%). The credibility of federal public institutions was also exploited through ads featuring logos of the public bank Caixa Econômica Federal (9.1%, or 162 ads), the Central Bank (17.8%, or 315 ads), and the Federal Revenue Service (15.5%, or 275 ads). Advertisers also mentioned media outlets and private financial institutions, including traditional banks and fintechs. A trend emerged after the repeal of the financial regulation on January 15: between January 16 and 21, 1,018 ads (57.5%) were promoted, a 35.4% increase compared to the previous six days. While the new regulation was still in effect, the ads followed similar scripts: public figures, particularly journalists and politicians, informed the population about an alleged withdrawal of forgotten funds, emphasizing urgency and warning that unclaimed money would soon be “returned to government coffers”. Their images were manipulated with generative AI models and tools, which were used in 70.3% (1,244) of the analyzed ads. None of the ads presented a disclaimer about generative AI use, which Meta requires for political ads in Brazil (Meta, n.d.-b). The repeal of the Federal Revenue Service regulation on January 15 triggered a shift in the irregular content promoted by advertisers. Within hours, ads featuring deepfakes of Nikolas Ferreira began circulating intensely, reflecting his prominent role in the public pressure campaign for the repeal (Firpo, 2025). By positioning regulatory scrutiny as a barrier to financial autonomy, these ads manipulate users into perceiving unregulated fintech schemes as legitimate alternatives to traditional banking. Manipulated content featuring Ferreira spread widely, illustrating how scammers exploit trending topics to attract new victims.
These ads were not entirely fabricated: their opening segment was taken from a video the congressman posted on social media celebrating the repeal announcement. This was followed by a manipulated audio clip falsely claiming the government had introduced a new measure granting partial reimbursements to anyone who had recently used a credit card. Ads featuring deepfakes of Ferreira increased by 234% after the regulation’s repeal, from 129 to 432 in six days. Our data reveal a shift in the strategy of scam ads: as the government’s image became increasingly destabilized (Luciano & Fleck, 2025) and Ferreira’s profile gained prominence through his campaign against the administration, scam ads capitalized on his rising influence. These ads reflect an understanding that, in a chaotic information environment, public trust can be continuously redirected according to shifting power dynamics and popularity, and they benefit from this instability to adapt their strategies for targeting victims. Our research reveals illegal activities, inconsistencies in Meta’s policies, and how scammers and platforms profit from disinformation at the expense of users and public policies. Furthermore, our evidence highlights the latent contradictions of the online scam industry: on one hand, it takes advantage of public figures to gain audience trust and lend credibility to irregular content; on the other, it generates informational noise that fosters skepticism about public policies and contributes to disinformation on matters of public interest. Scam ads do not merely sell fraudulent products; they exploit distrust in mainstream media, governments, and financial institutions to lure users into deceptive schemes. These ads mimic the language of political dissent, presenting their offers as revelations that challenge a corrupt system – when, in reality, they reinforce the very structures of economic exploitation they claim to resist (Andrejevic et al., 2025). Building upon Andrejevic et al. (2025), we argue that the fusion of populist rhetoric and digital marketing creates an ecosystem in which fraudulent financial solutions appear more credible than regulatory safeguards. That is, the convergence of political populism, conspiracy theories, and digital marketing represents a broader assimilation of politics into commercial logics. This process ultimately distorts the public’s ability to distinguish between trustworthy financial services and exploitative scams, exacerbating financial insecurity and reinforcing skepticism toward institutions.
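Purely as an illustration of the data model the study describes (the authors publish no schema, so every field and function name below is our assumption), the archived ad records and the percentage figures reported above could be structured along these lines:

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ScamAdRecord:
    ad_id: str                                  # unique ad identifier
    delivery_start: date                        # start date of ad delivery
    platforms: List[str]                        # e.g. ["Facebook", "Instagram"]
    versions: List[str]                         # the ad's alternative creative versions
    page_name: str                              # advertiser page name
    page_id: str                                # advertiser page unique identifier
    text_content: str                           # the ad's text content
    media_urls: List[str]                       # other associated media
    redirect_url: Optional[str]                 # the ad's redirect / landing page
    # Codes added during the content analysis:
    impersonated_entity: Optional[str] = None   # e.g. "Federal Government", "Central Bank"
    public_figures: List[str] = field(default_factory=list)
    likely_genai: bool = False                  # flagged via audio errors, robotic speech, etc.

def share(records: List[ScamAdRecord], predicate) -> float:
    """Percentage of archived ads matching a coding criterion (assumes a non-empty archive)."""
    return 100 * sum(predicate(r) for r in records) / len(records)

# Example queries over a hypothetical archive of such records:
#   share(archive, lambda r: r.likely_genai)                            -> ~70.3 in the reported corpus
#   share(archive, lambda r: r.impersonated_entity == "Central Bank")   -> ~17.8

Keeping the raw archive fields separate from the analyst-assigned codes mirrors the two-step procedure the abstract describes: scraping ads while they are live, then conducting the systematic content analysis afterwards.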