Keynotes panel
AI: Panacea or pain? ABSTRACT. TBA |
12:00 | Information Ethics Education for Business People PRESENTER: Kiyoshi Murata ABSTRACT. This study addresses how information ethics education for business people should be established. Despite the fact that a large majority of computing activities — the development, operation and use of information and communication technology (ICT)-based information systems — are carried out by business organisations, an effective and well-organised educational programme of information ethics for business people, including engineers, non-technical users and administrators, has not been developed and implemented, whereas there have been many attempts to propose proper computer ethics education programmes for computer engineers and computer science and engineering students (see, e.g., Reich et al., 2020; Stavrakakis et al., 2022; Brown et al., 2024). Worse still, business people seem not to be well aware of the meaning and social significance of ethical initiatives in a business setting. This means an information ethics education programme for business people would not necessarily work well unless it is developed taking such a reality into consideration. In Japan, since the end of the 1990s, the practices of business ethics, including "compliance" and "corporate social responsibility (CSR)", have been engaged in by many companies, centred on large corporations, mainly to avoid public criticism of their business activities. Recently, reflecting the increasing social interest in environmental sustainability, "environmental, social and corporate governance (ESG)" and "the sustainable development goals (SDGs)" have been added to the agenda of business ethics. Business consultants and news media often argue that companies' approaches to these are important as a response to investors who are sensitive to environmental concerns. "Ethical, legal and social issues/implications (ELSI)" and "responsible research and innovation (RRI)" are increasingly emphasised as issues that organisations involved in technology development and deployment should address to avoid blame. On the other hand, the boundary of business ethics has blurred as various issues, such as diversity and inclusion in the workplace and data protection, come to be considered business ethical issues. Indeed, in Japan, many companies have depended on "business ethics solution packages" that consulting firms provide to help them address such issues properly, demonstrating their vague or trivialised understandings of ethics in a business context. Nonetheless, most people do not cast doubt on the significance of business ethics for their companies' reputation management and profitability. However, if business people believe that they should act based on enlightened self-interest or that corporate ethical behaviour is necessary for ensuring (at least long-term) profitability of their companies through responding to social expectations and demands (regarding a logic of this sort, see, e.g., Porter and Kramer, 2002, 2006; Kotler and Lee, 2005; Lawrence and Weber, 2013), they may consider that ethics is not an end but a means in business, and business ethics would no longer deserve to be called ethics. If this is the case, it is hard to expect that business organisations, which play a pivotal role in the development, operation and use of ICT-based information systems, will proactively address issues related to information ethics and maintain responsible attitudes and behaviour towards computing.
Actually, there are many ICT companies which maintain the attitude of "innovate first, consider consequences afterwards". What features should an information ethics education programme for business people have, then? The results of a questionnaire survey that the authors conducted online in July and August 2023 with three groups of Japanese respondents — professional engineers, who were certified professional engineers working for companies; ordinary employees, who were full-time, non-executive company employees without any professional qualifications; and business students, who were 3rd- and 4th-year undergraduate business students — show that: (a) Professional engineers were statistically more aware of the business ethics-related initiatives, except ELSI and RRI, than business students and ordinary employees. This presumably owed much to the continuing professional development programmes, including engineering ethics instruction courses, they were obliged to participate in to retain and improve their knowledge, abilities and skills as professionals. (b) Ordinary employees understood compliance and business ethics as well as business students did, whereas business students understood CSR, ESG and the SDGs better than ordinary employees. The educational level of ordinary employee respondents was found to affect the level of their awareness of business ethics-related initiatives. They tended to underestimate the importance of each business ethics-related initiative more than professional engineers and business students did, and they lacked an understanding of the rationale for ethics in a business setting as well as a sense of ownership over ethical issues in a business context. (c) There was no statistically significant difference in the awareness of ELSI and RRI among the three samples. The majority of respondents in each sample had not heard these terms, even though many current business activities are technology-dependent. (d) Most respondents in each sample considered that a company should address business ethics-related initiatives as long as dealing with such initiatives did not worsen the company's long-term profit structure. Meanwhile, ordinary employees and business students tended to emphasise securing short-term profit over business ethics-related initiatives, whereas a majority of professional engineers expressed opposing views. These results suggest that the development and implementation of a suitable information ethics education programme for ordinary business employees is an urgent necessity. Given that ordinary employees have already become computing practitioners thanks to the advancement of user-friendly applications, including AI-based ones, and given their lack of awareness of business ethics, effective in-house information ethics education has to be provided to make them sensitive to existing or potential ethical issues accompanying the development, operation and/or use of ICT-based information systems and to give them sufficient knowledge of and a correct understanding of the meaning of information ethics in a business setting. This education should also cultivate their capacities and skills to consider and act ethically in a business setting as an organisational member and simultaneously as a citizen. A case method approach may help them acquire these capacities, if there is a good mentor.
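To make the kind of between-group comparison reported above concrete, the following is a minimal illustrative sketch (not the authors' actual analysis or data) of how awareness ratings from the three respondent groups might be compared with a non-parametric test; all scores and group sizes below are hypothetical.

```python
# Illustrative sketch only: hypothetical 5-point awareness ratings for the three
# respondent groups described in the abstract; not the authors' data or code.
from scipy.stats import kruskal

awareness = {
    "professional_engineers": [5, 4, 5, 4, 4, 5, 3, 4],
    "ordinary_employees":     [2, 3, 2, 3, 1, 2, 3, 2],
    "business_students":      [3, 3, 4, 2, 3, 4, 3, 3],
}

# Kruskal-Wallis H-test: a common non-parametric check for differences in
# ordinal (Likert-type) responses across more than two independent groups.
h_stat, p_value = kruskal(*awareness.values())
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group differs in reported awareness.")
```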
Because the declarative and procedural knowledge necessary for perceiving and considering ethical issues related to computing should be updated on a constant basis, continuing education opportunities have to be provided. Ethical principles may have to be mentioned as part of the knowledge. However, it should also be taught that those principles are like a perfume: they should not be swallowed, but just be smelled. Any kind of indoctrination has to be avoided. The educational content should flexibly and appropriately be set according to the knowledge level of the target audience. Sufficient on- and off-the-job training opportunities for learning information ethics need to be provided to them. Moreover, professional engineers are expected to play a pivotal role in fostering an ethical mindset among ordinary employees in their companies. Business organisations have to establish the necessary internal institutions to encourage and support employees to consider and behave ethically, such as the appointment of a chief ethics officer who is in charge of developing an ethical corporate culture and setting up well-organised information ethics education programmes. In terms of the development and operation of information systems, building and operating the ICT governance system of "DevEthicsOps" — the unification of development, operations and ethical practices — should be addressed, going beyond Ethics by Design (Brey and Dainow, 2023), which can give the misconception that ethical approaches to technology are completed at the time of design. Such a governance system would be helpful in preventing the threatening effects on ethical organisations that Kaptein (2023) pointed out. References Brey, P. and Dainow, B. (2023). Ethics by design for artificial intelligence. AI and Ethics. Brown, N., Xie, B., Sarder, E., Fiesler, C. and Wiese, E. S. (2024). Teaching ethics in computing: A systematic literature review of ACM computer science education publications. ACM Transactions on Computing Education, 24(1), 1-36. Kaptein, M. (2023). A paradox of ethics: Why people in good organizations do bad things. Journal of Business Ethics, 184, 297-316. Kotler, P. and Lee, N. (2005). Corporate Social Responsibility: Doing the Most Good for Your Company and Your Cause. Hoboken, NJ: John Wiley & Sons. Lawrence, A. and Weber, J. (2013). Business and Society: Stakeholders, Ethics, Public Policy (14th ed.). New York: McGraw Hill. Porter, M. E. and Kramer, M. R. (2002). The competitive advantage of corporate philanthropy. Harvard Business Review, 80(12), 78-92. Porter, M. E. and Kramer, M. R. (2006). Strategy and society: The link between competitive advantage and corporate social responsibility. Harvard Business Review, 84(12), 73-75. Reich, R., Sahami, M., Weinstein, J. M. and Cohen, H. (2020). Teaching computer ethics: A deeply multidisciplinary approach. SIGCSE '20: Proceedings of the 51st ACM Technical Symposium on Computer Science Education, 296-302. Stavrakakis, I., Gordon, D., Tierney, B., Becevel, A., Murphy, E., Dodig-Crnkovic, G., Dobrin, R., Schiaffonati, V., Pereira, C., Tikhonenko, S., Paul Gibson, J., Maag, S., Agresta, F., Curley, A., Collins, M. and O'Sullivan, D. (2021). The teaching of computer ethics on computer science and related degree programmes: A European survey. International Journal of Ethics Education, 7, 101–129. |
12:30 | Social Risks of Brain Machine Interface Usage: Questionnaire Survey for People with and Without Disabilities PRESENTER: Yohko Orito ABSTRACT. In recent years, brain-machine interfaces (BMIs) have been developed and have found practical applications, in particular, for people with disabilities. However, ethical concerns, such as the protection of mental privacy, and other social issues related to such BMI usage remain underexplored and are often overlooked. To fill these gaps, a questionnaire survey of people with and without disabilities was conducted. This study attempts to identify the overall tendencies and differences between the responses of the two groups in terms of the usefulness, risks and ethical concerns of BMIs based on the survey results. |
13:00 | Beyond Regulation and Moderation: a Forster-Inspired Framework for Machine Evolution PRESENTER: Leah Rosenbloom ABSTRACT. We discuss two technological catastrophes—one imagined in a brilliant fiction, The Machine Stops, written more than a century ago by E.M. Forster; the second having played out in real life in the past two decades culminating in the resurrection of a tech billionaire-backed authoritarian to one of the most powerful executive offices in contemporary world government. Forster’s premonitory intuition concerning the fragility of massive and complex technological systems provides a rich retrospective on problems in computer and information ethics. The symbiotic alliance between billionaire social media owners and authoritarians, which Rosenbloom and Fleischman named the propaganda-industrial complex (PropIC) [19], has escalated over the past several years. Platforms have moved from a performative and exploitative treatment of content moderation, through the cynical abandonment of content moderation, towards outright support for fascist White supremacist capitalist cisheteropatriarchy. This pivot has openly and explicitly cemented platform support for authoritarian propaganda. We discuss The Machine Stops and its place in the curriculum of computer ethics. We identify a connection between The Machine in Forster’s story and contemporary social media platforms that forms a bridge to the second IRL catastrophe. We analyze the current state of the PropIC and propose a pivot of our own: from ineffective top-down regulatory interventions to human-scale abolitionist interventions, which can grow resistance and resilience to The Machine from the bottom up. |
12:00 | AI Ethics in Higher Education: a Review of Ethical Challenges PRESENTER: Laercio Cruvinel ABSTRACT. The integration of artificial intelligence (AI) in higher education, encompassing both traditional AI systems (automatic or semi-automatic learning processes) and generative AI (such as large language models like ChatGPT), has transformed teaching, learning, and administration. While adaptive learning systems and generative AI tools enable personalized education and improved efficiency, they also raise ethical concerns, including privacy, bias, and over-reliance. However, there is a lack of guidelines to ensure transparency and equity in automated evaluation processes. Current studies focus on the pedagogical benefits of AI, but end up leaving aside important issues, such as algorithmic bias and the impact on academic integrity. The central contribution of this manuscript is to articulate a comprehensive ethical and security-oriented perspective on the integration of computing technologies and the Internet, offering evidence-based practices and policy suggestions to guide institutions toward more accountable and human-centered digital strategies. |
12:30 | Students' Perception of the Integration of GenAI in Academic Paper Assignment Preparation PRESENTER: Katerina Zdravkova ABSTRACT. The rapid rise of generative artificial intelligence (GenAI), particularly large language models (LLMs), has transformed education, prompting educators to explore ways of integrating these tools into academic workflows. While much literature discusses their potential and challenges, little research has focused on students' experiences with its integration into university environments. This study presents a case from a Computer Ethics course at FSCE, where students completed three assignment types: fully student-written, fully AI-generated, and hybrid. A total of 100 students participated in a post-course survey exploring their experiences with this structured use of GenAI. The survey included both quantitative and qualitative components, capturing student engagement, challenges, ethical concerns, and perceptions of GenAI's role in learning. Findings show a generally positive attitude toward GenAI, with 62.79% expressing satisfaction with the structured integration. However, concerns about academic integrity, hallucinations, and process complexity remain. The study highlights the need for transparent guidelines, ethical considerations, and clear instructional support when implementing GenAI tools. By centering student voices, this research provides empirical insights into the operationalization of GenAI in higher education and offers practical recommendations for ethically sound and pedagogically meaningful integration. Future research will explore long-term effects and disciplinary variations in student engagement with AI-assisted learning. |
13:00 | Challenging AI as Critical Thinking ABSTRACT. Generative AI systems, such as LLMs, have rapidly become popular tools for students and professionals alike. While the results of these systems can often be useful due to their underlying statistical properties, these tools can produce erroneous and harmful results if not evaluated critically. In this work, we discuss the nature of LLMs, their potential benefits for education, and their flaws. In particular, we highlight both epistemological and moral concerns that users should heed when using them. To address these concerns, instructors should model how to interact with these tools as a form of critical thinking. Our goal in this article is to provide the necessary context and potential directions for using these tools successfully to augment the social process of education. |
12:00 | Research-Based Theater as an Engaging and Effective Method for Promoting AI Literacy? – Insights from Audience Responses ABSTRACT. AI'S SOCIETAL IMPLICATIONS & THE NEED FOR AI LITERACY Generative artificial intelligence (AI), especially chatbots powered by large language models (LLMs), enables individuals to increasingly rely on automation to support their decisions by answering questions, offering information or even providing concrete moral guidance (Aharoni et al., 2024), for instance, in the contexts of healthcare, criminal justice or personal matters (Awad & Levine, 2020). Such systems, used to support decisions – especially those that ultimately affect core human values such as 'safety' or 'fairness' – can have far-reaching societal implications, including issues like responsibility gaps or moral deskilling (Poszler, 2024). Given the societal significance of these developed AI applications, it is particularly important to inform the public about the results of pertinent scientific research, which is also acknowledged by regulatory initiatives. For example, the Pact for Research and Innovation IV (2021–2030) emphasizes science communication as a key objective for research institutions at the national level in Germany (Wissenschaftsrat, 2021), while the EU AI Act stresses the importance of taking measures to ensure AI literacy on a global level (European Commission, 2024). Universities and research institutions, as key knowledge contributors, hold a unique responsibility in this endeavor (Wissenschaftsrat, 2021). LIMITS OF TRADITIONAL SCIENCE COMMUNICATION & INNOVATIVE METHODS OF KNOWLEDGE TRANSFER Science communication, especially in technical domains, faces various challenges, such as the often difficult-to-understand technical language in academic publications, which can reduce the public's interest in scientific texts (Dietrich et al., 2024). To make scientific debates accessible beyond the academic community, innovative methods are needed through which civil society can stay informed about up-to-date research and engage with scientists (WBGU, 2019). In this endeavor, arts – an important reference for social knowledge and inclusion – can become a key enabler to ensure human-centric, participatory discussions around the design of pertinent technological innovations (Guryanova et al., 2020). More specifically, arts-based methodologies such as research-based theater, ethnodrama or the playwright-approach "combine research and theater to create novel opportunities for inquiry and knowledge translation" (Nichols et al., 2022, p. 1). Especially in the context of AI, the use of theater is valuable for facilitating enlightenment about, and the human-centricity of, emerging technologies (Fukuyama, 2018). First pertinent efforts have been made in the field of human-computer interaction, where researchers have started to utilize immersive theater performances to test prototypes, rapidly learn about future technology and design it accordingly (Luria et al., 2020). Although initiatives like these are fundamentally aimed at expanding societal knowledge in the domain of AI, the explicit evaluation of the effectiveness of knowledge transfer is often neglected. Without evaluating the actual knowledge gained by the audience, the contribution of such arts-based approaches to fostering AI literacy remains merely an assumption.
Therefore, this study aims to assess the effectiveness of an exemplary research-based theater initiative by conducting accompanying research through post-performance audience surveys and interviews to ultimately address the following key question: How effectively can research-based theater be used as a tool to promote a participatory approach in AI ethics research and enhance AI literacy among the audience? THE RESEARCH-BASED THEATER INITIATIVE & ITS ACCOMPANYING RESEARCH The initiative implements a creative approach to conducting, educating, and communicating AI ethics research through the lens of the arts (i.e., research-based theater). The core idea revolves around conducting qualitative interviews and user studies on the impact of large language model (LLM)-based chatbots on human ethical decision-making. It focuses specifically on exploring the potential opportunities and risks of employing these systems as aids for ethical decision-making, along with their broader societal impacts and recommended system requirements. Generated scientific findings will be translated into a theater play, which will be performed in the theater hall in Germany. This performance seeks to effectively educate civil society on up-to-date research in an engaging manner and facilitate joint discussions (e.g., on necessary and preferred system requirements or restrictions). The insights from these discussions, in turn, are intended to inform the scientific community, thereby facilitating a human-centered development and use of LLMs as moral dialogue partners or advisors. To evaluate the effectiveness of the performance, accompanying research will be conducted by capturing audience responses through a post-performance survey including questions scored on a Likert scale with a space for open-ended comments as well as through semi-structured interviews via Zoom (adopted from Belliveau & Nichols, 2017). The questions posed to the audience will center on how their understanding, learning, and awareness of the responsible use of LLM-based chatbots as moral advisors have changed after watching the play. Additionally, the questions will explore their views on the effectiveness of the play as a tool for promoting AI literacy, their preferred system requirements for developing LLM-based chatbots as well as demographic details such as age and occupation. The collected data will be analyzed using a mix of quantitative statistical analyses and qualitative content analysis (Gioia et al., 2013). CONCLUSION To summarize, through post-performance audience surveys and interviews, this study aims to shed light on how much the performance influenced the audience's opinions, knowledge, and awareness regarding the societal implications of pertinent AI systems as well as on their preferred system requirements and the relevant values underlying the development of LLM-based chatbots. Generated findings can inform technology companies about appropriate and socially preferred system requirements to consider in order to ensure the development of trustworthy systems. For scholars and theater practitioners, the findings generated and the evaluation measures used in this study can serve as a foundation and proof of concept to legitimize and encourage the adoption of similar arts-based research projects and creative methods for science communication in the future.
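As a purely illustrative sketch of the quantitative side of such an evaluation (column names and responses below are invented for illustration, not the initiative's actual instrument or analysis), post-performance Likert items might be summarised as follows before the open-ended comments are coded qualitatively.

```python
# Illustrative sketch: summarising hypothetical post-performance survey items.
# Column names and values are invented; the study's real survey and analysis
# may differ.
import pandas as pd

responses = pd.DataFrame({
    "understanding_change": [4, 5, 3, 4, 5],  # 1 = no change, 5 = strong increase
    "awareness_change":     [3, 4, 4, 5, 4],
    "play_effectiveness":   [5, 4, 4, 5, 3],
})

# Simple descriptive summary of the Likert items; open-ended comments would be
# analysed separately (e.g. following the Gioia methodology cited above).
print(responses.agg(["mean", "std", "count"]))
```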
Overall, the initiative and its accompanying research may constitute the beginning of transforming scientific communication and inquiry in the field of AI ethics, with the humanities and cultural sciences serving as critical methodologies to bridge the gap between academia and society and promote AI literacy. References 1. Aharoni, E., Fernandes, S., Brady, D. J., Alexander, C., Criner, M., Queen, K., ... & Crespo, V. (2024). Attributions toward artificial agents in a modified Moral Turing Test. Scientific Reports, 14(1), 8458. 2. Awad, E., & Levine, S. (2020). Why we should crowdsource AI ethics (and how to do so responsibly). Behavioral Scientist. 3. Belliveau, G., & Nichols, J. (2017). Audience responses to Contact! Unload: A Canadian research-based play about returning military veterans. Cogent Arts & Humanities, 4(1), 1351704. 4. Dietrich, A., Liepelt, R., & Sperl, L. (2024). Die Wissenschaftsautor*innen Ihres Vertrauens – Über die Hürden von Wissenschaftskommunikation. The Inquisitive Mind. 5. European Commission (2024). REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689 6. Fukuyama, M. (2018). Society 5.0: Aiming for a new human-centered society. Japan Spotlight, 27(5), 47-50. 7. Gildin, M., Binder, R. O., Chipkin, I., Fogelman, V., Goldstein, B., & Lippel, A. (2013). Learning by heart: Intergenerational theater arts. Harvard Educational Review, 83(1), 150. 8. Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods, 16(1), 15-31. 9. Guryanova, A. V., & Smotrova, I. V. (2019). Transformation of worldview orientations in the digital era: humanism vs. anti-, post- and trans-humanism. In International Scientific Conference "Digital Transformation of the Economy: Challenges, Trends, New Opportunities" (pp. 47-53). Springer, Cham. 10. Luria, M., Oden Choi, J., Karp, R. G., Zimmerman, J., & Forlizzi, J. (2020). Robotic Futures: Learning about Personally-Owned Agents through Performance. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (pp. 165-177). 11. Nichols, J., Cox, S. M., & Guillemin, M. (2022). Road Trips and Guideposts: Identifying and Navigating Ethical Tensions in Research-Based Theater. Qualitative Inquiry, 10778004221099972. 12. Poszler, F. (2024). Integrating ethical principles into AI systems: Practical implementation and societal implications (Doctoral dissertation, Technische Universität München). 13. Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen (WBGU) (2019). Unsere gemeinsame digitale Zukunft - Empfehlungen. Available at: https://www.wbgu.de/fileadmin/user_upload/wbgu/publikationen/hauptgutachten/hg2019/pdf/WBGU_HGD2019_Empfehlungen.pdf 14. Wissenschaftsrat (2021). Wissenschaftskommunikation: Positionspapier. Available at: https://www.wissenschaftsrat.de/download/2021/9367-21.pdf?__blob=publicationFile&v=10 |
12:30 | Artificial Intelligence in Scientific Research: the Missing Link of Literacy for Security Concerns ABSTRACT. In May 2024, the Council of the European Union issued the Council Recommendation on enhancing research security, stating that "with growing international tensions and the increasing geopolitical relevance of research and innovation, the Union's researchers and academics are increasingly exposed to risks to research security when cooperating internationally" (Council of the EU, 2024, 2). The Council of the EU deems it necessary to act at the EU level in order to "protect the integrity of the ERA [European Research Area], while respecting the competences of Member States for going further, for example by developing regulatory frameworks" (Council of the EU, 2024, 9). In addressing these concerns, the Recommendation emphasises the importance of balancing the openness of international collaboration (Paseri, 2024a) with robust measures to safeguard the security of research. The Recommendation supports the establishment of common standards and guidelines for all Member States to address risks such as intellectual property theft, misuse of research results and interference by foreign actors. The Council aims to promote a resilient ERA that can thrive in an increasingly complex and competitive global landscape, avoiding "risk of undesirable transfer of critical knowledge and technology […] affecting the security of the Union and its Member States" (Council of the EU, 2024, 4). The Council Recommendation on enhancing research security identifies artificial intelligence (AI, hereinafter) as a matter of priority, along with three other critical technology areas (i.e., semiconductors, quantum, and biotechnologies). This is the case both when considering an AI system as a tool and as a result of a research project. In the former case, an AI system is used by researchers in order to implement a specific research project, becoming instrumental in obtaining the results (Paseri, 2024b, 59). In the latter case, artificial intelligence is the area of investigation and an AI system the result of a research project (Paseri, 2024b, 60). Both the use and development of AI systems in the context of scientific research represent sensitive scenarios due to the high stakes involved. Thus far, several studies have been conducted on the impact of AI and generative AI (gen AI, hereinafter) on openness and collaboration in science (Beck et al., 2022; Wang and Barabási, 2021). The positions on the issue tend to be polarized – in Umberto Eco's words – between apocalyptic and integrated intellectuals (Eco, 1964). On the one hand, over-enthusiastic scholars have gone so far as to claim that "by harnessing the power of AI we can propel humanity toward a future where groundbreaking achievements in science, even achievements worthy of a Nobel Prize, can be fully automated. We believe that this is achievable by the year 2050" (King et al., 2024, 716). On the other hand, more and more scientific journals have updated their ethical guidelines identifying the use of undeclared AI or gen AI as forms of scientific misconduct (Kumar et al., 2024).
In order to handle this sensitive matter and to guarantee research security, the Council first recalls the framework of the EU Security Union Strategy, which proposes a three-pronged approach based on "promotion of the Union's economic base and competitiveness; protection against risks; and partnership with the broadest possible range of countries to address shared concerns and interests" (EU Security Union Strategy, 2023). Further, the Council proposes a series of recommendations addressed to research entities; to the Member States called upon to act at the regulatory level; and to the European Commission in its coordinating role in the field of scientific research policies (Council of the EU, 2024, 13-16, 19, 23). Against this backdrop, what is lacking in the debate on research security and the use of AI in science is literacy, understood as the ability "in both the human and technological dimensions of AI, understanding how it works in broad terms, as well as the specific impact of Gen AI" (UNESCO, 2023, 24). In addition to the shortcomings of the Artificial Intelligence Act (AI Act, 2024) (Pagallo, 2024, 64-65), it is worth noting that the European lawmaker pays attention to the issue of AI literacy. As stated in Article 4 of the AI Act, "Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used". This provision is particularly relevant for the use of AI in scientific research, where the complexity of AI systems demands a high level of expertise among researchers and operators. The problems are both epistemic and normative (Durante and Paseri, 2025). Ensuring AI literacy within research teams helps to enhance the reliability and validity of scientific findings, as well as to address ethical considerations and biases related to AI models. By aligning with the requirements of the AI Act, research institutions can foster responsible and informed use of AI technologies, ultimately advancing innovation while safeguarding integrity and accountability in scientific inquiry. The paper points out the lack of consideration of AI literacy in the context of research security and intends to investigate the potential inequities in research resulting from AI illiteracy. Strengthening research security aspects and adopting a polarised approach to the role of AI in science risks generating inequities and undermining inclusiveness in research, a key factor in EU policies on science and a priority frequently evoked by scholars (Rafols et al. 2024). As Stefano Rodotà emphasised with regard to the risks associated with the digital divide, selectively benefiting from technological innovation "leads to a «human divide»" (Rodotà, 2012, 198), which poses an even greater risk when applied in the context of scientific inquiry. REFERENCES Beck, S., Poetz, M., and Sauermann, H. (2022) "How will Artificial Intelligence (AI) influence openness and collaboration in science?", Elephant in the Lab, https://research.cbs.dk/en/publications/how-will-artificial-intelligence-ai-influence-openness-and-collab. Council of the European Union, Council Recommendation on enhancing research security, 2024/0012(NLE), Brussels, 14 May 2024. Eco, U.
(1964), Apocalittici e integrati: comunicazioni di massa e teorie della cultura di massa, Bompiani, Milano. EU Commission, Joint communication to the European Parliament, the European Council and the Council on “European Economic Security Strategy”, JOIN(2023) 20 final, ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52023JC0020. King, R.D., Scassa, T., Kramer, S., Kitano, H., “Stockholm declaration on AI ethics: why others should sign”, Nature 626, 2024, pp. 716-716. Kumar, R., et al. (2024) “Academic integrity and artificial intelligence: An overview”, Second Handbook of Academic Integrity, pp. 1583-1596. Pagallo, U. (2024) “Introduction to a Theory of Legal Monsters: From Greco Roman Teratology to the EU Artificial Intelligence Act”, i-lex 17.1, pp. 53-73. Paseri, L., Durante, M. (2025) “Examining epistemological challenges of large language models in law”, Cambridge Forum on AI: Law and Governance, pp. 1-13. Paseri, L. (2024a), Scienza aperta. Politiche europee per un nuovo paradigma della ricerca, Mimesis edizioni, Milano-Udine. Paseri, L. (2024b) “Science and Technology Studies, AI and the Research Sector: Questions of Identity”, Elliott, A. (ed.), The De Gruyter Handbook of Artificial Intelligence, Identity and Technology Studies, De Gruyter, Berlin, pp. 55-72. Rafols, I., et al. (2024), “The multiversatory: fostering diversity and inclusion in research information by means of a multiple-perspective observatory”, Conference on Advancing Social Justice Through Curriculum Realignment, pp. 1-15, https://osf.io/preprints/socarxiv/dn2ax. Rodotà, S., Il diritto di avere diritti, Editori Laterza, Bari, 2012. UNESCO. (2023). Guidance for generative AI in education and research. https://unesdoc.unesco.org/ark:/48223/pf0000386693. Wang, D., Barabási, A. L. (2021) The science of science, CUP, Cambridge. |
13:00 | Just Hallucinations? the Problem of AI Literacy with a New Digital Divide PRESENTER: Ugo Pagallo ABSTRACT. The paper examines the normative and epistemic challenges of AI literacy with problems of applied ethics, open issues in the legal domain, and how they are related to digital divide concerns. The lack of skills, knowledge and understanding of how technology functions may depend on previous drivers of social inequality and digital division, and yet, today's AI illiteracy can trigger a new digital divide much harsher than the first wave of the late 1990s and early 2000s. Threats of AI illiteracy are systemic, exponential, cumulative, and can be worsened by the lawmaker's response, or lack thereof. The open issues of the digital divide are unpromising. Although LLM hallucinations are only a part of a more intricate problem, it would be a first great step forward to increase the number of people capable of understanding that they are just dealing with such hallucinations. |
15:00 | A Case Against the Feasibility of AI Consciousness (AIC) PRESENTER: Anne Gerdes ABSTRACT. This paper argues against the feasibility of AI Consciousness (AIC), discussing recent claims that large language models and advanced AI architectures could one day possess phenomenal consciousness. While proponents draw on computational functionalism and models like Global Workspace Theory to support the idea of AIC, we challenge this view by highlighting the ontological and phenomenological conditions of consciousness. We argue that consciousness, as viewed through classical phenomenology and critical accounts of embodiment and embeddedness, is a transcendental precondition for experience. Thus, consciousness cannot emerge in silicon-based systems because it is irreducibly lived, embodied, and first-personal. |
15:30 | User Engagement and Barriers in the Standardization Processes of Digital ID Architectures PRESENTER: Yoshiaki Fukami ABSTRACT. Digital identity systems are increasingly central to both public and private services, yet the standardisation processes underpinning them remain fragmented. This paper examines the ethical, institutional, and governance challenges that arise from the distributed nature of digital identity specification development. Drawing on an interpretive case study approach, we analyse five key communities—namely IETF, W3C, OpenID Foundation, the Decentralized Identity Foundation, and the Internet Identity Workshop—to understand how technical norms are created and how engineers and public actors engage with them. Our findings show that while formal standardisation bodies provide structured processes, public institutions face significant barriers to participation, including technical complexity, fragmented venues, and limited institutional coordination. Informal forums like Internet Identity Workshop (IIW), though inclusive in ethos, pose accessibility challenges due to their high-context structure. We argue that effective governance of digital identity standards requires new forms of coordination, interdisciplinary collaboration, and shared institutional capacity across technical and policy domains. Rather than advocating for centralised administration, we call for constructive interoperability between stakeholder communities to ensure that evolving standards align with public values and democratic oversight. |
16:00 | Beyond Deletion: the Afterlife of Images in Algorithmic Governance PRESENTER: Klara Källström ABSTRACT. 1.0 Introduction Contemporary forms of algorithmic governance increasingly challenge established assumptions about autonomy, particularly in relation to how personal data and images are processed, circulated, and acted upon. Within public administration, the integration of algorithmic systems has transformed the role of visual material: images, once conceived as stable representations for human interpretation, are now processed, fragmented, and recomposed as data points within computational infrastructures of classification and control (Amoore, 2020). This ontological shift marks the industrialization of images, whereby they are no longer encountered as visual artefacts but operate as functional components in automated decision-making systems (Farocki, 2004). In such environments, images and personal data are rendered equivalent—both reduced to informational units circulating through networks. This abstraction has significant implications for understandings of privacy, as the dispersal of visual data across algorithmic infrastructures renders it opaque, untraceable, and resistant to withdrawal (Dewdney & Sluis, 2023). This paper investigates ontological and epistemological shifts through the case of the Swedish police’s use of Clearview AI’s facial recognition technology. It examines how images, once absorbed into algorithmic systems, are reconstituted as dispersed computational entities, challenging the legal and technical viability of regulatory mechanisms such as the Right to Be Forgotten (Kosta, 2022). When images are embedded within predictive and classificatory infrastructures, their withdrawal reveals the always-already unstable nature of archival systems—where erasure is not only technically complex but historically contingent. This raises the research question: How does the industrialisation of images shape the ability to uphold the Right to Be Forgotten in archival practices? Through an analysis of official statements and administrative records, the study interrogates how visual data persists within algorithmic governance and considers what it means to “forget” in a system where the image is not a visible artefact but a distributed signal structured by data flows and institutional power (Bowker, 2005; Dewdney & Sluis, 2023; Jasanoff, 2004). In this context, archival practices must be thought of differently—not as systems for storing and retrieving static records but as dynamic infrastructures entangled with algorithmic processes of extraction, recomposition, and circulation. As the archive becomes operational rather than representational, new frameworks are needed to address how memory, visibility, and erasure function under computational governance. 2.0 Theoretical foundation Informational privacy is often framed as the individual’s capacity to dissociate from persistent digital traces (Floridi, 2015; Richards, 2022; Véliz, 2020). This framing assumes a coherent, autonomous subject capable of controlling personal data. Gilles Deleuze’s concept of the dividual challenges this, proposing that identity in computational systems is fragmented and operationalized through data points, behaviours, and metrics (Deleuze, 1992). Within this framework, privacy concerns shift from ownership of discrete data to the aggregation and circulation of informational fragments used for classification and prediction (Bowker & Star, 1999; Amoore, 2024; Lyon, 2003). 
A similar ontological shift characterises the treatment of images: once processed algorithmically, images are transformed into dispersed data points embedded within computational infrastructures (Dewdney & Sluis, 2023; Paglen, 2016). Like personal data, the image becomes fragmented, recontextualised, and ultimately inaccessible as a coherent object. As algorithmic systems increasingly shape how data is processed and operationalised within governance frameworks, the inherent instability of autonomy—central to rights such as the Right to Be Forgotten—comes to the fore (Ananny & Crawford, 2018; Pasquale, 2015). While the GDPR’s Right to Be Forgotten seeks to mitigate the persistence of digital traces, it rests on the assumption that deletion restores privacy. This falters in distributed and predictive infrastructures where data—visual or otherwise—is continuously replicated, recombined, and embedded in classificatory systems. Deletion becomes not a definitive act but a contingent intervention shaped by archival logics and infrastructural resistance (Bowker, 2005; Kosta, 2022; Mantelero, 2013). These challenges demand critical engagement with algorithmic infrastructures that govern behaviour through data flows (Jasanoff, 2004; Latour, 2005), emerging from a genealogy of classificatory and surveillance practices shaped by historically contingent regimes of power and knowledge (Foucault, 1977). Archival systems have long functioned as instruments of institutional power, shaping access, memory, and control (Bowker, 2005; Geoghegan, 2023), with photography playing a central role in biometric surveillance and criminological classification (Lyon, 2014; Sekula, 1986)—logics that persist in contemporary facial recognition technologies (Ananny & Crawford, 2018; Crawford, 2021). The shift from physical archives to digital repositories—and more recently to algorithmically driven infrastructures—has intensified these dynamics: unbound by spatial constraints, algorithmic systems enable indefinite data retention and circulation, complicating regulation around deletion, access, and classification (Bowker & Star, 1999; Paglen, 2016). Regulatory strategies must therefore confront the entanglement of algorithmic infrastructures with archival practices, surveillance, and the politics of data retention. 3.0 Research Design 3.1 Setting Clearview AI’s platform, built on a database of over three billion images scraped without consent, exemplifies the transformation of images into biometric data within algorithmic surveillance infrastructures (Rezende, 2020; Shepherd, 2024). Images become functionally opaque, mobilised not as representations but as operational inputs in automated recognition systems. The controversy intensified in 2020 when BuzzFeed News leaked Clearview’s client list, revealing use by law enforcement agencies in the U.S., Canada, and several European countries, including Sweden. In Canada, Finland, and Sweden, officers accessed the system without institutional approval, blurring boundaries between state oversight and commercial infrastructures (Amoore, 2020; Eneman et al., 2022; Shepherd, 2024). In 2021, the Swedish Authority for Privacy Protection (IMY) concluded that the Police Authority had violated the Swedish Criminal Data Act by using Clearview AI without a required data protection impact assessment (Dnr 4756-21). The ruling highlights the difficulty of enforcing protections like the Right to Be Forgotten when images are embedded in distributed systems that resist deletion (Mantelero, 2013). 
IMY ordered that affected individuals be notified and that the transferred data be erased (Eneman et al., 2022; Kosta, 2022). 3.2 Document Collection and Analysis This study draws on public records from the IMY, the Police Authority, and the Administrative Court in Stockholm, obtained under Sweden's Principle of Public Access to Information, grounded in the Freedom of the Press Act (1949:105). While this principle ensures transparency, it is increasingly strained by digital infrastructures where traceability and accountability are diminished (Ananny & Crawford, 2018; Bowker, 2005). The analyzed documents (Bowen, 2009) reveal how facial recognition technologies intersect with legal reasoning and classification regimes. They show how images, once housed in institutional archives, now circulate through algorithmic systems that challenge conventional understandings of regulation, erasure, and control (Eneman et al., 2022).
Table 1. Overview of the collected documents, by authority and type of document:
- The Swedish Authority for Privacy Protection (IMY): Investigation into the use of Clearview AI (Dnr DI-2020-2719, date: 2020-03-05); Request for supplementation (Dnr DI-2020-2719, date: 2020-03-30); Decision after the inspection (Dnr DI-2020-2719, date: 2021-02-10).
- The Swedish Police Authority: Investigation into the use of Clearview AI (Dnr A126.614/2020, date: 2020-03-19); Request for supplementation (Dnr A126.614/2020, date: 2020-05-07); Appeal regarding the IMY's decision (Dnr A126.614/2020, date: 2021-03-01); Completion of previously filed appeal regarding the IMY's decision (Dnr A126.614/2020, date: 2021-03-05).
- The Administrative Court in Stockholm: Decision of the Administrative Court (Dnr 4756-21, date: 2021-09-30).
- The Court of Appeal in Stockholm: Decision of the Court of Appeal (Dnr 7678-21, date: 2022-11-07).
4.0 Concluding Remarks This analysis of the Swedish police's use of Clearview AI's facial recognition technology highlights the difficulty of upholding individual autonomy in the face of data industrialisation. As state institutions integrate proprietary algorithmic infrastructures into their operations, images and personal data alike are transformed into operational components—no longer discrete, retrievable records, but dynamic inputs for classification, recognition, and prediction (Amoore, 2020). This shift challenges the legal and conceptual foundations of rights such as the Right to Be Forgotten. The issue is not merely whether data can be deleted, but whether withdrawal remains meaningful once data is mobilised within computational infrastructures. Once processed, images are fragmented, distributed, and rendered invisible—no longer stored but used, and deeply entangled with material, geopolitical, and logistical networks of appropriation and circulation (Crawford & Joler, 2021). In this context, the image becomes part of a global supply chain of data operations to be mined, reformulated, and propagated. In this Farockian sense, archives cease to function as static repositories and instead become industrialised systems of capture and recomposition, where images are not preserved but enacted upon (Amoore, 2024; Bowker, 2005; Farocki, 2004). In such a paradigm, deletion becomes a technically and epistemologically unstable intervention. Legal mechanisms premised on the coherent data subject and the reversibility of processing collapse under infrastructures that retain, recombine, and act on data without fixed boundaries.
The concept of autonomy proves increasingly untenable in environments where individuals are parsed into dividual fragments, legible only through patterns and predictions. Rather than enabling control, autonomy is revealed as a fragile construct—outpaced by systems that operate beyond the grasp of intentionality. The findings of the Swedish Authority for Privacy Protection (IMY) underscore this disjunction. The directive to delete and notify presumes a subject who can be located, addressed, and restored to privacy—assumptions incompatible with algorithmic infrastructures where data persists not as content, but as action. The industrialisation of data thus demands not only structural adjustments but a rethinking of the foundations of privacy governance—one that accounts for the performative and distributed life of data beyond the moment of capture. |
15:00 | Who Am I in the Age of AI? Exploring the Epistemic and Ethical Implications of Automation on Academics’ Identity and Intellectual Virtues PRESENTER: Tara Miranović ABSTRACT. Work has long been a defining factor in shaping human development and identity [6,1,21,22,10]. This is especially true in knowledge-based professions, where work is not just a matter of productivity but also an expression of expertise, creativity, and intellectual autonomy [17,18]. Nowhere is this more evident than in academia, where intellectual labor is central to both professional identity and the pursuit of knowledge [16,9]. However, as AI is increasingly employed for literature reviews, data analysis, and academic writing [8,13,12,7], the boundaries between human and machine-generated contributions are becoming increasingly blurred. This extended abstract presents a small-scale qualitative study exploring how these shifts in academic work shape academics’ professional self-concept and intellectual virtues, such as curiosity, creativity, and critical thinking. Through in-depth interviews with junior academics across disciplines, the study reveals a growing tension between traditional academic values and AI’s expanding role in research and teaching. While AI can enhance efficiency and support personalized learning, uncritical use risks homogenizing intellectual output, reinforcing the ’publish or perish’ culture, and stifling creativity and critical thinking. To help institutions navigate these challenges, we propose strategies—including AI literacy training, clear ethical guidelines, and structured governance frameworks—to ensure that AI strengthens rather than erodes the core values of academia. |
15:30 | Ethical Aspects of Distributed Extended Reality Training PRESENTER: Olli I. Heimo ABSTRACT. This study explores the ethical implications of distributed Extended Reality (XR) in the context of remote education and training. As XR technologies evolve and become increasingly integrated into learning environments, particularly in technical fields requiring hands-on experience, new ethical questions arise. The research applies a deductive method within the framework of Virtuous Just Consequentialism, an extension of Moor's Just Consequentialism, to evaluate whether distributed XR systems can be implemented without amplifying existing ethical concerns. The analysis focuses specifically on the unique ethical characteristics of distributed XR. Key findings indicate that, under certain conditions, distributed XR solutions can be deployed in ethically sound ways that do not exacerbate existing problems. In some cases, distributed XR may even offer improvements over traditional remote education by enhancing engagement, reducing resource waste, and enabling broader access to complex technical training. However, challenges related to usability, equality, and technological limitations remain, particularly for novice users and vulnerable groups. The use of a multi-ethical analytical framework proved helpful for exploring these layered issues, though its complexity can pose interpretive difficulties. The study concludes that distributed XR holds promise as an ethical training technology, provided that implementation is carefully designed with attention to inclusive access, technological fairness, and the cultivation of virtue in both learners and systems. |
16:00 | Corporate Financial Statement Analysis in Education 4.0 PRESENTER: Alvaro Carrasco-Aguilar ABSTRACT. Financial statement analysis is essential in business education, enabling professionals to assess a company's financial health and make strategic decisions. With Education 4.0, integrating AI tools like ChatGPT enhances learning by automating calculations and providing real-time insights. This paper explores the synergy between traditional teaching and AI, highlighting how it improves data interpretation and critical thinking. While AI optimizes analysis, human judgment remains crucial to ensure accuracy and contextual understanding. Combining both approaches fosters a more interactive and effective learning experience, preparing students for the complexities of modern finance. |
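As a minimal illustration of the kind of calculation such AI tools can automate in this setting (the figures below are hypothetical and not taken from the paper), two common financial ratios might be computed as follows; interpreting the results in context remains the human analyst's task.

```python
# Illustrative sketch with hypothetical balance-sheet and income-statement
# figures (in thousands); not data or code from the paper.
def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Liquidity: ability to cover short-term obligations with current assets."""
    return current_assets / current_liabilities

def return_on_equity(net_income: float, shareholders_equity: float) -> float:
    """Profitability relative to the owners' invested capital."""
    return net_income / shareholders_equity

print(f"Current ratio: {current_ratio(1_200, 800):.2f}")        # 1.50
print(f"Return on equity: {return_on_equity(150, 1_000):.1%}")  # 15.0%
```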
15:00 | A Realistic Approach to the Obviousness Concept Within the AI Act: a Matter of AI Literacy ABSTRACT. The European Union's Artificial Intelligence Act (AI Act) is a landmark regulatory framework that seeks to establish comprehensive guidelines for the development, deployment, and use of artificial intelligence (AI) technologies within the EU. As AI systems become increasingly pervasive across various sectors, ensuring their ethical, safe, and transparent use is crucial. One of the foundational elements of achieving these goals is fostering AI literacy—the ability to understand, interpret, and critically evaluate AI systems and their impacts. The concept of AI literacy is embedded within the AI Act, not only as an essential skill for stakeholders interacting with AI but also as a vital tool for ensuring public trust and accountability. This extended abstract explores the relevance of AI literacy in the context of a key provision of the AI Act, namely the obligation to disclose AI-generated content, with the purpose of minimizing the "new risks of misinformation and manipulation at scale, fraud, impersonation and consumer deception" and restoring "the integrity and trust in the information ecosystem" (recital 133). Drawing on the experience of many AI ethics charters and guidelines, which set transparency, articulated as the duty to make an object or entity knowable, as a pillar principle for any AI deployment, Article 50 para. 1 establishes a duty of disclosure concerning "providers" placing on the EU market or putting into service AI systems or general-purpose AI models, as defined in Art. 3(3). A provider shall ensure that "AI systems intended to interact directly with natural persons are designed and developed in a way that the natural persons concerned are informed that they are interacting with an AI system". This provision directly concerns a myriad of virtual influencers, avatars, chatbots, and other non-human digitally created characters populating social media, sharing content, and engaging in interactive communications with consumers. They may have many different forms, to the point that some authors have developed a taxonomy based on their similarity to human appearance, also known as anthropomorphism, and their placement on the reality-virtuality continuum, ranging from unimaginable to hyper-realistic characters that can be nearly impossible to distinguish from humans. The duty to disclose their artificial nature is not an absolute requirement. Under Article 50, natural persons should be notified that they are interacting with an AI system "unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use." In a nutshell, the AI Act acknowledges that in some cases, the nature of AI systems is so apparent that it doesn't require disclosure duties. Therefore, the concept of "obviousness" plays a crucial role in determining the scope and applicability of compliance obligations: it is used to delineate situations in which the risks associated with AI systems are sufficiently apparent or self-evident to stakeholders (e.g., users). This can influence which kinds of transparency measures are considered necessary. For AI providers, the understanding of whether their system is "obviously AI" can significantly affect the resources and efforts required for compliance.
If an AI system is deemed recognizable as such on the basis of its obviousness, it is subject to less rigorous transparency measures. To this extent, obviousness is a significant determinant of regulatory obligations. This approach is problematic because it leaves the evaluation to providers and adds a layer of subjectivity and uncertainty. For example, AI systems with a long history may be more easily recognizable as such than novel AI technologies operating in new or unforeseen contexts. The concept of "obviousness" is fraught with ambiguity and presents challenges in its application, given the dynamic and complex nature of AI technologies as well as the very uncertain level of AI literacy of the average consumer. Users—whether individuals, businesses, or public sector entities—may have very different levels of AI literacy. AI literacy involves a combination of skills, knowledge, and awareness related to artificial intelligence systems, including a technical understanding of the fundamental principles behind AI technologies, such as machine learning, data processing, and algorithmic decision-making. At the same time, it involves the ability to recognize the ethical implications of AI, the capacity to assess the potential impacts of AI on individuals, and awareness of the existing legal frameworks that govern the use of AI. The legislator does not clarify the level of AI literacy of the average consumer, nor which aspects of AI literacy are relevant for the obviousness test. This uncertainty may be particularly prejudicial in light of Article 50(7), which encourages “the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labeling of artificially generated or manipulated content”. Any ambiguity in the interpretation of the obviousness test may lead self-regulatory authorities to set widely different standards at the national level. At the same time, platforms, which are already taking matters into their own hands, may develop diverging standards of implementation of the EU rules. The European Committee of the Regions pointed out some of these concerns in its Opinion on the AI Act proposal, calling for the removal of the exception on the grounds that “natural persons should always be duly informed whenever they encounter AI systems and this should not be subject to interpretation of a given situation. Their rights should be guaranteed at all times in interactions with AI systems”. As the exception has been maintained, it is critical to start reflecting on it. While challenges exist in fostering widespread literacy, particularly among non-expert stakeholders, the problem of the obviousness test has not yet been tackled by scholars. It is therefore urgent to start developing a common approach to obviousness, to ensure that it accurately captures risks while promoting innovation and safeguarding fundamental rights. This is the main aim of the proposed paper. Since AI systems have the potential to impact fundamental rights, such as privacy, non-discrimination, and fairness, it is important to develop a clear understanding of what constitutes "obviousness" in the context of AI, one that is neither overly simplistic nor superficial. At the same time, a flexible, adaptive approach to defining “obviousness” may be preferable to narrow or rigid conceptions. |
15:30 | Normative Issues of AI Literacy in Construction Project Risk Management PRESENTER: Fasiha Sajjad ABSTRACT. 1. Introduction
The construction industry is increasingly recognizing the transformative potential of Machine Learning (ML) to revolutionize Project Risk Management (PRM) (Yao and de Soto, 2022). With the complex risks and uncertainties in the construction sector, ML offers promising solutions for more accurate risk prediction and mitigation. However, the successful integration of ML in construction PRM hinges on a critical, yet often overlooked factor: AI literacy among industry professionals (Pillai and Matus, 2020). While technical advancements in ML continue to progress (Sultan and Gao, 2024; Datta et al., 2024; Uddin et al., 2022), this study reveals an unexpected but critical challenge: the normative issues surrounding AI literacy among construction professionals. This research initially aimed to investigate the integration of ML technologies in construction PRM across different industry segments. However, during the exploration, AI literacy emerged as a significant theme, profoundly impacting the effective integration and utilization of ML in PRM.
2. Research Questions
This study addresses the following research questions:
• What are the key normative challenges and barriers in AI literacy that impact the adoption of machine learning in construction project risk management?
• How do perspectives on AI literacy challenges differ across different stakeholders (IT professionals, academia, and industry practitioners) in the construction sector?
• What strategies and frameworks can be developed to enhance AI literacy and address the knowledge gaps in the construction industry's ML adoption?
The research utilizes the Technology Acceptance Model (TAM) (Davis, 1989) and Rogers' (2003) Diffusion of Innovation theory to analyse the emergent themes, providing a theoretical foundation for understanding AI literacy challenges in the context of technology adoption. By focusing on these unexpected findings, this study contributes to the broader understanding of the human factors influencing ML integration in construction PRM (Liu et al., 2018; Nnaji et al., 2023), moving beyond technical considerations to explore the crucial role of AI literacy in successful technology adoption (Rane, 2023).
3. Methodology
This study employed a qualitative approach, conducting semi-structured interviews with professionals from three key sectors of the construction industry: IT, academia, and industry practitioners. The rationale for including these participants was to help mitigate bias and to provide valuable insights essential for addressing the complex challenges faced in the construction sector (Li and Cheung, 2020). The initial research design focused on ML adoption in construction PRM, with interview questions covering current risk management practices, perceptions of ML, potential applications, safety implications, adoption barriers, integration opportunities, and strategies for implementation. A total of 9 participants, evenly distributed across the three sectors (3 from each), were selected from an initial pool of 73 contacted professionals. The selection criteria ensured a balanced representation of expertise and experience with ML in construction PRM. Data analysis was conducted using thematic analysis (Braun and Clarke, 2006), allowing for the identification of both anticipated themes related to ML adoption and unexpected themes, particularly those related to AI literacy challenges. 
This emergent nature of the findings adds value to the research, as it represents unprompted concerns and observations from industry professionals. The application of TAM and Rogers' Diffusion of Innovation theory in analysing the emergent themes offered a structured framework for understanding these challenges from both individual and systemic perspectives (Sarkeshikian et al., 2021).
4. Findings
The thematic analysis revealed significant normative challenges in AI literacy that emerged organically during discussions about ML integration in construction PRM. These challenges transcend basic technical understanding, encompassing complex ethical, social, and professional dimensions. The analysis identified the following key normative challenges in AI literacy:
• Ethical decision-making literacy emerged as particularly crucial, with professionals across sectors expressing concerns about balancing automated and human judgment in risk-related decisions (Carnat, 2024).
• Social responsibility awareness highlighted the digital divide between SMEs and large organizations, raising questions about industry-wide equity in AI adoption (Agostini and Nosella, 2020).
• The evolution of professional roles emerged as a significant concern, with participants emphasizing the need to understand and adapt to changing responsibilities in AI-enhanced environments.
• Rights and responsibilities literacy focused on data privacy, security implications, and legal accountability frameworks, while data ethics and quality awareness addressed ethical data collection and validation responsibilities (Zhang and Yuen, 2022).
• Cross-sector collaborative ethics emphasized the importance of knowledge sharing and balanced resource distribution across industry segments (Yu and Yang, 2018).
These findings, analysed through TAM and Rogers' theoretical frameworks, demonstrated that normative challenges significantly influence both the perceived usefulness and adoption patterns of ML systems in construction PRM.
5. Discussion
The emergence of normative challenges in AI literacy represents a fundamental shift in understanding ML integration requirements in construction PRM. This study reveals that successful implementation demands attention to both technical competencies and normative understanding. The findings suggest three critical areas requiring immediate attention:
• Educational framework development must extend beyond technical training to incorporate comprehensive AI ethics and normative considerations, tailored to sector-specific needs.
• Professional practice evolution requires a fundamental redefinition of competencies, emphasizing ethical decision-making and knowledge management in AI-enhanced environments (He et al., 2018).
• Industry-wide initiatives should focus on creating robust cross-sector collaboration mechanisms and support structures, particularly for SMEs, to ensure equitable AI literacy development across the industry.
The study recommends both short-term actions and long-term strategies to address these challenges. Short-term actions include developing targeted AI literacy programs, establishing ethics committees, and creating cross-sector forums for knowledge exchange. Long-term strategies focus on developing industry-wide ethical standards, implementing continuous professional development frameworks, and creating collaborative knowledge-sharing platforms. 
While the study's limitations include sample size and geographic scope, the depth of insights provides valuable foundations for advancing AI literacy in construction PRM. The findings emphasize that addressing normative challenges is not merely an ethical imperative but a practical necessity for successful ML integration in construction risk management.
6. Conclusion
This study reveals that normative challenges in AI literacy, while not initially targeted, represent crucial factors in successful ML integration in construction PRM. The identification of key normative challenges provides a framework for understanding and addressing AI literacy beyond technical competencies. The research contributes to both theory and practice by:
• Identifying previously unexplored normative dimensions of AI literacy
• Providing sector-specific insights into literacy challenges
• Offering practical recommendations for addressing these challenges
7. Future Research Directions
This study's findings suggest several key directions for future research. Quantitative validation of the identified normative challenges would provide statistical verification across a broader industry sample. Development of measurement frameworks would establish standardized methods for assessing AI literacy in construction organizations. Cross-cultural studies could reveal how these challenges vary globally, while longitudinal research would evaluate the effectiveness of literacy programs over time. These research directions would strengthen our understanding of AI literacy's role in successful ML integration within construction project risk management. These findings suggest that addressing normative challenges in AI literacy is essential for successful ML integration in construction PRM, requiring a balanced approach to both technical and ethical considerations.
Statement: “I would like to start a broader conversation with the ETHICOMP community. As someone who has recently completed their master’s degree and is pursuing PhD programs, I am eager to both contribute my insights and learn from others. I see this as an opportunity to share what I can offer to the community while acknowledging that I am at the beginning of my academic journey and would value growing alongside this community.”
References: Agostini, L. and Nosella, A. (2020) The adoption of Industry 4.0 technologies in SMEs: results of an international study. Management Decision, 58(4), pp.625-643. Braun, V. and Clarke, V. (2006) Using thematic analysis in psychology. Qualitative Research in Psychology. 3(2), pp. 77-101. Carnat, I. (2024) Human, all too human: accounting for automation bias in generative large language models. International Data Privacy Law, p.ipae018. Davis, F. D. (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. pp. 319-340. He, Y., Chen, Q. and Kitkuakul, S. (2018) Regulatory focus and technology acceptance: Perceived ease of use and usefulness as efficacy. Cogent Business & Management, 5(1), p.1459006. Liu, D., Lu, W. and Niu, Y. (2018) Extended technology-acceptance model to make smart construction systems successful. Journal of Construction Engineering and Management. 144(6), p. 04018035 Li, K. and Cheung, S. O. (2020) Alleviating bias to enhance sustainable construction dispute management. Journal of Cleaner Production. 249, p. 119311. Nnaji, C., Okpala, I., Awolusi, I. and Gambatese, J. (2023) A systematic review of technology acceptance models and theories in construction research. 
Journal of Information Technology in Construction. 28. Pillai, V.S. and Matus, K.J., 2020. Towards a responsible integration of artificial intelligence technology in the construction sector. Science and Public Policy, 47(5), pp.689-704. Rane, N., 2023. ChatGPT and similar Generative Artificial Intelligence (AI) for building and construction industry: Contribution, Opportunities and Challenges of large language Models for Industry 4.0, Industry 5.0, and Society 5.0. Opportunities and Challenges of Large Language Models for Industry, 4. Rogers, E. M. (2003) Diffusion of innovations (5th ed.). Free Press. Rogers, E. M., Singhal, A. and Quinlan, M. M. (2014) Diffusion of innovations. In: An integrated approach to communication theory and research. Routledge, pp. 432-448. Sarkeshikian, A., Zakery, A., Shafia, M. A. and Aliahmadi, A. (2021) Stakeholders’ consensus on organizational technology acceptance: Using thematic analysis and SEM. Kybernetes. 50(6), pp. 1873-1899. Sultan, A. and Gao, Z. (2024) Machine learning for improving construction productivity: A systematic literature review. Proceedings of 60th Annual Associated Schools. 5, pp. 558-565. Uddin, S., Ong, S. and Lu, H. (2022) Machine learning in project analytics: A data-driven framework and case study. Scientific Reports. 12(1), p. 15252. Yao, D. and de Soto, B.G., 2022. A preliminary SWOT evaluation for the applications of ML to Cyber Risk Analysis in the Construction Industry. In IOP Conference Series: Materials Science and Engineering (Vol. 1218, No. 1, p. 012017). IOP Publishing. Yu, D. & Yang, J. (2018) Knowledge management research in the construction industry: A review. Journal of the Knowledge Economy. 9, p.782-803. Zhang, Y. and Yuen, K. V. (2022) Review of artificial intelligence-based bridge damage detection. Advances in Mechanical Engineering. 14(9), p. 16878132221122770. |
16:00 | GenAI and Proportionality: European and Portuguese Ethical-Legal Framework PRESENTER: Nuno Silva ABSTRACT. Generative AI systems are changing society, the way we relate with each other and the way we perceive time, space, and reality. The research problem we focus on addresses the still unsettled, yet-to-be-found keystone of the Human-AI relation. Firstly, how AI systems regulation complies with the demands of explainability, transparency and accountability, as well as with a responsible, educated, conscious, ethical and trustworthy usage. Secondly, how the definition of “high-risk” AI systems may impact Human-AI sustainable development. Thirdly, we ponder the proportionality principle framework within the scope of the EU AI Act and the GDPR, considering special categories of personal data. Although the focus of this research is on Portuguese and European Union ethical-legal framework solutions to all these issues, we have adopted a comparative law methodology, drawing on the dialogue and contributions from other ethical-legal orders. |
17:00 | Sustainable Fashion Trends on Second-Hand Shopping: the Role of Social Media PRESENTER: Orlando Lima Rua ABSTRACT. This study aims to analyse the relationship between sustainable fashion trends and second-hand shopping, while keeping in mind the importance and role of social media. A qualitative methodological approach was adopted to analyse the research propositions and form a better understanding of the research gaps and further investigations. The main results highlight the influence of social media engagement, endorsement by influencers, transparent sustainability communication, and circular business models on second-hand shopping behaviours. The conclusions drawn underscore the significance of considering these factors in shaping consumer attitudes and behaviours towards sustainable fashion and second-hand shopping, while also acknowledging the limitations of qualitative research and offering suggestions for future research directions. Overall, this study contributes to a deeper understanding of the evolving landscape of sustainable fashion consumption and its implications for the fashion industry and society. |
17:30 | The Nexus Between Artificial Chatbots and Individual Creativity PRESENTER: Orlando Lima Rua ABSTRACT. This study aims to analyse the relationship between the use of artificial intelligence chatbots and individual creativity. A quantitative methodological approach was adopted using an online survey with 186 students from the Porto School of Accounting and Business (ISCAP) of the Polytechnic of Porto (Portugal). The results indicate that students who considered themselves to be creative do not show an intention of using AI chatbot services, despite finding them useful. This study seeks to encourage further academic and scientific exploration in a fast-growing and emerging field that is increasingly gaining relevance in our society. As artificial intelligence tools become more common in our daily lives, the results of such research will continue to change. Consequently, future studies in this area are likely to produce different findings, highlighting the ever-evolving nature of this emerging field. |
18:00 | Systematic Review on AI Ethics in Privacy for V2X Communication PRESENTER: Héctor Orrillo ABSTRACT. The advent of fully autonomous vehicles (AVs) has the potential to reshape urban mobility by improving both efficiency and safety. However, the dependency of AVs on Vehicle-to-Everything (V2X) communication networks introduces substantial cybersecurity vulnerabilities, including remote intrusions and data manipulation. This systematic review investigates strategies designed to safeguard data privacy and integrity within V2X communications, emphasizing the incorporation of ethical principles derived from Artificial Intelligence (AI) Ethics. Following PRISMA guidelines, we identified 64 peer-reviewed articles published between 2022 and 2025 across IEEE Xplore, SpringerLink, and Scopus. The analysis indicates that while technical approaches, such as Blockchain (BL) and Federated Learning (FL), are extensively adopted, the integration of ethical frameworks remains limited. In response, we propose twelve normative ethical principles aligned with international standards (e.g., IEEE, European Commission) to inform the responsible development and governance of AI systems in AVs. Our results demonstrate that the combination of BL and FL presents strong potential for developing secure, decentralized, and privacy-preserving V2X communication infrastructures. However, notable gaps remain concerning transparency, accountability, and human oversight. Future research should prioritize scalable implementations of these technologies and conduct comparative analyses with alternative approaches, such as Differential Privacy. |
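As a concrete illustration of one of the techniques the review highlights, the sketch below implements a generic Federated Averaging (FedAvg) loop for a simple linear model: each client (for instance a vehicle or roadside unit) trains on its own data locally and only shares model parameters, which a server averages. This is a simplified, assumed setup for illustration, not the implementation evaluated in the review.

```python
# Minimal, generic sketch of Federated Averaging (FedAvg), one of the
# privacy-preserving techniques identified in the review for V2X settings.
# Raw data never leaves a client; only model parameters are shared and averaged.
# Illustration under simplifying assumptions, not the authors' method.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent training of a linear least-squares model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=10, dim=3):
    """Server loop: broadcast weights, collect local updates, average by data size."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w_global, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w_global = np.average(updates, axis=0, weights=sizes)
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    clients = []
    for _ in range(4):  # e.g. four vehicles/roadside units with local data
        X = rng.normal(size=(50, 3))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))
    print("federated estimate:", fedavg(clients))
```

In a V2X deployment this pattern would be combined with the security and accountability mechanisms (e.g., blockchain-based logging) that the review discusses, which this toy example does not attempt to model.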
17:00 | AI Ethics in Higher Education Content Creation PRESENTER: Álvaro Carrasco Aguilar ABSTRACT. Artificial Intelligence (AI) is rapidly transforming higher education, particularly in content creation, offering tools for personalized learning and automated resource generation. While this presents significant opportunities, it also raises critical ethical concerns regarding academic integrity, intellectual property, data privacy, bias, and the professor-student relationship. Addressing these challenges requires a robust framework to validate AI-generated educational content. This paper proposes a Delphi method-based methodology to achieve expert consensus on the ethical validation of such content. The Delphi method, with its iterative, anonymous, and feedback-controlled structure, is well-suited for navigating complex, novel issues lacking definitive answers. This study outlines the application of the Delphi process, from problem definition and expert panel selection to the development of questionnaires centered on key ethical criteria identified from the literature. The primary objective is to establish a validated set of guidelines and practical indicators for educators and institutions to ensure the responsible and ethical adoption of AI in educational content creation. This research aims to fill a critical gap by providing a structured approach to integrating AI responsibly into academic practices, safeguarding core educational values. |
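To make the Delphi consensus step more tangible, the sketch below computes a common style of consensus indicator for Likert-rated criteria (median, interquartile range, and share of high ratings). The thresholds and the example items are illustrative assumptions; the paper's actual criteria and consensus rules would come from its own expert panel design.

```python
# Illustrative sketch of a consensus check often used in Delphi studies:
# for each Likert-scale item, compute the interquartile range (IQR) and the
# share of experts rating it 4 or above. The thresholds (IQR <= 1, >= 70%
# agreement) are common rules of thumb assumed here for illustration; they
# are not criteria specified in the paper.
import numpy as np

def delphi_consensus(ratings: dict, iqr_max=1.0, agree_min=0.70):
    """ratings maps item name -> list of expert scores on a 1-5 scale."""
    results = {}
    for item, scores in ratings.items():
        scores = np.asarray(scores)
        iqr = np.percentile(scores, 75) - np.percentile(scores, 25)
        agreement = np.mean(scores >= 4)
        results[item] = {
            "median": float(np.median(scores)),
            "iqr": float(iqr),
            "agreement": float(agreement),
            "consensus": bool(iqr <= iqr_max and agreement >= agree_min),
        }
    return results

if __name__ == "__main__":
    example = {  # hypothetical first-round ratings of ethical criteria
        "academic_integrity": [5, 5, 4, 5, 4, 5, 4],
        "data_privacy":       [4, 5, 5, 4, 4, 5, 5],
        "bias_mitigation":    [3, 5, 2, 4, 5, 3, 4],
    }
    for item, summary in delphi_consensus(example).items():
        print(item, summary)
```

Items that fail the consensus check would typically be fed back to the panel, with anonymised group statistics, in the next Delphi round.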
17:30 | Rethinking Educational Assessment in the Age of Artificial Intelligence PRESENTER: Mario Arias-Oliva ABSTRACT. As artificial intelligence (AI) rapidly transforms the landscape of education, its influence on assessment practices has become both transformative and controversial. This paper analyzes the potential benefits and concerns of AI-driven educational assessment, with particular attention to the ethical, pedagogical, and social implications of its widespread adoption. Emerging technologies such as Large Language Models (LLMs) and machine learning algorithms are enabling new, efficient forms of feedback, personalization, and scalability that were unimaginable a few years ago. Nevertheless, many unresolved challenges must be considered, such as algorithmic bias, academic integrity, and legal issues. Reframing AI-based assessment through a more ethical and human-centered perspective is both necessary and urgent. By foregrounding values such as fairness, transparency, AI literacy, and inclusivity, among other aspects, this work outlines a roadmap for integrating AI tools in ways that enrich and improve assessment experiences while avoiding their drawbacks. The paper proposes ideas for designing future AI assessment systems that are both innovative and deeply responsible towards students, educators, technologists, and policymakers, establishing an ethical perspective for the future deployment of smart assessment technologies. |
18:00 | Comparative Analysis of Instructor and AI Assessments: Objectivity, Biases, and Impact on Academic Grading ABSTRACT. This article examines the role of artificial intelligence in academic assessment, with particular focus on its implications for grading accuracy. It presents a comparative analysis of 46 undergraduate reports, each evaluated independently by a human instructor and an AI-based system. The results reveal discrepancies between the two sets of scores, which suggest that AI systems fail to adequately assess qualitative dimensions such as originality, critical thinking, and contextual relevance. While AI offers advantages in consistency, scalability, and efficiency, it often fails to capture the pedagogical depth and interpretive insight provided by human evaluators. In light of these findings, the article proposes a hybrid assessment framework that combines the precision of AI with the interpretive competences of human instructors. |
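A minimal sketch of the kind of paired comparison such a study involves is shown below: per-report instructor and AI scores are compared via correlation and a paired test. The scores are synthetic placeholders, not the study's data, and the 0-20 grading scale is an assumption for illustration.

```python
# Sketch of a generic paired comparison of grades: each report has an
# instructor grade and an AI grade. The scores below are synthetic stand-ins,
# not the study's data; the analysis (agreement, mean difference, paired test)
# is a generic illustration of how discrepancies could be quantified.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
instructor = rng.normal(loc=14, scale=2.5, size=46).clip(0, 20)              # assumed 0-20 scale
ai = (instructor + rng.normal(loc=1.0, scale=1.5, size=46)).clip(0, 20)      # AI scores assumed slightly higher

r, _ = stats.pearsonr(instructor, ai)       # agreement between the two graders
diff = ai - instructor
t, p = stats.ttest_rel(ai, instructor)      # paired test on the mean difference

print(f"correlation between graders: r = {r:.2f}")
print(f"mean difference (AI - instructor): {diff.mean():.2f} points")
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
```

Low correlation or a systematic mean difference would be the quantitative signal of the discrepancies the abstract describes, complementing qualitative inspection of where the AI and instructor diverge.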
17:00 | Adopting Moral Sensitivity for Evaluation of Trustworthy AI Frameworks PRESENTER: Adrian Gavornik ABSTRACT. Recent developments in artificial intelligence have caused growing recognition of the need to develop robust ethical frameworks and governance mechanisms that aim to ensure that the risks related to the development, deployment and use of these technologies are limited. This growing recognition is evident across various sectors, from governmental initiatives to corporate efforts and international organizations. In response, numerous sets of ethical guidelines and principles for trustworthy AI have been developed by governments, companies, and academic institutions. Meta-analyses of guidelines for trustworthy AI reveal overlap around core principles such as transparency, justice and fairness, non-maleficence, responsibility, and privacy (Fjeld et al., 2020; Floridi et al., 2018; Jobin et al., 2019; Mittelstadt, 2019). Still, it remains a challenge to translate these high-level principles into practice, resulting in the so-called “principle-to-practice gap” in AI ethics. In response to the identified gap, several tools and frameworks have been developed to operationalize AI ethics principles. For instance, the Assessment List for Trustworthy Artificial Intelligence (ALTAI) provides a checklist for self-assessment, aiming to guide developers in implementing ethical principles (European Commission, Directorate-General for Communications Networks, Content and Technology, 2020). Despite these efforts, questions remain about the actual usage, target audience, and impact of these tools. Ayling & Chapman (2021) note that many frameworks lack clarity regarding their intended users and the specific contexts in which they should be applied. Moreover, there is limited empirical evidence assessing the effectiveness of these tools in fostering ethical AI practices. Analyses of ethics toolkits and frameworks suggest limited practical impact (Morley et al., 2020). Even widely adopted assessment tools like algorithmic impact assessments have been criticized for potentially becoming bureaucratic exercises that fail to meaningfully influence development practices (Metcalf et al., 2021). The problem extends beyond the tools themselves to questions of measurement and evaluation. While considerable effort has gone into developing operational frameworks, little attention has been paid to assessing their actual use and effectiveness. As Yeung et al. (2019) note, there is often a lack of empirical evidence about whether existing approaches actually improve ethical outcomes or enhance practitioners' capacity for ethical decision-making. If we want to avoid the trap of AI ethics becoming useless (Munn, 2022) or toothless (Rességuier & Rodrigues, 2020), we should find meaningful and sustainable ways to assess the effects these tools have on their users, in this case AI practitioners. To contribute to a more grounded and empirical assessment of the actual use and effect of existing AI ethics frameworks and guidelines, we propose to adopt the concept of moral sensitivity for evaluating the effectiveness of existing ethical frameworks and tools. Rather than focusing solely on adoption or compliance, this approach examines how well tools and practices develop practitioners' capacity to recognize, evaluate, and address ethical issues in their work. 
Specifically, it examines whether tools and practices enhance practitioners' ability to: 1) identify moral problems; 2) evaluate the relative importance of different ethical concerns; 3) frame moral problems within a broader landscape of ethical principles and values; 4) consider diverse stakeholder perspectives; 5) navigate principle or value conflicts and trade-offs; 6) understand the potential moral consequences of actions; and 7) make contextually appropriate decisions. This approach also takes into account the complex, collaborative nature of AI development, where moral considerations often emerge from team interactions rather than individual decisions. It also provides a way to assess whether tools support what Selbst et al. (2019) term "situated" ethical decision-making: the ability to recognize and address ethical issues within specific development contexts. The adoption of moral sensitivity has particular relevance given recent findings about the limitations of current approaches, which suggest that existing ethics tools often fail to account for the organizational and social contexts in which they are used (Rakova et al., 2021) and that practitioners often struggle to connect abstract ethical principles with concrete development decisions (Vakkuri et al., 2021). We aim to further this research agenda with two complementary approaches. First, we conduct a scoping survey to understand stakeholders' usage of existing AI ethics tools and frameworks. Building on these insights, we propose further empirical and qualitative investigations into how the frameworks and tools help foster moral sensitivity in AI practitioners in various contexts, while also paying attention to the major blockers that prevent practitioners from using and implementing these tools more effectively. Second, we build on the insights gathered from the scoping survey about the most widespread and widely used AI ethics frameworks in order to analyse them through the lens of moral sensitivity. We ask to what extent the individual components of moral sensitivity are addressed in these existing methodologies. This will provide insight into what their actual effects on AI practitioners could be, while also creating space for further improvements, so that trustworthy AI frameworks are not mere checkbox exercises but rather foster meaningful changes in AI practice.
Ayling, J., & Chapman, A. (2021). Putting AI ethics to work: Are the tools fit for purpose? AI and Ethics. https://doi.org/10.1007/s43681-021-00084-x European Commission. Directorate General for Communications Networks, Content and Technology. (2020). The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment. Publications Office. https://data.europa.eu/doi/10.2759/791819 Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (SSRN Scholarly Paper ID 3518482). Social Science Research Network. https://doi.org/10.2139/ssrn.3518482 Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations (SSRN Scholarly Paper 3284141). https://papers.ssrn.com/abstract=3284141 Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. 
Nature Machine Intelligence, 1(9), Article 9. https://doi.org/10.1038/s42256-019-0088-2 Metcalf, J., Moss, E., Watkins, E. A., Singh, R., & Elish, M. C. (2021). Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 735–746. https://doi.org/10.1145/3442188.3445935 Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. https://doi.org/10.1038/s42256-019-0114-4 Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5 Munn, L. (2022). The uselessness of AI ethics. AI and Ethics. https://doi.org/10.1007/s43681-022-00209-w Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices. Proc. ACM Hum.-Comput. Interact., 5(CSCW1), 7:1-7:23. https://doi.org/10.1145/3449081 Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society, 7(2), 205395172094254. https://doi.org/10.1177/2053951720942541 Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598 Vakkuri, V., Kemell, K.-K., Jantunen, M., Halme, E., & Abrahamsson, P. (2021). ECCOLA — A method for implementing ethically aligned AI systems. Journal of Systems and Software, 182, 111067. https://doi.org/10.1016/j.jss.2021.111067 Yeung, K., Andrew Howes, & Pogrebna, G. (2019). AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing (SSRN Scholarly Paper 3435011). Social Science Research Network. https://doi.org/10.2139/ssrn.3435011 |
17:30 | Towards a Value Sensitive Design (VSD) Based Learning Tool to Facilitate Ethical Considerations When Developing Responsible Technologies PRESENTER: Anisha Fernando ABSTRACT. EXTENDED ABSTRACT The world has changed over the past decades due to rapid advancements in technology, globalisation, and shifts in social, cultural, and environmental landscapes. These changes have led to a renewed focus on ethics and human values in all human endeavours. With the increased reliance on technology comes a heightened awareness of ethical challenges that arise, such as privacy concerns, digital surveillance, misinformation, and the potential for deepfakes and data breaches. As diverse cultures regularly interact in globalised environments, it is crucial to promote mutual respect, inclusivity, and understanding. Furthermore, advances in artificial intelligence (AI)-driven healthcare, telemedicine, and personalised medicine have raised ethical concerns regarding privacy, access to healthcare, and fairness in treatment. Responsible innovation incorporates ethical considerations into the development of new technologies, products, services, and systems (Grunwald 2014). Such innovation ensures that ethics is applied practically and is relevant to the innovation context and encourages stakeholder participation when developing technologies (van den Hoven 2014). Responsible technologies are necessary to ensure that technological progress benefits everyone, reduces harm, and is aligned with societal values. Such technologies refer to the development and application of technology in ways that prioritise ethical considerations, sustainability, social impact, and long-term consequences. An example of unethical conduct when relying on technologies is the Robodebt automated debt recovery program implemented by the Australian government between 2016 and 2019 (Commonwealth of Australia 2023). The scheme aimed to identify and recover overpayments made to welfare recipients by comparing reported income to tax office data. The program used a flawed algorithm leading to incorrect debt calculations, especially for people with variable incomes across a financial year. People were issued automated debt notices without human review, leading to stress and financial hardship for the affected individuals. A class action was filed, and the government agreed to settle for $1.2 billion. The Robodebt scandal highlights the risks and ethical challenges related to automated decision-making processes (Clarke, Michael & Abbas, 2024). Incorporating ethical considerations in the design and development of technology is crucial to ensure that responsible technologies contribute positively to society without causing harm. Such ethical considerations include, amongst others:
• Fairness: Provide equal access and opportunities for all, regardless of background or identity.
• Well-being: Prioritise the mental, emotional, and physical well-being of users.
• Autonomy: Give users control over how they interact with technology and how their data is used.
• Security: Minimise risks like cybersecurity threats, misinformation, and abuse.
Value Sensitive Design (VSD) is predicated on the principle that technology should not only meet functional needs, but also reflect and respect the values of individuals and communities impacted by the technology. VSD therefore considers the core values to uphold (such as fairness and equity, privacy and security, transparency, as well as sustainability) when designing and using technology. 
VSD offers opportunities to consider ethical implications, such as through moral imagination techniques, when developing responsible innovations (Umbrello 2020). Technology design influences user behaviour and enables users to practice ethics through the human values embedded within the design (Verbeek 2011; Vallor 2018). VSD considers values that may be in conflict in the design of technology, referred to as ‘value tensions’. These value tensions lead to ethical dilemmas, where choosing one value may compromise another (Friedman & Hendry 2019). Recognising and addressing these tensions can lead to more ethical outcomes in technology design. Learning tools that embed values enable participants to partake in ethical decision-making, which is particularly important when creating responsible technologies. The harm caused by unethical conduct involving technologies can be reduced when individuals are empowered to persuasively voice value tensions. For this reason, it is important to empower students to observe and voice value tensions as they become more professional in their conduct. Learning tools that enable students to observe, discuss and knowingly apply values relevant across professions are scarce. Therefore, there is a need for teaching and learning resources that afford opportunities to identify, consider and discuss values that are relevant to the appropriate tertiary education context. In this study we conduct a scoping review and use a mixed methods action research strategy, using the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) approach, on VSD-based learning tools that embed values and consider ethical perspectives during the design and development of responsible technologies. In this review, we aim to describe ethical considerations when developing responsible technologies and how they can be embedded in learning tools. We investigate the use of a learning tool in the form of VSD conversation cards to explore the value tensions between social and market-based norms at play through online interactions (Fernando, 2020; Fernando & Scholl 2020). The VSD conversation cards were developed to enable information technology (IT) students to observe, reflect, and discuss value tensions in the Australian tertiary education context. These cards are refined to extend value tensions by embedding Vallor's (2018) techno-moral virtues (social values) and Zuboff's (2019) analysis of systemic drivers of innovations (market values). In this paper, we present the results of a preliminary scoping review, describe the development of the VSD conversation cards and present the findings from a pilot study. The pilot study is part of a repeated measures empirical study. The pilot study is conducted in three phases: pre-intervention measurement, an intervention using the conversation cards, and post-intervention measurement. The participants are undergraduate and postgraduate students from a wide range of disciplines. A factorial vignette survey to measure values competencies has been designed. The pre- and post-intervention measurements use the survey to gauge the values competencies of participants. Following the pre-intervention survey, respondents undertake a group-based learning activity (intervention) in tutorials specific to their course to apply a values-based ethics lens to their specific discipline. Students examine scenarios to articulate and explore the tensions in the context of their learning activity. 
Respondents then complete the survey again, thereby providing a post-treatment measure. An analysis of variance (ANOVA) is used to gauge the efficacy of the conversation cards to develop values competencies and ethical decision-making skills, and to identify the factors carrying the most weight. Selected representative students participate in semi-scripted, videoed focus groups. Thematic analysis is used to elicit key themes and insights from the qualitative focus group data. After the pilot study, further empirical research will be conducted across several courses and disciplines to validate the conversation cards. This research will enable the development of teaching and learning resources (the learning tool) for different disciplines. We also aim to propose an innovative teaching and learning approach to develop students' ethics competencies by observing and discussing value tensions through VSD. This approach aims to engage multi-disciplinary educators to co-design learning activities with ethical dilemmas related to the design and development of IT within their professions using the VSD conversation cards. These co-designed learning activities and the VSD conversation cards will be made available to educators via a shared repository. The project aims to provide future students with greater access to diverse multi-disciplinary scenarios for technology design, development, and use. An online version of the validated conversation cards will also be developed to extend accessibility to online students via an open educational resource (OER). Responsible innovation is concerned with the appropriate application of ethical considerations when developing technologies. Value sensitive design offers an important avenue for students to learn about, consider and discuss ethical dilemmas that may arise in creating these technologies. The study uses mixed methods action research to conduct a scoping review and a repeated measures empirical study to identify important ethical considerations that need to be included in the development of learning tools. A key proposed outcome of this study is to develop an online version (OER) of the VSD conversation cards by incorporating Vallor's (2018) techno-moral virtues (social values) and Zuboff's (2019) analysis of systemic drivers of innovation (market values). References 1. Clarke, R., Michael, K. & Abbas, R.: Robodebt: A Socio-Technical Case Study of Public Sector Information Systems Failure. Australasian Journal of Information Systems, 28, 1-42 (2024) 2. Commonwealth of Australia, Royal Commission into the Robodebt Scheme. Final Report vol 1-3 (2023) 3. Fernando, A.: Exploring How Value Tensions Could Develop Data Ethics Literacy Skills. In: Pelegrin-Borondo, J., Arias-Oliva, M., Murata, K., Lara Palma, A.M. (eds.), ETHICOMP 2020 CONFERENCE: 18th International Conference on the Ethical and Social Impacts of ICT (pp. 194 - 197). Universidad de La Rioja (2020) 4. Fernando, A.T. & Scholl, L.: Towards using value tensions to reframe the value of data beyond market-based online social norms, Australasian Journal of Information Systems, 24 (2793), 1-9 (2020) 5. Friedman, B., & Hendry, D.G.: Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press, Cambridge, MA (2019) 6. Grunwald, A.: Technology Assessment for Responsible Innovation. In: van den Hoven, J., Doorn, N., Swierstra, T., Koops, BJ., Romijn, H. (eds.) Responsible Innovation 1. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-8956-1_2 (2014) 7. 
Umbrello, S.: Imaginative Value Sensitive Design: Using Moral Imagination Theory to Inform Responsible Technology Design. Science and Engineering Ethics, 26(2), 575–595. https://doi.org/10.1007/s11948-019-00104-4 (2020) 8. Vallor, S.: Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press, New York (2018) 9. van den Hoven, J.: Responsible Innovation: A New Look at Technology and Ethics. In: van den Hoven, J., Doorn, N., Swierstra, T., Koops, BJ., Romijn, H. (eds.) Responsible Innovation 1. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-8956-1_1 (2014) 10. Verbeek, P.-P.: Moralizing technology: understanding and designing the morality of things, University of Chicago Press, Chicago (2011) 11. Zuboff, S.: The age of surveillance capitalism: the fight for the future at the new frontier of power. Profile Books, London (2019) |
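As an illustration of the pre/post analysis this abstract describes, the sketch below runs a repeated-measures ANOVA on synthetic pre- and post-intervention scores using statsmodels' AnovaRM. Participant numbers, the score scale, and the effect size are assumptions for demonstration only, not results from the pilot study.

```python
# Minimal sketch of the pre/post analysis described in the abstract above: each
# participant's values-competency score is measured before and after the
# conversation-card intervention, and a repeated-measures ANOVA tests the
# within-subject effect. The data are synthetic placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
n = 30                                                 # hypothetical number of pilot participants
pre = rng.normal(loc=3.2, scale=0.6, size=n)           # pre-intervention survey score (1-5 scale assumed)
post = pre + rng.normal(loc=0.4, scale=0.4, size=n)    # assumed modest improvement after the intervention

data = pd.DataFrame({
    "participant": np.tile(np.arange(n), 2),
    "phase": np.repeat(["pre", "post"], n),
    "score": np.concatenate([pre, post]),
})

result = AnovaRM(data, depvar="score", subject="participant", within=["phase"]).fit()
print(result.anova_table)
```

With additional factors (for example discipline or course), the same long-format table extends naturally to the factorial design the study mentions.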
18:00 | On the Current (Im)Possibility to Achieve Public Value Through the EU Digital Strategy: an Ethics Method to Seek a “Collectual” Equilibrium ABSTRACT. This paper has two aims, one theoretical and one practical. On a theoretical level, the goal is to highlight the criticalities of, and the ultimate impossibility of achieving, public value by/through digital technologies under the current regulatory framework of the European Union's (EU) digital strategy. On a practical level, the goal is to redress such criticalities by advancing a practice complementary to the EU's normative framework, which takes the form of a “problem-opening” transdisciplinary method for questioning, exposing, and redressing the effects of digital technologies' use in complex scenarios. The method, discussed and exemplified in the paper, was tested from 2021 to 2024 at the Delft University of Technology in a course titled “Ethics for the Data-driven City”. |