09:00 | Ethics of Personal Data Economy: Unpacking of Dilemmas and Quandaries in Modern Digital Economy PRESENTER: Minna Rantanen ABSTRACT. Data have gradually become a central element of many different sectors, making an increasing number of companies part of the data economy - a modern form of economy rooted in the use of data [1]. In practice, all major companies are now investing in statistics, data science, and artificial intelligence (AI) to identify trends, patterns, and opportunities to gain a better competitive position [2]. However, data economies are still mainly created and controlled by a few big tech companies [3]. Research about data economies is still evolving and fragmented due to the complexity and multidisciplinary nature of the field. As a concept, the data economy is still relatively new; there is no consensus on its definition, and new definitions constantly emerge [4]. Various terms, such as data ecosystem [4] and data economy ecosystem [5], describe the same thing. There are also more critical terms used to describe data economies, such as “data capitalism” [1] and “surveillance capitalism” [6]. Essentially, data economy is an umbrella construct covering a diverse set of phenomena in which data is collected, analysed, and utilised to create (business) value.

Ethical issues of data collection and use can be examined either as specific issues of computational operations or from a wider societal perspective. For instance, data ethics focuses on the ethical challenges posed by data science and its procedures [7]. From a broader ethical and societal perspective, data economies offer much to study, especially if we consider personal data and how that data can be used to influence us and whole societies. Research on the ethics of the data economy is still scattered, although many ethical issues concerning data and personal data are being debated in applied and practitioner circles. In the full paper, we aim to study the ethical issues of the data economy from a holistic perspective by conducting a scoping review. In this extended abstract, we present a brief overview of the ethical themes addressed in the literature, which should provide a good basis for the full study.

As a starting point, a literature study on the ethical governance of data ecosystems [8] noted that ethical issues that are not about research ethics can be roughly divided into five categories:
• privacy
• accountability
• ownership
• accessibility
• motivations.
In addition, consent, security, trust, and transparency were frequent themes. However, the authors note that the handling of ethical topics was superficial and rarely included ethical justifications [8]. Although this research was limited to the governance of data ecosystems, it shows that there is interest in the ethics of the data economy and that some themes occur repeatedly, thus providing a sufficient starting point. Next, we elaborate on these themes and examine their connections to lay a basis for the scoping review.

Privacy is one of the most fundamental issues concerning the collection and use of personal data. Our notions of information privacy have changed over time [9]. In general, however, privacy refers to an individual's desire to control, or have some influence over, data about themselves [10]. Information privacy is a popular topic that can be defined in various ways and analysed at multiple levels, ranging from individual privacy to social privacy. Privacy can be seen as a right, commodity, state or control, and it overlaps with concepts such as confidentiality, secrecy, anonymity, and security [11]. For instance, constant surveillance through smart devices threatens privacy, as it removes the possibility of being free from observation in places previously seen as private, such as homes [12]. Notably, people find privacy important but still rarely try to protect their data [13].

One way to enhance privacy is through transparency, which allows individuals to control data collection. Transparency has an evolving meaning as a concept, but it is often used as a synonym for open decision-making. It can be seen as a way to ensure organisations' accountability and to counter corruption [14]. Transparency is vital for controlling privacy, as informed consent is required for disclosing data. Consent should be freely given and informed; thus, consent is made possible through transparency. Therefore, transparency about the reasons for data collection and about how the data is used is important from a privacy perspective, as well as for ensuring the accountability of organisations.

Ethical issues concerning accountability are often seen as a question of responsibility in case of harmful events and their prevention. Moral responsibility and accountability and their relations have been widely discussed, but there is little consensus on their definitions or relations [see, e.g., 15]. In addition, the terms are often used interchangeably. A possible differentiation is that accountability implies instrumentality and external control, while responsibility is more about morality and inner controls [16]. In other words, accountability is more like an assigned duty, whereas responsibility is chosen. In a data economy setting, questions about accountability and responsibility can be contemplated at the individual and organisational levels. However, it must be noted that although organisations are often seen as legally responsible entities and, thus, accountable for their actions, the people within them are the ones making the decisions. Moral agency cannot be extended to artefacts: its core idea is that an individual is primarily responsible for their intended and voluntary behaviour, and therefore artefacts cannot be considered moral agents [17].

In addition to ensuring accountability, transparency is often seen as a means to create trust. Trust essentially results from a process in which an individual evaluates trustworthiness. In the case of data economies, trust is institutional rather than personal. In institutional trust, the trustor places trust in the rules, roles, and norms of an institution instead of another human being [18]. Trust and trustworthiness play a key role in adopting and using technology [19]. In the case of personal data economies, trust towards the institutions collecting and using data is even more relevant, as it affects decisions about disclosing personal data [20]. Trust between the institutions within a data economy is also important, as the whole system depends on cooperation.

Traditionally, ownership has been used to describe the relation between physical property and its owner. This becomes problematic for personal data, which is not a physical artefact, can be in the possession of multiple people at a time, and can be used multiple times without being consumed. In the data economy, however, data is an asset that can create value for its possessor through use or sale. Yet data is personal if it can be used to identify the person it concerns, and that person should have rights over it despite the lack of ownership. This dilemma can be solved by separating legal ownership from the ethically justified control over data that individuals should have [21].

In the data economy, accessibility is a vital issue: without access, data cannot be used and value cannot be created. Accessibility is about what information an individual or organisation has a right or privilege to obtain, but also about the conditions and safety of the data [22]. Thus, accessibility is not only about who can use the data but also about ensuring that it can be used and that it stays secure from malicious use.

Rules, roles, and norms are connected to accountability and responsibility but also clarify which actions are allowed and which are not. Official rules are often enforced by penalties and require governance and supervision. In data economies, there is no hierarchical control within a system, but this does not mean there are no control mechanisms [5]. For instance, laws and policies regulate the actions of different parties. Notably, laws and policies are not necessarily ethical but should be based on ethical considerations, and constant technological development creates policy vacuums that could be alleviated with ethics [23]. This calls for ongoing criticality towards the policies and conceptualisations in a complex data economy ecosystem.

Motivations as an ethical issue require consideration of why people do what they do. For instance, Conger et al. [24] used motivation as an ethical category to raise awareness of the beneficiaries of unethical acts and of those who might suffer. They divide this category into identifying classes of potential victims, rationales behind actions, and direct and indirect beneficiaries. The consideration of intentions and the avoidance of harm are rather general ethical issues, but they should also be kept in mind when considering more specific ones.

As can be seen, there are many ethical issues in data economies that can be categorised under these five themes. While focusing on specific issues is important, we should also aim to understand the bigger picture of ethical issues and their relations in the personal data economy. Thus, in the full paper, we will systematically map the existing literature following the scoping review guidelines of Arksey and O’Malley [25], using these themes as the basis. We find the scoping review a suitable methodology, as it allows us to map the currently scattered research and identify needs for further research in the field. This review would thus aid current and future research in data economy ethics. |
09:30 | “There Are so Many” - Harms of Smart Homes ABSTRACT. In this paper, harms facilitated by smart homes are considered. The study aims to provide an overview of the research area by contributing a map of smart home-facilitated harm and supporting future research. A systematic mapping study, including 48 published and peer-reviewed articles, identified eight preliminary categories of harm. These results were then presented to a focus group with smart home users to brainstorm additional categories, producing a final map of smart home-facilitated harm. |
10:00 | Smart Doorbells in a Surveillance Society PRESENTER: Anjela Mikhaylova ABSTRACT. Smart doorbells are not merely convenience devices; they are domestic surveillance tools that capture audio-visual data in and around private homes. Promoted as instruments of security and peace of mind, they have quietly introduced new forms of everyday monitoring that blur the boundaries between private and public space. This paper presents findings from the first phase of a two-stage qualitative study exploring how stakeholders perceive and navigate the ethical, legal, and social implications of smart doorbell surveillance. Drawing on pilot survey data from a residential neighbourhood in Leicester, UK, the study identifies five emergent themes: shifts in community trust, surveillance used in neighbour conflict, domestic safety applications, incidental recording and consent issues, and discomfort with the normalisation of surveillance. These findings are interpreted through the theoretical lenses of Foucault’s Panopticism and Zuboff’s Surveillance Capitalism, highlighting how smart doorbells reshape behaviour, visibility, and power at the residential threshold. In response to these concerns, the discussion incorporates normative frameworks, in particular Privacy by Design, Ethics by Design, and Responsible Research and Innovation (RRI), to outline practical pathways for more accountable and democratic technology design and regulation. |
09:00 | Integrating Ethics and Gender Equality in Artificial Intelligence Education: a Study of Higher Education in Portugal PRESENTER: Camila Marques ABSTRACT. Artificial Intelligence (AI) systems have the potential to replicate and even intensify existing social inequalities, particularly those related to gender. As AI technologies become ubiquitous, it is essential that developers are equipped to consider their ethical and societal implications. This study examines how ethics and gender equality are addressed in AI education within higher education institutions in Portugal. Through a qualitative, multi-phase methodology—including a desk-based review, an online questionnaire, and interviews—it investigates whether, where, and how these themes are integrated into AI-related university curricula, and how academic programs prepare future professionals to critically engage with these crucial issues. It analyzes the pedagogical approaches employed and the contents addressed, while also identifying institutional initiatives aimed at promoting ethical and gender-sensitive AI education. Preliminary findings reveal that only 24% of AI-related programs explicitly incorporate a gender perspective, often indirectly through broader concepts such as fairness, bias, or justice. Explicit references to gender are scarce and typically limited to recommended bibliographies. Furthermore, these topics appear more prominently at the undergraduate level, with a marked absence in the doctoral programs analyzed. These findings highlight critical gaps in AI education and the pressing need for more systematic integration of ethics and gender perspectives. By analysing how these topics are addressed in AI education in Portugal, this study contributes to the ongoing dialogue about ethical AI development. It calls for a comprehensive, forward-thinking approach that equips students not only with technical skills but also with the capacity to navigate the broader societal impact of the technologies they create. |
09:30 | Gender and Emerging Digital Technologies in Education PRESENTER: William Steingartner ABSTRACT. The integration of digital technologies, content and processes is reshaping all sectors of education: from curriculum reform to teacher assessment and development. Among these technologies, applications of artificial intelligence (AI) present significant opportunities to improve the learning experience, especially in secondary and higher education. In this study, we focus on the potential impact of AI on teaching and learning by examining how students perceive AI as a learning tool. We also examine how AI applications are integrated into academic processes and activities. Building on previous research and in line with the European Education Area (EEA) initiative, the study uses a cross-sectional analysis to assess the benefits and challenges of AI in educational contexts. The findings provide insights into the effectiveness of AI in supporting communication, programming, content creation and the explanation of concepts, while highlighting the importance of developing skills for educators and students to navigate the evolving AI landscape. |
10:00 | Towards a Gender-Inclusive Tech Landscape in Portugal: Women4Digital's Insights on Gender and Digital Transformation PRESENTER: Mariana Santos ABSTRACT. This paper presents preliminary findings from the Women4Digital project, which analyzes public policies in Portugal aimed at promoting women’s participation in the Information and Communication Technologies (ICT) sector. Despite national initiatives such as Engineers for a Day and the integration of gender equality goals into broader strategies like ENIND and INCoDe.2030, women remain significantly underrepresented in ICT careers and leadership positions. In the article, we present some results from the diagnosis of the situation in Europe and Portugal, the normative European framework, and an analysis of our mapping of national policies to promote gender equality in the digital age in Portugal. The study highlights the need for systemic and intersectional political approaches, with the full integration of an intersectional gender perspective, because the inequality problem is not only a matter of “fixing women” but implies profound systemic gender transformations in education and labour market systems. |
09:00 | Artificial Moral Discourse and the Future of Human Morality ABSTRACT. The moral norms, values, and practices of homo sapiens have not always been as they are now. There was a causal process by which humans came to care about values like peace, generosity, privacy, and justice, as well as values like honor, chastity, piousness, and purity, and, more generally, the question of what one ought to do. Likewise, there was a causal process by which we developed our various morality-related practices—things like justification, scolding, mercy, promising, and so on. These aspects of human moral life are not set in stone—they may change in the future. (For general work on the topic of moral change, see FitzPatrick 2021, Sterelny 2021, Baker 2019, Tomasello 2016.) We may acquire new values and norms, and we may lose or come to care less about some of those that we currently have. Similarly, it is possible that the moral practices that humans currently engage in will look dramatically different in the future. One striking new way in which human morality might change is through human interactions with AI systems. In particular, we can think of interactions with chatbots like ChatGPT—conversational systems, based on large language models (LLMs), that are easily accessible to the public and growing in use—and, more broadly, multimodal AI systems based on foundation models. Many publicly accessible LLM-based chatbots readily and flexibly generate outputs that look like moral assertions, advice, praise, expression of moral emotions, and other morally significant communications. We can call this phenomenon “artificial moral discourse.” In the first part of this talk, I supply a characterization of artificial moral discourse. Drawing from existing empirical studies (e.g., Howe et al. 2023, Fisher et al. preprint), I provide examples of several varieties of artificial moral discourse, and I propose a definition for the concept. On my view, to engage in artificial moral discourse is for a computer system to exhibit a pattern of response to inputs that resembles some human pattern of response to similar inputs, where the response contains something (terms, sentences, gestures, facial expressions, etc.) that the human interlocutor (or an observer) views as communicating a moral message, or would have viewed as communicating a moral message if the exchange had occurred between humans. Why does artificial moral discourse matter? Among other things, interactions with artificial moral discourse could influence human moral values, norms, and practices, for good or ill. In the second part of the talk, I make a case for the claim that artificial moral discourse is likely to influence human morality in ways that past technologies have not. Namely, I propose that regular interaction with LLM-based chatbots can influence human morality via mechanisms that resemble modes of social influence on morality. This includes influence via advice and testimony, influence via example, and influence via norm enforcement. The first mechanism I discuss is chatbot influence resembling moral advice and testimony (Wiland 2021, Krügel et al. 2023). Humans may seek out this advice and testimony, or they may simply encounter it in the course of their exchanges with the system. 
Outputs resembling testimony could influence human norms and values via deference, in which the human accepts the statement of the chatbot on its apparent say-so, or by altering human perceptions of what typical views are, inclining them to bring their own views more into alignment. Influence could also come about via humans' oppositional reactions to the systems' outputs: especially when humans perceive the chatbot's view as imposed on them from above or as a view espoused by a source they distrust, they may modify their own view to be more opposed to the view they encounter from the chatbot. Outputs resembling moral advice may influence human morality in a slightly more indirect way. Humans may simply accept the moral advice of the chatbot or they may act in a contrarian way; either way, the decisions they make in turn will influence the values they hold and will influence others' perceptions of what norms prevail and what actions are acceptable. The second mechanism I discuss is behavioral influence (Henrich 2016, Lockwood et al. 2025, Köbis et al. 2021). With the capacities of current text systems, we can already imagine people modifying their own way of communicating and their own word choices based on what they've encountered from chatbots; meanwhile, the voice capacities of the systems are also now advanced enough that one can imagine people imitating the system's ways of talking. In the future, if video capacities improve, we could imagine people imitating patterns of facial expressions or other mannerisms that the systems use. This might not seem morally significant, but humans often use such indicators to convey things like their level of concern or their lack of caring, some cruel sarcasm, or the message that something's just a little off in what another person just said or did. These are all areas where we can imagine that, if humans routinely interact with the chatbots, humans' own dispositions for what to do and say and how to react might change. The third mechanism I discuss is automated norm enforcement. In sum, there are some properties and actions that some humans condemn as wrong, and which humans are liable to think that computers could detect on the basis of imagery or sound, given the ability of the systems to imitate patterns of human moral language usage. For instance, in 2023, Iranian officials announced they would use AI in public spaces to detect women whose hair is uncovered. Officials could subsequently target those women for fines, arrest, or “morality schooling” (Johnson 2023). This news item is a dramatic instance of a larger trend. With advancements in computer vision systems have come efforts to adapt those systems to identify a variety of norm violations, including violence and aggression (Sernani et al., 2021), loitering (Shanmughapriya et al., 2022), littering (Brandon 2022), use of force by police officers (Farooq 2024), or simply “unusual behavior” (Fickenscher 2023). Given the important role that sanction and reward play in human normative life, the use of AI systems to identify norm violation and compliance—whether to feed into human decisions about sanction and reward or to automatically mete out such responses—is highly significant. If AI systems are harnessed in this way, we can imagine people changing their behaviors to evade the system's artificial gaze, which may correctly or incorrectly classify them as engaging in a particular immoral action. Changing behavior can lead to changing norms and eventually values. 
Although the phenomenon of artificial moral discourse bears a resemblance to the ideas of moral machines and artificial moral advisors, which have previously been discussed in the philosophical literature (Wallach and Allen 2008, Anderson and Anderson 2011), the concepts and theoretical frameworks developed for those hypothetical phenomena are not enough on their own to help us get a grip on the nature and risks of artificial moral discourse, nor will they suffice to guide our response to it. Among other things, it is not at all safe to assume that what the systems are doing is genuine moral reasoning, advice-giving, and so on, nor that these systems are reliable sources of moral advice or moral judgments. Instead, given their complexity and opacity, we have a very poor idea of the behavioral dispositions of these systems and the conditions that will elicit particular behaviors; there currently exist no well-validated tests or standards for evaluating their morally relevant capacities across a range of contexts. In the third part of the talk, I suggest some further research questions for future empirical, technical, and philosophical investigation on how artificial moral discourse may influence human morality and what the ethical implications of that influence may be.
Citations
1. Anderson, M. and S.L. Anderson (eds.) (2011). Machine ethics. Cambridge University Press.
2. Baker, R. (2019). The structure of moral revolutions: Studies of changes in the morality of abortion, death, and the bioethics revolution. MIT Press.
3. Brandon, E.M. (January 12, 2022). England has an £850 million problem with litter. This startup is using AI to fix it. Fast Company. https://www.fastcompany.com/90711804/england-has-an-850-million-problem-with-litter-this-startup-is-using-ai-to-fix-it
4. Farooq, U. (February 2, 2024). Police departments are turning to AI to sift through millions of hours of unreviewed body-cam footage. ProPublica. https://www.propublica.org/article/police-body-cameras-video-ai-law-enforcement
5. Fickenscher, L. (February 12, 2023). Retailers busting thieves with facial-recognition tech used by MSG’s James Dolan. New York Post. https://nypost.com/2023/02/12/retailers-busting-thieves-with-facial-recognition-tech-used-at-msg/
6. Fisher, J., Feng, S., Aron, R., Richardson, T., Choi, Y., Fisher, D.W., Pan, J., Tsvetkov, Y. and Reinecke, K. (2024, preprint). Biased AI can influence political decision-making. arXiv preprint arXiv:2410.06415.
7. FitzPatrick, W. (2021). Morality and evolutionary biology. The Stanford Encyclopedia of Philosophy (Spring 2021 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/spr2021/entries/morality-biology/
8. Henrich, J. (2016). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.
9. Howe, P.D.L., Fay, N., Saletta, M. and Hovy, E. (2023). ChatGPT’s advice is perceived as better than that of professional advice columnists. Frontiers in Psychology, 14, p.1281255.
10. Johnson, K. (January 18, 2023). Iran says face recognition will ID women breaking hijab laws. Wired. https://www.wired.com/story/iran-says-face-recognition-will-id-women-breaking-hijab-laws/
11. Köbis, N., Bonnefon, J.F. and Rahwan, I. (2021). Bad machines corrupt good morals. Nature Human Behaviour, 5(6), pp.679-685.
12. Krügel, S., Ostermaier, A. and Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13(1), p.4569.
13. Lockwood, P.L., van den Bos, W. and Dreher, J.C. (2025). Moral learning and decision-making across the lifespan. Annual Review of Psychology, 76(1), pp.475-500.
14. Sernani, P., Falcionelli, N., Tomassini, S., Contardo, P. and Dragoni, A.F. (2021). Deep learning for automatic violence detection: Tests on the AIRTLab dataset. IEEE Access, 9, pp.160580-160595.
15. Shanmughapriya, M., Gunasundari, S. and Bharathy, S. (2022, April). Loitering detection in home surveillance system. In 2022 10th International Conference on Emerging Trends in Engineering and Technology - Signal and Information Processing (ICETET-SIP-22) (pp. 1-6). IEEE.
16. Sterelny, K. (2021). The Pleistocene social contract: Culture and cooperation in human evolution. Oxford University Press.
17. Tomasello, M. (2016). A natural history of human morality. Harvard University Press.
18. Wallach, W. and Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
19. Wiland, E. (2021). Guided by voices: Moral testimony, advice, and forging a 'we'. Oxford University Press. |
09:30 | Stakes, Context, and AI: How AI Challenges Human Ways of Knowing ABSTRACT. Please see attached file for extended abstract |
10:00 | Identifying AI Challenges in Research Practices Through Research Ethics Reviews PRESENTER: Konstantina Giouvanopoulou ABSTRACT. The rapid advancement of technology and the spread of Artificial Intelligence (AI) across various sectors of our society are reshaping our daily lives and bring with them a series of emerging challenges. Given this, it is vital to detect and diagnose the nature of these challenges at an early stage, adopting a proactive approach to the design, development and evaluation of AI systems: an ethics-by-design approach permeating all research phases. The challenges posed by the widespread use of AI systems also impact the research ethics review process, which aims to ensure ethical and responsible research practices. It is therefore urgent to understand the impact of AI on research, as well as the arising social implications, in order to increase the effectiveness of the research ethics review process and achieve more ethical AI research outcomes. This study attempts to identify challenges related to the use of AI technologies across different sectors and to detect whether and how these are impacting research ethics and the research ethics review process. For this purpose, a systematic literature review was conducted, and its initial findings led to the creation of a classification framework that provides a structured basis for more in-depth research. This framework rests on the premise that identifying and categorizing the types of arising challenges can enable more effective responses; future policies to meet these challenges can thus be more targeted and impactful. However, this study highlights the need for further research, including a deeper and more extensive literature review in the field, and for empirical validation of the classification framework through qualitative feedback from members of Research Ethics Committees (RECs). |
11:00 | Digital Transformation of Care: Navigating Ethical Landscapes PRESENTER: Mineko Wada ABSTRACT. Care in our Daily Lives. Care surrounds us constantly in our daily lives. Although it is a loan word from English, the term care (ケア, kea) has become deeply ingrained in Japanese society. It is a word used so often that people understand the different aspects of the act without needing a specific explanation. For example, the following terms are commonly used in Japan: elder care, child care, self-care, care management, care plans and terminal care. As these examples illustrate, care is very closely tied to our lives, and it is essential to human existence, regardless of age, disability or place of residence. To take an extreme example, from the moment a fertilised egg is implanted in its mother's womb, it is cared for by that mother. And when people reach old age, they receive terminal care as they approach the end of their lives. From birth to death, humans exist in a mutual exchange of care - being cared for and giving care (Asai, 2024). In general, care, which is indispensable to our lives, has three main meanings (Kawamoto, 1995; Hiroi, 2000). The first is care centred on the mental or emotional act of caring for others, which can also be expressed by words such as ‘consideration’, ‘concern’ and ‘thoughtfulness’. The second is care centred on actions actually performed for the sake of others, which can be expressed by phrases such as ‘doing for someone’ and ‘looking after someone’. The third is even more specific than the second and refers to care that requires specialised and practical knowledge and skills. This third meaning brings to mind professional care provided by medical professionals and care workers. In any case, care can encompass not only physical care but also emotional support, attention and concern. And we live our lives caring and being cared for, combining these three types of care in complex ways. The Care Crisis and the Promise of Digital Solutions. While care is essential for our lives, the way care is provided, especially care that requires action, is greatly affected by the increasing number of nuclear families and dual-income households, as well as the declining birth rate and rapidly ageing population. In other words, modern lifestyles are shifting care from an unpaid act of love within the family to an act performed by professionals in the field of care. According to discussions on the ethics of care by Gilligan and subsequent researchers, care is essential for human beings and indispensable for their survival. However, it has traditionally been socially positioned as a role exclusively for women, and issues have been pointed out regarding the value and morality of the act of caring (Gilligan, 1982; Tronto, 1993; Held, 2013; Okano, 2024). In other words, the perspective of caring for care has been socially neglected. However, the rapid ageing of the population, combined with a declining birth rate, has highlighted the importance of care, which has traditionally been seen as a predominantly female task. In other words, there is an overwhelming shortage of human resources to care for the ever-growing elderly population. Currently, Japan is facing a serious shortage of nurses and caregivers, and is therefore accepting caregivers from abroad. However, despite its social demand and importance, care work is demanding and the salary is not high, so there is a constant struggle to secure personnel and human resources for care work. 
In particular, there is a shortage of caregivers, and the Ministry of Health, Labour and Welfare estimates that about 2.8 million caregivers will be needed by 2040, an increase of about 690,000 from 2019 (Ministry of Health, Labour and Welfare, 2024). Therefore, there has recently been a push to provide efficient and effective care services by introducing and utilising digital technology in the care sector, as seen in the development of care robots; that is, the promotion of the so-called digitalisation of care. In Japan, where there are strict restrictions on accepting foreign workers, even as care workers, the introduction of digital technology in the care sector is expected to become even more active in the future. In this study, based on the results of a qualitative study by Wada et al. (2024) on how elderly people perceive digital care, we consider the ethical issues involved. Perceptions and Concerns: Older Adults' Voices on Digital Technologies. According to the interview study by Wada et al. (2024), which explores how Japanese community-dwelling older adults perceive the use of digital technologies in daily life, older adults have four main concerns about care in their increasingly digitalised daily lives. The first is a concern that care will be hampered by mechanisation or robotisation, leading to dehumanisation. Dehumanisation of care raises concerns about the potential loss of meaningful human interaction and the risk of over-reliance on technology. This concern can be linked to the ongoing debate about the replacement of jobs by AI. According to Frey and Osborne (2017), it is considered difficult to immediately replace humans and professions that provide care with technology or AI. Care needs to be provided in accordance with the individual circumstances of those who need it, and because care requires consideration of life safety and security, it places not only a physical but also a psychological burden (emotional labour) on those who provide it (Hochschild, 1983). Furthermore, it is currently difficult to replace the presence of care, where the experience of sharing a place and being with the caregiver is itself care for the cared-for, with healthcare technology (Kleinman, 2015). Kleinman further defines care as 'a personal and collective human practical experience that includes protection, practical assistance, and a sense of solidarity that is physical, emotional, interpersonal, moral, and human support' (Kleinman, 2015:122). Technology used for care, which is inherently a human act, needs to be designed to play a role in human relationships through care, rather than to replace the human caregiver with machines. The second concern is about privacy and security issues that may arise from the use of digital technology. As data has become a valuable asset, it is increasingly difficult for individuals to grasp the social and economic risks associated with data breaches or misuse of personal information. These concerns are not limited to Japanese older adults but have also been identified in research studies in Europe (Klaver et al., 2021). Furthermore, without a clear understanding of how to ensure privacy and security, these concerns make older adults reluctant to use digital technology for care. This, combined with the third concern of low digital literacy, leads to anxiety about how to use digital tools. 
Lacking digital literacy, older people easily face difficulties navigating interfaces or understanding online safety. Consequently, even in a society where digital devices are widely used, gaps in digital literacy can lead to a digital divide between people. Low digital literacy among older people can cause unexpected socio-economic losses. The problem of the digital divide in modern society needs to be understood not only as a problem of economic inequality due to digitalisation, as in the past, but also as a problem that can lead to the isolation of older adults and to mental health problems (loneliness) in society. This is compounded by the fourth concern, the lack of access to immediate support for issues relating to digital adaptation, which makes it difficult to accept digitalisation. How should IT companies that develop and disseminate digital tools respond to the situation of older adults who are isolated in a digitalised society? These issues can be considered from the perspective of the social responsibility of digital companies and the IT industry, as well as from the perspective of the professionalism of the IT professionals who design and develop digital tools. On the other hand, these concerns can also be seen as the flip side of older adults' motivations to use digital tools and their expectations of their benefits (the potential for expanding social connection, sustaining health, and avoiding the need to handle coins). Older adults may wish to connect with friends, communities or society through digital tools, to improve the convenience of their lives through them, or to use them to maintain their health. Introducing digitalised care into actual care practice for the sake of efficiency and convenience alone can lead to the dehumanisation of care and create an unequal power relationship between caregiver and care recipient. Mitigating these ethical and social risks requires a holistic approach to digital care that considers the interconnectedness of mind/emotion, body, skills/knowledge and connectedness, and ensures that technology serves to enhance, not diminish, the human experience of care. References are listed in the uploaded attachment. Please see the attached file. |
11:30 | Evaluating the Role of Chatbots in Higher Education Based on Students’ Experiences PRESENTER: Julia Więckiewicz-Modrzewska ABSTRACT. The integration of chatbots in higher education has emerged as a transformative tool to increase student engagement, streamline administrative processes and support personalised learning. This article examines the current uses, benefits and challenges of chatbot technology in academia, based on an empirical study of a unique sample of 408 students at a pedagogical university, people who will shape the future education of others. The findings indicate that students are most likely to use ChatGPT, mainly in education-related areas and, to a lesser extent, in areas of everyday life. In this respect, non-pedagogy students were more likely than pedagogy students to generate ready-made answers during exams. Although students claim to have a fairly good knowledge of chatbots, only a quarter had read articles or research on them. The study also explored ethical issues related to students' use of chatbots. Few respondents believe that their teachers or university have rules or guidelines for the responsible use of chatbots; non-pedagogy students and pedagogy students with teaching qualifications rate this issue the lowest. Almost half of the respondents have privacy and security concerns when using a chatbot. Seven out of ten believe that chatbots can be used to manipulate information or mislead students. Students also believe that chatbots could increase inequalities in education. Through a synthesis of existing literature and practical implementations, this paper provides insights for teachers, administrators and developers who want to use chatbot technology to foster innovation in university education. |
12:00 | Through the Educators' Lens: University Teachers' Perceptions of AI Integration in Higher Education PRESENTER: Julia Więckiewicz-Modrzewska ABSTRACT. The integration of artificial intelligence (AI) and chatbot technologies in higher education has accelerated in recent years, prompting significant pedagogical and institutional shifts. Using a mixed-methods approach, including a survey containing both closed and open-ended questions, we explore university teachers' perceptions of chatbots as educational tools. The study focuses on teachers' usage patterns, attitudes, concerns and perceptions of the ethical issues related to the use of AI-based tools. The findings indicate that teachers are most familiar with ChatGPT and AI language tools, such as language translation and speech-to-text transcription, as well as online support tools to improve writing, mainly because they use these tools in their professional work. Teachers who specialised in pedagogy believed to a greater extent than academics in other social science disciplines that pupils, students and teachers should participate in training on using AI. The study also explored ethical issues related to teachers' use of chatbots. Almost half of the respondents expressed security and privacy concerns when using a chatbot; however, academics with a doctorate were less likely to have privacy concerns than those with a master's degree. Over seventy percent of teachers believe that chatbots can be used to manipulate information or mislead students. They also believe that chatbots could increase inequalities in education. Recommendations are offered for policy makers, developers, and institutions aiming to integrate chatbots in ways that uphold academic standards and enrich the teaching-learning experience. |
12:30 | Quizly: Transforming Quiz Experiences with Multi Modal Inputs for Differently Abled Users PRESENTER: Farzana Jabeen ABSTRACT. In the rapidly evolving landscape of mobile applications, the integration of innovative technologies plays a pivotal role in the transformation of user experiences, particularly in educational contexts. This research introduces a groundbreaking development, the multimodal Android quiz application named 'Quizly', which utilizes advanced machine learning techniques to redefine participation in educational quizzes. By seamlessly integrating hand gesture recognition, voice navigation and gaze tracking, the application introduces a new dimension to user interaction, fostering an interactive and intuitive environment for quiz participants, especially for differently abled users. Quizly aims to redefine accessibility standards through the use of machine learning, specifically Convolutional Neural Networks (CNNs), and a tech stack that includes Python, React Native, MongoDB, Express, and Node.js. A user study was conducted with 30 participants, who tested and validated Quizly; the results were satisfactory. The project methodology places a strong emphasis on an incremental and iterative development approach, incorporating user feedback and technological advances into each version up to the final release of Quizly. The research findings yield useful insights for educational technology concerning innovation and the user-centric design of an instantly responsive quiz environment, and they serve as a basis on which researchers can build further work in the field of interactive educational applications. The goal of the project is to be a catalyst for inclusive technology, especially for differently abled users, not just a technical fix. |
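The abstract names CNN-based hand-gesture recognition as one of Quizly's input modes but does not spell out how gestures map to quiz actions. As a minimal sketch, assuming a small Keras CNN over 64x64 grayscale camera frames and a four-answer gesture vocabulary (the architecture, input size, and gesture-to-answer mapping are all illustrative assumptions, not details from the paper), the recognition step could look like this:

```python
# Illustrative sketch only: a small CNN that classifies hand-gesture images
# into four quiz answers (A-D). Architecture, input size, and label set are
# assumptions for illustration, not details taken from the Quizly paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

ANSWERS = ["A", "B", "C", "D"]  # hypothetical gesture-to-answer mapping

def build_gesture_cnn(input_shape=(64, 64, 1), num_classes=len(ANSWERS)):
    # Two conv/pool stages followed by a dense classifier head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def predict_answer(model, frame):
    """Map one 64x64 grayscale camera frame to a quiz answer."""
    probs = model.predict(frame[np.newaxis, ..., np.newaxis], verbose=0)
    return ANSWERS[int(np.argmax(probs))]

if __name__ == "__main__":
    model = build_gesture_cnn()
    dummy_frame = np.random.rand(64, 64).astype("float32")  # stand-in for a camera frame
    print("Predicted answer:", predict_answer(model, dummy_frame))
```

In a shipped app like the one described, a trained model of this kind would more plausibly be exported to an on-device format and invoked from the mobile front end rather than run as a Python script.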
11:00 | Corporate Governance, Business Ethics, and Their Relationship with Financial Performance in Companies Listed on the Mexican Stock Exchange PRESENTER: Juan Carlos Yañez Luna ABSTRACT. The objective of this study is to analyze, individually and collectively, the relationship between the governance dimension and the financial performance of non-financial companies listed on the Mexican Stock Exchange (BMV) between 2017 and 2022. Based on agency theory and stakeholder theory, the general hypothesis is that the governance dimension of ESG indicators positively influences the financial performance of non-financial companies listed on the BMV. Three regression analyses are conducted: the first at the level of each corporate governance element; the second using a flexible corporate governance index that measures the impact of governance by averaging compliance; and finally, total or null compliance with governance practices is analyzed using a strict corporate governance index. Using a study sample of 584 observations, the multiple regression analysis demonstrates that some corporate governance elements increase profitability (ROA, ROE). Additionally, the flexible corporate governance index shows a positive impact on profitability as measured by ROA and ROE. This study has practical implications for both the business sector and regulators concerning corporate governance in Mexico. |
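As a worked illustration of the flexible-index analysis described above, a linear panel specification of the following form would capture the reported relationship; the control variables and notation are assumptions for illustration, not the authors' exact model:

```latex
% Illustrative specification (assumed): profitability of firm i in year t
% regressed on a flexible governance index (average compliance) plus controls.
\begin{equation}
  \mathrm{ROA}_{it} = \beta_0
    + \beta_1\,\mathrm{CGI}^{\mathrm{flex}}_{it}
    + \beta_2\,\mathrm{Size}_{it}
    + \beta_3\,\mathrm{Leverage}_{it}
    + \varepsilon_{it}
\end{equation}
```

Under this reading, the reported positive impact on profitability corresponds to a positive and significant coefficient on the governance index, with the same form repeated for ROE and for the strict (all-or-nothing) index.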
11:30 | Ethical and Technological Perspectives on Risk Distribution in Cryptocurrencies PRESENTER: Oliver René Arroyo Leos ABSTRACT. The emergence of cryptocurrencies in financial markets has transformed traditional investment paradigms, introducing new ways to evaluate, assume, and distribute financial risk. Unlike traditional assets, cryptocurrencies exhibit extreme volatility due to factors such as lack of regulation, market manipulation, and high speculation. From a technological perspective, the analysis is based on the application of machine learning and artificial intelligence algorithms to identify patterns in market behavior. At the same time, it explores the use of Blockchain as an underlying technology to offer transparency and decentralization, which helps mitigate fraud and manipulation risks. However, challenges persist regarding cybersecurity, transaction anonymity, and network scalability. From an ethical standpoint, automation through trading bots and high-frequency algorithms can create information asymmetries, benefiting large investors at the expense of smaller market participants. Additionally, opacity in risk assessment and the lack of regulatory oversight can facilitate the spread of speculative bubbles and financial fraud. Furthermore, the social impact of cryptocurrencies on financial inclusion is examined. While these digital assets provide access to financial markets without intermediaries, the lack of financial education and the complexity of these instruments can expose vulnerable investors to significant losses. Strategies are discussed to promote ethical and responsible investment based on transparency, fairness, and risk mitigation. This research presents a correlational model to analyze risk distribution in cryptocurrencies, utilizing statistical and econometric tools to predict price fluctuations and generate more informed investment expectations. By analyzing historical data and interest rate projections, the study proposes a model that correlates investor expectations with the occurrence of high-volatility events in cryptocurrency markets. The implications of this model for financial risk management are analyzed, and recommendations are provided to strengthen the stability of digital markets through a more robust regulatory framework and the integration of ethical standards in automated decision-making. Additionally, the possibility of incorporating mechanisms for transaction traceability is considered, allowing governments to monitor processes and identify potential illicit activities or fraud, both for users and in terms of public revenue losses (tax evasion). In conclusion, the study argues that balancing technological innovation with ethical principles is essential to ensure the sustainability of the cryptocurrency market. As technology advances, it is imperative to establish mechanisms that protect investors and promote transparency. The intersection of ethics and technology in cryptocurrency risk distribution represents an ongoing challenge that requires collaboration among regulators, technology developers, and market participants. |
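The correlational model in the preceding abstract is described only at a high level; one way to operationalise it, offered here purely as a sketch under stated assumptions (the data file, the expectations proxy, the 30-day window, and the 95th-percentile event threshold are all invented for illustration), is:

```python
# Illustrative sketch: flag high-volatility days in a crypto price series and
# correlate them with a proxy for investor expectations. The CSV columns,
# the 30-day rolling window, and the 95th-percentile threshold are assumptions.
import pandas as pd

df = pd.read_csv("btc_daily.csv", parse_dates=["date"])  # hypothetical file with
# columns: date, close, expectation_index (e.g., a survey- or rate-based proxy)

df["ret"] = df["close"].pct_change()
df["vol_30d"] = df["ret"].rolling(30).std()

# A "high-volatility event" here is a day whose rolling volatility exceeds the
# 95th percentile of the whole sample (an assumed operationalisation).
threshold = df["vol_30d"].quantile(0.95)
df["high_vol_event"] = (df["vol_30d"] > threshold).astype(int)

# Correlation between the expectations proxy and the occurrence of
# high-volatility events (a point-biserial style check).
corr = df["expectation_index"].corr(df["high_vol_event"])
print(f"Correlation between expectations proxy and high-vol events: {corr:.3f}")
```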
12:00 | Sovereignty, Surveillance, and the Cloud: Geopolitical and Ethical Issues of Global Cloud Computing PRESENTER: Mario Arias-Oliva ABSTRACT. This review examines the growing geopolitical and ethical complexities surrounding cloud computing, driven by its increasing adoption across sectors. It offers a comprehensive analysis of how national policies, data sovereignty concerns, and ethical considerations shape global cloud governance. The study analyzes the approaches adopted by key regions, including the United States, Europe, the Middle East, and China, highlighting their strategies for balancing innovation, security, and ethical responsibility. A second line of analysis focuses on sector-specific challenges: government, banking, telecommunications, healthcare, and military operations are reviewed as key sectors that underscore the critical need for international cooperation and well-defined regulatory frameworks. The paper contributes an interdisciplinary analysis, a comparative study of regional approaches, and insights for various industries. It identifies future research directions, advocating for a balanced approach that prioritizes, among other dimensions, security, privacy, and ethics to ensure the sustainable and responsible development of cloud computing. |
12:30 | The Mediating Effect of Job Crafting in the Relationship Between Organizational Commitment and Organizational Citizenship Behavior and Its Ethical Implications PRESENTER: Juan Carlos Yáñez Luna ABSTRACT. The present study analyzes the mediating role of job crafting in the relationship between organizational commitment and organizational citizenship behavior in hotels. The data analysis was conducted on a sample of 137 hotel employees from 10 different hotels, and PLS-SEM was used to run the tests that corroborate our hypotheses. Findings confirmed the mediating effect of job crafting between organizational commitment and organizational citizenship behavior, as well as the relationships of organizational commitment with both organizational citizenship behavior and job crafting. This study contributes to understanding how these variables interact with each other and their effects in the hospitality industry, as well as providing evidence to comprehend ethical behavior in technology-mediated environments, aligning with international frameworks on responsible business conduct. |
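For readers less familiar with mediation testing, the hypothesised structure in the preceding abstract corresponds to the standard indirect-effect decomposition sketched below; the path labels are generic textbook notation, not the paper's own (OC = organizational commitment, JC = job crafting, OCB = organizational citizenship behavior):

```latex
% Generic mediation model (illustrative notation, not the authors' own):
\begin{align}
  \mathrm{JC}_i  &= a\,\mathrm{OC}_i + e_{1i} \\
  \mathrm{OCB}_i &= c'\,\mathrm{OC}_i + b\,\mathrm{JC}_i + e_{2i}
\end{align}
% The indirect (mediated) effect is a*b; the total effect is c' + a*b.
```

In PLS-SEM, the significance of the indirect effect a*b is typically assessed by bootstrapping the product of the path coefficients.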
11:00 | Why Hemisphere Theory Matters for How We Think About Ethics and AI in Medicine ABSTRACT. The rise of artificial intelligence (AI) in medicine risks reinforcing a cognitive paradigm that prioritises efficiency and protocol adherence over clinical intuition and patient-centred care. Drawing on Iain McGilchrist’s model of hemispheric cognition, this paper argues that AI systems, such as scribes, represent an augmented left-hemispheric orientation — favouring abstraction, categorisation, and rule-based processing at the expense of the right hemisphere’s holistic, context-sensitive, and relational modes of engagement. The integration of AI scribes into clinical settings marks a transformation in healthcare, promising to alleviate administrative burdens and thereby enable practitioners to focus on patients. However, these technologies carry ethical, epistemological, and cognitive implications that extend beyond concerns of privacy, security, and algorithmic bias. This paper explores how the dominance of left-hemispheric cognition, augmented by AI integration, jeopardises the moral and relational dimensions of clinical care — areas where the right hemisphere's contributions to empathy, moral reasoning, and context-sensitive judgment are vital. Framed within Lewis Mumford’s concept of the “Machine,” these technologies are not neutral but actively reinforce a paradigm of efficiency, standardisation, and bureaucratic control, undermining the art in the practice of medicine. |
11:30 | Artificial Intelligence Integration in Everyday Life – Utopian Friend or Dystopian Foe? PRESENTER: Minna Rantanen ABSTRACT. Information Systems (ISs) like Artificial Intelligence (AI) have become increasingly pervasive in human lives. We use AI tools such as Alexa or Siri for daily tasks, view content recommended by an algorithm, and use healthcare apps to assess symptoms before obtaining expert human care. We use AI tools to expand bullet points and summarise large masses of text, curate our ideas and make sense of those of others. Such systems have become an integral part of many human processes as they “penetrate human life, experiences, products, business processes and civil society” [1 p. 47]. This degree of AI-human intertwining has been noted by scholars and practitioners alike – both communities are focused on debating the pros and cons of this phenomenon, including the definition of what progress and the good life should look like for everyone. Practitioners often applaud the efficiency of using AI for reasons such as improved business operations and garnering consumers’ attention. For example, IAB [2] recently surveyed advertising professionals in Europe about their thoughts on the value AI brought to their business and found that an overwhelming majority reported positive effects such as gaining operational efficiencies (78%) and competitive advantage (60%). Similarly, Silo AI [3, 4] reported that over 77% of Nordic organisations actively use Generative AI, while only about 2% reported using no AI technology. The same poll also found that over 70% of participating organisations prioritised AI development for the coming year. Businesses’ prioritisation of AI in their operations seems supported by consumers’ primarily positive response to its benefits. For instance, Frank et al. [5] determined that consumer demand for AI-assisted products stemmed from their acknowledgement of its utilitarian, hedonic, and symbolic value, contingent on factors such as national culture and individual self-construal. In another study, Chakraborty et al. [6] determined that consumers’ willingness to adopt AI-assisted experiences in the beauty and cosmetics industry was moderated by the possibility of a virtual trial experience and significantly affected by a cascading effect of variables such as AI compatibility, perceived performance expectancy and perceived ease of use. Meanwhile, a body of research has highlighted the negative side of the pervasive use of AI and emphasised its ability to promulgate a surveillance culture [7–9], bias and discrimination [10–12] and impede freedom and human autonomy [13]. For example, Stahl and Eke [14] studied the ethical impacts of ChatGPT to assess the benefits and risks that the tool can surface and identified eleven benefits (e.g., human identity, labour market, sustainability and health) and thirty ethical concerns (e.g., psychological harm, harms to society, environmental harm, and autonomy). Such studies indicate that the ethics of using AI to add convenience to business and consumer activities is a broad topic with overlapping positives and negatives that scholars are debating. However, even scholars may be subject to the ethical issues stemming from using AI. In a recent scoping review, Bouhouita-Guermech et al. [15] highlighted that the evolution of AI for supporting research activities has spawned specific ethical challenges, including the lack of AI-specific standards and governance structures to support research ethics. 
Therefore, it seems evident that the pervasiveness of AI in human lives is a topic that needs more research to be properly understood. However, how is the pervasiveness of AI perceived by individuals? Does an individual stop to think about how and when they should use AI, how AI providers use their data, and what that means for their privacy and autonomy – not to mention the ethical justifications of their perceptions? According to consumer statistics, the answer is ambiguous. To exemplify, a poll among United States consumers [16] reported that most respondents (88% of Gen Z and 77% of millennials) expected AI to improve their online shopping experiences. In another survey, about 47% of Japanese consumers expressed that Gen AI could be optimally used to enhance consumer services [17]. However, another poll, among Spaniards [18], found 74% of respondents agreeing that AI was evolving at a rate faster than society’s ability to assimilate it, and 44% distrusting people’s ethical use of this technology. Thus, there is no clear answer to how such pervasive AI use is perceived, or about the ethics of the pervasiveness and its expected impacts. This study aims to fill this gap and increase understanding of individuals’ perceptions of the pervasiveness of AI systems. We take a critical approach and use focus groups as a method to map how people see the role of AI systems in different dimensions of their lifeworld and their attitudes towards those roles. The focus group has been shown to be suitable for critical IS research [19], which makes it a viable approach for developing a profound understanding of people’s perceptions of pervasive AI systems and their ethical implications. We follow the definition of Powell and Single [20 p. 499], according to whom a focus group is “a group of individuals selected and assembled by researchers to discuss and comment on, from personal experience, the topic that is the subject of the research.” It is particularly suitable for complex research topics that benefit from additional validation and sensemaking with stakeholders [20], which is also the case in the present study. To gain both a general picture of perceptions and in-depth knowledge of their grounds, this study includes five focus groups covering different areas of life where AI has gained increasing attention: academic use of AI (knowledge creation), private sector use of AI (organisational context), and individual use of AI in everyday life. Each focus group consists of 5-7 people from different backgrounds to achieve varied stakeholder perspectives. To ensure the analysis is ethically solid, we use Habermas’s lifeworld/system model [21–23] to conceptualise the ethics of the pervasiveness of AI systems in human life. In this model, the lifeworld is a sphere where individuals share a common reality, encounter each other and communicate in everyday life. In the lifeworld, communication (ideally) leads to mutual understanding and consensus-seeking discourse with others, i.e., communicative action. In contrast, systems refer to economic, political, administrative and technological systems driven by rationalities that serve the needs and aims of those institutional systems rather than specific individuals. Problems arise when systemic powers colonise the lifeworld and implement the rationalities of the systems into lifeworlds. 
For example, the institutional logics and value systems embedded in AI algorithms are not those of their users but of systemic powers, which define, e.g., the rules and infrastructure of the digital platforms we communicate on. While systemic powers can also cultivate the essence of our lifeworlds, taking systemic rationalities as given risks making them the driving force of our lifeworld and thus colonising it. For instance, mobile phones and the systems behind them make it possible to connect people even when they live in different locations. However, when the use of mobile phones is driven by the interests of businesses that try to make us spend as much time as possible staring at our phones, depriving us of connection with others, we witness the colonisation of our lifeworld by technology. Likewise, AI is loaded with big expectations and promises. However, the outcome can be a situation where AI colonises our lifeworlds and, hence, the rationalities behind AI become integrated into our lives [see 1]. Habermas’s model offers an illustrative approach to understanding the pervasiveness of AI systems, as it describes the rationalities, communication and coordination between lifeworlds and systemic powers. It offers a moral-philosophically sound foundation for a focus group study because it views discourse as an essential building block of ethical action. According to Habermas, ethical justifications are achieved through proceduralist, deliberative and communicative action that aims at mutual understanding rather than mere strategic goals [23–25]. The strength of this approach is discourse ethics, which does not take the rigid standpoint characteristic of the normative ethical theories (deontology, consequentialism and virtue ethics). Instead, it accommodates different ethical viewpoints in discourse, which serves as an arena for ethical justifications and perceptions. It thereby provides a mechanism that allows consideration of different moral views and intuitions, and it offers various tools for in-depth analyses of communication and discourse, making both past and current communication problems visible [26]. This approach has been used by, e.g., Rehg [27], who developed a framework for moral inquiry and reflection on problematic cyberpractices, and Yetim [28], who provided a metacommunication model for analysing communication (see also [29], [30]). Overall, Habermas’s critical social theory, which we use here, has been one of the factors contributing to paradigm change in IS research [see 31–33]. The focus groups used in this study bring together people with different perspectives, which ensures comprehensive stakeholder inclusion – a prerequisite for communicative action and discourse ethics [34]. This study thus contributes to increasing our understanding of the ethics of pervasive digital systems and of how people perceive that pervasiveness in real life. This combination is rich in theoretical and empirical insights that can inform the design and governance of digital systems so that they bring ethical value to organisations and individuals.
12:00 | How AI May Affect Our Thinking ABSTRACT. Currently, the risks of AI are discussed intensively. This discussion focuses mainly on technical aspects, or on internal technical processes that may give AI unforeseen superpowers. The fears are that AI may become more intelligent than humans, emerge as an independent agent with its own agenda, force us into something we do not want to be, lead evolution in a radically different direction, and possibly transform the whole universe (see, for example: Bostrom, 2014; Future of Life Institute, 2023; Harari, 2015; Kurzweil, 2006; Tegmark, 2017). Although we cannot rule out these risks, if we want to do something to eliminate them we need to look beyond exclusively technical features to other conditions that may play an important role in causing these concerns about a very powerful AI. Let us first see what we expect from AI, and what AI actually does. AI, as it is designed and used today, provides answers to problems and delivers products and services. All our efforts to develop AI tools and systems aim to support its ability to solve problems we humans have and to satisfy our needs. All interest around AI is concentrated on the achievement of goals, the solution of problems, and the satisfaction of needs. This is what we expect AI to deliver, and we design it accordingly. If we now turn our focus to ourselves and try to see how we manage to handle our problems, we can identify a certain tool we have for this purpose. It seems that thinking is the biological tool for achieving goals, solving problems, and satisfying needs (Platon, 1986). However, it is not a tool we particularly like to use. It is not easy for us to reason in a rational way, and it is even less easy to control the process of thinking, even though we are conscious of the high value of right thinking. As persons, we need both the skill to run a rational thinking process and the emotional strength to take responsibility for our own decisions. But if we try to reason in a rational way, we need to put great effort into it, concentrate our focus, and spend a lot of energy. We also have to postpone making decisions and proceeding to action until we can come to a conclusion and find a suitable answer to the problem at hand. On top of these difficulties comes the feeling of responsibility, and the anxiety that comes with it, when one knows that if something goes wrong there is no one else to blame but oneself. Suppose now that, despite all these difficulties and costs in energy and time, we still make the effort to think rationally in order to find the answer to the problem in front of us. Will we achieve a satisfying result? The answer is, unfortunately, “maybe”. So, the optimal solution cannot be guaranteed through the adoption of a rational, philosophical or scientific approach – not in the real world, as opposed to an ideal or metaphysical one (see, for example, the discussion on the application of the categorical imperative in real life; Kant, 2006). The fact is that, in the real world, non-rational ways of handling problems are very often successful. This has been studied empirically in the psychology of human cognition under the heading of heuristics (Kahneman, Slovic & Tversky, 1982; Sunstein, 2005). In philosophy we have the Aristotelian habits, which also function very well (Aristoteles, 1975). Why, then, should we adopt the rational way of handling our problems, given all the difficulties and the fact that we may still come to a wrong conclusion and action?
Especially when we also know that less autonomous ways of thinking have a good chance of giving us a satisfying solution? So, most of the time we do not adopt rational thinking, and AI can strengthen this tendency. We have already seen the effects of technology on our thinking. For example, in the early days of the Internet, many of us were enthusiastic about its potential to give us access to all kinds of information, especially information contradicting our beliefs and certainties. We seriously expected it to have a healthy impact on thinking. Unfortunately, that proved to be wrong. Instead of promoting doubt and dialogue, it allowed people to seek out and connect with other like-minded people, not in order to examine and falsify their beliefs, but to confirm what they are already convinced is true. Another aspect worth considering is that algorithms, e.g. in social media, suggest links that are supposed to be in accordance with what the user likes. The design and use of AI follow the same path. We create it to give us answers, not to ask questions that dispute our wishes or beliefs. We expect it to give us the products or services we need. We want AI as a tool for the satisfaction of our needs, just as our thinking is, only thinking is a biological one. The key point here is that advanced AI is, or will be, an immensely better tool. Will it then replace thinking? There is a great risk that it will, unless we change the focus in the design and use of AI: from the answers, products or services provided, to the process leading to these results. An AI designed to help us use our own thinking to find the answers and produce the things we need is something different from an AI that delivers the answers and the products directly to us. We can imagine AI as a partner in an ongoing dialogue (Kavathatzopoulos, 2024). First and foremost, we have to be highly conscious of the difficulty of such a plan. For example, we have already seen how the Internet is used, and the same may happen with AI. But if we want to use AI as a support tool for our thinking, to help us run the process of thinking in the right way, we first have to ask whether this is possible at all. Are we able to accept, or at least tolerate, something that disturbs our state of mind and makes us doubt our convictions? Especially when we know very well that this same tool can deliver to us the best possible answers to our problems and give us satisfaction, harmony and happiness? This seems to be a very difficult plan to design and apply. However, a compromise might give it a better chance: a compromise that is in accordance with the real-life use of thinking. We are never perfect philosophers, scientists or democrats. We cut corners and we use heuristics, but still we manage somehow to reach our goals and satisfy our needs. Then, what about an AI that, when there is no better alternative because of the high demands of the problem at hand, supports us in thinking in a rational way? And an AI that, most of the time, supports us in finding and using the best possible heuristics and habits?
References
Aristoteles (1975). Ethika Nikomacheia [Nicomachean ethics]. Papyros.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Future of Life Institute. (2023). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Harari, Y. N. (2015). Homo Deus. Natur & Kultur.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
Kant, I. (2006). Grundläggning av sedernas metafysik [Foundations of the metaphysics of morals]. Daidalos.
Kavathatzopoulos, I. (2024). Artificial intelligence and the sustainability of thinking: How AI may destroy us, or help us. In T. T. Lennerfors & K. Murata (Eds.), Ethics and Sustainability in Digital Cultures (pp. 19–30). Routledge.
Kurzweil, R. (2006). The singularity is near: When humans transcend biology. Penguin Books.
Platon (1986). Protagoras [Protagoras]. Zacharopoulos.
Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531–542. https://doi.org/10.1017/S0140525X05000099
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf Publishing Group.
12:30 | How AI is Reshaping Creativity: DeepSeek vs ChatGPT Plus in LEGO® Serious Play® PRESENTER: Inês Almeida ABSTRACT. This paper investigates how generative artificial intelligence (AI) models – specifically ChatGPT Plus (with and without the Projects feature) and DeepSeek – can support the design of LEGO® SERIOUS PLAY® (LSP) session plans. Through a systematic reflexive content analysis methodology, we evaluate the originality, structure, and methodological fidelity of AI-generated LSP session plans across two distinct thematic areas: Creative Leadership and Building Team Trust. Our multi-layered evaluation framework includes content analysis, session structure mapping, and a temporal triangulation protocol that compensates for the limitations of single-researcher evaluation. Findings reveal that ChatGPT Plus with Projects exhibits superior consistency and alignment with LSP principles (scoring 13.7/15 on our composite index), while DeepSeek demonstrates strengths in information synthesis but limited session structuring (8.9/15). Comparative analysis of AI-generated facilitator prompts reveals significant qualitative differences, with ChatGPT Plus with Projects producing questions with greater metaphorical depth and nuance, creating space for participants to explore complex dimensions of the themes. The paper contributes to the fields of creativity, participatory facilitation, and AI by providing an evidence-based framework for assessing AI support in creative methodologies, and it offers practical guidelines for facilitators seeking to leverage AI as a collaborative partner in session design. Our prompt engineering protocol, detailed in this paper, provides a replicable template for facilitators and researchers exploring AI–human collaboration in experiential learning design.
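[Editor's illustrative note: the abstract reports a 15-point composite index alongside three evaluated qualities (originality, structure, methodological fidelity) and a temporal triangulation protocol, but does not spell out the computation. The Python sketch below shows one hypothetical way such a composite could be derived, assuming three 5-point dimensions averaged over repeated rating rounds; all names, scores, and the aggregation rule are assumptions, not details taken from the paper.]

    # Hypothetical sketch: composite index for one AI-generated session plan.
    # Assumes three 5-point dimensions and repeated rating rounds whose scores
    # are averaged (a stand-in for the paper's temporal triangulation step).
    from statistics import mean

    # One dict per rating round; values are illustrative 1-5 scores.
    rounds = [
        {"originality": 5, "structure": 4, "fidelity": 5},    # round 1
        {"originality": 4, "structure": 5, "fidelity": 4.5},  # round 2, weeks later
    ]

    def composite_index(rounds):
        """Average each dimension across rounds, then sum to a 0-15 composite."""
        dims = rounds[0].keys()
        per_dim = {d: mean(r[d] for r in rounds) for d in dims}
        return per_dim, sum(per_dim.values())

    per_dim, total = composite_index(rounds)
    print(per_dim)            # {'originality': 4.5, 'structure': 4.5, 'fidelity': 4.75}
    print(f"{total:.1f}/15")  # 13.8/15

Under these assumptions, averaging across rounds before summing smooths out single-rater variability between evaluation occasions, which is one plausible reading of how a single-researcher design could still yield a stable composite score.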
Panel Description: As artificial intelligence becomes increasingly embedded in decision-making, infrastructure, and daily life, urgent questions arise about its long-term ethical and social impact. The rapid deployment of AI technologies, often outpacing legal and moral frameworks, raises critical concerns. The panel will discuss the implications of key technologies and sectors from an interdisciplinary perspective.