09:00 | Digital Identity and Control: How AI Replicas Challenge Performance Rights PRESENTER: Ahmed Qayyum ABSTRACT. This paper explores the ethical challenges posed by the use of digital replicas in the entertainment industry, particularly in light of recent labor movements and rapid AI integration into production pipelines. Focusing on the reproduction of performer likenesses through artificial intelligence, the paper analyzes how these technologies challenge concepts of autonomy, informed consent, and artistic authenticity. Using recent examples from film and television, along with theoretical analysis grounded in deontological and utilitarian ethics, we examine how synthetic performances disrupt traditional models of compensation and control. The digital resurrection of deceased actors, the creation of synthetic performers, and the indefinite reuse of captured performances all raise unresolved legal and moral dilemmas. Drawing from the 2023 SAG-AFTRA labor negotiations and recent policy debates, the paper proposes a series of safeguards including continuous consent mechanisms, updated publicity rights laws, and greater industry transparency. We argue that responsible AI integration in entertainment must center human dignity and creative agency over technological expediency. |
09:30 | Symbolic Aspects of Online Privacy Protection Behaviour: from a Social Communication Perspective PRESENTER: Yasunori Fukuta ABSTRACT. This study provides a conceptual review of the symbolic dimensions of privacy protection behaviours (PPBs) in social media communication, drawing on interactionist frameworks, particularly those of Blumer and Goffman. It also investigates their core characteristics through evidence gathered from semi-structured interviews and a questionnaire survey. Our findings show that the symbolic framing of explicit PPBs is linked to the emergence of potentially undesirable self-impressions, such as appearing overly preoccupied with privacy. This, in turn, may discourage individuals from adopting such behaviours. These findings contribute to a better understanding of the symbolic barriers to privacy-related behaviours in mediated social contexts. |
10:00 | Ethical Issues in the Use of Generative AI Chatbots for Therapeutic Purposes ABSTRACT. This paper discusses the pros and cons of the digital transformation of mental health, particularly in the therapeutic domain; against this setting, the technical underpinnings of generative AI chatbots are outlined alongside a discussion of the broader ethical and epistemic challenges related to the integration of LLMs in the medical domain—a domain that is more advanced in experimenting with and deploying LLMs than the field of mental health. While focused on mental health, the paper draws on these broader discussions to highlight shared concerns, complemented by insights specific to mental health. Against this backdrop, the paper analyzes AI-based therapy and generative AI companion chatbots, assessing their potential benefits and associated risks, including considerations of relevant European legal frameworks. The paper then argues that even if we succeed in escaping the challenges at the intersection of ethics, episteme, and technology, generative AI chatbots are ill-suited for therapeutic purposes. Playing with the idea of replacing human relationships with "stochastic parrots" risks reducing therapy to algorithmic risk stratification. Such a shift threatens to diminish the potential for cultivating authentic, trust-based therapeutic relationships. |
09:00 | Cybersecurity 2030: the Synergy Between Machine Learning and Generative Artificial Intelligence PRESENTER: Mario Marques da Silva ABSTRACT. The rapid evolution of technology, characterized by the 4th Industrial Revolution, has reshaped the cybersecurity landscape. This paper explores the implementation of Generative AI in the context of cybersecurity, highlighting its applications in data analysis procedures and automatic control systems. By integrating machine learning (ML) for real-time detection and generative AI for simulating advanced attack scenarios, we can detect cyberattacks at an early stage and minimize the impact on systems' functioning. We conclude by discussing the implications for the future of cybersecurity and the anticipated dominance of AI-driven solutions by 2030. |
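The ML half of this synergy can be made concrete with a toy sketch. The fragment below, which assumes scikit-learn and invents all traffic features, trains an anomaly detector on simulated "normal" traffic and flags generated attack bursts; it illustrates the kind of early-stage detection the abstract describes, not the authors' actual system.

# Minimal sketch of ML-based early attack detection; the features, scales,
# and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated normal traffic: columns = packets/s, mean payload size (bytes)
normal = rng.normal(loc=[500, 300], scale=[50, 30], size=(1000, 2))
# Simulated attack bursts, standing in for generative-AI attack scenarios
attack = rng.normal(loc=[5000, 40], scale=[500, 10], size=(20, 2))

model = IsolationForest(contamination=0.02, random_state=0).fit(normal)
labels = model.predict(np.vstack([normal[:5], attack[:5]]))
print(labels)  # 1 = normal, -1 = anomaly; attack rows should score -1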
09:30 | Comprehensive Approaches to Personal Data Protection amid Evolving Cyber Threats PRESENTER: Sabina Szymoniak ABSTRACT. The protection of personal data is more important than ever in today's online security environment, given the rise in cyber threats and the extensive exploitation of user data. Organizations increasingly face intelligent cyberattacks, poor data management, and the growing application of Big Data-based profiling methods. This study offers a comprehensive overview of the development of personal data protection, covering legislation, technology, and organizational practices for reducing security risks. The study is a valuable addition because it takes a holistic approach, integrating regulations like the GDPR and NIS2 Directive with advanced security controls, including encryption, anonymization, and artificial intelligence-based threat detection. In addition, this paper covers the emerging technologies of blockchain and quantum computing and how they may affect data protection strategy in the future. Through an in-depth literature review and real-world case studies, the study identifies critical gaps in current data protection strategies and offers a blueprint for enhancing security controls. Our findings highlight the need for a comprehensive strategy that combines adherence to the law, strong security, and user education to protect personal data effectively in a time of accelerated technological change. |
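Two of the controls named here can be grounded in a short fragment. The sketch below shows salted-hash pseudonymization with Python's standard library and symmetric encryption via the third-party cryptography package; it is an illustrative fragment under those assumptions, not the paper's blueprint, and key management is deliberately out of scope.

# Pseudonymization (stdlib) and symmetric encryption (assumes the
# third-party 'cryptography' package); all values are placeholders.
import hashlib, os
from cryptography.fernet import Fernet

def pseudonymize(identifier: str, salt: bytes) -> str:
    # One-way, salted hash: a stable pseudonym without storing the raw ID
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

salt = os.urandom(16)
print(pseudonymize("user@example.com", salt))

key = Fernet.generate_key()   # in practice, kept in a KMS or vault
box = Fernet(key)
token = box.encrypt(b"date_of_birth=1990-01-01")
assert box.decrypt(token) == b"date_of_birth=1990-01-01"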
10:00 | Enhancing Health Information Access via ChatGPT and the E-Citizens Portal PRESENTER: Natalija Zeko ABSTRACT. With the development of technology and digital systems, everyday life is undergoing significant changes. As one of the most significant technological advances of today, dating back to the mid-20th century, artificial intelligence (AI) is now used in various industries, especially healthcare. Digitalization enables faster and more efficient access to information, and the advancement of artificial intelligence brings new opportunities for improving healthcare services. This paper deals with the application of artificial intelligence, especially ChatGPT, in the healthcare sector through the e-Doktor web application. The theoretical part provides insight into the basic concepts of artificial intelligence and its history, types, and applications, with an emphasis on the healthcare sector. The paper presents diagrams to illustrate the structure of the system, which includes business processes, data models, and user activities. The practical part of the paper focuses on the development of a prototype of the e-Doktor web application that uses ChatGPT. ChatGPT serves as a virtual assistant that every user of the system can contact with questions. The aim of this paper is to demonstrate the integration of ChatGPT with the e-Citizens portal through the development of a prototype of the e-Doktor web application. The application shows how artificial intelligence can improve access to health information and facilitate communication, providing functionalities such as reviewing medical data, asking questions of a virtual assistant, and generating personalized diet plans. The main goal of the application is to reduce the administrative burden on healthcare workers and improve the quality of healthcare services. The outcome of the research is a functional prototype of the e-Doktor web application for improving access to health information. |
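The abstract does not specify how the prototype invokes ChatGPT; as a hypothetical sketch, a backend function like the one below, using the official openai Python client with a placeholder model name and an invented system prompt, would forward a user's question to the assistant.

# Hypothetical e-Doktor assistant call; assumes the 'openai' client and an
# OPENAI_API_KEY in the environment. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def ask_assistant(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "You are a health-information assistant. "
                        "Provide general information only, never a diagnosis."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_assistant("What do the abbreviations in my blood test mean?"))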
11:00 | Implementing AI into Engineering Technology and Computer Science Courses PRESENTER: Richard Cozzens ABSTRACT. My framework for curriculum development has been guided by Butcher and Wilson-Strydom's (2012) A Guide to Quality in Online Learning. My scoring rubric and pedagogic theory use the Quality Matters (QM) and Roblyer rubrics. These concepts have been applied and assessed using Double Loop Learning (Batista, 2006). This paper first reviews the history of these frameworks and ideas as applied to face-to-face, hybrid, and distance concurrent enrollment courses for rural high schools and for students in Wuhan, China. After demonstrating years of applying these concepts, the paper introduces how AI is now being applied and evaluated in Engineering Technology by Professor Cozzens and in the Computer Science curriculum by Professor Chang at Southern Utah University (SUU). In 2008, I presented a paper at ETHICOMP titled "Feasibility of Web-Based Training." The paper was based on research I conducted while publishing the CATIA V5 Workbook (Cozzens, 2000) and developing a web-based training site, CATIAV5Workbook.com, in 2003. The research demonstrated the feasibility of web-based CAD training, provided it incorporated the right components. In 2010, I presented a follow-up paper at ETHICOMP titled "Quality Web-Based CAD Training," which explored the history and continued relevance of web-based CAD training. This research affirmed that while web-based training could not entirely replace face-to-face instruction, advanced web-based curriculum and technology had established themselves as viable substitutes for traditional methods. In 2011, my ETHICOMP paper "Social Media: An Effective Web-Based CAD Training Tool" highlighted how platforms like YouTube transformed CAD training. My research concluded that technology serves as a tool to enhance the dissemination of knowledge and improve the teaching process, but it should not be the driving force of change; instead, educators should focus on robust curriculum design and effective pedagogy. In 2015, SUU developed a two-plus-two program with Wuhan Polytechnic University (WPU) in Wuhan, China. This provided an opportunity to put my curriculum development and delivery to the test: teaching courses in a foreign country, a new environment, and a new culture. Even though the class was taught in English, one-third of the 200 students did not have a functional knowledge of English. WPU supplied an in-class translator to help clarify the information, but the translator slowed the class tempo and provided very little feedback to me, the instructor. Many new problems challenged the pedagogy and technology initially developed for SUU students. The details of these challenges, and how Double Loop Learning was used to help minimize them, are covered in this paper. Revisions and additions had to be made to make the curriculum effective, such as developing bilingual eBooks and videos as student reference resources.
In 2016, I published a paper titled "Evaluating the Effectiveness of Concurrent Web-Based Engineering and Technology Curriculum for Rural High Schools, a New and Different Challenge," presented at the Sustainable Ecological Engineering Design for Society (SEEDS) conference in Leeds, England. I continued refining the blended learning model I had applied early in my career. This approach incorporated online lectures, formative and process assessments, and flexible scheduling, allowing students to progress independently: advanced learners can complete coursework quickly, while struggling students receive tailored support through targeted instruction and customized interventions. Blended learning also allows a face-to-face lecture to be supported by hybrid and online components in the curriculum, letting the instructor present using various delivery methods, including the flipped classroom, and providing variability that enhances engagement. This is where I also incorporated the Roblyer rubric for assessing interactive qualities in distance (online) courses, which I modified to fit hybrid and blended learning formats. The challenges of teaching WPU students in Wuhan and distant concurrent enrollment students provided opportunities to use Double Loop Learning to improve the curriculum, apply new effective technology, and assess its effectiveness. These curriculum and assessment technology improvements proved beneficial in 2019, when all teaching and learning were forced to go online: the COVID-19 pandemic hit, and we were forced to teach and learn remotely using Zoom. All the previous work helped me prepare for this experience; the transition was seamless and painless. There were still many lessons to be learned and documented, especially with the course taught to my 200 students in Wuhan, China. This multicultural course introduced new and more complex challenges in developing, delivering, and assessing quality curricula. The lessons learned from that experience contributed to a 2021 publication at the International Sustainable Ecological Engineering Design for Society (SEEDS) conference in Leeds, England, titled "Improving the Learning Outcomes for Chinese ESL Students," which won the award for Best in Education and Training. In 2024, I combined past lessons learned with Professor Chang's approach to implementing AI in her Computer Science courses. With AI's aid, I solved many of the challenges discussed in the previous publications, streamlining many in-class processes, making the class much more efficient, and improving the timing and quality of feedback for students. Professor Chang (Computer Science) and I (Engineering Technology) are working together to implement AI, particularly ChatGPT, in our classrooms, across every aspect of the curriculum: development, delivery, assessment, and feedback. Our initial results in applying scoring rubrics for grading and providing specific feedback show a significant reduction in grading time and increased consistency in scoring. These tools also provide students with personalized feedback, fostering a more engaging and practical learning experience. In this paper, we present our initial strategy for implementing AI into our curriculum and the lessons learned from the feedback provided by Double Loop Learning.
We will also discuss the ethical issues encountered on this journey, since AI and its application in education can be a polarizing topic. This paper demonstrates how traditional frameworks, assessments, and pedagogical ideas can be improved by applying AI tools with innovative pedagogical methods to improve student learning outcomes, engagement, and knowledge retention. References Cozzens, R. (2008). "Effectiveness of Web-Based Training." Proceedings of ETHICOMP 2008, Mantua, Italy. Cozzens, R. (2010). "Quality Web-Based CAD Training." Proceedings of ETHICOMP 2010, Tarragona, Spain. Cozzens, R. (2011). "Social Media: An Effective Web-Based CAD Training Tool." Proceedings of ETHICOMP 2011, Sheffield, England, UK. Cozzens, R. (2015). "Evaluating the Effectiveness of Concurrent Web-Based Engineering and Technology Curriculum for Rural High Schools." International Sustainable Ecological Engineering Design for Society (SEEDS). Cozzens, R. (2015). "Web-Based STEM Curriculum for Rural High Schools." American Society for Engineering Education (ASEE), Seattle, WA. Cozzens, R. (2021). "Improving the Learning Outcomes for Chinese ESL Students." International Sustainable Ecological Engineering Design for Society (SEEDS), Leeds, England. Batista, E. (2006). "Double-Loop Learning and Executive Coaching." http://www.edbatista.com/2006/12/doubleloop_lear.html, accessed 01.23.2011. Argyris, C. "Theories of Action, Double-Loop Learning and Organizational Learning." http://infed.org/thinkers/argyris.htm, accessed 01.23.2011. Butcher, N., and Wilson-Strydom, M. (2012). A Guide to Quality in Online Learning. Academic Partnerships. Quality Matters Rubric. https://www.qualitymatters.org/rubric, accessed February 21, 2015. Garrison, D. (2007). "Online Community of Inquiry Review: Social, Cognitive, and Teaching Presence Issues." Journal of Asynchronous Learning Networks, 11(1), 61-72. Roblyer, M.D., and Wiencke, W.R. (2010). "Design and Use of a Rubric to Assess and Encourage Interactive Qualities in Distance Courses." American Journal of Distance Education, 12(2), 77-98. |
11:30 | Gamer's Dilemma: Intentions Matter PRESENTER: Kai Kimppa ABSTRACT. The Gamer's Dilemma has been approached wrongly. Rather than consequences, character, or imperatives not to molest children, we ought to look at the intention of the player when they are offered the possibility to kill in games, compared to when they are offered the possibility to molest children. Already the definition of the issue is problematic: paedophilia is a medical condition, not a crime or a moral failing; child molestation is. Moreover, murder is not what we see in most games, and when we do, it is actually considered a problem. Rather, we see killing for a goal, and that is a whole different ball game from molesting children for a goal. For a more in-depth analysis, see the uploaded file. |
12:00 | Epistemic Injustice in AI-Driven Healthcare ABSTRACT. Artificial Intelligence (AI) is increasingly integrated into healthcare, with predictive models being used for early diagnosis, treatment recommendations, and patient monitoring. One such application is using AI models trained on voice and patch sensor data to predict Chronic Obstructive Pulmonary Disease (COPD) and provide lifestyle improvement suggestions. While AI systems promise enhanced efficiency and accuracy, their integration into medical decision-making introduces risks of epistemic injustice. Drawing on the frameworks provided by Fricker (2007) and Carel and Kidd (2014), this paper explores the epistemic injustices that may arise for both patients and practitioners when AI models operate without adequate transparency and justification mechanisms. Epistemic injustice, as conceptualized by Fricker (2007), includes testimonial injustice—where certain knowers are systematically discredited—and hermeneutical injustice—where individuals lack the conceptual tools to make sense of their experiences. In the context of AI-driven COPD prediction, testimonial injustice may manifest when patients’ lived experiences and concerns are overridden by AI-generated predictions, diminishing their role as credible knowers of their health. This is particularly concerning for marginalized groups whose voices have historically been excluded or devalued in medical discourse. If a patient’s voice data does not align with AI-derived assessments, they may be unable to challenge the AI’s determination due to a lack of epistemic authority. Additionally, hermeneutical injustice arises when patients, particularly those from disadvantaged or nondominant groups, do not have access to or understanding of how AI-derived decisions are made, leaving them without the resources to question or interpret their diagnosis effectively. For medical practitioners, epistemic injustice materializes in two critical ways: (1) their professional judgment may be undermined if the AI model’s recommendations are prioritized over their expertise, leading to a shift in epistemic authority from human professionals to opaque algorithms, and (2) their decision-making may be constrained by a lack of justification for AI-generated outputs, making it difficult to critically engage with and contest the model’s predictions. This is particularly problematic when practitioners interact with AI systems that operate as “black boxes,” providing results without explainability. Without insight into the AI’s reasoning, practitioners may either be forced to accept its outputs blindly or struggle to defend their clinical assessments when they diverge from AI-generated outputs. This can lead to cognitive overload, reduced professional autonomy, and epistemic exclusion, reinforcing systemic biases in healthcare decision-making. The harms associated with these epistemic injustices extend beyond individual interactions to structural issues within the healthcare system. Patients may experience increased mistrust in medical practitioners and AI-assisted healthcare, leading to disengagement and poorer health outcomes. Practitioners, on the other hand, may face professional disempowerment, exacerbating stress and dissatisfaction in medical practice.
Additionally, the lack of epistemic transparency in AI decision-making perpetuates power asymmetries, wherein AI developers and institutions hold epistemic authority while patients and healthcare providers are positioned as passive recipients of AI recommendations. To mitigate these epistemic injustices, several measures must be implemented. First, AI models should be designed with explainability and transparency in mind, providing justifications for their outputs in language accessible to both practitioners and patients. This ensures that both groups can critically assess AI-generated recommendations rather than deferring to them uncritically. Second, AI systems should incorporate feedback loops where practitioners can contest and refine AI decisions based on clinical expertise, allowing for dynamic model adjustments. This approach aligns with epistemic justice by ensuring that AI is not treated as an infallible authority but rather as a tool that assists human decisions. Third, patient-centered AI governance should prioritize epistemic inclusivity by involving diverse patient populations in model training, validation, and oversight. This would reduce the likelihood of marginalized groups experiencing testimonial or hermeneutical injustice due to their unique physiological or sociocultural factors being inadequately represented in training data. Furthermore, institutional policies should emphasize the role of epistemic humility in AI-assisted healthcare, recognizing the limitations of both AI and human decision-making. By fostering collaborative epistemic environments where AI recommendations are subject to professional scrutiny and patient input, healthcare systems can uphold fairness and accountability in medical knowledge production. Finally, educational programs for practitioners should integrate AI literacy training, equipping them with the skills to interpret, question, and contextualize AI predictions, thereby reinforcing their epistemic agency in clinical decision-making. In conclusion, while AI models offer significant advancements in COPD prediction and healthcare delivery, their deployment must be critically examined through the lens of epistemic justice. Without deliberate interventions, AI can perpetuate testimonial and hermeneutical injustices, silencing patients and eroding the professional authority of practitioners. By prioritizing transparency, practitioner feedback, and inclusive model design, healthcare institutions can mitigate these risks and foster a more equitable, just, and epistemically responsible AI-driven healthcare system. |
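One way to realize the explainability measure proposed above is per-prediction feature attribution. The sketch below assumes scikit-learn and the shap package, with synthetic stand-ins for the voice and patch-sensor features; it shows how a clinician could inspect which inputs drove an individual COPD risk score.

# Per-patient attribution for a toy COPD risk model; data, labels, and
# feature meanings are synthetic assumptions, not the paper's model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # e.g. voice pitch, cough rate, SpO2
y = (X[:, 1] + 0.5 * X[:, 2] > 0.8).astype(int)  # synthetic risk label

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])      # contributions for one patient
print(attributions)  # shows which feature pushed the prediction up or down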
12:30 | Some Problems in the Ethical Impact Assessment of Emerging Technologies and Socio-Technical Visions: Case CityVerse PRESENTER: Antero Karvonen ABSTRACT. This paper examines methodological challenges in the participatory ethical assessment of emerging technologies in urban contexts, using the CityVerse vision in Tampere, Finland as a case study. While metaverse technologies promise to transform smart cities by blending physical and virtual spaces, their ethical implications remain unclear. Through focus groups with city officials, we explored how participatory methods can effectively evaluate ethical dimensions of emerging technologies when they remain largely conceptual. Our findings reveal that while stakeholders can generate substantive ethical discourse, they struggle with the abstract nature of metaverse experiences, producing more questions than definitive answers. We argue that sociotechnical visions serve better as platforms for ethical discourse than as concrete implementation plans, functioning primarily to surface tacit values and assumptions. The study contributes to ethical technology assessment methodologies by suggesting that for emerging technologies, developing structured ontologies of questions may prove more valuable than premature answers. We conclude that CityVerse design should be approached as an ongoing discourse—not merely about technologies, but fundamentally about designing for improved quality of human life—where participatory ethical vision assessment functions as a form of collaborative conceptual engineering. |
11:00 | Cybersecurity Curriculum to Include Ethics and Privacy ABSTRACT. This paper discusses how the author developed and modified a cybersecurity curriculum to provide experiential learning for online students using the ethical practices of the National Society for Experiential Education (NSEE, 2009). It also highlights the motivation for, and the author's previous work with, the same framework in a different discipline (Kesar and Pollard, 2020, 2021). The paper sheds light on how the author designed the classes and how this has been beneficial in creating an interactive online presence for students. Founded in 1971, the National Society for Experiential Education (NSEE) is the premier nonprofit membership organization composed of a global community of researchers, practitioners, and thought leaders committed to establishing effective methods of experiential education as fundamental to developing the knowledge, skills, and attitudes that empower learners and promote the common good (NSEE, 2023). The framework consists of eight principles linked with good practices. This paper discusses a project conducted with graduate cybersecurity students, designed by the instructor (author) as part of an experiential learning activity. The goal was for this experience and learning to add value to the fundamentals of creating online cybersecurity training as part of a group project, thereby ensuring both the quality of the learning experience and of the work produced by the students, and building an assignment grounded in the pedagogy of experiential education. Although the NSEE framework was used, the thought process when designing the project was quite different: it considered the framework as well as research on team projects and the importance of training in cybersecurity, because this style of pedagogy provides an experiential learning environment that better prepares students to face challenges in the ever-evolving cybersecurity field. While developing the curriculum, various studies were taken into account, including the author's previously published research. The best standards and Guiding Principles of Ethical Practice of the National Society for Experiential Education (NSEE) were used to develop the pedagogy for teaching ethics and professionalism as part of experiential education. This paper describes how the instructor included ethics and professionalism in this team project. The eight principles are outlined below. Intention: All parties must outline a clear vision of why this experience was chosen and why experience is the chosen approach to learning. This principle was used to develop assignments that included individual reports and teamwork addressing the ethical and privacy aspects of cybersecurity, which are often overlooked. It focuses on the purposefulness that enables experience to become knowledge and, as such, is deeper than the goals, objectives, and activities that define the experience. Preparedness and Planning: The main objective of this principle was to ensure that students, both traditional and non-traditional and from different educational backgrounds, have an experiential learning experience that encapsulates the emerging trends in the cybersecurity field.
The topics in this course were planned and researched to ensure they aligned with the identified intentions, adhering to them as goals and objectives of the overall degree. The hands-on activities included research and critical thinking as key components of problem solving. Authenticity: Under this principle, it is important that students have an experience set in a real-world context. For this class, an assignment on cyberlaw and ethics was developed as a scenario-based case. Such topics help students become better equipped with the ethical framework necessary to navigate complex situations in the digital world. Reflection: NSEE refers to reflection as the element that transforms simple experience into a learning experience. With this principle in mind, the assignments were designed so that knowledge could be discovered and internalized as students researched and tested assumptions and hypotheses about the outcomes of decisions and actions taken in the context of cybersecurity training. It also gave them an opportunity to reflect on and understand the societal implications of their actions and to make responsible decisions when handling sensitive data, balancing security needs with individual privacy rights, and upholding ethical standards within their professional roles. In the assignment, this process comprised report writing and presentations given at a conference and as a final exam. This, according to NSEE, is integral to all phases of experiential learning, from identifying intention and choosing the experience, to considering preconceptions and observing how they change as the experience unfolds. Orientation and Training: The students were required to discuss their learning as part of an interactive assignment on Canvas. It allowed students to learn about each other and about the context and environment of an ever-changing field. Monitoring and Continuous Improvement: Under this principle, it is important that any learning activity designed be dynamic and changing. The various assignments on different topics provided the richest learning possible. Students also had to write a self-reflection on their own progress as well as that of their team members. This feedback process relates to learning intentions and quality objectives, and it allows the structure of the experience to be sufficiently flexible to permit changes in response to what the feedback suggests. Monitoring and continuous improvement thus serve as the formative evaluation tools. Assessment and Evaluation: Assessment is a means to develop and refine the specific learning goals and quality objectives identified during the planning stages of the experience, whereas evaluation provides comprehensive data about the experiential process as a whole and whether it has met the intentions that suggested it. Based on the NSEE definitions, the outcomes and processes of the project's assignments systematically included reports, presentations, and self-reflection linked to the initial intentions. Acknowledgment: At the end of the project, the assignment also required students to recognize the lessons learned; recognition of learning and impact occurs throughout the experience by way of the reflective and monitoring processes and through reporting, documentation, and sharing of accomplishments. All the students' and instructor's experiences were noted and included in the lessons learned and in the reflection on progress and accomplishment.
Given that this was part of ongoing research in which other projects used NSEE's framework, the lessons learned from the culminating documentation and the impact of these projects informed the design and helped provide closure and sustainability to the experience. KEYWORDS: NSEE, Cybersecurity, Ethics and Privacy, Pedagogy, Online Class, Traditional Students, Non-traditional Students. REFERENCES Kesar, S., and Pollard, J. (2021). "Cultivating an Empathic Learning Pedagogy: Experiential Project Management", in Normal Technology Ethics: Proceedings of ETHICOMP 2021, Coords. Mario Arias Oliva, Jorge Pelegrín Borondo, Kiyoshi Murata, Ana María Lara Palma, Universidad de La Rioja, 257-259. https://dialnet.unirioja.es/servlet/libro?codigo=824595 Kesar, S., and Pollard, J. (2020). "Lessons Learned from Experiential Project Management Learning Pedagogy", in Paradigm Shifts in ICT Ethics: Proceedings of ETHICOMP 2020, Coords. Mario Arias Oliva, Jorge Pelegrín Borondo, Kiyoshi Murata, Ana María Lara Palma, Universidad de La Rioja, 99-100. National Society for Experiential Education (2009). Guiding Principles of Ethical Practice. |
11:30 | Digital Agriculture Under Threat: Cybersecurity Challenges and Policy Gaps ABSTRACT. Agriculture has undergone a digital revolution in recent years. Advanced technologies such as the Internet of Things (IoT), artificial intelligence, autonomous machines, and cloud data collection have been introduced; we refer to this as Agriculture 4.0 and 5.0. Although such innovations bring measurable benefits in terms of efficiency, adaptability, and sustainability, they simultaneously expose agricultural infrastructure to an increasing number of cyber threats, and farmers' awareness has not kept pace with the growing risks. This article provides an overview of research on the connections between cybersecurity and food security, focusing on the analysis of cyberattacks targeting digital agricultural systems. A typology of the main threats, their potential impacts on food supply chains, and a review of current legal regulations are presented. Based on case studies and international policy, critical security gaps and regulatory and ethical deficiencies have been identified, and directions for further research and practical actions have been proposed. The article emphasizes the necessity of integrated strategies that combine technical, legal, and educational aspects to enhance the resilience of food systems against cyber threats. |
12:00 | Artificial Intelligence and Ethical Responsibility: the Impact of Algorithmic Decisions on Cybersecurity PRESENTER: Sabina Szymoniak ABSTRACT. The development of artificial intelligence creates new prospects in areas including cybersecurity, but it also raises critical ethical questions. When using AI in vital applications like safeguarding personal information or managing energy infrastructure, it is crucial to assess the impact of flawed algorithmic decisions. Examples include intrusion detection and intrusion prevention systems that employ machine learning models to monitor and analyze data in real time to detect threats. The main issues are system opacity (the so-called ‘black box’ issue), the possibility of bias in the data used, and legal liability. A lack of transparency can make AI decisions difficult to understand, undermining user trust and allowing attackers to exploit the algorithms. Bias in data can lead to unfair decisions and tends to weaken security mechanisms. It is also still unclear who is responsible for AI failures, particularly regarding data breaches and financial losses. To address these challenges, it is essential to promote the development of Explainable AI technologies, teach people how to use AI, audit data, and adopt new legal requirements for AI. The paper analyses these problems and calls for a cross-disciplinary strategy encompassing technological, ethical, and legal perspectives to make the use of AI in cybersecurity more reliable. |
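As a minimal illustration of the Explainable AI direction advocated here, the sketch below (scikit-learn assumed, data synthetic) fits an intrusion-detection classifier whose alert logic can be printed as human-readable rules rather than hidden in a black box.

# Interpretable IDS sketch; the two features and the planted rule are
# invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(size=(400, 2)) * [10_000, 1.0]         # conn/s, failed-login ratio
y = ((X[:, 0] > 6_000) & (X[:, 1] > 0.5)).astype(int)  # 1 = attack (synthetic)

ids = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Unlike a black box, the fitted model can justify every alert it raises:
print(export_text(ids, feature_names=["conn_per_s", "failed_login_ratio"]))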
12:30 | Cybersecurity Best Practices: a Comprehensive Guide ABSTRACT. Due to ever-evolving cyber threats, organizations and individuals are compelled to adopt effective cybersecurity measures to secure sensitive data and digital assets. This comprehensive guide considers some of the best practices in cybersecurity, including risk assessment, strong authentication methods, data encryption, and network security strategies. Other areas discussed in this paper include employee training, incident response planning, and compliance with cybersecurity frameworks. If implemented, these best practices will significantly reduce the chances of businesses and individuals being victimized by cyberattacks while improving digital security overall. |
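One practice surveyed here, strong authentication, fits in a standard-library fragment. The sketch below shows salted, deliberately slow password hashing with PBKDF2 and constant-time verification; the iteration count is an illustrative assumption, and current guidance should be checked before reuse.

# Salted PBKDF2 password hashing with constant-time verification (stdlib).
import hashlib, hmac, os

def hash_password(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)

def verify(password: str) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(stored, hash_password(password, salt))

print(verify("correct horse battery staple"), verify("guess"))  # True False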
11:00 | Decision-Making Ability: Recommendation Engine Exposition PRESENTER: Antonio Segura ABSTRACT. The aim of this study is to highlight the potential risks associated with the degradation of human decision-making abilities. The central hypothesis suggests the existence of a negative feedback loop: as recommendation engines become more widely used, individuals make fewer decisions. Consequently, the decline in decision-making frequency weakens the cognitive processes required for making choices, further increasing reliance on recommendation engines to avoid the cognitive cost of decision-making. Recommendation engine algorithms first emerged in the 1990s [4] and have since become integral to the most popular websites. Over the past few decades, these algorithms have been incorporated into streaming platforms, marketplaces, and social networks [1], [2]. For example, Amazon began integrating recommendation engines into its user interface in the early 2000s [3] and continues to enhance these systems, with an update to its recommendation features published last year [5]. As a result, a portion of human decision-making is increasingly outsourced to algorithms. Given that decision-making is a trainable skill [6], [7], a reduction in its use may lead to its deterioration, reinforcing the negative feedback loop. Certain activities, such as content creation and video gaming, may counteract the effects of recommendation engines by exposing users to environments that require frequent decision-making. However, these activities do not impact all individuals equally, leaving some populations at greater risk of reduced exposure to decision-making processes. In this study, we propose and discuss the limitations of an experimental framework designed to test our hypothesis and identify population groups most affected by this phenomenon. Finally, we outline additional experimental data required to gain a deeper understanding of decision-making degradation. |
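To fix ideas about the systems under study, a toy item recommender in the spirit of the 1990s-era engines cited above takes only a few lines; the ratings matrix is invented for illustration.

# User-based cosine-similarity recommender on a tiny invented matrix.
import numpy as np

ratings = np.array([   # rows = users, columns = items, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0  # recommend for user 0
sims = np.array([cosine(ratings[target], r) for r in ratings])
sims[target] = 0.0                           # ignore self-similarity
scores = sims @ ratings                      # similarity-weighted ratings
scores[ratings[target] > 0] = -np.inf        # drop items already rated
print("recommend item", int(np.argmax(scores)))  # item 2 for this matrix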
11:30 | Inclusive Governance of Artificial Intelligence: Towards an Ethical Framework for Neurodivergence PRESENTER: Elisabet Bolarín Miró ABSTRACT. Artificial intelligence (AI) is increasingly present in daily life and institutional decisions, yet its rapid development has outpaced ethical frameworks that address cognitive diversity. Neurodivergent individuals, including those with autism, ADHD, dyslexia, and Tourette syndrome, are systematically excluded by AI systems designed around neuronormative assumptions. This article examines how current AI technologies, from automated hiring to customer service and educational platforms, reproduce and amplify structural discrimination by failing to accommodate diverse cognitive profiles. A critical review of the European legal and policy framework as of 2025 (including the GDPR, AI Act, and Digital Rights Declaration) reveals that neurodivergence and cognitive accessibility remain largely unaddressed. In response, we propose a model for inclusive AI governance based on five pillars: participatory co-design, ethical oversight committees, dynamic regulation, ethical education in cognitive diversity, and representative data. We argue that only by integrating neurodivergence as a structural axis in design, regulation, and evaluation can AI systems become truly fair and just, ensuring technological progress benefits all members of society and preventing the perpetuation of algorithmic injustice. |
12:00 | Interpretability and the Measurement of Ethical Foundations in Artificial Intelligence PRESENTER: Ramón Alberto Carrasco ABSTRACT. With the rapid development of Artificial Intelligence (AI), its integration into decision-making processes across various sectors is accelerating. The demand for interpretability and ethical accountability has become more urgent than ever. This work explores the critical intersection of these two domains. It begins by examining the concept of interpretability in AI, then turns to the ethical foundations of AI. This work also examines how these intertwined concepts of interpretability and ethics are pivotal in advancing corporate social responsibility (CSR) by fostering transparency, enabling responsible governance, and addressing societal impacts such as algorithmic bias, job displacement, and environmental concerns. Integrating interpretability and ethics is essential for building transparent, accountable, and demonstrably ethically sound AI systems that proactively support robust CSR objectives and ensure profound alignment with human values and fundamental rights. This crucial integration helps create equitable opportunities for all, paving the way for a genuinely responsible and sustainable technological future that benefits society broadly and promotes inclusive growth. |
12:30 | Ethical Principles for the Production of Official Statistics Using Machine Learning and Artificial Intelligence Techniques PRESENTER: Ramón-Alberto Carrasco-González ABSTRACT. The development and implementation of Artificial Intelligence (AI) and Machine Learning (ML) systems in the field of official statistics require an ethical and regulatory approach to ensure the protection of human rights, privacy, and fairness. The 2024 Artificial Intelligence Strategy (Gobierno de España, 2024) emphasizes the importance of responsible AI, ensuring that these systems are safe, transparent, and beneficial to society. Given the potential impact of AI, organizations must establish an ethical framework to manage the risks associated with its use. The paper proposes an ethical framework based on international and European principles, such as transparency, impartiality, and human oversight. It also addresses the challenges of AI adoption, including technical competence, infrastructure, and governance. The goal is to ensure that AI-ML enhances the quality and efficiency of official statistics without compromising public trust or ethical standards. |
14:30 | ESG Investment, Industrie 4.0, and Blockchain: the Beauty of Imperfection ABSTRACT. This paper explores new directions for value creation in the digital economy and the construction of a sustainable society through the integration of ESG investment, Industrie 4.0, and blockchain technology. First, based on Max Weber’s theory of free will, investment is conceptualised as a process of self-realisation, illustrating the potential for expanding a smooth investment structure that incorporates Environmental, Social, and Governance (ESG) elements beyond the traditional capitalist model focused on monetary returns. Furthermore, drawing on Adam Smith’s Theory of Moral Sentiments and Norbert Wiener’s cybernetic theory, the paper discusses the potential for new governance models within decentralised autonomous organisations (DAOs) and digital networks. Next, it examines the advancement of network infrastructures in Industrie 4.0 technologies and the democratization of data structures through blockchain. Special attention is given to the evolution of financial networks symbolised by cryptocurrencies and to efforts to integrate individuals’ multidimensional value indicators into market mechanisms, proposing the construction of a new economic foundation capable of reflecting emotions and diverse values beyond mere monetary metrics. Finally, the paper considers the illusion of "free" networks, the concept of perfection, and the role of human imperfection in the context of happiness in digital society. Against the backdrop of governance dominance by major ICT corporations and the regression of knowledge-sharing ideals, it argues that redefining individual free will and social structures is essential for fostering sustainable innovation and meaningful digital engagement. It concludes that human limitations, absent in machines, will serve as a vital source for creating new values in future society. |
15:00 | Parental Memory and Digital Traces of School Closures During the COVID-19 Pandemic: What Is Remembered, What Fades, and What Is Left Behind ABSTRACT. This study examines how Japanese parents remember and record their experiences during the COVID-19 pandemic, particularly focusing on school closures in spring 2020. Using a mixed-methods approach—semi-structured interviews (n=10) and an online survey (n=200)—the research explores what is remembered, what fades, and what remains in digital form. Many participants misremembered the timing of school closures and the State of Emergency, often recalling them as starting earlier and ending later than in reality. Women were more likely to perceive earlier disruption, reflecting potential gendered burdens in caregiving. Preventive behaviors like handwashing persisted, but scientific understanding often faded. While 71% used social media, few actively documented their experiences. Notably, those who found the period emotionally painful were more inclined to want their experiences preserved. These findings highlight a discrepancy between lived experience and administrative facts, and raise ethical questions about whose voices—and which memories—are digitally preserved for the future. |
15:30 | Security and Ethics in the Use of Computing Technologies and the Internet PRESENTER: Laercio Cruvinel ABSTRACT. The rapid adoption of computing technologies and the Internet has transformed various aspects of society, including education, communication, commerce, and governance. While computer and communication technologies offer significant benefits, they also present complex ethical and security challenges. This paper explores the ethical and security dimensions of computing technologies, focusing on issues such as data privacy, algorithmic bias, cybersecurity threats, and digital well-being. Through a detailed analysis of these challenges, the paper examines how data collection, automated decision-making, and digital surveillance can undermine user autonomy, exacerbate inequalities, and compromise user privacy. The discussion is guided by ethical frameworks, including deontological and consequentialist perspectives, providing a balanced view of the ethical implications of technology use. The paper also proposes best practices for ethical technology integration, including clear data protection policies, bias mitigation strategies, transparent AI design, and user education programs. By promoting digital literacy and fostering a culture of ethical technology use, institutions can harness the benefits of computing technologies while minimizing risks. This paper emphasizes the need for a collaborative approach involving educators, administrators, developers, and policymakers to ensure that technology serves as a tool for empowerment rather than exploitation. |
14:30 | The Ethical Dilemmas of AI-Powered Cybersecurity: Balancing Privacy and Protection ABSTRACT. With its sophisticated threat identification, automated responses, real-time monitoring, and improved threat detection, AI is revolutionising cybersecurity. However, these developments raise major ethical questions, particularly in relation to privacy, openness, and responsibility. Unregulated AI-driven security systems risk infringing personal rights through mass surveillance and thereby fostering biases. This study proposes a structured ethical framework for AI-powered cybersecurity, using deontology, utilitarianism, and virtue ethics to critically evaluate security models. It also offers a technical study of AI security mechanisms, pointing out how bias enters these systems and suggesting fixes for these defects. Real-world case studies, including NSA PRISM and AI-driven facial recognition, are examined not only descriptively but also analytically. To guarantee that AI-driven cybersecurity stays efficient, fair, and consistent with ethical values, the conclusion provides concrete solutions including regulatory supervision, technological safeguards, and industry best practices. |
15:00 | In the Hands of AI: a Cybersecurity Philosophical Approach PRESENTER: Pedro Brandao ABSTRACT. The intersection of artificial intelligence (AI) and cybersecurity presents a complex philosophical and practical landscape, characterized by both opportunities and challenges. This research explores the topic from a cybersecurity-philosophical approach. Cybersecurity systems are interconnected socio-technical ecosystems influenced by external forces such as regulatory frameworks and societal norms. Machine learning algorithms improve threat detection by identifying unusual patterns that could indicate breaches, something no person can do in a timely manner. Meanwhile, involving diverse stakeholders in designing AI-driven cybersecurity systems can help align technology with societal values and ethical principles. When AI autonomously makes decisions, such as blocking IPs or quarantining files, questions arise about accountability. The human sense of and sensitivity to cybersecurity is undergoing a revolutionary transformation at the hands of AI. Determining responsibility for errors or unintended consequences remains a challenge. In any case, organizations need tools that can help them outsmart attackers. Could we put our digital life in the hands of AI? Do we have enough digital literacy to safeguard ourselves online? How do moral uncertainty and disagreement impact risk assessment? A balanced approach that integrates technical innovation with robust ethical frameworks is essential to ensure the safe and responsible deployment of AI in cybersecurity contexts. The future of cybersecurity lies not in the hands of AI alone but in the synergy between human sensitivity and digital alienation. A cybersecurity-philosophical approach should therefore involve reasoning and thinking critically about threats in today's and tomorrow's digital world, combining technical analysis with philosophical reflection to make cybersecurity accessible to a wide audience, from IT professionals to policymakers and general readers. |
15:30 | Deepfake Manipulation and Ethical Dilemmas: a Comprehensive Risk Assessment PRESENTER: Sabina Szymoniak ABSTRACT. Deepfake technology, based on artificial intelligence (AI), enables the creation of realistic images, videos and text, bringing new possibilities as well as serious ethical, social and legal threats. The article examines key areas in which deepfakes affect contemporary society. In the area of disinformation, this technology can be used to create false materials aimed at social destabilisation and propaganda. In the area of cyberbullying, deepfakes contribute to privacy violations such as pornography and blackmail. Additionally, the generation of false evidence affects court decisions and enables financial fraud. The problem also feeds a crisis of public trust, undermining the media's credibility. The authors emphasise the need to introduce safeguards such as watermarking, public education and identification of AI-generated content. The need for international legal regulations and the development of technologies to detect deepfakes is indicated. The article considers ethical issues connected with this technology, minimising the risk of abuse while supporting innovation sustainably. |
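One safeguard the authors call for, watermarking, can be shown in miniature. The sketch below embeds a provenance marker in an image's least-significant bits using NumPy; production watermarking schemes are far more robust, so this is only a conceptual sketch.

# LSB watermark sketch: embed and recover a provenance tag in an image.
import numpy as np

MARK = np.frombuffer(b"AI-GEN", dtype=np.uint8)
bits = np.unpackbits(MARK)

def embed(image: np.ndarray) -> np.ndarray:
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract(image: np.ndarray) -> bytes:
    return np.packbits(image.flatten()[: bits.size] & 1).tobytes()

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(extract(embed(img)))  # b'AI-GEN'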
14:30 | Ethical Challenges of AI-Driven Targeted Ads for Youth ABSTRACT. The rise of Artificial Intelligence (AI) has brought multiple new technological advancements and business opportunities. One of these opportunities is AI-enhanced targeted advertising. Advertisements are everywhere—on TV, in stores, and on most websites. However, as more and younger users spend time on platforms that deploy these targeted ads many times over, ethical concerns arise. |
15:00 | Using AI for Research and Educational Support: Enhancing the Design of Computer-Based Evaluations PRESENTER: Joana M. Matos ABSTRACT. This paper has two objectives related to using AI generative tools for research and education: a) exploring the opinion of a group of students on this matter, and b) sharing our experience in AI-assisted methods to design various question types for online evaluation. Teachers can ethically use AI, among other activities, to prepare courses and design enhanced online assessment question databases. In this way, we can generate a large enough database of questions to ensure a wide variety of topics are covered and prevent repetitions among different students’ questions. We developed and implemented a Moodle quiz questionnaire for students, collecting their opinions on using AI in their work. We analyzed the students’ answers to understand their level of awareness regarding the ethical implications of using AI during academic activities. We share these results in the second part of our paper. |
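The generative step in this paper is performed with AI tools; as a simplified stand-in, the sketch below uses a parametric template to produce many non-repeating question variants of the kind a Moodle question bank needs. The topic and parameter range are invented for illustration.

# Parametric question-variant generator (illustrative stand-in for the
# AI-assisted step described in the abstract).
import random

TEMPLATE = ("A signal is sampled at {fs} Hz. What is the maximum frequency "
            "that can be represented without aliasing?")

def make_variants(n: int, seed: int = 0):
    rng = random.Random(seed)
    questions = []
    for _ in range(n):
        fs = rng.choice(range(1000, 48001, 1000))
        questions.append({"text": TEMPLATE.format(fs=fs),
                          "answer_hz": fs / 2})  # Nyquist limit
    return questions

for q in make_variants(3):
    print(q["text"], "->", q["answer_hz"], "Hz")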
15:30 | The Use of Social Media and Artificial Intelligence to Radicalize Young People in Jihadist Terrorism ABSTRACT. One of the main challenges and threats to international security lies in jihadist terrorism. It is a threat that is becoming increasingly latent and is increasingly expanding to the international scene. As for Europe, the continent has been the scene of major jihadist attacks carried out by the two major terrorist organisations that ravage the international scene: Al Qaeda and Daesh. This was the case of 11M in Madrid or the attacks that took place in Paris in 2015 and the fatal attack on Las Ramblas in Barcelona two years later. However, it is not the only cause of terrorism at an international level. Currently, the panorama is very diverse, but with a common denominator and that is to put the spotlight on the use of Artificial Intelligence and social networks by terrorist groups to recruit and radicalise new combatants. In five points, Pisabarro (2024) explains how the TikTok platform is a strategy to expand cyberjihad among young people: (1) due to its format based on creativity and “hooking” users through challenges, (2) due to its diversity in reaching any community, (3) due to its influence on public opinion, (4) due to young people's fear of missing out on what happens on social media, and (5) due to the Artificial Intelligence algorithm that makes the user consume similar content. According to Rapoport (2004), from the end of the 19th century to the present, there are four waves of terrorism that the author differentiates. The first is led by anarchist terrorism where the target of the attacks were the heads of state because they were seen as the cause of all the evils in society. This first typology continued until the time of Versailles, when, after this, the author indicates that the second wave began: “the colonial wave” where the objective of the attack was directed towards the security forces and bodies because, as the author explains, they were perceived as the ears and eyes of the State. From then until the 70s when the third wave described by the author begins: “the wave of the extreme left” where violence has no political weight and ends in the 80s with the fourth wave understood as “religious terrorism”. It is in this last one that we are currently in and that, according to the author, radicalization began to mutate into terrorism. These ideas together with the international impact caused by the 9/11 attack, demonstrated that far from this strictly local conception of terrorism, terrorism is an issue that concerns the international community. Similarly, in reference to what Neumann (2009) explains, there are two types of terrorism: an “old terrorism” based on a counterattack structure where objectives that are understood as legitimate are pursued; and a “new terrorism” made up of a network structure whose purpose is to reach the international level and provoke mass attacks with excessive violence. However, within jihadist terrorism, authors such as Reinares, García and Vicente (2020) divide this phenomenon into three periods: a first that runs until months after 9/11, a second that spans a decade until 2011, coinciding with the bite of the then leader of Al Qaeda, Osama Bin Laden; and a third moment that begins with the jihadist insurgency in the context of the Syrian Civil War and which, in turn, coincides with the terrorist explosion on social networks. Therefore, not only was there talk of terrorism, but there was already talk of cyberterrorism. 
The Internet itself provides a host of benefits and has changed the way we see the world, creating a dependency on information. With the click of a button, we can find out what is happening in another part of the planet, participate virtually in events, or get in touch with people miles away. At the same time, however, the arrival of the Internet allows illegal actors to see it as an opportunity to commit crimes and to recruit and radicalize new combatants, notably by reaching new followers and radicalizing lone wolves. The current threat to the European Union in this regard is such that, according to Espinosa (2016), most of the lone wolves who attack on European soil are citizens of the region. Reinforcing this picture of an entrenched cyberjihad, Vicente (2024) indicates that 23.40% of the jihadists convicted or killed in Spain who began their radicalisation between 2012 and 2023 were under 18 years of age, an increase of 6.3 percentage points compared to the period between 2001 and 2011. This is why this communication analyses how terrorist organisations target young people through social media, what messages they send to their target audience, and in what formats. It traces how the role of young people in terrorist organisations has evolved and how these organisations have used social media to spread their narrative to Generation Z. The main objective of this research is therefore to analyse how terrorist organisations have adapted to the new platforms and are using the main social networks to reach an increasingly younger audience. This will either confirm or refute the initial hypothesis: terrorist organisations have adapted their propaganda machinery to social networks and Artificial Intelligence in order to attract and radicalise new, increasingly younger combatants. In addition, this research considers the ethics of terrorism, drawing on authors such as Malatji and Tolah (2024). On the premise that more and more young people and minors are being radicalised through social networks, the main question to be answered is: how do terrorist organisations penetrate the minds of the young people they recruit and radicalise? This in turn leads to a second question: what messages and formats do terrorist organisations use to attract new followers? The methodology is first descriptive, establishing the theoretical and conceptual framework of the phenomenon, and then analytical, allowing us to understand and establish the causal relationships between the transformations of jihadist propaganda and the growth of increasingly younger users on the main social networks. |
16:30 | On Consumer Expectations and Concerns Regarding Personal Data Use and Collection: Insights from Web Survey Results ABSTRACT. INTRODUCTION This study examines the current state of, and issues surrounding, consumer attitudes toward the use of user information such as behavioral histories, based on the results of a questionnaire survey. In particular, the author discusses the survey results on consumers' expectations and concerns about the use of personal data. BACKGROUND Our interest is in understanding the actual state of consumer awareness of the collection and use of personal information by companies and in examining the issues involved. In developing the questionnaire, we focused on the following three points: (1) consumers' precautionary attitudes toward personal information; (2) the desired forms of control over personal information; and (3) the degree of concern about the use of personal information by companies. METHODS Based on the awareness of the above issues, we conducted a questionnaire survey of general consumers using a self-administered questionnaire tool (Freeasy) provided by I-Bridge Corporation. A total of 1,204 people were surveyed from among the monitors registered with the site, with 86 males and 86 females in each age bracket. It should be noted, however, that the representativeness of the respondents is limited because of the survey method used. The question analyzed in this study concerns “the degree of concern about the use of personal information by companies.” Specifically, respondents were asked about (1) their expectations and concerns about the overall trend of companies' use of personal information, (2) their attitudes toward the acquisition of personal information by companies, and (3) their attitudes toward the use of personal information by companies, each on a four-point scale. A multiple-answer format was also employed, in which respondents were asked to select all applicable items from a list of 10 items, including “other.” SUMMARY OF SURVEY RESULTS a) Expectations and Concerns about Personal Data Use Consumers were asked how they feel about companies' use of their personal information. Overall, the most common response was “somewhat anxious” (33.1%), which together with the second most common choice, “very anxious” (28.2%), accounted for over 60% of responses. This means that fewer than 40% of consumers lean toward expectations. Furthermore, the selection rate for “high expectations” was the smallest (12.3%). Overall, consumers are not particularly optimistic and feel somewhat anxious, despite the importance of data science. Looking at differences by gender, males selected “expectation > anxiety” at a higher rate and females selected “anxiety > expectation” at a higher rate (a chi-square test showed a significant difference, p<0.001). By age group, the selection rate of “expectation > anxiety” was higher among teenagers (20.3%), while those in their 50s (70.3%) and those in their 70s and older (69.2%) selected “anxiety > expectation” more frequently (a chi-square test by age showed a significant difference, p<0.001). Next, we turn to the cross-tabulation with frequency of smartphone use: 40.0% of those who selected “expectation > anxiety” responded that they “look at [their smartphones] whenever I have free time.”
Conversely, among those who chose “anxiety > expectation,” higher percentages of respondents “rarely look at it” or “don't have a smartphone.” A chi-square test showed a significant difference (p<0.001). b) Relationship with the right to control one's own information In Japan, the 2021 revision of the Personal Information Protection Law has brought renewed attention to the idea of understanding privacy as “the right to control information about oneself” (Itakura, 2021). What kind of control do consumers want? To grasp the actual situation, we asked about the “desirable forms of control over personal information.” Cross-tabulating the results with “expectation vs. anxiety,” we found that those who chose “expectation > anxiety” showed high selection rates for the “confirmation” and “correction” items, while those who selected “anxiety > expectation” showed higher selection rates for “deletion,” “restriction on provision to third parties,” and “limitation of the retention period.” Chi-square tests showed significance for these five items: “confirmation” (p<0.001), “correction” (p<0.001), “deletion” (p<0.001), “restriction on provision to third parties” (p=0.002), and “limitation of the retention period” (p=0.010). c) Attitudes toward Collection and Use of Personal Information Next, we present the results of a survey on consumer attitudes toward the collection and use of personal information by companies, such as Web site browsing, search keyword history, and shopping records. Specifically, the survey asked consumers about their opinions on (1) the collection activities themselves and (2) the use of such information for product recommendations. Overall, the item with the highest selection rate regarding the collection of personal information by companies was “I wish they would stop if possible” (29.2%). This was followed by “I think it is inevitable” (25.6%) and “I don't understand the point of collecting personal information” (15.6%), which together accounted for more than 70% of the total. Of the roughly 30% of positive opinions, about 10% (10.5%) strongly agreed that “I would be willing to provide it if there is a benefit,” and slightly less than 20% (18.0%) selected “It is essential for me to receive appropriate services.” No significant gender difference was found for these items (p=0.168), whereas a chi-square test by age showed a significant difference (p<0.001). d) Concerns about the collection and use of personal information Next, we examine consumers' concerns about the collection and use of personal information by companies. The questionnaire asked consumers to select all applicable items from the following list regarding their “concerns about companies' collection of personal information such as purchase histories and Web site browsing histories”: (1) “insufficient explanation” regarding the use of data; (2) consumers' “lack of veto power”; (3) companies' “information management systems”; (4) concerns about “resale” of information as a “source of profit”; (5) fear of “anonymized” processing; (6) doubts about “collecting more information than necessary”; (7) discomfort with “persistent use of old information”; (8) business ethics (“profit first”); (9) display of “offensive advertisements”; (10) fear of “profiling”; (11) other (free answer). Overall, the item with the highest selection rate was “management systems” (41.4%).
This was followed by “profiling,” with the second-highest selection rate (34.6%). Items with selection rates exceeding 25% were “insufficient explanation” (28.8%), “source of profit (resale)” (28.2%), and “lack of veto power” (25.3%); the other items were selected by around 20% of respondents. No gender differences were found in the selection rates of these concerns. As for age differences, statistical significance was found for individual identifiability (p=0.002) and lack of veto power (p=0.005). e) Relationship with “expectation vs. anxiety” We then cross-tabulated the concern items with “expectation vs. anxiety.” A chi-square test showed significant differences for the following six items, listed in order of overall selection rate: “profiling” (p<0.001), “persistent use of old information” (p=0.032), “anonymization technology” (p=0.009), “collection of more information than necessary” (p=0.001), “offensive advertisements” (p=0.03), and “profit first” (p<0.001). f) Relationship with the desirable forms of control over one's own information Finally, we show the relationship with the desirable forms of control over one's own information. However, since the desired “control contents” allow up to three responses and the “anxiety factors” also allow multiple responses, a chi-square test on a single cross-tabulation table is not appropriate. Therefore, in this paper, chi-square tests were conducted pairwise, taking one item at a time from each of the “control contents” and the “anxiety factors.” The overall trend revealed that the selection rate of “limitation of the retention period” was low for seven of the control contents, while the selection rate of “deletion” was high for all the anxiety factors, with significant differences obtained. This result is fitting given the recent attention to the “right to be forgotten.” CONCLUSIONS AND REMARKS In this study, we have examined the results of a survey of consumer attitudes toward the collection and use of personal information. When asked which they felt more strongly about the data society, “expectations, such as improved convenience” or “risks and concerns, such as information leakage and invasion of privacy,” men tended to select “expectations” and women “concerns.” Those who selected “expectations” used the four major social networking services both under their real names and anonymously, and wanted “confirmation” and “correction” as forms of control over their own information. Those who were “anxious,” on the other hand, tended not to use SNSs and emphasized “deletion,” “limitation of the retention period,” and “restriction on provision to third parties” as their rights to control their own information. In addition, those who feel “uneasy” were more likely to answer “hobbies” or “none” as the kinds of information they could entrust to “information banks.” The importance of data will not diminish in the future; if anything, it will grow. Under these circumstances, we hope that the results of this survey provide some insight into what factors foster expectations and what factors are effective in alleviating anxiety about the use of data. |
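As an editorial aside on the abstract above: the analyses it reports are chi-square tests of independence on cross-tabulated survey responses. The sketch below shows what one such test looks like in practice; the counts are invented for illustration and are not the survey's actual data.

```python
# Illustrative only: a chi-square test of independence such as the abstract's
# "expectation vs. anxiety" by gender comparison. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [260, 340],   # male: [expectation > anxiety, anxiety > expectation]
    [180, 424],   # female: [expectation > anxiety, anxiety > expectation]
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# A p-value below 0.001 would be reported as p<0.001, as in the abstract.
```

For the multiple-response items, the abstract's pairwise approach amounts to running this same test once per (control item, anxiety factor) pair on 2x2 tables of selected/not-selected counts.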
17:00 | Consumer Perceptions of Personal Data Trust Banks in Japan: Results of a Web-Based Questionnaire Survey ABSTRACT. INTRODUCTION In recent years, there has been growing interest in Personal Data Trust Banks (hereinafter PDTB). This study clarifies the current state of, and issues surrounding, consumers' expectations and concerns about PDTBs' use of personal data such as behavioral histories, based on the results of a web questionnaire survey. BACKGROUND The ability to implement scientific marketing by accumulating usage histories from smartphones, shopping cards, and the like has been discussed in both academia and industry. In the post-privacy era, personal data is the “new oil of industry,” and at the same time personal data is positioned as payment for enjoying the provision of various services. Privacy has been transformed from a fundamental human right concerning personal secrets that cannot be covered by property rights into a currency with which one can pay. Under these circumstances, the Personal Data Trust Bank is attracting attention as a data utilization infrastructure that takes users into consideration (that is, considers data utilization from the user's perspective) in order to avoid the “unintended utilization” of personal data. This study introduces and discusses the results of an awareness survey on Personal Data Trust Banks in Japan conducted by the author. METHODS This study employed a questionnaire survey of general consumers, conducted using a self-administered questionnaire tool (Freeasy) provided by I-Bridge Corporation. A total of 1,204 people were surveyed from among the monitors registered with the site, with 86 males and 86 females in each age bracket. It should be noted, however, that because of the survey methodology employed, there are limitations regarding the representativeness of the respondents. The core question asked was, “What information would you be willing to entrust to the Personal Data Trust Bank?” SUMMARY OF SURVEY RESULTS A) Overview of the PDTB Survey In the survey, respondents were asked to indicate the information they would be willing to entrust to a Personal Data Trust Bank from among the following items (multiple responses were allowed): (1) “personal attribute information,” such as address and gender; (2) “family personal information,” such as family structure and occupation; (3) “financial information,” such as income; (4) “personal identification codes,” such as account or credit card numbers; (5) “shopping history information” (purchase history); (6) “movement information” mediated by smartphones; (7) “vital information,” such as pulse, blood pressure, and body temperature; (8) “genetic information,” such as medical history; (9) “hobbies” and similar information; (10) others (free description). B) Overall characteristics Overall, the item with the highest selection rate was “hobbies” (38.5%), followed by “personal attributes” (29.7%) and “purchase history” (29.3%). The item with the lowest selection rate was “other” (5.9%; n=71), for which respondents provided free descriptions. The breakdown of these free responses is as follows: 67 respondents wrote “none,” 2 wrote “don't know,” and 2 gave other answers. The following analysis treats the 67 respondents who answered “none” as a separate category. The only significant gender difference was found for “purchase history” (p=0.004 by chi-square test).
Chi-square tests were conducted for each age group, and significant differences were obtained for “personal attributes” (p<0.001), “purchase history” (p<0.001), “vital information” (p<0.001), “medical history and genetic information” (p=0.003), and “none” (p<0.001). Next, we discuss the results based on the cross-tabulations by age group. The characteristics of the results are as follows. “Personal attributes,” which ranked second in the overall selection rate, was selected most frequently by respondents in their teens and those in their 60s or older. The selection rates for “purchase history” and “vital information” were high among respondents in their 70s and also somewhat high among respondents in their teens. The selection rate for “medical history/previous diseases/genetic information” was high among teens, while the selection rate for “none” was high among those in their 50s. To simplify without fear of misunderstanding: respondents in their teens and those in their 60s or older selected more items that they could entrust to the Personal Data Trust Bank, while those in their 50s selected fewer. C) Relationship with “Expectation vs. Anxiety” Does the information that consumers are willing to entrust to a Personal Data Trust Bank bear any relationship to their “expectations and fears” about the data society as a whole? To examine this question, we cross-tabulated “information that can be entrusted” with “expectation vs. anxiety.” A chi-square test showed significant differences for nine of the items (including “other”); the exception was “medical history/previous illness/genetic information” (p=0.015). Those who selected “expectation > anxiety” more frequently selected the following seven items, in descending order: “purchase history,” “personal attributes,” “vital information,” “transportation information,” “family information,” “account information,” and “income/assets information.” Those who selected “expectation > anxiety” chose an average of 2.1 items of “information that can be entrusted to the bank,” whereas those who selected “anxiety > expectation” chose an average of only 1.8 items. In addition, those who selected “anxiety > expectation” selected “hobbies” and “none” at higher rates, a mirror image of the “expectation > anxiety” group. The data thus reveal that those who were anxious were reluctant to use the Personal Data Trust Bank. D) Relationship with “Desirable Control Contents” Finally, we examine the relationship with the “desirable control contents” (Table 16). First, “hobbies” differed significantly on “deletion,” “collection restrictions,” and “retention period restrictions,” all of which were selected more frequently by respondents with “anxiety > expectation.” It can therefore be inferred that those who were anxious about the collection and use of personal information selected “hobbies” as the safest, or least sensitive, item to entrust to a Personal Data Trust Bank. Similarly, for “none,” which was selected by a high percentage of respondents with “anxiety > expectation,” there were significant differences for “deletion” and “collection restrictions.” Note that this item was not included in the questionnaire but was extracted from the free descriptions in the “others” section.
Therefore, it is safe to assume that these are the responses of those who are most reluctant to use the Personal Data Trust Bank. If so, we can understand that “deletion” and “collection restriction” are the most important control items for those who are reluctant (dare we say negative) toward the idea of a Personal Data Trust Bank. Next, let us consider the “information that can be entrusted to the bank” selected most frequently by those who chose “expectation > anxiety.” Examining the relationship with the “limitation of the retention period,” which was selected by a high percentage of respondents with “expectation > anxiety,” significant differences were obtained for five items: “personal information,” “purchase history,” “family information,” “account information,” and “income/assets.” However, the selection rate was low for all of them. Comparing each item with the “desirable control contents” for which significant differences were obtained reveals the characteristics of the personal information that can be entrusted to the Personal Data Trust Bank. For “personal information,” respondents want to be able to limit what is collected and to be able to view and correct it. For “purchase history,” respondents want restrictions not only on collection but also on use, and not only correction but also suspension of use (deletion). For “family information,” respondents want, in addition to the requirements for “personal information,” restrictions on use and suspension of use (deletion). For “account information” and “income/assets,” only “correction” appears to be emphasized. Furthermore, Table 16 is also suggestive for the four items for which no significant difference was obtained for the “limitation of the retention period.” We have already discussed “none.” “Vital information” was the only item among the “information contents” related to the Personal Data Trust Bank that differed significantly on “restrictions on provision to third parties.” Not only were the selection rates of “restrictions on collection” and “confirmation and correction” high for vital information, but respondents' desire not to have this information used for health-related advertising and sales is also apparent. “Movement information” differed significantly on the same “control contents” items as “personal information,” except for the “limitation of the retention period.” Finally, for “medical history, pre-existing conditions, and genetic information,” the only “desirable control item” showing a significant difference was the “limitation of collection.” CONCLUSIONS AND REMARKS Above, we have examined the results of a questionnaire survey on consumer attitudes toward PDTBs, which have attracted much attention in recent years. In future work, we would like to investigate the relationship with consumers' conceptions of security. |
17:30 | Supporting Independent Living for Older Adults: the Role of Digital Technology and AI in Sweden and Japan PRESENTER: Kiyoshi Murata ABSTRACT. Demographic Change and Independent Living Nowadays, we globally face demographic change caused by aging populations. Japan and Sweden, the two countries examined in this study, are no exception. Life expectancy in both is among the highest in the world: approximately 83 years in Sweden and 84 years in Japan [1]. Demographic aging and other structural changes will increase social expenditures, such as medical costs and public pension spending [2]. By 2040, around 25 percent of the Swedish population will be 65 or older, and many in this age group will be active and healthy for a considerable part of their older years [3]. Japan faces a similar demographic trend: it is estimated that by 2040, the population aged 65 and over will constitute approximately 35 percent of the total population [4]. As the population ages and social security costs rise, supporting independent living becomes a pressing issue in the care sector. For older people, being healthy and living independently is essential for maintaining a sufficient quality of life and for sustaining a participatory relationship with their communities and society. However, physical weakness and declining cognitive abilities can make a completely independent life difficult, so support for independent living enables older people to maximize their independence. Such support encompasses a range of services and assistance, extending beyond formal care services provided by public institutions to include informal support from family members, irrespective of co-residence. Publicly available formal services include home care services and home medical care. Home care services, designed to enable older adults to live as independently as possible within their homes, involve visits from home care workers (referred to as "home helpers" in Japan and "Hemtjänst" in Sweden). These professionals provide a range of support, including assistance with activities of daily living (ADL) such as eating, toileting, and bathing (physical care), as well as instrumental activities of daily living (IADL) such as cleaning, laundry, shopping, and cooking (lifestyle assistance). Furthermore, specialized care businesses offer services such as transportation assistance to medical appointments, including support with boarding, transferring, and alighting from vehicles. Home care services are tailored to individual needs, with the content and scope of services varying based on the degree of assistance required. A collaborative process involving the care recipient, their family, and a care manager determines the specific care needs, culminating in the development of a personalized care plan. The governments of both countries are prioritizing policies that promote independent living among older adults in order to reduce social welfare costs and address the shortage of elderly care services. The Role of Technology in Independent Living for Older Adults Digital technology and artificial intelligence (AI) are revolutionizing elderly home care, empowering aging individuals to maintain independence while enhancing safety and well-being. Wearable devices and health-monitoring tools track vital signs in real time, enabling early detection of health issues and reducing the need for frequent doctor visits.
AI-powered virtual assistants and chatbots provide personalized care, from medication reminders and exercise encouragement to combating loneliness through companionship. Robots and AI-driven devices assist with daily tasks, including mobility support, meal preparation, and home safety monitoring. Smart home technology, exemplified by companies like Careium in Sweden, uses internet-connected devices to monitor seniors' environments and alert caregivers to potential problems, such as falls or prolonged inactivity [5]. Telemedicine is another significant advancement: platforms such as Min Doktor and Livi allow seniors to hold video consultations with healthcare professionals, reducing the need for in-person visits and making care easier to access [6][7][8][9]. AI-powered fall detection devices (e.g., Safe Hub, Doro) automatically alert caregivers or emergency services, while robotic assistants like ElliQ offer companionship and medication reminders and encourage physical activity [10]. AI-driven predictive health monitoring tools (e.g., Pilloxa) help ensure medication adherence and predict potential health risks by analyzing individual data patterns [11]. Cognitive health apps (e.g., MindMate) support individuals with dementia through memory exercises and daily reminders [12]. Virtual reality platforms are also utilized to combat social isolation, offering engaging experiences like virtual travel and social interaction. While Sweden's approach focuses on direct benefits for seniors, Japan emphasizes supporting caregivers. For example, Panasonic's digital care management service uses IoT monitoring to provide valuable data insights for care plan development [13], while NEC offers remote training for care professionals to address staffing shortages and improve care quality [14]. These digital tools and AI technologies hold immense potential to transform elderly home care, promoting independence, enhancing safety and well-being, and supporting family caregivers, leading to more sustainable and compassionate care solutions for aging populations. The Ethical and Practical Considerations of Digital Technology and AI in Aged Care at Home While IT tools and AI offer significant benefits for elderly care at home, there are several risks, medical as well as ethical, that need to be considered. One major concern is privacy and data security. As wearable devices, sensors, and AI-driven solutions collect sensitive health data, there is the potential for data breaches or unauthorized access, putting personal information at risk. Ensuring robust encryption and secure storage of this data is critical. Another risk involves the reliability and accuracy of the technology. AI systems and monitoring devices may malfunction or provide incorrect data, leading to false alarms or missed health issues. This could result in unnecessary stress for caregivers or, conversely, delay timely intervention for the elderly. Additionally, technology may not be fully accessible to all seniors, especially those with limited digital literacy or cognitive impairments, potentially exacerbating existing inequalities in access to care. Over-reliance on technology could also reduce the human element of caregiving. While AI can offer assistance, it cannot replace the emotional connection and personalized care that human caregivers provide. Balancing technology with human involvement is essential to ensure that elderly individuals receive comprehensive, compassionate care.
It is therefore important to consider how technology can be leveraged to provide family members with the skills and support they need as caregivers. Many dedicated family caregivers lack formal training in medical or caregiving tasks. The integration of IT and AI tools can offer substantial support, providing both training resources and real-time assistance to enhance caregiving. Digital platforms and mobile applications offer self-paced training modules that teach family caregivers how to handle basic medical tasks such as administering medication, checking vital signs, or using medical devices; these tools provide step-by-step instructions and videos to improve caregiver competence. Additionally, AI-powered virtual assistants and chatbots can guide family members in daily caregiving routines, providing reminders for medications, doctor’s appointments, or physical exercises, ensuring that important tasks are not forgotten. Remote monitoring systems, such as wearable devices or in-home sensors, help caregivers keep track of the elderly person’s health metrics in real time, such as heart rate, activity levels, or sleep patterns. AI can analyze these data to detect potential health risks, like falls or abnormal vital signs, and alert caregivers immediately. This proactive approach enables early intervention and reduces the stress of constant surveillance. Telemedicine and virtual consultations provide caregivers with direct access to healthcare professionals for advice and guidance without needing to leave home. Family members can discuss concerns, clarify medical instructions, or receive expert opinions, ensuring proper care. Emotional support is also vital, and AI tools can assist here by offering stress-relief strategies, such as guided meditation programs, or even companionship through virtual pets or social interaction platforms for both caregivers and the elderly. By combining training, monitoring, and support, IT and AI tools may empower family members to deliver safe, effective care, reducing their burden and enhancing the quality of life for everyone involved. Digital technology and AI hold immense potential to transform elderly care, promoting independent living, improving safety and well-being, and supporting family caregivers. However, realizing this potential requires careful attention to the ethical and practical considerations discussed, including privacy concerns, data security, technological reliability, and equitable access. A balanced approach that integrates technology with human-centered care, prioritizing the emotional and social needs of older adults, is essential for creating sustainable and compassionate solutions. Future research should focus on developing best practices for ethical implementation and on ensuring that technology empowers both older adults and their caregivers. References [1] The World Bank (2021). Life expectancy (data). https://data.worldbank.org/indicator/SP.DYN.LE00.IN?view=map&year=2019 [2] OECD (2021). Social spending (indicator). doi: 10.1787/7497563b-en [3] Swedish Institute (2021). Elderly care in Sweden: Sweden’s elderly care system aims to help people live independent lives. https://sweden.se/life/society/elderly-care-in-sweden [4] https://www.mhlw.go.jp/stf/newpage_21481.html [5] https://www.careium.com/sv-se/om-careium/future-of-care/ [6] Hansson, J. and Blomqvist Carlsson, N. (2024). Äldre patienters upplevelse av digital rådgivning inom primärvården. Malmö universitet, Hälsa och samhälle. [7] Mester, F. (2024).
Äldres uppfattningar av digitala tjänster. Mälardalens universitet, Akademin för hälsa, vård och välfärd. [8] https://www.mindoktor.se [9] https://livissomsorg.se [10] https://elliq.com/?srsltid=AfmBOoouLYCdcnYv_UjdZwWHJx8La5h9EU7RZUuhEdTBPbJ4X_dr0uA [11] https://pilloxa.com [12] https://www.mindmate-app.com [13] https://tech.panasonic.com/jp/lifelens/dcm.html [14] https://jpn.nec.com/rtrepo/index.html |
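On the remote-monitoring idea described in the abstract above: a minimal, rule-based sketch of inactivity alerting is shown below. It is purely illustrative; commercial systems such as Careium's are far more sophisticated, and every threshold and data point here is hypothetical.

```python
# Hypothetical sketch: flag prolonged inactivity against a per-person baseline
# and notify a caregiver. Not based on any vendor's actual implementation.
from datetime import datetime, timedelta

def inactivity_alert(last_motion: datetime, now: datetime,
                     typical_gap: timedelta, factor: float = 3.0) -> bool:
    """True when the gap since the last motion event is much longer than
    this person's typical gap between movements (the learned baseline)."""
    return (now - last_motion) > typical_gap * factor

def notify_caregiver(message: str) -> None:
    print(f"[ALERT] {message}")  # stand-in for an SMS or app push

if __name__ == "__main__":
    last_motion = datetime(2025, 1, 10, 6, 30)
    now = datetime(2025, 1, 10, 12, 45)
    typical_gap = timedelta(minutes=90)  # assumed, learned from past sensor data
    if inactivity_alert(last_motion, now, typical_gap):
        notify_caregiver("No movement for over 6 hours; please check in.")
```

The trade-off the abstract raises is visible even in this toy: the baseline and factor are design choices, and tightening them trades false alarms (caregiver stress) against missed events (delayed intervention).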
16:30 | Ethical and Moral Implications of Sexbots in Contemporary Society ABSTRACT. 1 Purpose The integration of artificial intelligence (AI) and robotics into intimate technologies has led to the rise of sexbots—machines designed to simulate human companionship and sexual interaction. The global sexbot market is expanding rapidly due to technological advancements, shifting social dynamics, and growing consumer demand. However, this development raises significant ethical, social, and legal concerns. This paper offers a multidimensional analysis of sexbots, focusing on their historical and technological development, ethical dilemmas, legal challenges, and psychological implications. Special attention is given to consent, objectification, gender stereotypes, privacy, and the societal impact of human-robot relationships. The aim is to provide a balanced perspective on whether sexbots contribute to human well-being or pose risks to social norms and relationships. 2 Methodology The study employs a systematic literature review of peer-reviewed articles, policy documents, and ethical discussions published over the past three decades. Drawing from human-computer interaction (HCI), legal studies, and social sciences, the research investigates user attachment to AI companions, ethical concerns around consent and the commodification of intimacy, regulatory frameworks, and the social impact of sexbots on gender dynamics and human relationships. This interdisciplinary approach enables a comprehensive evaluation of the benefits, risks, and implications of sexbots in contemporary society. 3 Findings 3.1 Technological evolution and market trends Sexbots have advanced from passive mannequins to interactive companions with speech, movement, and adaptive behavior. Innovations in materials such as silicone and thermoplastic elastomer (TPE), combined with machine learning, allow these robots to simulate intimacy by responding to user input. Notable early models, such as Realbotix’s “Harmony”, emerged in the early 2010s and were influenced by science fiction portrayals like “Ex Machina” and “Blade Runner” [5]. While still niche, the industry is expanding due to digitalization, rising loneliness, and increased social tolerance. Modern sexbots offer companionship beyond sexual gratification, raising both opportunities and ethical questions. 3.2 Objectification and the problem of consent A key ethical dilemma is sexbots’ role in reinforcing gender-based objectification, particularly of women and children [1]. Many are designed to mimic stereotypically submissive female characteristics, perpetuating harmful gender norms. Kathleen Richardson argues that such devices resemble "artificial sex slaves," reflecting dominance-subordination dynamics [4]. Another major issue is consent. Unlike human relationships, sexbot interactions lack mutual agreement and ethical reciprocity, potentially distorting users' understanding of consent, agency, and boundaries [6]. Over time, such interactions may erode empathy and weaken interpersonal respect [12]. The emergence of childlike sexbots is particularly controversial. Critics argue they normalize deviant behavior and pose legal and ethical risks [3]. Countries such as Australia and the UK have banned their production and distribution, citing concerns about objectification, dehumanization, and child protection [4]. International bodies like UNICEF and UNESCO emphasize that such technologies violate human dignity and contradict global child protection efforts.
3.3 Legal and regulatory challenges Sexbot regulation varies globally. Countries such as Japan and Germany, with minimal restrictions, have fostered industry growth, while Australia and the UK have imposed bans on childlike robots, citing ethical concerns [9]. In the United States, the absence of federal regulations has led to a fragmented legal landscape, with laws differing at the state level. Beyond legal restrictions, data privacy is a significant concern. Sexbots collect sensitive user data, including sexual preferences and biometric information, raising ethical questions about ownership, consent, and security. The lack of clear regulations on data usage increases the risk of exploitation and unauthorized access [3]. Additionally, questions about content ownership—whether the data belongs to the user, manufacturer, or AI system—remain unresolved, further complicating accountability [12]. These gaps may disproportionately impact gender dynamics in AI companionship, particularly regarding consent and embedded biases. 3.4 Psychological and social implications Sexbots have been proposed as tools for improving mental health, particularly for those experiencing social isolation, low self-esteem, or anxiety related to intimacy [10]. They can provide simulated emotional support, offering a sense of closeness as a substitute for human relationships [8]. Benefits include reducing loneliness among elderly individuals and assisting in sexual therapy by providing a judgment-free environment for exploration. However, the long-term psychological effects remain uncertain [9]. While they may alleviate loneliness, overreliance on artificial companionship could deepen social isolation [11]. Users might develop a preference for machine-based relationships, weakening interpersonal skills [2]. Since sexbots simulate emotions without genuine affective capacity, they create an illusion of reciprocity that could reinforce difficulties in forming human relationships [4]. Another critical issue is the ethical and legal use of sexbots in therapy. While some researchers see potential benefits, others warn about risks, making the debate complex. The asymmetrical nature of human-machine interactions challenges traditional notions of consent and emotional reciprocity. Without standardized guidelines, integrating sexbots into therapeutic settings may present unintended risks for vulnerable populations [4]. 3.5 Philosophical perspectives Masahiro Mori’s “uncanny valley” theory remains relevant in understanding the discomfort people experience when interacting with robots that appear nearly—but not quite—human [7]. This phenomenon impacts both user acceptance and emotional attachment, as overly realistic sexbots can provoke unease rather than intimacy. Posthumanist discourse further complicates the ethical landscape. The evolution of sexbots from purely sexual devices to emotional companions blurs the line between human and machine. These “allodolls” challenge traditional notions of relationships, identity, and emotional exchange [14]. Sexbots are often seen as quasi-humans—simulating emotional behaviors without possessing consciousness. While they may serve as surrogate partners for those lacking human contact, critics argue they contribute to the dehumanization of intimacy. Favoring unidirectional, consequence-free interactions may weaken empathy and redefine what constitutes meaningful human connection [13].
4 Future trends 4.1 Technological developments Sexbot technology continues advancing, generating both enthusiasm and controversy. Personalization enhances user interaction but raises ethical concerns about autonomy, consent, and manipulation. Advances in materials science create increasingly lifelike robots, intensifying debates on objectification and human-robot intimacy. Inclusive design aims to address the needs of older adults, individuals with disabilities, and marginalized groups, offering therapeutic benefits while avoiding stigmatization. 4.2 Recommendations for stakeholders Policymakers should establish regulations to safeguard user data, prevent gender-based objectification, and promote transparency in interactions. Manufacturers must adopt ethical design principles to ensure sexbots do not reinforce harmful gender stereotypes. Only through interdisciplinary collaboration—bridging sociology, psychology, and engineering—can we properly evaluate these technologies' societal implications and chart a responsible path for development. 5 Conclusion Sexbots represent a rapidly evolving technology that may provide companionship and emotional support, particularly for those experiencing loneliness or social difficulties. However, they also pose ethical challenges, including objectification, reinforcement of gender stereotypes, consent issues, and social isolation. The lack of cohesive legal frameworks necessitates the development of standards that minimize risks while ensuring ethical innovation. Collaboration between researchers, industry leaders, and policymakers is crucial to fostering sustainable and socially responsible progress. As an emerging but controversial technology, sexbots require a careful balance between innovation and ethical considerations to prevent the destabilization of social norms and human relationships. |
17:00 | Next Steps to Reduce the Gender Gap in Cybersecurity ABSTRACT. This paper reflects on the various research and outreach projects conducted by the author on the factors underlying the lack of women in the cybersecurity field and its impact on the workforce gap. The lack of women in cybersecurity has a significant negative impact: it limits the diversity of perspectives and experiences on cybersecurity teams, potentially leading to less effective threat identification, mitigation strategies, and overall security posture, and it hinders innovation within the field by shrinking the pool of talent. It has been argued that such a limited talent pool can: 1) reduce threat detection: a team whose members have different backgrounds and ways of thinking is better equipped to identify and understand a wider range of cyber threats, which can be crucial in proactively addressing emerging security risks; 2) limit innovation: women often bring unique perspectives and approaches to problem-solving, which can lead to more creative solutions and advancements in cybersecurity technologies; 3) perpetuate gender stereotypes: this mindset creates a work environment where recruiting and retention are challenging, as it reinforces the stereotype that the field is primarily for men, discouraging women from pursuing careers in this area; 4) widen the talent gap: with a small pool of qualified cybersecurity professionals, companies may struggle to find the expertise necessary to protect effectively against cyber threats; 5) affect specific threats: reports [1] state that certain cyber threats, such as online harassment and targeted attacks against women, might be better understood and mitigated with a more diverse cybersecurity workforce. The talent gap is growing, and threats are becoming more numerous and more sophisticated. A diverse pool of talent can only be built if strategies are in place that create a pipeline within the school system, showcase leaders and share their stories, and create curricula through which male allies and women can demystify the industry and demonstrate that every individual can pursue a cybersecurity post-secondary education and a career path in which they can thrive. Many articles highlight the importance of reducing the gap in the cybersecurity field. For example, in Cybercrime Magazine, Osborne [1] notes that women will hold 30 percent of cybersecurity jobs globally by 2025, with female representation expected to reach 35 percent by 2031. Furthermore, a survey of 2,000 female STEM undergraduate students in 26 countries spanning six regions conducted by BCG [2] indicates that “Solving both of these cybersecurity challenges—the staffing shortfall and the gender-based inequity—begins with opening STEM doors to women and girls. But the effort can’t stop at early-stage access. It must gain breadth and depth as women advance in the field so that they can fully participate in cybersecurity throughout a career trajectory.” In light of this, the author shares the experiences, lessons learned, and the various methodologies used to develop a pipeline into cybersecurity for high school students (9th-12th grade in the USA).
The author recently received a three-year grant from an initiative to support a unique model that includes collaboration with college students, educators, the university community, and industry members to provide outreach activities that foster inclusive opportunities in cybersecurity and empower young women and students in underserved areas. This paper highlights the process of achieving this goal through a combination of outreach at high schools located in rural and remote southern Utah and by leveraging over a decade of the author's successful existing outreach programs. The program includes hands-on activities and opportunities for high school students to network with industry members as role models, allowing the students to build their confidence, develop other necessary skill sets, and prepare for an education and/or career in cybersecurity and technology. The motivation of this ongoing research is to help create and develop programs that help students earn while they learn in-demand skills for high-wage careers. Through this grant, the author has an opportunity to foster partnerships between education and industry that provide students with work-based learning and apprenticeship opportunities while addressing the workforce needs of high-demand industries. The paper uses lessons learned from past research findings as a starting point to discuss how the author has begun strategizing with community members to create a pipeline into cybersecurity. It also highlights how engaging young girls and boys can turn them into role models and male allies who spark interest in cybersecurity and STEM fields. While addressing the factors mentioned above, this paper focuses on some of the pragmatic ways educators can motivate and retain diverse employees in the cybersecurity field, thereby contributing to changing structures and environments through increased representation of women and underrepresented groups. The paper carries the four categories from the author's K-12 research findings into the context of the workplace environment. For this ongoing research, the paper continues to use the “9 Strategies to Improve Gender Diversity in the Security Workforce” [3] and the Microsoft and Kesar [4] articles as starting points to highlight examples of ways to retain women in this ever-evolving field. The strategies include: 1. Support Competitions and Scholarships Specifically for Women; 2. Set Up Internship Opportunities; 3. Use Inclusive Language in Hiring Efforts; 4. Involve Women in Recruitment; 5. Provide Opportunities for Lateral Growth; 6. Enable Employees to Pursue External Certifications; 7. Consider Women Who Are Rejoining the Workforce; 8. Offer Fair and Equitable Compensation; and 9. Organize Pathways for Advancement. This research and these outreach projects will thus help identify the skills and perspectives needed and create a pipeline into the cybersecurity field; in doing so, the research can help close the skills gap and improve the quality of cybersecurity talent. References 1. Osborne, C. (2023). Women to Hold 30 Percent of Cybersecurity Jobs Globally by 2025. Cybercrime Magazine, London. Retrieved from https://cybersecurityventures.com/womenin-cybersecurity-report-2023/ 2.
Panhans, D., Hoteit, L., Yousuf, S., Breward, T., Wong, C., AlFaadhel, A., & AlShaalan, B. (2022). Empowering Women to Work in Cybersecurity Is a Win-Win. BCG Report. Retrieved from https://www.bcg.com/publications/2022/empowering-women-to-work-in-cybersecurity-is-awin-win 3. Wolff, J. (2020). 9 Strategies to Improve Gender Diversity in the Security Workforce. Security Intelligence. Retrieved from https://securityintelligence.com/articles/9-strategies-for-retaining-women-in-cybersecurity-and-stem-in-2020/ 4. Microsoft and Kesar, S. (2018). Closing the STEM Gap: Why STEM classes and careers still lack girls and what we can do about it. Retrieved from https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE1UMWz |
17:30 | Impact of Gender Bias in the Output of AI Language Models on Heavy Users ABSTRACT. Research highlights how AI-generated narratives often reflect gender stereotypes, associating feminine characters with family and emotions, and masculine characters with politics and war. The key goals are to investigate how the gender bias of AI models influences the perceptions and decisions of frequent users, and to explore ways of adapting findings from prior studies to improve AI interactions and reduce the impact of bias in outputs. Students input five prompts into the AI, with outputs varying between the generic masculine (experimental group) and gender-neutral/feminine-masculine forms (control group). They respond to questions about their desired childhood career (categorized by gendered or neutral terms) and rate the difficulty of gendered and neutral professions on a scale of 1-5. Responses are evaluated by gender and group (experimental vs. control). Cluster analysis via SPSS will be used to create user personas (personalized fictional characters). The experiment has received approval from the ethics committee, which was required because participants remain unaware of the study's purpose so as to avoid biased results. |
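On the persona step mentioned above: the abstract names cluster analysis via SPSS; the sketch below shows the same idea with k-means in scikit-learn, purely as a hedged illustration. The feature coding and all data points are invented, not the study's materials.

```python
# Hypothetical sketch of deriving user personas by clustering survey responses.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns (assumed coding): difficulty rating of a gendered profession (1-5),
# difficulty rating of a neutral profession (1-5), group (1 = experimental).
X = np.array([
    [4, 2, 1], [5, 2, 1], [4, 3, 1],
    [2, 2, 0], [3, 3, 0], [2, 3, 0],
])

X_scaled = StandardScaler().fit_transform(X)   # put features on one scale
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

for label in sorted(set(kmeans.labels_)):
    members = X[kmeans.labels_ == label]
    # Each cluster's mean profile (in raw units) seeds one persona.
    print(f"Persona {label}: mean responses {members.mean(axis=0).round(2)}")
```

Choosing the number of clusters, and hence the number of personas, is itself a design decision; silhouette scores or domain judgment typically guide it.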
18:00 | The Epistemic Politics of Biometric Border Control PRESENTER: Marie Eneman ABSTRACT. This paper investigates ethical and sociotechnical dilemmas surrounding the implementation of biometrics such as facial recognition and digital fingerprints at a large Swedish airport, part of the EU’s Entry/Exit System (EES). We describe the EES as a supranational infrastructure aimed at enhancing border security across national and European levels. Drawing on empirical fieldwork from site visits, interviews, and document analysis, we explore the tensions inherent in this development. While biometrics are framed as solutions to strengthen security and efficiency, they also raise serious concerns about democratic values and rights. We draw on the concept of approximation to show how these technologies operate not by confirming fixed identities, but by inferring degrees of similarity to predefined risk profiles. This epistemic logic, based on probabilistic reasoning, introduces new forms of judgement at the border that may obscure legal standards, reinforce bias, and shift the locus of authority from humans to opaque algorithmic systems. The paper identifies several dilemmas and highlights how biometric systems risk normalising advanced surveillance within everyday border control. We position the airport as a techno-political space shaped by public–private partnerships and algorithmic infrastructures, and call for critical scrutiny of the epistemic assumptions and power asymmetries these systems reproduce. As biometric borders continue to evolve, there is a pressing need for an ongoing, critical interrogation of the normative assumptions, power asymmetries, and epistemic practices embedded in algorithmic border control. Without such reflection, we risk entrenching surveillance systems that not only monitor movement but fundamentally reshape how personhood, risk, and legitimacy are understood at the border. If borders are no longer about who you are, but about who the system believes you might be, what does that mean for the future of rights, accountability, and democratic oversight? |
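To make the abstract's notion of approximation concrete: biometric matching yields a similarity score between a live capture and an enrolled template, and a threshold converts that score into a decision. The toy sketch below (invented embeddings and threshold; actual EES matching is proprietary and far more complex) illustrates why such a system asserts resemblance rather than confirming identity.

```python
# Toy illustration of probabilistic biometric matching, not a real system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
template = rng.normal(size=128)                     # enrolled face embedding
probe = template + rng.normal(scale=0.4, size=128)  # live capture, with noise

score = cosine_similarity(template, probe)
THRESHOLD = 0.8  # where to draw the line is a policy choice, not a fact
print(f"similarity = {score:.3f} -> "
      f"{'match' if score >= THRESHOLD else 'no match'}")
# The output is a degree of similarity crossing a threshold; the gap between
# that and "this is the same person" is the epistemic concern the paper raises.
```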
16:30 | The Impact of Misogynistic, Sexist and anti-Feministic Speech in Social Media on Youth and the Role of AI PRESENTER: Anna Maria Piskopani ABSTRACT. EXTENDED ABSTRACT. Web 2.0 has previously been understood as an online public sphere that enables people to construct identities and communities, to influence official politics, and to promote democracy (Dahlgren, 2005). Thus, one of the newly proposed digital rights is the right to participate in the digital online sphere. The European Declaration of Digital Rights advocates for the protection of this right: everyone should have access to a trustworthy, diverse, and multilingual digital environment, on the justification that access to diverse content contributes to a pluralistic public debate. The common element between the importance of the right to participate in the digital public sphere and the notion of well-being is the metaphor of “human flourishing” (Logan et al., 2023). Due to gender patterns, there are gaps between how young men and women interact in online spaces. Young men are more likely to share content online, look for information, and read news about politics, while young women more often go online for social interaction and relationship maintenance (Bode, 2017). Additionally, young women are more likely to feel that when they share their views, they are exposed to bullying, harassment and hate speech online; 60% of women (compared to 31% of men) feel that their gender is the reason for the reactions to their views (UNDP, 2021). The rise of misogynistic, sexist and anti-feministic online speech on social media in recent years poses the question of whether youths (young people aged 18 to 24) are affected by this, how so, and what role AI algorithms and new AI moderation techniques play in it. Recent studies in the UK investigated online misogyny and the manosphere, developed and validated a misogyny scale (Rottweiler & Gill, 2021), and tested this scale in a university population (Broyd, 2024). Researchers also recently conducted a survey in the UK to examine the impact of anti-gendered speech; it revealed that women primarily fear being trolled or targeted by misogynists when they express themselves online and share photos (Stevens et al., 2024). There is evidence that social media algorithms privilege extreme material over less provocative content, so that misogynistic speech viewed by young people carries over into their offline behaviour towards girls (Cowood, 2024). Additionally, AI-generated tools have added new forms of online abuse and harassment of women, such as sexualised deepfakes (Gestoso, 2024). There is a legitimate fear that anti-feministic, sexist and misogynistic content normalises certain ways of thinking and increases inequalities between groups (Theilen, 2023), and that it amounts to an attempt to silence women and deepen gender inequality, with severe political and social implications. In this paper, we present part of the findings of our research project “Gendered exclusion and wellbeing on the internet” (University of Nottingham, Horizon Institute of Digital Economy, Welfare Campaign), which focuses on the internet as a technology that affects people's wellbeing.
Our project's key aims are to explore how misogynistic, sexist and anti-feminist speech creates barriers that prevent people from sharing opinions and thoughts on social media and from generally feeling able to participate on online platforms, and how this directly affects their wellbeing. We conducted quantitative research on social media users. Additionally, as the University of Nottingham attracts students from multiple cultural and ethnic backgrounds, we conducted qualitative research with University of Nottingham students and staff. Our first aim was to investigate: (A) how they are affected by the use and rise of misogynistic, anti-feminist, hostile and manosphere online speech; (B) whether they have experienced gendered digital exclusion or felt that their gender made it more difficult for them to participate due to fear of being attacked. Additionally, we focused on the role that AI technologies play in this context. Given that generative AI tools can help online users create misogynistic, sexist and anti-feminist content, we investigated whether young people have encountered such AI-generated content (for example, sexualised deepfake videos and images, AI-generated images that reinforce gender stereotypes, and AI chatbots making misogynistic jokes or comments) and whether they believe they can recognise AI-generated content. As AI moderation services on social media can automatically make moderation decisions (refusing, approving or escalating content) and continuously learn from their choices, we asked online users, University of Nottingham students and staff, whether algorithms on certain platforms show them content that they regard as misogynistic, sexist or anti-feminist. We also asked whether content they had posted was ever detected by AI algorithms as misogynistic, sexist or anti-feminist when it was not, or whether content that should have been detected was not. Finally, we shared these views and concerns with various professional stakeholders, such as social media and tech industry developers, civic organisations, and policymakers, to discuss effective ways to mitigate the phenomenon. The aim of this study is not only to capture the impact of such speech on mental health and online participation, but also to raise public awareness about the extent of the phenomenon and its social harms, to suggest ways to mitigate it, and to provide resources for educators and individuals to address and openly discuss these issues. In this paper we present our research findings on young people aged 18 to 24 in comparison with the rest of the population, and examine whether young people are more or less likely to take advantage of the opportunities offered by communication on social media. References: Bode, L. (2017). Closing the gap: gender parity in political engagement on social media. Information, Communication & Society, 20(4), 587-603. https://doi.org/10.1080/1369118X.2016.1202302. Cowood, F. (2024, June 20). The rise of the aggro-rithm: Can misogynistic content be stopped? The Guardian. https://www.theguardian.com/a-matter-of-connection/article/2024/jun/20/the-rise-of-the-algorithms-can-misogynistic-content-be-stopped-ai. Gestoso, P. (2024, January 8). Techno-Patriarchy: How AI is Misogyny's New Clothes. https://patriciagestoso.com/2024/01/08/techno-patriarchy-how-ai-is-misogynys-new-clothes/. Dahlgren, P. (2005). The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation.
Political Communication, 22(2), 147-162. https://doi.org/10.1080/10584600590933160. Kruse, L. M., Norris, D. R., & Flinchum, J. R. (2018). Social Media as a Public Sphere? Politics on Social Media. The Sociological Quarterly, 59(1), 62-84. https://doi.org/10.1080/00380253.2017.1383143. Logan, A. C., Berman, B. M., & Prescott, S. L. (2023). Vitality Revisited: The Evolving Concept of Flourishing and Its Relevance to Personal and Public Health. International Journal of Environmental Research and Public Health, 20(6), 5065. https://doi.org/10.3390/ijerph20065065. Rottweiler, B., & Gill, P. (2021). Measuring Individuals' Misogynistic Attitudes: Development and Validation of the Misogyny Scale. https://doi.org/10.31234/osf.io/6f829. Stevens, F., et al. (2024, March). Understanding gender differences in experiences and concerns surrounding online harms: A nationally representative survey of UK adults. The Alan Turing Institute. Retrieved 19 August 2024 from https://www.turing.ac.uk/sites/default/files/2024-03/understanding_gender_differences_in_experiences_and_concerns_surrounding_online_harms_-_a_nationally_representative_survey_of_uk_adults.pdf. Theilen, J. (n.d.). Article 20: Digital inequalities and the promise of equality before the law. Digital Rights are Charter Rights - Essay Series. Digital Freedom Fund. Retrieved 2 October 2024 from https://digitalfreedomfund.org/digital-rights-are-charter-rights-essay-series/. UNDP (2021). Civic participation of youth in a digital world - Europe and Central Asia. Available at www.undp.org/eurasia/publications/civic-participation-youth-digital-world, accessed 4 February 2024. |
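The abstract above describes AI moderation services that automatically refuse, approve or escalate content and learn from their choices. A minimal sketch of that three-way decision pattern, assuming a generic toxicity score in [0, 1] (the scoring heuristic, lexicon, and thresholds are hypothetical, not any platform's real system):

    # Minimal sketch of a three-way moderation pipeline
    # (approve / escalate / refuse); everything here is illustrative.

    def toxicity_score(text: str) -> float:
        """Stand-in for a real classifier: a trivial keyword heuristic."""
        flagged = {"example_slur", "example_insult"}  # placeholder lexicon
        words = text.lower().split()
        return sum(w in flagged for w in words) / max(len(words), 1)

    def moderate(text: str, approve_below: float = 0.2,
                 refuse_above: float = 0.8) -> str:
        """Automatic decision with a human-review band in the middle."""
        score = toxicity_score(text)
        if score < approve_below:
            return "approve"
        if score > refuse_above:
            return "refuse"
        return "escalate"  # ambiguous cases go to human moderators

    # Reviewer outcomes on escalated items can later serve as labels for
    # retraining, which is one way such systems "learn from their choices".
    print(moderate("a perfectly ordinary post"))  # -> "approve"

The middle band is where the abstract's concerns about over- and under-detection arise: both thresholds are tunable, so what counts as misogynistic content to the system is a design decision, not a fixed property of the text.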
17:00 | Ethical Considerations and Governance Frameworks in the Digital Age PRESENTER: Maria Albertina Rodrigues ABSTRACT. AI, Social Networks, and Their Influence on Youth: Ethical Considerations and Governance Frameworks in the Digital Age. Introduction. The integration of artificial intelligence (AI) into social media platforms predominantly used by young people has introduced ethical challenges that require urgent attention. AI-driven algorithms influence content selection, shape digital interactions, and affect youth behavior in ways that raise concerns about privacy, mental health, algorithmic bias, and digital literacy. These concerns align with global initiatives such as the Global Digital Compact, which advocates for a human-centered, rights-based, and inclusive digital future. This paper explores the ethical considerations surrounding AI deployment in youth-oriented social media environments, emphasizing governance structures and regulatory frameworks that ensure AI-driven technologies prioritize youth well-being. As organizations like UNA Portugal advocate for responsible digital policies, this discussion becomes particularly relevant within both the Portuguese and global digital landscapes. Additionally, this paper draws on research concerning the transformation of ICT values during uncertain times (Saraiva et al., 2021), analyzing how crises such as the COVID-19 pandemic accelerate digital transformation and reshape ethical norms. The three main research questions of this proposal are: (1) What are the main ethical implications of AI in social media for young users? (2) What is the role of governance in AI ethics? (3) What lessons can be learned from a case study of digital transformation? These questions guide the paper's exploration of AI ethics, policy frameworks, and the evolving role of AI in youth-oriented digital environments. Chapter 1 - The Ethical Implications of AI in Social Media for Young Users. AI systems are the backbone of social media platforms, influencing user experiences through content recommendation algorithms, targeted advertising, and moderation tools. While these technologies optimize engagement and personalization, they also present ethical risks, particularly for young users, who may be more susceptible to digital harm. The most critical concerns include: 1. AI-Driven Content Moderation and Its Impact on Youth Well-Being. Recommendation algorithms optimize content for engagement, often amplifying emotionally charged material. Research suggests that prolonged exposure to extreme content can negatively affect youth mental health, contributing to anxiety, depression, and harmful self-perception (Tavares, 2003). This paper examines how ethical AI design can mitigate these risks and promote youth digital well-being. 2. Privacy and Data Protection in Social Networks. Young users often share personal data without fully understanding AI tracking and profiling mechanisms. AI systems collect vast amounts of personal information to refine targeting strategies, raising concerns about user autonomy and data governance. The Global Digital Compact calls for stronger policies that safeguard digital rights, emphasizing the role of regulations such as the EU's General Data Protection Regulation (GDPR) and Portugal's digital policies in ensuring responsible AI use. 3.
Algorithmic Bias, Discrimination, and Representation in AI. AI models inherit biases from training data, leading to discriminatory outcomes in content recommendation and moderation. Marginalized youth communities are disproportionately affected, limiting their visibility and access to digital opportunities. This research investigates how ethical AI frameworks can address algorithmic discrimination and ensure fair representation in online spaces (Saraiva et al., 2021). 4. Ethical Considerations in AI-Enhanced Targeted Advertising for Young Audiences. AI-driven advertising systems exploit behavioral data to personalize ads, sometimes blurring the line between marketing and coercion. The COVID-19 pandemic saw a rise in AI-powered e-commerce and digital advertising, raising questions about ethical consumption and youth autonomy (Saraiva, 2022). This paper assesses the impact of AI-enhanced advertising on youth decision-making and explores potential policy interventions. 5. Transparency and Explainability of AI Systems in Youth-Oriented Platforms. Many AI-driven systems operate as "black boxes", giving users little clarity on how content is selected, moderated, or restricted. Youth may struggle to understand why certain posts appear in their feeds or why they are subjected to particular algorithmic decisions. Transparency and explainability in AI governance are crucial to fostering trust and digital literacy. This research explores methods for designing AI systems that prioritize fairness, interpretability, and accountability. Chapter 2 - The Role of Governance in AI Ethics. The governance of AI-driven social networks requires a multi-stakeholder approach involving governments, policymakers, tech companies, educators, and civil society organizations. The Global Digital Compact promotes human-centered AI policies, while national initiatives, such as UNA Portugal, contribute to discussions on AI ethics. 1. Regulatory Considerations. While the GDPR provides a robust legal framework for digital rights protection in Europe, additional measures are required to address emerging AI-related risks. This study reviews existing AI governance frameworks and identifies regulatory gaps that need to be closed to enhance AI accountability and compliance. 2. Educational Initiatives for Digital Literacy. Ensuring ethical AI development requires robust digital literacy programs that empower young users to engage critically with social media environments. Schools, universities, and NGOs play a fundamental role in equipping youth with the knowledge and skills needed to navigate the digital landscape responsibly. This paper examines best practices in digital literacy education and their impact on AI awareness among young users. 3. AI and Civic Engagement Among Young People. AI-driven platforms have the potential to foster youth civic engagement by providing access to diverse information sources and enabling digital activism. However, concerns about algorithmic bias, political polarization, and misinformation challenge AI's role in democracy. This research investigates how AI can be leveraged to strengthen civic participation while minimizing misinformation risks. Chapter 3 - Lessons Learned: A Digital Transformation Case Study. The COVID-19 pandemic accelerated AI adoption across multiple domains, reshaping ethical considerations and digital governance priorities (Saraiva et al., 2021). Key lessons include: 1.
Adjustments in ethical values related to AI consumption: the pandemic highlighted concerns about digital security, privacy, and mental health, leading to greater demand for transparency in AI systems. 2. AI in distance learning and education: online education became a necessity, with e-learning platforms playing a central role; however, concerns about accessibility, surveillance, and algorithmic bias emerged. 3. The need for stronger AI governance: increased reliance on digital platforms during the pandemic highlighted the need for ethical AI policies that protect young users from exploitation and misinformation. 4 - Methodology. To ensure a robust empirical foundation, this paper employs a mixed-method approach, combining quantitative data analysis and qualitative research. 1. Quantitative Analysis. Survey on AI Awareness and Digital Literacy: a large-scale survey of Portuguese youth (ages 13-24) will assess their understanding of AI, privacy concerns, and social media experiences. This strand will also include: Sentiment Analysis of AI-Generated Content: using QuestionPro software, this analysis will examine sentiment trends in social media posts directed at youth. Algorithmic Bias Testing: using AI models trained on different datasets, the study will evaluate disparities in content recommendations based on gender, ethnicity, and socioeconomic status. 2. Qualitative Analysis. Expert Interviews: AI ethicists, policymakers, and educators will provide insights into governance challenges and AI literacy initiatives. This strand will also include: Policy Review: a comparative analysis of AI governance frameworks, with a focus on the Global Digital Compact, the GDPR, and Portuguese digital policies. 5 - Conclusion. As AI continues to shape young minds on social media, ethical considerations must remain at the forefront of technological development and governance. This research fosters interdisciplinary discussions that bridge AI ethics, policy, education, and youth advocacy. By integrating insights from global initiatives like the Global Digital Compact and local efforts by UNA Portugal, this paper aims to contribute to a more inclusive and responsible digital future. |
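The methodology above mentions evaluating disparities in content recommendations across demographic groups. One common way to quantify such a disparity, sketched minimally below under stated assumptions (the records, field names, and the disparate-impact-style ratio are illustrative choices, not the authors' protocol):

    from collections import defaultdict

    # Hypothetical recommendation log: which content category was shown
    # to users in which demographic group. Data and fields are made up.
    recommendations = [
        {"group": "A", "category": "educational"},
        {"group": "A", "category": "entertainment"},
        {"group": "B", "category": "entertainment"},
        {"group": "B", "category": "entertainment"},
        {"group": "A", "category": "educational"},
        {"group": "B", "category": "educational"},
    ]

    def recommendation_rates(records, category):
        """Share of each group's recommendations falling in `category`."""
        shown = defaultdict(int)
        total = defaultdict(int)
        for r in records:
            total[r["group"]] += 1
            if r["category"] == category:
                shown[r["group"]] += 1
        return {g: shown[g] / total[g] for g in total}

    rates = recommendation_rates(recommendations, "educational")
    # Ratio of the worst-served to best-served group: values well below
    # 1.0 suggest one group is systematically shown less of this content.
    ratio = min(rates.values()) / max(rates.values())
    print(rates, round(ratio, 2))  # e.g. {'A': 0.67, 'B': 0.33} 0.5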
17:30 | Social Media as Learning Ecosystems: Ethical Considerations and the Role of Artificial Intelligence in Youth Skill Development PRESENTER: Nuno Silva ABSTRACT. Social media platforms have transformed the way young adults interact with information, yet their potential as effective learning environments for skills development remains largely unexplored. This study investigates how young people (18-24 years old) make decisions about acquiring new skills, the extent to which social media influences these choices, and how AI-driven tools can optimize equitable learning pathways within digital ecosystems (Roshanaei et al., 2023). Using a multi-stage research design, this study first analyzes the decision-making patterns of young adults regarding skills acquisition. Through large-scale surveys conducted in Portugal, we identify key motivational drivers, preferred learning formats, and the external factors that shape their choices. The second phase examines the role of social media as an informal learning space, assessing which platforms, content types, and engagement mechanisms are most effective in supporting skills development. Finally, the study evaluates the potential of AI-driven interventions, such as personalized content recommendations, adaptive learning pathways, and interactive digital mentors, to improve decision-making and optimize learning outcomes. Preliminary findings show that young adults make decisions about skills development based on perceived career relevance, peer influence, and the accessibility of learning resources, with social media serving as a primary but unstructured source of information. While traditional platforms provide fragmented learning opportunities, AI-enhanced systems have the potential to structure and personalize these experiences, bridging the gap between passive content consumption and active skills acquisition. This study provides a comprehensive framework for integrating AI-driven mechanisms into social learning environments, offering insights for educators, policymakers, and platform developers on how to leverage social media for more effective, data-driven skills development among young adults. References: Roshanaei, M., Olivares, H. and Lopez, R.R. (2023). Harnessing AI to Foster Equity in Education: Opportunities, Challenges, and Emerging Strategies. Journal of Intelligent Learning Systems and Applications, 15, 123-143. |
18:00 | Impact of Personal Growth on Adolescents’ Decisions to Study on Social Media PRESENTER: Nuno Silva ABSTRACT. Nowadays, social media offers an important source of learning. In particular, social networks provide learning communities and discussions while adding Artificial Intelligence (AI) tools to facilitate learning and knowledge sharing. Our objectives include identifying motivation triggers and selecting effective digital tools that facilitate learning and knowledge sharing. This paper takes a theoretical approach to adolescents' choices and to what these lead to in practice after the age of 17. The decision-making styles and moral development of adolescents in the context of social networks involve complex interactions between cognitive, emotional, and social factors (Elwadhi, 2024; Jacob & Agarwal, 2024). Adolescence is an important transition in life. It is a time of rapid development, of spending more time with friends and less with family, and of trying out different personal styles and hobbies, all signs of growth towards adulthood (Yang and Laroche, 2011). Adolescents are figuring out who they are and what they believe, besides adjusting to a changing body. There is controversy over the influence of peers on adolescents' motivation to achieve academic results in school (Goodlad, 1984). Different opinions and discussion with a friend might also provoke changes in adolescents' main decisions. Berndt et al. (1990) investigated whether friends' discussions would lead to an increase in the similarity of their decisions on motivation-related dilemmas. Social media is one of the most popular online activities, and adolescents are considered its heaviest users (Nuñez-Rola & Ruta-Canayong, 2019); these authors concluded that there should be guidelines for choosing social media content used as classroom teaching material and as a tool to enhance learning. Parental involvement apparently exerts a significant effect on adolescents' use of social network sites and on academic motivation (Minimol and Angelina, 2015). The way adolescents make decisions is one of the behavioral indicators of autonomy (Zimmer-Gembeck and Collins, 2003), with autonomy defined as the capacity for self-direction and self-regulation (Hill & Holmbeck, 1986). According to Barber (1996), the influence of parental control on adolescent decision making has two dimensions: behavioral (attempts to control adolescent behavior) and psychological (parental control that interferes with adolescent emotional and psychological development through manipulation, criticism, and other means). The results of Pérez & Cumsille's (2012) research show that both parental control practices and adolescent temperamental dimensions are associated with decision-making, concluding that in middle and late adolescence it is beneficial for adolescent welfare to increase autonomy in decision-making on personal and multifaceted issues. Sachisthal et al. (2020) investigated the relationships between the network characteristics of Year 10 students in Australia and their decision to enroll in a science course in the last year of secondary school, supporting the validity of network theory in the context of science interest, as central indicators apparently play an influential role in the network. Adolescents are often portrayed as passive receivers of social influence, easily swayed by their peers.
However, following the results of Slagter et al.'s (2023) study, adolescents are also motivated to gain information about the preferences of specific peers, and thus play a role in selecting the sources that may inform their decisions, demonstrating a preference for their friends over non-friends, as well as for peers who are perceived as trustworthy. Moral decision making is a complex process involving many components. Adolescents begin to make choices within a behavioural ecology where, for many of them, issues emerge in the decision-making process (Susanu, 2023), and most of them operate at Kohlberg's conventional level, where societal norms and relationships play a central role in moral reasoning. Cognitive growth, such as the development of formal operational thinking (as per Piaget), supports this progression by enabling adolescents to consider multiple perspectives. Morally developed adolescents are more likely to use social networks responsibly for studying, with the ability to critically evaluate information found on social networks and to engage in respectful and constructive interactions, creating a positive environment for collaborative learning and helping them discern between reliable and unreliable sources for their studies. Concerning the development of independence and identity, there is a need for a deeper understanding of the impact of social media on adolescent moral development and of the crucial role of parents in guiding adolescents towards responsible social media usage (Hermansyah et al., 2024). Artificial intelligence (AI), which can be described as a computer system that behaves intelligently, as if a human being were so behaving (McCarthy et al., 1955), has helped to solve many complex problems in the world. When such systems were first introduced, they were used only in some specialized domains, but with advances in machine learning techniques their applications have broadened (Ha et al., 2022). In contrast to the traditional concept of artificial intelligence, explainable artificial intelligence (XAI) aims to provide explanations for prediction results and to make users perceive the system as reliable. If AI systems are designed in a more transparent way, and are thus more explainable from the user's point of view, they will be considered more reliable and will contribute to users' satisfaction. Sadler et al. (2016) reported that the level of system transparency affects not only trust but also the results obtained from the system. We therefore believe that the transparency and explainability of AI systems (XAI) in youth-oriented social networks are critical for building trust, ensuring ethical use, and fostering digital literacy among adolescents. Many AI-powered apps used in social networks function as "black boxes", making it difficult for users to understand how decisions (e.g., recommendations or content moderation) are made. Therefore, building on the work of He et al. (2023) in the medical domain, we believe it is necessary to integrate the studying needs of adolescents and to build user-centered XAI design practices in the domain of social networks. With this new approach to XAI systems, adolescents could be drawn to using social networks to study, as a further resource in their academic preparation. Finally, important research questions emerged from this research: How and what do adolescents learn on social networks? Is decision style related to moral development?
What about the transparency and explainability of AI systems in adolescent-oriented social networks? Our approach is to extract explanation-content needs through a comprehensive review of the existing literature and to integrate the results into a systematic account of adolescents' XAI needs when studying on social networks. References: 1. Berndt, T., Laychak, A. & Park, K. (1990). Friends' Influence on Adolescents' Academic Achievement Motivation: An Experimental Study. Journal of Educational Psychology, 82(4), 664-670. 2. Elwadhi, S. (2024). The Impact of Social Media on the Decision Making of Youth: A Survey-Based Analysis. Innovative Research Thoughts, 10(2), 57-69. DOI: 10.36676/irt.v10.i2.08. 3. Goodlad, J. (1984). A place called school. New York: McGraw Hill. 4. Ha, T., Sah, Y., Park, Y. & Lee, S. (2022). Examining the effects of power status of an explainable artificial intelligence system on users' perceptions, 41(5), 946-958. 5. He, X., Hong, Y., Zheng, X. & Zhang, Y. (2023). What Are the Users' Needs? Design of a User-Centered Explainable Artificial Intelligence Diagnostic System. International Journal of Human-Computer Interaction, 39(7), 1519-1542. 6. Hermansyah, D., Syaharuddin, & Amir, L. S. (2024). The influence of social media usage on the ethical and moral behavior of high school adolescents. International Conference on Education, Teacher Training, and Professional Development, 35-44. 7. Hill, J., & Holmbeck, G. (1986). Attachment and autonomy during adolescence. Annals of Child Development, 3, 145-189. 8. Jacob, P. & Agarwal, S. (2024). Impact of Social Media on the Decision-making Process of Students. http://dx.doi.org/10.2139/ssrn.4927276. 9. McCarthy, J., Minsky, M. L., Rochester, N. & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Retrieved from http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html. 10. Minimol, K.T. & Angelina, J. M. (2015). Balancing Social Network Sites Usage Among Teenagers With Parental Involvement: Effects on Academic Motivation. Indian Journal of Positive Psychology, 6(1), 57-62. 11. Nuñez-Rola, C. & Ruta-Canayong, N. (2019). Social Media Influences to Teenagers. International Journal of Research Science & Management. DOI: 10.5281/zenodo.3260717. 12. Pérez, J. C. & Cumsille, P. (2012). Adolescent temperament and parental control in the development of the adolescent decision making in a Chilean sample. Journal of Adolescence, 35, 659-669. 13. Sadler, G., Battiste, H., Ho, N., Hoffmann, L., Johnson, W., Shively, R., … Smith, D. (2016). Effects of Transparency on Pilot Trust and Agreement in the Autonomous Constrained Flight Planner. Paper presented at the 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC). 14. Slagter, S. K., Gradassi, A., van Duijvenvoorde, A. C. K. & van den Bos, W. (2023). Identifying Who Adolescents Prefer as Source of Information Within Their Social Network. Scientific Reports. https://doi.org/10.1038/s41598-023-46994-0. 15. Susanu, N. (2023). Moral Development in Adolescents. New Trends in Psychology, 5(1), 30-34. 16. Yang, Z., & Laroche, M. (2011). Parental responsiveness and adolescent susceptibility to peer influence: A cross-cultural investigation. Journal of Business Research, 64(9), 979-987. 17. Zimmer-Gembeck, M. & Collins, W. (2003). Autonomy development during adolescence. In G. R. Adams & M. D. Berzonsky (Eds.), Blackwell Handbook of Adolescence (pp. 175-204). Malden, MA: Blackwell Publishing. |
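The abstract above contrasts "black box" recommendations with XAI systems that expose why a decision was made. A minimal sketch of one simple form this can take, assuming a linear relevance score whose per-feature contributions are shown alongside the recommendation (the feature names and weights are hypothetical illustrations, not a system from the paper):

    # Illustrative sketch of inspectable recommendations: with a linear
    # score, each feature's contribution can be reported to the user.
    # Feature names and weights below are hypothetical.

    WEIGHTS = {
        "matches_study_topic": 2.0,
        "posted_by_followed_account": 1.0,
        "high_engagement": 0.5,
    }

    def recommend_with_explanation(item_features: dict[str, float]):
        """Return a relevance score plus per-feature contributions,
        so the 'why' of the recommendation is visible to the user."""
        contributions = {
            name: WEIGHTS.get(name, 0.0) * value
            for name, value in item_features.items()
        }
        score = sum(contributions.values())
        explanation = sorted(contributions.items(), key=lambda kv: -kv[1])
        return score, explanation

    score, why = recommend_with_explanation(
        {"matches_study_topic": 1.0, "high_engagement": 0.8}
    )
    print(f"score={score:.2f}")          # score=2.40
    for feature, contribution in why:
        print(f"  {feature}: +{contribution:.2f}")

Real recommender systems are far more complex, but the design point stands: when contributions are surfaced rather than hidden, adolescents can see that an item was suggested because it matches their study topic rather than merely because it drives engagement.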