Keynote speaker: Juho Leinonen
10:30 | Explainable AI (XAI) in a societal citizen perspective – power, conflicts, and ambiguity ABSTRACT. Explainable AI (XAI) as a research field has grown exponentially over the last decade, driven by a curiosity to investigate "the inside" of AI models that appear as black boxes and by the development of techniques and methods for doing so. Recent definitions frame XAI as a process encompassing both data and application, explicitly underscoring that XAI should be regarded as more than just techniques and methods. The human stakeholder perspective is clearly emphasized in these definitions, but what a human stakeholder focus means in organizational and societal settings is currently unexplored. This paper therefore aims to explore how the concept of XAI can be applied in societal settings by first presenting a layered theoretical understanding of XAI, suggesting that societal explainability dimensions might be grasped through an analysis of the current discourse surrounding AI in the public press. We perform a discourse analysis on a sample of news articles published in the Norwegian public press in the early summer of 2024 concerning Meta's plan to use personal data to train AI models. The aim of the analysis is to provide insight into how the articles create a perception of reality relating to the future AI system Meta aims to develop. Our analysis reveals that the public is presented with oppositions constructing a reality around this future AI system, with shifting power dynamics, conflicting interests, and ambiguity in responsibility as the major themes in the current discourse. We end by discussing the implications of our analytical approach and findings. |
10:50 | A Deep-Learning Based Approach for Multi-class Cyberbullying Classification Using Social Media Text and Image Data PRESENTER: Israt Tabassum ABSTRACT. Social media sites like Facebook, Instagram, Twitter, and LinkedIn have become crucial for content creation and distribution, influencing business, politics, and personal relationships. Users often share their daily activities through pictures, posts, and videos, with short videos particularly popular due to their engaging format. However, social media posts frequently attract mixed comments, both positive and negative, and the negative comments can in some cases take the form of cyberbullying. To identify cyberbullying, a deep-learning approach was employed using two datasets: one self-collected and one public. Nine deep-learning models were trained: ResNet-50, CNN, and ViT for image data, and LSTM-2, GRU, RoBERTa, BERT, DistilBERT, and a Hybrid (CNN+LSTM) model for textual data. The experimental results showed that the ViT model excelled in multi-class classification on the public image data, achieving 99.5% accuracy and an F1-score of 0.995, while the RoBERTa model outperformed the other models on the public textual data, with 99.2% accuracy and an F1-score of 0.992. For the private dataset, the RoBERTa model for text and the ViT model for images were developed, with RoBERTa achieving an F1-score of 0.986 and 98.6% accuracy, and ViT obtaining an F1-score of 0.9319 and 93.20% accuracy. These results demonstrate the effectiveness of RoBERTa for text and Vision Transformer (ViT) for images in classifying cyberbullying, with RoBERTa delivering nearly perfect text classification and ViT excelling in image classification. |
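The abstract above does not include implementation details. As a rough, hypothetical illustration of the text branch only, fine-tuning RoBERTa for multi-class comment classification with Hugging Face Transformers could look roughly like the sketch below; the checkpoint, label count, data files, and hyperparameters are assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code): fine-tuning RoBERTa for
# multi-class cyberbullying classification with Hugging Face Transformers.
# Checkpoint, number of classes, file names, and hyperparameters are assumed.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_CLASSES = 5  # assumed number of cyberbullying categories

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=NUM_CLASSES)

# Assumes CSV files with "text" and "label" columns; the real datasets differ.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()
```

The image branch (ViT) would follow the same pattern with an image processor and an image-classification model head.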
11:10 | Kunstig intelligens og kreativitet ABSTRACT. This article presents the results of a study examining the impact that the use of generative artificial intelligence can have on the individual creativity of knowledge workers in the consulting industry. The study is based on interviews with nine informants from two consulting firms. We find that the use of generative AI can have both positive and negative effects, and we discuss important preconditions for AI to promote rather than hinder human creativity. The study contributes to research on the significance of AI for organizations and is relevant not only for the consulting industry but also for other forms of knowledge work. |
11:30 | Managing responsible AI in organizations ABSTRACT. This paper focuses on the intersection between responsible artificial intelligence and organizational management. With the rapid advancement of AI, numerous questions emerge. Some of the most serious questions relate to how AI can be implemented and used responsibly in organizations. Managers play an important role in addressing these concerns. Conversely, the implementation of AI also affects managers, eliciting ethical considerations. Reviewing 28 empirical studies, we examine the current state of research in this field. |
11:50 | Algorithmic Profiling in the Workplace: Employee Perceptions and Technostress ABSTRACT. Algorithmic profiling is becoming a common practice in workplaces, aimed at enhancing productivity and security. However, it raises concerns about employee privacy, algorithmic aversion, and technostress. This paper examines two cases of algorithmic profiling in a Norwegian municipality: a Security Awareness Program tailored to employee behaviors and a User Behavior Analytics (UBA) system that monitors endpoint activities. Using technostress theory, we investigated how algorithmic profiling affects employee sentiments, focusing on privacy concerns, perceived invasiveness, and stress responses. Our mixed-method case study reveals concerns about algorithmic fairness and heightened stressors such as techno-overload and techno-insecurity. The findings suggest that while algorithmic profiling can enhance productivity, it can also induce technostress, particularly through techno-insecurity, techno-complexity, and techno-invasion. To mitigate these challenges, ethical implementation and transparency are critical. We also provide recommendations for organizational practices and future research directions. |
10:30 | Automatic 3D Segmentation of Closed Mitral Valve Leaflets on Transesophageal Echocardiogram PRESENTER: Maïlys Hau ABSTRACT. Heart disease is a leading cause of death worldwide, with mitral valve (MV) disease being among the most prevalent pathologies. The MV constitutes a complex three-dimensional apparatus, which makes clinical assessment challenging. It would therefore be highly desirable to have a patient-adapted model of the mitral annulus shape and its leaflets for diagnosis, intervention planning, and follow-up purposes. The main objective of this work is two-fold: to improve the quality of the valve segmentation using modern architectures and to extend it to a sequence of 3D ultrasound recordings covering the entire systolic phase. For training, we used a dataset of 108 volumes that were semi-automatically segmented using a commercially available package. We tested several network architectures and loss functions available in the MONAI package to investigate which are best suited for the task at hand, aiming for processing times fast enough to be usable in practice. Our method was evaluated on 30 recordings and compared to annotations made by two expert echocardiographers. The comparison metrics include Average Surface Distance (ASD), Hausdorff Distance 95% (HSD 95%), and standard classification metrics. We obtained a Dice score of 77.06±13.18% on the evaluation set and distance errors of 0.09±0.12 mm for ASD and 0.49±0.43 mm for HSD 95%, and the segmentations were considered comparable to the ground truth by clinicians. The proposed annotation method was significantly faster than one of the previous works and yielded results comparable to the state of the art using a noisier ground truth. |
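The abstract reports Dice, Average Surface Distance, and 95th-percentile Hausdorff Distance; as a minimal illustration of how these metrics can be computed with the MONAI package mentioned above, a sketch might look as follows. The tensor shapes and dummy data are assumptions, not the authors' evaluation pipeline, and distances here come out in voxel units rather than the millimetres reported in the paper (voxel spacing would have to be applied).

```python
# Minimal sketch (assumed shapes, not the authors' pipeline): computing the
# reported evaluation metrics with MONAI on one-hot segmentation volumes.
import torch
from monai.metrics import (DiceMetric, HausdorffDistanceMetric,
                           SurfaceDistanceMetric)

dice = DiceMetric(include_background=False, reduction="mean")
asd = SurfaceDistanceMetric(include_background=False, symmetric=True)
hd95 = HausdorffDistanceMetric(include_background=False, percentile=95)

# Dummy one-hot volumes of shape (batch, classes, D, H, W); real inputs would
# be the binarized network output and the semi-automatic reference masks.
pred = torch.randint(0, 2, (1, 2, 64, 64, 64)).float()
gt = torch.randint(0, 2, (1, 2, 64, 64, 64)).float()

for name, metric in [("Dice", dice), ("ASD", asd), ("HD95", hd95)]:
    metric(y_pred=pred, y=gt)
    print(name, metric.aggregate().item())
```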
11:00 | Automatic Segmentation of Hepatic and Portal Veins using SwinUNETR and Multi-Task Learning PRESENTER: Shanmugapriya Survarachakan ABSTRACT. Accurate segmentation of the hepatic and portal veins plays a vital role in planning and guiding liver surgeries. This paper presents a novel approach using multi-task learning (MTL) within the SwinUNETR architecture to segment the hepatic and portal veins at the same time. The MTL framework is trained using the Dice-Focal loss and designed with two decoder branches, one each for the hepatic and portal vein branches. The results on clinical CT data show strong performance for both the hepatic and portal veins compared to the base model (SwinUNETR), especially in the early stages of training. Notably, the MTL model achieved statistically significant improvements for portal vein segmentation compared to the base model after 100 epochs. Our proposed MTL model (SwinUNETR_MTL) achieved a dice similarity coefficient (DSC) of 0.8404 for the hepatic vein and a DSC of 0.8120 for the portal vein. Our findings suggest that the MTL model attains faster convergence and increased segmentation accuracy, making it a promising approach for segmenting complex structures in a clinical setting. |
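The abstract does not spell out how the two decoder branches are trained together. Assuming a model that returns separate hepatic and portal logits, one hedged sketch of a combined Dice-Focal multi-task loss (branch weights and tensor shapes are assumptions, not the published SwinUNETR_MTL configuration) is shown below.

```python
# Minimal sketch (assumptions, not the published SwinUNETR_MTL code):
# summing Dice-Focal losses from two decoder branches, one per vein system.
import torch
from monai.losses import DiceFocalLoss

loss_fn = DiceFocalLoss(sigmoid=True)  # one binary mask per branch (assumed)

def mtl_loss(hepatic_logits, portal_logits, hepatic_gt, portal_gt,
             w_hepatic=0.5, w_portal=0.5):
    """Weighted sum of per-branch Dice-Focal losses (weights are assumed)."""
    return (w_hepatic * loss_fn(hepatic_logits, hepatic_gt)
            + w_portal * loss_fn(portal_logits, portal_gt))

# Dummy tensors of shape (batch, 1, D, H, W) standing in for CT patches.
hep_out = torch.randn(1, 1, 96, 96, 96)
por_out = torch.randn(1, 1, 96, 96, 96)
hep_gt = torch.randint(0, 2, (1, 1, 96, 96, 96)).float()
por_gt = torch.randint(0, 2, (1, 1, 96, 96, 96)).float()
print(mtl_loss(hep_out, por_out, hep_gt, por_gt))
```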
11:30 | Enhancing Cell Detection with Transformer-Based Architectures in Multi-Level Magnification Classification for Computational Pathology PRESENTER: Jarl Sondre Sæther ABSTRACT. Cell detection and classification are important tasks in aiding patient prognosis and treatment planning in Computational Pathology (CPATH). Pathologists usually consider different levels of magnification when making diagnoses. Inspired by this, recent methods in Machine Learning (ML) have been proposed that utilize the cell-tissue relationship across different levels of magnification when detecting and classifying cells. In particular, a new dataset named OCELOT was released, containing overlapping cell and tissue annotations based on Hematoxylin and Eosin (H&E) stained Whole Slide Images (WSIs) of multiple organs. Although good results were initially reached on the OCELOT dataset, they were all obtained with models based on Convolutional Neural Networks (CNNs) that lag years behind today's state of the art in Computer Vision (CV). The OCELOT dataset was later posted as an online challenge, yielding submissions with newer architectures. In this work, we explore the use of transformer-based architectures on the OCELOT dataset and propose a new model architecture specifically designed to leverage the added tissue context, which reaches state-of-the-art performance with an F1 score of 72.62% on the official OCELOT test set. Additionally, we explore how the tissue context is used by the models. |
10:30 | PRESENTER: Guru Bhandari ABSTRACT. Efficient detection and mitigation of Distributed Denial of Service (DDoS) attacks targeting Internet of Things (IoT) infrastructure is a challenging task in the field of cybersecurity. Y. Jia et al. propose Flowguard, an extraordinary solution to this problem that relies on inspecting network flow statistics using statistical models and Machine Learning (ML) algorithms. Flowguard utilizes the CICDDoS2019 dataset and the authors' own dataset. The authors did not provide the source code or the complete dataset, yet, motivated by their findings, we decided to reproduce Flowguard. However, we ran into numerous theoretical and practical challenges. In this paper, we present all of the issues related to Flowguard's foundations and practical implementation. We highlight the false and missing premises as well as methodological flaws, and lastly, we attempt to reproduce the flow classification performance. We dismantle Flowguard and show that it is unrelated to IoT due to the absence of IoT devices and communication protocols in the testbeds used to generate both their dataset and CICDDoS2019. Moreover, Flowguard applies nonsensical statistical models and uses an overfitted ML model that is inapplicable in real-world scenarios. Furthermore, our findings indicate that Flowguard's binary ML classification results were manipulated: they were presented in a misleading manner and improperly compared against another paper's multi-class classification results without a reference. Our results show that Flowguard did not solve the problem of DDoS detection and mitigation in IoT. |
11:00 | Security Architecture for Distribution System Operators: A Norwegian Perspective PRESENTER: Martin Gilje Jaatun ABSTRACT. Power distribution is becoming increasingly vulnerable to external cyber threats due to the interconnectivity between the OT and IT systems at the Distribution System Operator's (DSO) premises. Security architectures provide a system overview and simplify the implementation of security measures. However, few works explain the development and design of such a security architecture for the DSO. This paper proposes a future-oriented security architecture for Norwegian DSOs, developed with a design science approach based on interviews and meetings with the industry, existing security standards, and smart grid guidelines. The architecture includes national systems (e.g., Elhub) and near-future smart grid developments (e.g., Advanced Distribution Management Systems). The architecture highlights the need to consider the implications of the DSO's future digital developments, responsibilities, and functionalities in other countries. Future research should investigate the people and processes related to DSO premises to complement the technology perspective. |
11:30 | Fuzz Testing of a Wireless Residential Gateway PRESENTER: Noah Holmdin ABSTRACT. The rise of cyber-attacks against ever-expanding network connectivity has created a need for security assessments of home gateway devices, which serve as junctures between private and public networks. Fuzzing, a method where invalid, random, or unexpected data is injected into a system, has emerged as a potential candidate for such assessments. This study is centered around testing the feasibility of fuzzing home gateway devices, using an action research methodology focused on evaluation through practical implementation. An important aspect of fuzzing is the implementation of monitoring tools to capture data that causes the target to behave unexpectedly. This study found that both a process monitor and a network monitor are essential for overseeing the fuzzing session: the process monitor tracks the status of the target process, while the network monitor captures network traffic between fuzzer and target. The findings demonstrate that fuzzing is an effective tool for conducting security assessments of home gateway devices. |
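The study's concrete fuzzing tooling is not described in the abstract. As a hedged illustration of the basic loop (mutate an input, send it to the gateway, check that the target still responds), a minimal sketch is given below; the target address, port, seed message, and liveness check are assumptions, and a real setup would add the network monitor (packet capture) discussed in the abstract.

```python
# Minimal sketch (assumptions, not the study's setup): fuzzing a network
# service on a home gateway while checking whether the target stays alive.
import random
import socket

TARGET = ("192.168.1.1", 80)  # assumed gateway address and service port
SEED = b"GET / HTTP/1.1\r\nHost: gateway\r\n\r\n"

def mutate(data: bytes) -> bytes:
    """Flip a few random bytes in the seed message."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def target_alive() -> bool:
    """Crude stand-in for a process monitor: can we still open a connection?"""
    try:
        with socket.create_connection(TARGET, timeout=2):
            return True
    except OSError:
        return False

for i in range(1000):
    payload = mutate(SEED)
    try:
        with socket.create_connection(TARGET, timeout=2) as s:
            s.sendall(payload)
    except OSError:
        pass  # failed sends are themselves worth logging
    if not target_alive():
        print(f"Target unresponsive after case {i}; saving payload for triage")
        with open(f"crash_{i}.bin", "wb") as f:
            f.write(payload)
        break
```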
Keynote speaker: Øystein Haugen
14:15 | Trenger vi læringsassistenter? Hvordan kan hverandre-vurdering og/eller egen-vurdering gi god læringsutbytte. ABSTRACT. This study examines the effect of teacher assessment (LV), self-assessment (EV), and peer assessment (HV) on the learning outcomes of 528 students in an introductory informatics course. The students were randomly divided into three groups, each receiving a different type of feedback on a compulsory assignment. The results showed that all interventions led to a significant improvement in student performance from first draft to final submission (p < 0.05). The HV group had the largest mean increase in points (M = 0.52), followed by LV (M = 0.45) and EV (M = 0.36). On the exam, the HV group achieved the highest mean score (67.08%), compared with EV (62.85%) and LV (62.59%). These findings highlight the potential of alternative assessment methods, particularly peer assessment, as effective tools for promoting learning and performance. The study contributes to research on assessment in higher education and provides insight into the benefits and challenges of each approach. The results have important implications for assessment practice and underline the need to combine self-assessment and peer assessment with traditional teacher assessment to create a more holistic and student-centered assessment environment. |
14:45 | Applying Liljedahl’s Thinking Classrooms in a Higher Education Digital Technology Course ABSTRACT. This paper explores student learning experiences in the ICT course Digital Technology using one specific pedagogical approach, Peter Liljedahl's "Thinking Classroom". We assigned 25 students to random groups to solve a network problem on whiteboards and conducted in-depth interviews with 10 participants. Thematic analysis revealed improved communication and academic focus, though knowledgeable students still took more active roles. While most students felt engaged, some noted drawbacks such as increased energy demands, awkwardness, and reduced autonomy, and expressed concerns about using this method for summative assessments. |
15:15 | Programmeringsparadokset etter Sfard og Leron ABSTRACT. Sfard and Leron (1996) observed that students work more diligently and succeed more often when they program than when they do analytic mathematics, even though the programming task solves the same mathematical problem in a more general form. Here we review relevant literature and theory to discuss why this may be the case. This raises interesting questions both about how we can use programming to strengthen mathematical understanding and about how programming is best learned. |
15:45 | Students' perceptions toward essential functionalities and qualities of peer code assessment tools ABSTRACT. Peer code assessment (PCA) empowers computer science students, enhancing their learning and equipping them with practical skills for industry work. However, instructors often face a scarcity of well-designed tools customized for the learning environment. Additionally, there is a knowledge gap concerning the latest generation of peer assessment tools. So the question is: how could a PCA tool be designed to support students' programming learning experience? A case study was conducted using the PeerGrade tool, employing interviews and observation as data generation methods to answer this question. The research aimed to identify features that enhance student learning and to uncover essential qualities of peer code assessment tools from the students' perspective. Informants considered inline comments, general comments, rubrics, threads, and code editor functionalities essential. They also highlighted the importance of a customized, user-friendly tool with a step-by-step process. Self-Regulated Learning (SRL) theory was used as a theoretical lens. Based on SRL, we find that implementing the identified tool qualities and features will improve the learning experience by increasing students' motivation and enabling them to follow their learning strategies. The findings can be used as design principles for developing peer code assessment tools. |
16:15 | A Case Study on Student Perspective of Peer Code Review (PCR) PRESENTER: Attiqa Rehman ABSTRACT. Peer Code Review (PCR) is a professional practice and a learning method. A case study on PCR was conducted in a “Programming Languages” course in the fall semester of 2023 at the Norwegian University of Science and Technology (NTNU). A new protocol for peer code review was implemented where the students received a suggested solution and instructor feedback on their solutions before reviewing their peers’ solutions. The motivation for the protocol was to reduce the students’ cognitive load, allowing them to focus on assessing the code produced by their peers. A survey among the students showed that they engaged in the PCR and found the workload reasonable. The survey also indicates that students found the review work an opportunity for learning even under the new protocol. |
14:15 | Enhanced Anomaly Detection in Industrial Control Systems aided by Machine Learning ABSTRACT. This paper explores the enhancement of anomaly detection in industrial control systems (ICSs) by integrating machine learning with traditional intrusion detection. Using a comprehensive dataset from the Secure Water Treatment (SWaT) facility at iTrust labs, we leverage both network traffic and process data to improve the detection of malicious activities. This hybrid approach significantly improves detection capabilities by capturing both network anomalies and process deviations, addressing gaps in traditional intrusion detection systems for increasingly interconnected ICSs. The findings contribute to a deeper understanding of anomaly detection techniques, providing actionable insights to improve the security posture of critical infrastructure. |
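The abstract does not name a specific learning algorithm. As one hedged illustration of the general idea on the process-data side (flagging deviating process states with an unsupervised model that could then be correlated with network-based alerts), a scikit-learn sketch is given below; the input file, feature columns, and contamination rate are assumptions, not details from the paper.

```python
# Minimal sketch (assumptions, not the paper's method): unsupervised anomaly
# detection on ICS process measurements, one way ML can complement a
# traditional network-based intrusion detection system.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical CSV of sensor/actuator readings (e.g. flow, level, valve state).
process = pd.read_csv("process_data.csv")
features = process.select_dtypes("number")

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(features)

# -1 marks process states the model considers anomalous; in the hybrid
# approach these would be cross-checked against network-traffic alerts.
process["anomaly"] = model.predict(features)
print(process["anomaly"].value_counts())
```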
14:45 | PRESENTER: Rebeka Toth ABSTRACT. Ensuring information security means not only improving the technical controls protecting business data confidentiality and integrity but also managing the human factor. One of the key user weaknesses is susceptibility to emotional manipulation, which cybercriminals exploit to trick their victims into taking insecure actions. Phishing emails are the easiest and most widespread form of cyberattack. In this article, we study the correlation between the emotions users have when they receive phishing emails and their subsequent behavior toward those emails. The research consists of two phases: (1) a self-reflection survey, in which respondents assess their emotions and behavior toward presented emails, and (2) a field study, in which respondents are sent simulated phishing emails and all actions taken after receiving them are recorded. The research has confirmed the importance of emotions as one of the key factors affecting user behavior toward phishing emails. Moreover, we have found that the range of emotions makes no difference, whereas their intensity does: the more intense the emotions, the more likely users are to take insecure actions induced by the fraudster. |
15:15 | Elevation of MLsec: a security card game for bringing threat modeling to machine learning practitioners ABSTRACT. Machine learning based systems have seen massive adoption over the last few years. These systems bring with them inherent risks that must be handled as they are brought into new domains where the risk landscape may change in unforeseen ways. While a lot of machine learning security research focuses on offensive security, not enough emphasis is put on security by design and the creation of more resilient systems. Threat modeling and risk analysis will likely play an important role in the future of machine learning security, assisting the shift-left movement. I propose Elevation of MLsec, a threat modeling game inspired by Elevation of Privilege. The game is intended to get more ML practitioners started with threat modeling and support them in building more secure ML systems. I describe the objectives, risk framework mapping, design considerations, and testing experiences from the creation of the game. |
NIK meets Bergen Language Design Laboratory (BLDL): https://conf.researchr.org/home/bldl-15
The program includes a talk from Bjarne Stroustrup