NIKT 2024: NORWEGIAN ICT CONFERENCE FOR RESEARCH AND EDUCATION
PROGRAM FOR WEDNESDAY, NOVEMBER 27TH

09:00-10:00 Session 20: NOKOBIT Keynote

Keynote speaker: Björn Þór Jónsson

Location: Storsalen
09:00
On The Importance of Database Techniques for Multimedia Applications

ABSTRACT. In this day and age, it is rare to find multimedia applications that focus on small media collections, yet the research community largely continues to do research using yesterday’s small-scale benchmark collections. In this talk, we argue for the need for scalability and explain why it eventually boils down to database techniques. We present two successful scalability projects from the past, addressing large-scale copy detection and interactive learning of image preferences, respectively. Finally, we describe a current multimedia analytics project where we have made some advances but further techniques are needed.

10:15-12:00 Session 21A: UDIT 3: Assessment
Location: Storsalen
10:15
Rettferd i variantoppgåver med tilfeldig trekking

ABSTRACT. In automated tests that students should be able to retake many times, there is a need to create large pools of task variants that are drawn randomly from one attempt to the next. For formative use of the tests, variants are needed to prevent students from eventually answering everything correctly simply by memorizing answers, and for summative use, variants are needed to prevent retakes from giving an unfair advantage over taking the test on the first attempt. A possible problem with random selection among many variants arises if the variants differ in difficulty, so that students can be more or less lucky with which tasks are drawn for them. This article examines some task variants that were used in both formative practice tests and summative mastery tests in an introductory Python programming course for first-year university students, specifically tasks intended to test understanding of if statements in Python. The questions we consider are the following: 1) What analyses can the teacher perform to check whether task variants are fair? 2) Were the tasks fair in this particular case? 3) To the extent that variation in difficulty between variants was found, what caused it? The questions were investigated through statistical analysis of anonymized student performance on the tasks. The findings suggest that even though the code complexity was the same in all variants, there was some variation in difficulty. A possible explanation may lie in the relationship between the code and the textual formulation of the task text or the answer alternatives. The article concludes with ideas for how the tasks could have been given in a fairer way.

10:45
Automated adaptive testing vs. linear testing in undergraduate mathematics

ABSTRACT. We conduct a lab-based randomized controlled trial with 47 undergraduate students in mathematics, comparing an automated adaptive testing system, which adjusts difficulty based on performance, to traditional linear tests. Results show that students using the adaptive test scored 26.2 percentage points higher on a subsequent exam than those in the linear test group (p < 0.05). Feedback indicates that participants found the system user-friendly, believed it could improve their performance, and valued the tailored feedback, particularly guidance on focus areas.

11:15
Insights from the Perspective of Examiners on the Justification of Grades in Higher Education in Norway

ABSTRACT. Students in higher education in Norway have the right to ask for justification for and/or complain about the grades they receive. Drawing on an online survey of examiners (n = 54) at a Norwegian university college, we report quantitative results and open-ended comments about the justifications given to students. Guided by Winston and Boud (2020), we contribute insights that fill a gap in the literature. Our results show that the examiners experienced many requests for justification, which they perceived as time-consuming. They suspected that the information system for exams and grades (WISEflow, in our case) made it easy for students to request justifications. At the same time, they were positive that feedback should be given to students. We conclude that there is a need for more research on the usefulness of the justification of a given grade in higher education. Our findings reveal that the examiners requested better guidelines for the purpose and content of justifications (descriptive feedback only, or advice on how to improve grades) and for how to motivate students to be receptive and learn from justifications. Thus, there is great potential for future studies pertaining to this topic.

10:15-12:00 Session 21B: NOKOBIT: Session two (Information Systems, Data Analytics, and Sustainability)
10:15
Using Techniques from Neuroscience in Information Systems Modelling Research: A Systematic Mapping Study

ABSTRACT. A lot of research has been done on the use of conceptual models in information systems development. In other areas that work with knowledge representations, such as linguistics and software engineering, techniques from neuroscience have been applied to study the biological and neurological processes involved in working with textual knowledge structures, in tasks such as program code debugging. So far, such techniques have only to a limited degree been applied to our understanding of visual conceptual models. We argue for the utility of applying these techniques in information systems modelling research as well, and present a structured mapping study on the use of techniques from neuroscience to investigate how we work with visual conceptual models. The main approach is based on techniques used in multi-modal learning analytics, which investigates how performance on learning tasks is correlated with biometric data, collecting data in parallel from EEG, eye-tracking (ET), wristbands, and facial expression (through cameras). Through this study, we also identify gaps in our knowledge of information systems modelling, which can be filled by extending the collection and analysis of biometric data during modelling activities.

10:35
Information overload – A case study of using an integrated electronic health record system in the emergency room

ABSTRACT. Background: In modern healthcare services, physically distributed and disciplinary specialized healthcare teams must cooperate under pressure and instantly share increasing amounts of patient information. Therefore, Electronic Health Record (EHR) systems are deployed to facilitate data sharing.

Purpose: We wanted to understand how Emergency Room (ER) teams use EHR systems in their daily practice. ER can be seen as an extreme case for EHR deployment because of extreme time pressure and the need for cross-disciplinary collaboration.

Methods: We used an interpretative case study approach, investigating an EHR system used in the ER department of a large hospital. We interviewed ER personnel, observed EHR use in real-world ambulatory settings, and consulted various documents to create a deeper understanding of the case.

Results: EHR systems are increasingly important in ER personnel’s practices. At the same time, the information in the EHR system can be perceived as irrelevant and duplicated, which leads to perceived information overload. Moreover, gaining an overview of relevant information under time pressure is difficult, which can lead to stress and frustration.

Conclusion: EHR systems play an increasingly important role in healthcare services and can sometimes be the only source of information about patients, such as in ER. ER teams must cooperate with multiple healthcare disciplines, often under extreme time and space conditions. Therefore, ER should be used as an important case for EHR design.

10:55
”It should last long without harming the environment”: perspectives on sustainability in an environmental and historical project
PRESENTER: John Krogstie

ABSTRACT. This paper explores the role of sustainability in an environmental and historical IT-supported project named Gaia Vesteralen. The project utilises spatial AR/projection mapping technology to promote environmental and cultural awareness. We applied a qualitative research approach to explore the sustainability aspects of the project. Applying the Sustainability Analysis Framework (SusAF), a widely used approach to sustainability design, revealed issues related to projection mapping technology. The results suggest that sustainability is a multifaceted concept, and the project aims to balance different dimensions of sustainability while delivering engaging, immersive experiences to a diverse museum audience. The complexities of integrating sustainability into technologically driven initiatives are carefully balanced with goals of environmental, social, and economic responsibility. The findings suggest that the Gaia Vesteralen project represents an ambitious attempt to integrate sustainability into a technology-driven exhibition.

11:15
Collective Anomaly Detection in Fisheries

ABSTRACT. One of the tasks of the Norwegian Directorate of Fisheries is surveillance of ocean fisheries in Norwegian waters. To discourage illegal, unreported and unregulated fisheries, they collect various types of data about fishing activities, in particular data about each catch operation. However, catch data from fishing activities are by nature unpredictable, and many factors may cause variation. This makes it hard to identify single catch operations that are anomalous and perhaps incorrectly reported. In this paper we show how we can use the concept of collective anomalies by looking at collections of catch operation reports and checking how they deviate from the expected. We do this by running a machine learning model to predict the total catches of trawlers' catch operations, computing the prediction errors from the model, and seeing how the prediction error distribution of a vessel deviates from that of the whole set of catch reports. The experiments are promising and we are able to identify deviating vessels in a consistent manner, but the outcomes still need to be evaluated by domain experts.
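The collective-anomaly idea described in the abstract can be sketched on synthetic data (a toy illustration under our own assumptions, not the authors' actual model or dataset): fit a regressor that predicts total catch from report features, compute per-report prediction errors, and flag a vessel whose error distribution deviates from that of all reports.

```python
# Toy sketch of collective anomaly detection on synthetic catch reports.
# Vessel 9 systematically over-reports; its prediction-error distribution
# should deviate from the fleet-wide error distribution.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for catch-operation features and total catches.
X = rng.normal(size=(3000, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=3000)
vessel = rng.integers(0, 10, size=3000)  # vessel id for each report
y[vessel == 9] *= 1.5                    # hypothetical over-reporting vessel

model = LinearRegression().fit(X, y)
errors = y - model.predict(X)            # prediction errors (residuals)

# A vessel is collectively anomalous if its error distribution deviates
# from that of the whole set (two-sample Kolmogorov-Smirnov test).
for v in range(10):
    stat, p = ks_2samp(errors[vessel == v], errors)
    if p < 0.01:
        print(f"vessel {v} deviates (KS statistic {stat:.2f})")
```

Any regressor could stand in for `LinearRegression` here; the point is that individual reports from the deviating vessel look unremarkable, while its error distribution as a collection stands out.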

11:35
VeriDash: An AI-Driven, User-Centric Open Source Dashboard for Enhancing Multimedia Verification
PRESENTER: Johannes Skivdal

ABSTRACT. This paper presents VeriDash, an open source dashboard that integrates AI-based technologies to streamline the multimedia verification process for fact-checkers. VeriDash offers advanced features such as automated transcription, geolocation, and an intuitive interface that supports a human-driven fact-checking process while ensuring ease of use. By incorporating a human-in-the-loop approach, VeriDash balances technological efficiency with human expertise, promoting trusted and responsible AI technology to support and enhance the fact-checking process.

10:15-12:00 Session 21C: NIK: Session three (Software Applications)
10:15
Augmented reality projections as a tool to enrich simulations in health care education: Combining AR projector technology with manikins

ABSTRACT. The advancement of augmented reality in recent years has increased the interest in, and potential of, using the technology in various fields. In health care education, augmented reality is becoming a popular tool, together with manikins, for enhancing the realism of patient interaction for students in a learning environment. The purpose of this research was to study an augmented reality projector and its ability to add an extra layer of realism to a manikin in a simulation setting. We focused on the quality of the projections, the usability, the performance and the limitations of the technology in simulation, and how it compared to similar setups that were not specifically designed for augmented reality. Two scenarios were chosen to test the projector. The first scenario was to add a facial expression to the manikin to simulate pain, and the second was to add blood flow to a surgical wound. The results showed that the projector gave the desired effect in both scenarios, was easy to use and flexible, and performed adequately for the use in this study. Some limitations were also discovered, giving rise to possible future work. These included colour changes and disturbances in the projections, and finding and fitting appropriate resources.

10:45
Discovery of endianness and instruction size characteristics in binary programs from unknown instruction set architectures

ABSTRACT. We study the problem of streamlining reverse engineering (RE) of binary programs from unknown instruction set architectures (ISAs). We focus on two fundamental ISA characteristics for beginning the RE process: identification of endianness and of whether the instruction width is fixed or variable. For ISAs with a fixed instruction width, we also present methods for estimating the width. In addition to advancing research in software RE, our work can also be seen as a first step in hardware reverse engineering, because endianness and instruction format describe intrinsic characteristics of the underlying ISA.

We detail our efforts at feature engineering and perform experiments using a variety of machine learning models on two datasets of architectures, using Leave-One-Group-Out cross-validation to simulate conditions where the tested ISA is unknown during model training. We use bigram-based features for endianness detection and the autocorrelation function, commonly used in signal processing applications, for differentiation between fixed- and variable-width instruction sizes. A collection of classifiers from the machine learning library scikit-learn are used in the experiments to research these features. Initial results are promising, with accuracy of endianness detection at 99.4%, fixed- versus variable-width instruction size at 86.0%, and detection of fixed instruction sizes at 88.0%.
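The autocorrelation idea can be illustrated on synthetic data (a toy sketch under our own assumptions, not the paper's feature pipeline): bytes at fixed positions within an instruction, such as an opcode field, are statistically biased, so the byte stream's autocorrelation peaks at lags that are multiples of the instruction width.

```python
# Toy sketch: estimate a fixed instruction width from autocorrelation peaks.
import numpy as np

def autocorr(x, max_lag):
    """Unnormalized autocorrelation of x at lags 1..max_lag."""
    x = x - x.mean()
    return np.array([np.dot(x[:-k], x[k:]) for k in range(1, max_lag + 1)])

# Synthetic "fixed-width" code: 4-byte instructions whose first byte is
# drawn from a small, biased opcode-like set (hypothetical values).
rng = np.random.default_rng(1)
n_instr = 4000
code = rng.integers(0, 256, size=(n_instr, 4)).astype(np.uint8)
code[:, 0] = rng.choice([0x94, 0xB9, 0xF9], size=n_instr)
stream = code.reshape(-1).astype(float)

ac = autocorr(stream, max_lag=16)
# Strong peaks appear at lags 4, 8, 12, 16; the smallest strong peak
# gives the instruction width.
peaks = np.flatnonzero(ac > 0.5 * ac.max()) + 1
width = int(peaks[0])
print("estimated instruction width:", width)
```

For a variable-width ISA the byte stream lacks this periodic structure, so no comparable peak pattern emerges, which is what makes the autocorrelation function usable as a fixed- versus variable-width discriminator.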

11:15
Understanding Federated Learning from IID to Non-IID dataset: An Experimental Study
PRESENTER: Jungwon Seo

ABSTRACT. As privacy concerns and data regulations grow, federated learning (FL) has emerged as a promising approach for training machine learning models across decentralized data sources without sharing raw data. However, a significant challenge in FL is that client data are often non-IID (non-independent and identically distributed), leading to reduced performance compared to centralized learning. While many methods have been proposed to address this issue, their underlying mechanisms are often viewed from different perspectives. Through a comprehensive investigation from gradient descent to FL, and from IID to non-IID data settings, we find that inconsistencies in client loss landscapes primarily cause performance degradation in non-IID scenarios. From this understanding, we observe that existing methods can be grouped into two main strategies: (i) adjusting parameter update paths and (ii) modifying client loss landscapes. These findings offer a clear perspective on addressing non-IID challenges in FL and help guide future research in the field.
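The effect of inconsistent client loss landscapes can be seen even in a one-parameter toy model (our illustration, not taken from the paper): with one local step per round, federated averaging (FedAvg) converges to the optimum of the average loss, while multiple local steps make the averaged iterate drift to a different fixed point when clients' losses have different minima and curvatures.

```python
# Toy FedAvg on two clients with 1-D quadratic losses
# f_i(w) = a_i * (w - c_i)^2, a simple stand-in for non-IID
# (inconsistent) client loss landscapes.
a = [1.0, 4.0]        # client curvatures (hypothetical values)
c = [0.0, 1.0]        # client minima differ: non-IID setting
lr, rounds = 0.1, 200

def fedavg(local_steps):
    w = 0.5
    for _ in range(rounds):
        updates = []
        for ai, ci in zip(a, c):
            wi = w
            for _ in range(local_steps):       # local gradient descent
                wi -= lr * 2 * ai * (wi - ci)  # gradient of a*(w-c)^2
            updates.append(wi)
        w = sum(updates) / len(updates)        # server-side averaging
    return w

# Optimum of the average loss: sum(a_i c_i) / sum(a_i) = 0.8
global_opt = sum(ai * ci for ai, ci in zip(a, c)) / sum(a)
print(fedavg(1), fedavg(5), global_opt)
```

With `local_steps=1` the fixed point coincides with `global_opt`; with `local_steps=5` the iterate settles noticeably away from it, illustrating why adjusting update paths or reshaping client loss landscapes are the two natural remedies the abstract identifies.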

11:45
GluPredKit: Development and User Evaluation of a Standardization Software for Blood Glucose Prediction

ABSTRACT. Blood glucose prediction is an important component of biomedical technology for managing diabetes with automated insulin delivery systems. Machine learning algorithms hold the potential to advance this technology. However, the lack of standardized methodologies impedes direct comparisons of emerging algorithms. The purpose of this study is to address this challenge by developing a software platform designed to standardize the training, testing and comparison of blood glucose prediction algorithms. First, we designed and implemented the software guided by the current literature. To ensure the platform's user-friendliness, we conducted preliminary testing and a user study in which four participants interacted with the software and provided feedback through the System Usability Scale (SUS) and open-ended questions. The result of the study is the software GluPredKit, which features a modular, open-source architecture, complemented by a command-line interface, comprehensive documentation, and a video tutorial to enhance usability. The user study indicates that GluPredKit offers high usability, facilitating comparisons between different algorithms. Future directions include continuously enhancing the software based on user feedback. We also invite community contributions to further expand GluPredKit with state-of-the-art components and foster a collaborative effort in standardizing blood glucose prediction research.

10:15-12:00 Session 21D: NISK: Session three (Incident Response)
10:15
Information Sharing between the Computer Security Incident Response Team and its Members: An Empirical Study
PRESENTER: Vilja Steffensen

ABSTRACT. The number of cyber incidents is steadily increasing in all sectors. Not all sectors have access to cybersecurity personnel with domain-specific knowledge, which further motivates the need for a Computer Security Incident Response Team (CSIRT). However, for a CSIRT to function as intended, effective digital communication should be at the forefront. This paper explores the communication practices between the CSIRT and its members by using a case from the Norwegian municipality sector. Ten semi-structured interviews with eleven participants representing the CSIRT and the municipalities were conducted. The findings include the most used communication channels and the members' perceptions of information sharing. Key factors limiting information sharing are the size of the municipality and access to critical resources, geographical location, and the lack of personal networks. Future work should investigate the generalizability of the findings in other sectors and countries.

10:45
The Role of Custom Scripting in APT Incident Response

ABSTRACT. Advanced Persistent Threats (APTs) present complex challenges by employing covert and sophisticated techniques that evade traditional security measures. This study investigates the role of custom scripting in improving incident response capabilities based on interviews with cybersecurity professionals in various sectors. The findings demonstrate that custom scripts bridge critical gaps left by commercial and open-source tools, providing the flexibility and precision to detect and mitigate complex threats. Despite their effectiveness, custom scripts require specialized skills and resources, creating a disparity between large and small organizations in their ability to combat advanced threats. This paper advocates integrating custom scripting within standardized incident management and response, and helping commercial tools address these challenges. Recommendations include targeted training, investment in skill development, and establishing robust policies for script usage and maintenance. Future research should explore the integration of emerging technologies such as artificial intelligence (AI) and machine learning to further enhance scripting capabilities in cybersecurity operations.

11:15
From Uncertainty to Prosecution: Enhancing Cyber Resilience through Forensic Readiness

ABSTRACT. Organizations relying on digital services must acknowledge that their systems will fail at some point, and if they have not been victims of cybercrime yet, they will be. Cyber resilience is an approach that prepares to withstand and recover from system failures and incidents. To recover from a system failure, the incident's root cause must be understood to mitigate it properly. Thus, there is a need to investigate the incident. An investigation is also essential to hold individuals accountable for malicious incidents in a court of law. The cost of an investigation and the evidential value of digital evidence can depend on how forensically ready an organization is. This apparent connection between cyber resilience and forensic readiness made us question these concepts' interconnection. We conducted a focused literature review and examined relevant legislation, standards, and frameworks to identify the connection between cyber resilience and forensic readiness. Our research shows that the need to determine the root cause of an incident to mitigate it properly is central and that frameworks do not sufficiently address holding individuals responsible for malicious incidents accountable in a court of law. Our main contribution is to show how forensic readiness is a crucial component of cyber resilience and how a systematic investigation is central to incident response. We also propose introducing redress as a core function in the NIST Cybersecurity Framework as a first step to ensure criminals are held accountable for their actions.

13:00-14:00 Session 22: NISK Keynote

Keynote speaker: Patrick Adrianus Bours

Location: Storsalen
13:00
Using Behavioural Biometrics beyond Gaining Access

ABSTRACT. Biometrics is usually thought of as a means of gaining access to systems, but we will show that it has a much wider range of uses. We will go into two different use cases, in both cases trying to solve real-life problems. The first is about protecting vulnerable members of society from the “bad things” that can easily happen to them online. The second use case is trying to solve a problem in academia: ascertaining the quality of our students.

14:15-16:00 Session 23A: UDIT 4: Syllabus and Conceptualisation
Location: Storsalen
14:15
The Nordic Prior Knowledge Test in Programming: Motivation, Development and Preliminary Results

ABSTRACT. With recent updates to Norway's national curriculum, computational thinking and programming have become a core part of K-12 education, leading to an influx of students entering higher education with prior programming experience. This shift has the potential to impact the teaching of ICT at universities, as foundational knowledge could allow for the introduction of more advanced topics earlier. However, the quality of programming education varies, making it essential to assess students' prior knowledge.

To address this need, the Nordic Prior Knowledge Test in Programming was designed to assess incoming students' programming proficiency in the basic elements of the introductory programming course (CS1). This study details the rationale for assessing incoming students, the development and content of the test, and the preliminary results from its 2024 administration.

The test was completed by 3,038 students (2,661 after data pruning) across eight higher education institutions in Norway. Results indicate a mean score of 39.9%, with a significantly higher performance among students exposed to the new curricular model (50%). Despite these gains, a substantial proportion of students scored at the lower end of the scale, highlighting the ongoing need for foundational programming instruction.

Although most students will benefit from completing the standard CS1 course, a notable subset of students achieved high scores (14.7% scoring above 90%), suggesting the potential value of accelerated or alternative learning pathways, such as an advanced CS1 course or direct progression to CS2.

14:45
Fundamentals of Norwegian CS1

ABSTRACT. The introductory programming course, known as CS1, has evolved considerably since its inception, with diverse opinions on the essential concepts that should be included. This study aims to identify the fundamental concepts taught in Norwegian CS1 courses in order to develop a validated assessment tool: a concept inventory. This tool will be utilized in the Nordic Prior Knowledge Test in Programming, which is designed to assess the pre-existing programming knowledge of students entering higher education. This test uses Python, the dominant programming language in K-12 and higher education in Norway.

To identify the fundamentals of CS1 we employed a triangulation approach that included three perspectives: the intended curriculum, the assessed curriculum, and the experienced curriculum. Our methodology involved a Delphi process with Norwegian CS1 educators, an analysis of final exams from various Norwegian institutions, and surveys of computer science students regarding the difficulty and importance of programming concepts.

Our findings reveal that concepts related to looping, functions, conditionals and error interpreting are central to Norwegian CS1 courses, aligning with existing literature. However, we also identified notable discrepancies compared to older CS1 concept studies developed in other countries, particularly in concepts like recursion, data structures beyond arrays/lists and maps, and test design. These results underscore both the dynamic nature of computer science education and the enduring importance of foundational topics that students are expected to master.

15:15
Shaping a Modern Programming Paradigms Course for Advanced University Students

ABSTRACT. Programming is one of the core disciplines in Computer Science (CS) and Computer Engineering (CE) programs, and it is increasingly permeating the curricula of other study programs. After an introduction to programming and a course on object-oriented programming, some students attend an advanced course on programming languages, where features of different programming paradigms are discussed, with an emphasis on the semantics of execution. Due to the emphasis on theory, such courses often employ old or experimental languages that have little practical application, resulting in low student engagement. Unfortunately, most research has focused on introductory programming, which has a more established syllabus and greater possibilities for interventions. In this paper we analyze the current status of the programming languages course at our university and investigate possibilities for renewing the syllabus with modern languages and tools. We first review the topics currently addressed by the course, and then discuss possible content changes with the aim of shaping a more engaging course. The paper ends with a plan for implementing and evaluating the new version of the course starting from the next academic year.

15:45
Students' Visualisations of Programming Concepts: An Exploratory Analysis

ABSTRACT. This study explores the intuitive visualisations of code created by novice programming students with the aim of uncovering how these reflect their understanding of fundamental programming concepts. Through a thematic analysis of student drawings, this exploratory research identifies common themes and categorises different modes of visual expression. The findings reveal great variations in how students can visualise code, and suggest that drawings can serve as an effective tool for both diagnosing student misconceptions and supporting their learning of programming. This study highlights the potential of incorporating visual exercises into programming education to improve concept comprehension.

14:15-16:00 Session 23B: NOKOBIT: Session three (Digital Transformation and Critical Operations)
14:15
Restructuring Digital Infrastructures: Architectural Transformation through Sociotechnical Interplay in large-scale incumbent organizations
PRESENTER: Egil Øvrelid

ABSTRACT. Large incumbent organizations are increasingly compelled to modularize their IT architecture to stay competitive. This architectural transformation entails navigating technical, organizational, and managerial dimensions, warranting a sociotechnical perspective. Although extensive research has been conducted on this topic, a deeper understanding of how incumbent organizations manage this complex process, and of its outcomes, remains necessary. Our research question thus explores how digital infrastructures can be restructured to support architectural transformation in large-scale incumbent organizations. We conceptualize digital infrastructures as multilayered sociotechnical entities, characterized by greater complexity than standalone IT systems and more diversity than digital platforms. Our study focuses on a prominent Norwegian bank undergoing architectural transformation. We identify three restructuring activities integral to the architectural transformation of incumbent organizations: bundling business and IT, modularizing the operational foundation, and fostering a learning culture. These restructuring activities constitute a multidimensional framework that encapsulates the sociotechnical interplay essential for performing architectural transformation.

14:35
Enhancing Team Situational Awareness in Crime Scene Investigation through the Use of Head Cameras

ABSTRACT. In the rapidly evolving landscape of technology, where data volumes are expanding at a dramatic rate, law enforcement agencies face the imperative to significantly boost their efficiency. Modern society is undergoing a revolutionary shift in the digital realm, compelling law enforcement and other public sector agencies to stay up to date and explore the potential of new technologies for enhancing team situational awareness and overall performance. However, the integration of new technological tools does not consistently translate into increased effectiveness in police work. Moreover, the technology must be customized for officers' benefit, and its acceptance depends on multiple factors. This paper outlines the background, intent, and early results of an ongoing Norwegian project examining the possibilities of using head cameras during crime scene investigation (CSI) for live streaming and video recording. The results from field trials and a subsequent survey indicate considerable promise in enhancing team situational awareness within CSI.

14:55
How do Norwegian enterprises create awareness of cyber security?

ABSTRACT. The increasing digitisation of society means that Norwegian enterprises experience daily cyberattacks. Most successful attacks today are due to human error, and cybersecurity awareness has never been more important. We therefore ask: How do Norwegian enterprises create awareness of cybersecurity? We have interviewed managers from 16 Norwegian enterprises of different sizes and from different sectors and compared them based on size, public versus private sector, and tolerance for cybersecurity breaches. The results show that almost all the enterprises offer some form of course or training, but completion is not always compulsory. Paradoxically, and somewhat worryingly, the informants who reported least security knowledge were the ones who expressed the greatest satisfaction with their security work and were least concerned about attacks. Many small and private enterprises had chosen to outsource cybersecurity, but did not always have detailed knowledge of the cyberdefences that were set up for them. The study indicates that Norwegian enterprises are implementing measures to create awareness around cybersecurity, but that many areas can still be improved, with different challenges facing different enterprise types.

15:15
Extended Reality in critical sectors: Exploring the use cases and challenges
PRESENTER: Camille Sivelle

ABSTRACT. Critical sectors such as healthcare, energy production or defense increasingly consider the use of Extended Reality (XR) technologies as a way to enhance training and facilitate operations. In this context, immersive technologies bring forth new opportunities, but also a range of privacy and security challenges that, to date, remain understudied. To evaluate and address those challenges, there is a need to better understand the contexts and purposes for which XR is used in critical sectors. Based on data collected through an online questionnaire (N=6) and a series of semi-structured in-depth interviews (N=7) with professionals from a diverse pool of Norwegian organizations using XR, this study explores various characteristics of relevant use cases of XR, with a focus on critical sectors. Our findings indicate that XR, and in particular VR, is already used in a wide range of sectors, with training being the most prominent purpose. Nonetheless, these use cases are still relatively immature, and various barriers limit the current use of XR in critical sectors. Further, while privacy and security are considered important, they are not yet a driving concern, and will need to be taken further into account before XR can be leveraged more broadly in critical sectors. These results aim to help orient future research efforts towards reconciling usability, security and privacy for XR in the Norwegian ecosystem.

15:35
The pandemic and the digitalization of Norwegian enterprises

ABSTRACT. The COVID-19 pandemic brought significant global public health and economic challenges. Government-imposed lockdowns and travel restrictions disrupted everyday life. Economically, the pandemic had widespread effects. Job losses and business closures were immediate, especially in sectors like tourism and retail. Several studies have indicated that the pandemic accelerated the adoption of digital technologies, with remote work and e-commerce becoming essential. In this article, we examine how the pandemic impacted digitalization in Norwegian businesses. Our findings reveal that the pandemic notably sped up digitalization processes. The high level of digital skills in the population, combined with robust mobile and internet infrastructure, facilitated this transition. The rapid shift to remote work underscored the necessity of strong digital infrastructure and revealed security weaknesses, which in turn led to increased demand for IT security services.

14:15-16:00 Session 23C: NIK: Session four (AI and media)
14:15
Picture This: How Image Filters Affect Trust in Online News

ABSTRACT. Users of social media platforms face concerns about the accuracy and reliability of the information shared there. This includes images shared online, which are often linked to news events. This study investigates what effects Instagram filters have on users’ perceived trust in online news posts that include images. Trust ratings of four different articles across four image filter conditions were obtained in an online user study (N=204). We also examined the role of a user’s general trust and familiarity with the news topic. Our analysis revealed that while Instagram filters overall may not affect perceived trust, specific visual characteristics of the filters, such as brightness and contrast, affected trust levels. Additionally, individual differences in general trust and attitude towards a specific topic may influence the users’ perception of trust.

14:45
Can Large Language Models Support Editors Pick Related News Articles?

ABSTRACT. Editors and journalists play an important role on news platforms. Besides creating trustworthy news stories, they also provide valuable expertise on which stories are placed on the front page and hand-pick related articles for platform users to read further. This paper focuses on the specific task of related article selection commonly carried out daily by editors and journalists on news platforms. This is typically a manual process that utilizes an internal search tool to first find a pool of potential candidate articles. Then, from those candidate articles, editors and journalists hand-pick the top related articles for a given news article as a form of expert-selected suggestions for the readers. Although this task can be an important part of the editorial process in news platforms, it may become time-consuming and demanding, often requiring significant human effort. In addressing this challenge, we propose an automatic mechanism to support editors and journalists in this task by incorporating one of the latest Large Language Models (LLMs), i.e., GPT4o-mini, to shortlist a set of related articles and recommend them to be checked by journalists and editors. Our evaluation of the proposed approach, based on a real-world dataset from one of the largest commercial Norwegian news platforms (i.e., TV 2), demonstrates the effectiveness of the approach in supporting editors and journalists in their task of selecting relevant news articles.

15:15
Exploring the Ethical Challenges of AI and Recommender Systems in the Democratic Public Sphere

ABSTRACT. The rapid integration of Artificial Intelligence (AI) and Recommender Systems (RSs) into digital platforms has brought both opportunities and ethical concerns. These systems, designed to personalize content and optimize user engagement, have the potential to enhance how individuals navigate information online. However, this paper shifts the focus to the ethical complexities inherent in such systems, particularly the practice of nudging, where subtle algorithmic suggestions influence user behavior without explicit awareness. Issues like misinformation, algorithmic bias, privacy protection, and diminished content diversity raise important questions about the role of AI in shaping public discourse and decision-making processes. Rather than viewing these systems solely as tools for convenience, the paper challenges the reader to consider the deeper implications of AI-driven recommendations on democratic engagement. By examining how these technologies can quietly influence decisions and reduce exposure to different perspectives, it calls for reevaluating the ethical priorities in AI and RSs design. We present the problems identified along with their potential solutions, calling for creating a digital space that promotes independence, fairness, and openness, making sure AI is used responsibly to support democratic values and protect user rights.

15:45
Evaluating GraphRAG’s Role in Improving Contextual Understanding of News in Newsrooms
PRESENTER: Bahareh Fatemi

ABSTRACT. In a newsroom, journalists are frequently tasked with reporting on complex events, such as conflicts, where understanding the broader context and nuanced details is crucial for accurate and insightful reporting. The challenge lies in processing and synthesizing vast amounts of information from various sources to build a comprehensive picture of the event. This requires not only retrieving specific facts but also understanding the interconnections between different pieces of information. The advent of Large Language Models (LLMs) has brought advancements in text processing, offering the capability to quickly retrieve and generate content from extensive datasets. However, the use of LLMs in newsrooms comes with challenges, particularly concerning their static knowledge and hallucination, where models produce responses that are plausible but incorrect. To address these challenges, Retrieval-Augmented Generation (RAG) has been developed to provide responses grounded in actual data. RAG is effective for straightforward queries where the information is contained within specific documents. However, it has limitations when dealing with complex queries that involve synthesizing information from multiple sources or understanding intricate relationships between entities. GraphRAG aims to overcome such limitations by leveraging knowledge graphs, which combine information from multiple sources in a structured manner. In this work, we design a set of experiments to compare GraphRAG’s capabilities to those of previous general LLM- and RAG-based approaches when it comes to understanding and accurately representing complex issues.

14:15-16:00 Session 23D: NISK: Session four (Security Design)
14:15
Continuous Age Detection using Keystroke Dynamics

ABSTRACT. When enrolling users into computer systems and applications that are restricted to certain age groups, it is challenging to put trust in the user's provided age. This paper looks into the deployment of continuous analysis of Keystroke Dynamics data captured from the online activity of a user. Using this data, the goal is to categorize the user's age into two possible categories: above and below the age of 18, a widespread legal age. We used a dataset captured from 70 adults and 46 children, containing over 780,000 keystrokes. The data was collected while the participants were chatting with another randomly assigned participant. Two different statistical methods, using timing features, are presented in both an authentication and an identification scenario. In the authentication scenario we reached an average accuracy of approximately 80% after an average of 180 keystrokes, while in the identification scenario we obtained a 75% True Positive Rate after approximately 20 keystrokes.

14:45
Towards a Framework for the Design of Nonlinear Combiners in Ternary Logic

ABSTRACT. Ternary logic is gaining popularity since it enables realization of complex electronic circuits with fewer active elements than with binary logic. In this short paper we draw the contours of a framework for finding nonlinear combining switching functions for cryptographic applications realized on ternary IoT hardware platforms. We review the theoretical results and criteria related to ternary switching function adequacy for use in cryptography. Our framework computes the Algebraic Normal Form (ANF) of such functions and uses the Vilenkin-Chrestenson spectrum to test their non-linearity and correlation immunity.

15:15
Designing a Decentralized Identity Verification Platform

ABSTRACT. In today's world, almost every private and public service has either fully embraced digitalization or is in the process of doing so, which is generally a positive trend. However, significant foundational challenges remain. Many services still depend on various centralized physical and digital identities, making it increasingly difficult for users to securely and reliably prove their identity with trusted credentials. Additionally, there are other risks, such as single points of failure, identity theft and fraud, censorship and discrimination, and limited user autonomy. To address some of these issues, the European Union has been developing the decentralized Digital ID Wallet initiative. This initiative functions like a physical wallet, allowing users to store multiple identities in one place at user premises and giving them control over their data, including the ability to decide what information to share, with whom, and when. While the concept is promising, one of the major challenges lies in designing and developing the necessary infrastructure, which involves creating secure digital identities, integrating them with existing systems, and ensuring they are accessible to a broad audience. This paper addresses these challenges by providing architectural guidelines and a formal description of common operations in such ecosystems, based on our own experiences in developing MVPs.

15:45
End-User Privacy, Security, and Data Ownership Concerns on the Helsenorge Platform: A Mixed-Methods Study
PRESENTER: Abha Pokharel

ABSTRACT. This study explores end-users' perceptions of privacy, security, and data ownership (PSDO) of Helsenorge, a leading e-health platform in Norway. By surveying over 100 users from diverse demographics, this study evaluates their understanding of the PSDO landscape, their sentiments, and their expectations for future improvements. The findings suggest that most users know the importance of PSDO features but are unaware of them, highlighting the gap between the significance of these features and user awareness. Additionally, the findings suggest that privacy can be improved through selective data sharing, configurable consent, and granular access controls, while security can be enhanced by increasing transparent data access, accountability, multi-factor authentication (MFA), transparent audit trails, and robust encryption. Furthermore, improving data ownership might provide users with greater control and ownership over their health records, fostering trust in the platform, and educating them on the related challenges and risks. The paper concludes by discussing the potential adoption of self-sovereign identity (SSI) and Web 3.0 technologies to address these challenges.