
10:30-12:00 Session 6A: NIK 1
Location: KE E-102
Large-Scale Pre-Training for Dual-Accelerometer Human Activity Recognition
PRESENTER: Aleksej Logacjov

ABSTRACT. The annotation of physical activity data collected with accelerometers for human activity recognition (HAR) remains challenging despite the growing interest in large public health studies. Existing free-living accelerometer-based datasets are limited, hindering the training of effective deep learning models. To address this limitation, some studies have explored self-supervised learning (SSL), i.e., training models on both labeled and unlabeled data. Here, we extend previous work by evaluating whether large-scale pre-training improves downstream HAR performance. We introduce the SelfPAB method, which includes pre-training a transformer encoder network on increasing amounts of accelerometer data (10-100K hours) using a reconstruction objective to predict missing data segments in the spectrogram representations. Experiments demonstrate improved downstream HAR performance using SelfPAB compared to purely supervised baseline methods on two publicly available datasets (HARTH and HAR70+). Furthermore, an increase in the amount of pre-training data yields higher overall downstream performance. SelfPAB achieves an F1-score of 81.3% (HARTH), and 78.5% (HAR70+) compared to the baselines' F1-scores of 74.2% (HARTH) and 63.7% (HAR70+). Additionally, SelfPAB leads to a performance increase for activities with little training data.
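The exact masking scheme used by SelfPAB is detailed in the paper; as a generic illustration of the reconstruction objective it builds on, the sketch below (all names hypothetical) zeroes out random time segments of a spectrogram and scores a reconstruction only on the hidden steps:

```python
import numpy as np

def mask_spectrogram(spec, n_segments=3, seg_len=5, rng=None):
    """Zero out random time segments; return masked copy and boolean mask (True = hidden)."""
    if rng is None:
        rng = np.random.default_rng(0)
    T, F = spec.shape
    mask = np.zeros(T, dtype=bool)
    for _ in range(n_segments):
        start = rng.integers(0, max(1, T - seg_len))
        mask[start:start + seg_len] = True
    masked = spec.copy()
    masked[mask] = 0.0  # hide the selected time steps
    return masked, mask

def reconstruction_loss(pred, target, mask):
    """Mean squared error computed over the masked time steps only."""
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

A pre-training loop would feed the masked spectrogram to the encoder and minimize this loss against the original; the downstream HAR classifier then reuses the encoder weights.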

Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA Driveworks

ABSTRACT. Geolocation is integral to the seamless functioning of autonomous vehicles and advanced traffic monitoring infrastructures. This paper introduces a methodology to geolocate road objects using a monocular camera, leveraging the NVIDIA DriveWorks platform. We use the Centimeter Positioning Service (CPOS) and the inverse Haversine formula to geo-locate road objects accurately. The real-time algorithm processing capability of the NVIDIA DriveWorks platform enables instantaneous object recognition and spatial localization for Advanced Driver Assistance Systems (ADAS) and autonomous driving platforms. We present a measurement pipeline suitable for autonomous driving (AD) platforms and provide detailed guidelines for calibrating cameras using NVIDIA DriveWorks. Experiments were carried out to validate the accuracy of the proposed method for geolocating targets in both controlled and dynamic settings. We show that our approach can locate targets with less than 1m error when the AD platform is stationary and less than 4m error at higher speeds (i.e. up to 60km/h) within a 15m radius.
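The paper's pipeline is built on NVIDIA DriveWorks, but the underlying geometry is the classic direct ("inverse") haversine problem: given the vehicle's position, a bearing, and a range to the detected object, compute the object's coordinates. A minimal, library-free sketch (spherical Earth assumed, function name hypothetical):

```python
import math

R = 6_371_000.0  # mean Earth radius in metres

def inverse_haversine(lat, lon, bearing, distance):
    """Destination point given a start (degrees), a bearing (degrees), and a distance (metres)."""
    phi1 = math.radians(lat)
    lam1 = math.radians(lon)
    theta = math.radians(bearing)
    delta = distance / R  # angular distance on the sphere
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)
```

In a real pipeline the start position would come from CPOS-corrected GNSS and the bearing/range from the calibrated camera, so the sub-metre accuracy reported above also depends on those inputs, not on this formula alone.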

Simulated RGB and LiDAR Image based Training of Object Detection Models in the Context of Autonomous Driving

ABSTRACT. Object detection, which involves giving cars the ability to perceive their environment, has drawn increasing attention. For good performance, object detection algorithms often need huge datasets, which are frequently labeled manually. This procedure is expensive and time-consuming. An alternative is a simulated environment, which gives complete control over all parameters and allows for automated image annotation. Carla, an open-source project created exclusively for the study of autonomous driving, is one such simulator. This study examines whether object detection models that can recognize real traffic objects can be trained using automatically annotated simulator data from Carla. The findings of the experiments demonstrate that optimizing a trained model using Carla’s data, along with some real data, is encouraging. The Yolov5 model, trained using pre-trained Carla weights, exhibited improvements across all performance metrics compared to one trained exclusively on 2000 Kitti images. While it did not reach the performance level of the 6000-image Kitti model, the enhancements were substantial. The mAP0.5:0.95 score saw an approximate 10% boost, with the most significant improvement occurring in the Pedestrian class. Furthermore, it is demonstrated that a substantial performance boost can be achieved by training a base model with Carla data and fine-tuning it with a smaller portion of the Kitti dataset. Moreover, the potential utility of Carla LiDAR images in reducing the volume of real images required while maintaining respectable model performance becomes evident. Our code is available at:

10:30-12:00 Session 6B: NISK 1: Security in Social Context
Location: KE A-204
Expanding Horizons: The Evolving Landscape of Development Opportunities in Cybersecurity Training Platforms
PRESENTER: Rebeka Toth

ABSTRACT. In today's cybersecurity landscape, offensive security plays a vital role in fortifying systems by identifying vulnerabilities and potential attack vectors. Equally significant is the training of offensive security professionals. This study conducts a comprehensive comparative analysis of renowned offensive security training platforms: Hack The Box, TryHackMe, HackerOne, PicoCTF, and PortSwigger Academy. The goal is to evaluate these platforms across eight criteria, shedding light on their strengths and limitations, while also proposing potential enhancements to address existing gaps. The criteria encompass hints, ranking systems, flags, writeups, user feedback, knowledge domains, difficulty levels, and extensibility. By subjecting these platforms to this comprehensive evaluation, we gain invaluable insights into their individual advantages and areas necessitating improvement. A salient finding of the analysis is the absence of personalized learning pathways and adaptive training based on users' unique skills and cognitive patterns. To mitigate this gap, prospective offensive security training platforms could leverage machine learning algorithms to create customized learning experiences. By adopting user activity-driven methodologies, these platforms can tailor training content, challenges, and feedback to meet learners' distinct needs and skill levels. The outcomes of this study contribute to the advancement of offensive security training by outlining the features and attributes of a plausible future platform, grounded in the pivotal considerations necessary for the creation of a more comprehensive and efficient training ecosystem. By integrating personalized learning paths and harnessing the potential of machine learning, forthcoming platforms can provide tailored experiences that optimize learning outcomes and foster enhanced engagement.

Fool Me Once, Shame on Me - A Qualitative Interview Study of Social Engineering Victims

ABSTRACT. Security breaches continue to flourish despite the many technical measures in place. More often than not, the human users get the blame. Social engineering attacks use various manipulation techniques to fool users into giving away sensitive information or making security mistakes that are further exploited in cyber attacks. This study has investigated how common, cyber-enabled social engineering attacks, such as Business Email Compromise (BEC) phishing and romance scams, can be used to exploit individuals, systems, or organizations. We investigate studies from the literature and apply a qualitative approach based on in-depth interviews with a sample of victims of such attacks. Our results contribute to the understanding of why established social engineering protection measures sometimes fail and how the victims have experienced the aftermath of such events. Based on our findings and a comparison with the literature, we provide reflections on how mitigations can be improved to reduce the success rate of social engineering attacks.

Exploring Digital Forensic Readiness: A Preliminary Study from a Law Enforcement Perspective
PRESENTER: Odin Heitmann

ABSTRACT. In today’s world of cybersecurity, it is not a question of if an organization will experience a cyber attack, but rather a matter of when it will happen. These incidents can cause significant disruption and financial losses to organizations. Forensic readiness is becoming increasingly crucial as it can help maximize the use of digital evidence and reduce the investigative cost after an attack. It can also aid law enforcement in identifying and prosecuting cybercrime perpetrators. Our observation of cybercrime investigations indicates divergent stakeholder priorities during a cyber attack. Victimized organizations prioritize resuming normal operations, and incident responders focus on restoration, potentially neglecting criminal evidence integrity. Law enforcement involvement occurs post-incident, usually after the initial incident handling is completed. Due to divergent focus areas, there is a lack of a comprehensive overview. This made us question the relationship between forensic readiness practices in the industry and criminal investigations performed by law enforcement after an attack. This paper investigates whether forensic readiness and criminal investigation are aligned. To assess alignment, we compare forensic readiness and criminal investigation definitions and their core components. Our research shows that forensic readiness does not sufficiently focus on criminal investigation; thus, the current forensic readiness approach does not adequately encompass criminal investigations. We propose incorporating criminal investigation integration as a new domain to address this issue while developing future forensic readiness models and practices. Furthermore, we propose using the term cross-organizational investigative readiness instead of forensic readiness to underline the importance of the industry, incident responders, and law enforcement working together to prevent, mitigate, and prosecute cybercrime.

10:30-12:00 Session 6C: NOKOBIT 1: Digitalization
Location: KE E-101
Digitalisering i norske virksomheter: Konsekvenser og utfordringer

ABSTRACT. This article presents and discusses the results of a study on digitalization in Norwegian enterprises. A total of 533 people, from 25 different private and public organizations, participated in a questionnaire-based survey conducted in 2022 and 2023. The study finds broad agreement that digitalization has significant consequences for the respondents' sector and organization, but also that the respondents do not consider their organizations proactive enough in the face of this development. It is particularly the need for new competence, and customers' expectations, that affect them the most. The most demanding challenges are the lack of a digital strategy, lack of competence in process management, lack of collaboration across the organization, and the fact that the work processes to be digitalized are not standardized beforehand. The study provides new understanding of what digitalization entails for Norwegian enterprises, and offers practical advice to managers on what can be done to succeed with digitalization.

Digitalisering og prosessledelse i offentlig sektor: en studie av digitalisering i norske kommuner

ABSTRACT. Recent research shows that process management is important for succeeding with digitalization. However, there is limited knowledge about how well Norwegian organizations are able to apply process management in their digitalization initiatives. This study examines digitalization in six Norwegian municipalities. We find that process management is practiced to only a limited extent, and that the municipalities are largely unable to make the organizational changes necessary to achieve the goals of their initiatives. Lack of process understanding, unclear process ownership, and silo structures hamper the ability to create coherent solutions.

Beyond Code Assistance with GPT-4: Leveraging GitHub Copilot and ChatGPT for Peer Review in VSE Engineering

ABSTRACT. Most companies are Very Small Entities (VSEs), meaning they have fewer than 25 employees. Primarily domain specialists, these companies lack in-house expertise in important areas such as security and reliability engineering, process improvement, Quality Management (QM) and Systems Engineering (SE). VSEs struggle to adhere to Standard Operating Procedures (SOPs), and research has shown that contractual obligations to follow industry standards and best practices have little effect on actual engineering. This paper describes a case study that explored the potential of Large Language Models (LLMs) to support engineering best practices at a VSE by taking on the role of an expert peer in areas where the company had a skills gap. Aiwell, a Norwegian producer of building automation equipment, used ChatGPT, GitHub Copilot and GPT-4 to assess the quality of their system and stakeholder requirements. A GPT-4 foundation model with no additional training was given links to reference materials on requirements engineering produced by The International Council on Systems Engineering (INCOSE) and allowed to participate in discussions on the same digital collaboration platform as the human engineers. The study found that AI-assisted requirement reviews immediately and positively impacted the entire engineering process, supporting the feasibility of integrating advanced AI technologies in VSEs, even with limited training and resources. Participants highlighted the complementary nature of human intelligence and AI, where LLMs augmented human judgment through dialogue, leading to enriched engineering practices. Ethical and data privacy considerations also emerged as central themes, emphasising the need for proactive measures.

10:30-12:00 Session 6D: UDIT 1: Data analysis and AI
Location: KE A-101
Infographics as analysis tool in student research

ABSTRACT. Higher education students learn to develop new knowledge through student research projects like bachelor and master projects. They learn to use research methods during their projects, and in a fast-paced world, it can be valuable to use visualizations like infographics. Infographics are data visualizations that present complex information quickly and clearly to convey a message. Our experience is that IT students are good at collecting data but have difficulties in the analysis process of their research projects. The aim of this action research project is to study how to use infographics as a teaching and learning method during student research projects. The idea behind our investigation is to use infographics as an analysis tool as part of the process to help students make sense of collected data, as opposed to their traditional use of merely presenting findings. Through several action research iterations of planning, acting, observing, and reflecting, we propose a teaching method with different types of workshop activities to help students analyze their data with infographics. In the analysis phase of a student research project, the creative process of developing infographics through iterations can be used as an analysis tool to clarify ideas, for categorization, for reflection, and as an interaction tool. This leads to a more personalized understanding and an aesthetic perspective of the data and data analysis. This paper shows that symbols, signs, icons and drawings in combination with academic writing have the potential to decode and activate meaning in a visual way and to create more distinct ideas.

AI Technology: Threats and Opportunities for Assessment Integrity in Introductory Programming

ABSTRACT. Recent AI tools like ChatGPT have prompted worries that assessment integrity in education will be increasingly threatened. From the perspective of introductory programming courses, this paper poses two research questions: 1) How well does ChatGPT perform on various assessment tasks typical of a CS1 course? 2) How does this technology change the threat profile for various types of assessments? Question 1 is analyzed by trying out ChatGPT on a range of typical assessment tasks, including code writing, code comprehension and explanation, error correction, and code completion (e.g., Parson’s problems, fill-in tasks, inline choice). Question 2 is addressed through a threat analysis of various assessment types, considering what AI chatbots would add relative to pre-existing assessment threats. Findings indicate that for simple questions, answers tend to be perfect and ready to use, though they might need some rephrasing work from the student if the task partly consists of images. For more difficult questions, solutions might not be perfect on the first try, but the student may be able to get a more precise answer via follow-up questions. The threat analysis indicates that chatbots might not introduce any entirely new threats; rather, they aggravate existing threats. The paper concludes with some thoughts on the future of assessment, reflecting that practitioners will likely use bots in the workplace, meaning that students must also be prepared for this.

Ways of using artificial intelligence in IT education of Norway

ABSTRACT. The development of artificial intelligence (AI) technology has led to the emergence of ChatGPT and other AI-based tools for various purposes in the material and productive spheres of human activity. Governments of developed countries, realising the benefits and challenges of using AI technologies, have developed national AI development strategies. In these strategies, one of the main roles is assigned to IT education, whose tasks include developing a basic understanding of AI technologies in schools, providing fundamental AI knowledge and skills in higher education, and encouraging responsible use of AI by the general population. With reference to state-of-the-art research, this short contribution presents ways and examples of the use of artificial intelligence in IT education. The article notes that a variety of AI-based tools can be used to improve IT education. Teachers can use AI tools for different purposes, including automating administrative and research tasks as well as personalising students’ learning, thereby helping the students improve their performance. AI can aid students’ cognitive and motor development, stimulate reasoning, improve concentration and enthusiasm for learning, assist in automated assessment, provide suggestions for improving a piece of code, help prevent cheating in programming, support plagiarism detection, and analyse student behaviour. By providing an overview of areas of use of AI in IT education, the paper offers a starting point for exploring the opportunities offered to IT educators and students by modern AI.

14:15-16:15 Session 8A: NIK 2
Location: KE E-102
GECO: A Twitter Dataset of COVID-19 Misinformation and Conspiracy Theories Related to the Berlin Parliament and Washington Capitol Riots

ABSTRACT. On August 29, 2020, a precursor to the widely known January 6 United States Capitol attack in Washington D.C., USA, occurred in Berlin, Germany, where a group of protesters participating in a demonstration against COVID-19 pandemic measures attempted to storm the German parliament. While the event in Berlin was less dramatic than that of January 6, 2021 in the US - the protesters were repelled by the police, and no serious damage or injuries were reported - mobilization through conspiracy theories on social media is widely considered a significant factor leading to both events.

In this paper, in order to study such social media content, we present an analysis based on a manually labeled dataset sampled from a large set of COVID-19 related tweets in temporal proximity to the event in Berlin. Moreover, we provide an analysis that is based on a set of tweets following the January 6 United States Capitol event for comparison. The labels distinguish eight different classes of conspiracy theories, as well as other misinformation. This allows for studying the prevalence of different misinformation narratives around events of note. In total 23,417 tweets were labeled manually.

The purpose of this dataset analysis is to allow further study of the phenomena, as well as training of machine learning systems with the purpose of detecting conspiracy theory content.

Om å kartleggja mørk materie med maskinlæring

ABSTRACT. Gravitational lensing is the phenomenon whereby light from distant celestial bodies is deflected by the gravity of other celestial bodies, which are often not fully visible because much of their mass is dark matter. Observed through a gravitational lens, distant galaxies appear distorted. There is much research activity that attempts to map dark matter by studying lensing effects, but the mathematical models are complicated and the computations currently require a great deal of manual work that is very time-consuming. In this article we discuss how Chris Clarkson's roulette formalism can be combined with machine learning for automatic, local estimation of the lensing potential in strong lenses, and we present a framework with open-source software for generating datasets and validating results.

I-KAHAN: Image-Enhanced Knowledge-Aware Hierarchical Attention Network for Multi-modal Fake News Detection

ABSTRACT. In the quest to combat the proliferation of fake news, accurate detection of fabricated news content has become increasingly desirable. While existing methodologies leverage a variety of news attributes, such as text content and social media comments, few incorporate diverse features from different modalities like images. In this paper, the Image-Enhanced Knowledge-Aware Hierarchical Attention Network (I-KAHAN) architecture is proposed as an enhancement to the existing KAHAN architecture. The I-KAHAN architecture utilizes a wide variety of attributes including news content, user comments, external knowledge, and temporal information, which are inherited from the KAHAN architecture, and extends it by integrating image-based information as an additional feature. This work contributes to refining and expanding fake news detection methodologies by embracing a more comprehensive range of features and modalities, and offers valuable insights into the effectiveness of various methods for the numerical representation of images, feature aggregation and dimensionality reduction. Experiments conducted on two real-world datasets, PolitiFact and GossipCop, assessing the performance of the I-KAHAN architecture, demonstrated approximately 3% improvement in accuracy over the KAHAN architecture, highlighting the potential benefits of incorporating diverse features and modalities for enhanced fake news detection performance.

Forecasting Hourly Ambulance Demand for Oslo, Norway: A Neuro-Symbolic Method

ABSTRACT. Forecasting ambulance demand is critical for emergency medical services to allocate their resources as efficiently as possible. This work uses data from Norway's Oslo University Hospital (OUH) to forecast hourly ambulance demand in Oslo and Akershus. To forecast demand, we developed a neuro-symbolic method, DeANN. DeANN integrates statistical decomposition and artificial neural network methods. Statistical decomposition computes trend, seasonal, and residual components from the ambulance demand time series. Using these components, we apply a multilayer perceptron and regression to compute an overall ambulance demand forecast. Based on experimental results, we conclude that our proposed neuro-symbolic approach for ambulance demand forecasting outperforms several baseline models. Our best neuro-symbolic model has a mean squared error of 21.68 and improves on previous results for the OUH data set.
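DeANN itself is not spelled out in the abstract; the decompose-then-forecast idea it builds on can be sketched as follows, using a centred moving average for the trend and periodic means for the hourly seasonal component (a simplified stand-in for the paper's statistical decomposition; in DeANN the resulting components feed a multilayer perceptron and regression):

```python
import numpy as np

def decompose(series, period=24):
    """Additive decomposition of an hourly demand series into trend, seasonal, residual."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Trend: moving average over one full period (edges are boundary-biased).
    trend = np.convolve(series, np.ones(period) / period, mode="same")
    detrended = series - trend
    # Seasonal component: mean of the detrended series at each hour of the day.
    seasonal_profile = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal_profile, n // period + 1)[:n]
    residual = series - trend - seasonal
    return trend, seasonal, residual
```

Each component can then be forecast separately (or jointly by a neural network) and the forecasts summed back into an hourly demand prediction.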

14:15-16:15 Session 8B: NISK 2: Biometrics
Location: KE A-204
Analyzing eyebrow region for morphed image detection

ABSTRACT. Facial images in passports are designated as primary identifiers for the verification of travelers according to the International Civil Aviation Organization (ICAO). Hence, it is important to ascertain the sanctity of the facial images stored in the electronic Machine-Readable Travel Document (eMRTD). With the introduction of automated border control (ABC) systems that rely on face recognition for the verification of travelers, it is even more crucial to have a system to ensure that the image stored in the eMRTD is free from any alteration that can hinder or abuse the normal working of a facial recognition system. One such attack against these systems is the face-morphing attack. Even though many techniques exist to detect morphed images, morphing algorithms are also improving to evade these detections. In this work, we analyze the eyebrow region for morphed image detection. The proposed method is based on analyzing the frequency content of the eyebrow region. The method was evaluated on two datasets that each consisted of morphed images created using two algorithms. The findings suggest that the proposed method can serve as a valuable tool in morphed image detection, and can be used in various applications where image authenticity is critical.
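The paper's detector analyzes the frequency content of the eyebrow region; morphing tends to smooth fine texture, so one simple indicator of that kind is the share of spectral energy at high spatial frequencies. A generic sketch of such a measure (the cutoff and function name are illustrative, not the paper's actual feature):

```python
import numpy as np

def high_freq_energy_ratio(patch, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff (patch: 2-D grayscale array)."""
    F = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(F) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial frequency normalized so the Nyquist corner sits at about 0.7.
    r = np.hypot((yy - h // 2) / h, (xx - w // 2) / w)
    return float(power[r > cutoff].sum() / power.sum())
```

A lower ratio on the eyebrow patch than expected for a bona fide image would then flag a possibly morphed (texture-smoothed) region.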

Towards CNN-based Level 1 Feature Extraction for Contactless Fingerprint Recognition

ABSTRACT. This work examines the detection of ridge orientation patterns, also referred to as level 1 features, from contactless fingerprint images and their classification. We trained two Convolutional Neural Networks (CNNs) to classify fingerprints based on their ridge orientation patterns. Our models were trained on synthetic data generated by SynCoLFinGer. Afterwards, we conducted various experiments for classifying these patterns and evaluated our trained models on four real-world databases: PolyU CB2CL, ISPFDv1 contactless fingerprint database, and two in-house databases.

We report the classification accuracy in terms of Classification Error Rate (CER). We achieved CERs between 28% and 38% considering all samples. Due to the number of low-quality samples included in the databases, we use NFIQ 2 quality scores to iteratively exclude samples and report the corresponding CER, thereby quantifying the impact of low-quality samples.

By excluding the lowest scoring 10% of all samples within each database, we achieve CERs of 24% to 35% depending on the databases. While these error rates are still high, they show promise compared to the original values. Although further research is needed to improve results, we show that combining quality-score-based exclusion of images with CNNs trained on synthetic contactless data is a promising method to classify fingerprint patterns.
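The quality-based exclusion described above amounts to a simple filter-then-score step; a sketch (function name hypothetical, with NFIQ 2 scores stood in by any per-sample quality value):

```python
import numpy as np

def cer_with_quality_filter(y_true, y_pred, quality, drop_frac=0.10):
    """Classification Error Rate after excluding the lowest-quality fraction of samples."""
    y_true, y_pred, quality = map(np.asarray, (y_true, y_pred, quality))
    cutoff = np.quantile(quality, drop_frac)
    keep = quality > cutoff  # drop the lowest-scoring drop_frac of samples
    return float(np.mean(y_true[keep] != y_pred[keep]))
```

Sweeping `drop_frac` from 0 upward reproduces the kind of CER-versus-exclusion curve the study reports.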

Morph-PIPE: Plugging in Identity Prior to Enhance Face Morphing Attack Based on Diffusion Model
PRESENTER: Haoyu Zhang

ABSTRACT. Face-morphing attacks (MA) aim to deceive Face Recognition Systems (FRS) by combining the face images of two or more subjects into a single face image. To evaluate the vulnerability of existing FRS and further develop countermeasures against potential attacks, it is necessary to create diverse morphing algorithms that produce high visual quality and have strong attack potential on FRS. In this work, we propose a novel morphing algorithm using a diffusion model and adding an identity prior to strengthen attack potential on the FRS. Compared to existing works using diffusion models, our method can add explicit control of the morph generation process through identity manipulation. We benchmark our proposed approach on an ICAO-compliant face morphing dataset against state-of-the-art (SOTA) morphing algorithms, including one baseline using the diffusion model and two representative morphing algorithms. The results indicate an improvement in morphing attack potential compared to the diffusion-based baseline algorithm, while achieving attack strength comparable to other SOTA morphing generation algorithms that rely on tedious manual intervention in the creation of morphed images.

Type^2: A Secure and Seamless Biometric Two-Factor Authentication Protocol Using Keystroke Dynamics
PRESENTER: Pia Bauspieß

ABSTRACT. Password-based user authentication comes with impersonation risks due to poor quality passwords or security breaches of service providers. An additional layer of security can be provided to the authentication through keystroke dynamics, i.e., measuring and comparing users' typing rhythm for their password. While this two-factor authentication is efficient and unobtrusive, the privacy of the biometric characteristics must be ensured. Therefore, we present the Type^2 protocol for secure two-factor authentication based on keystroke dynamics, where the anomaly detection of the latter is executed in the encrypted domain. In an experimental evaluation, we show that our proposed protocol achieves real-time efficiency with an overhead of less than 130 milliseconds compared to password-only authentication.
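The Type^2 protocol runs its comparison in the encrypted domain; ignoring the cryptographic layer, the plaintext keystroke-dynamics check it protects can be sketched as a per-feature z-score test against an enrolment template (all names and the threshold are illustrative):

```python
import numpy as np

def enroll(samples):
    """Template from enrolment typing samples: per-feature mean and std of timing vectors (ms)."""
    X = np.asarray(samples, dtype=float)
    return X.mean(axis=0), X.std(axis=0) + 1e-6  # small epsilon avoids division by zero

def is_genuine(template, attempt, threshold=1.5):
    """Accept if the mean absolute z-score of the attempt against the template is below threshold."""
    mean, std = template
    z = np.abs((np.asarray(attempt, dtype=float) - mean) / std)
    return float(z.mean()) < threshold
```

In the actual protocol both the template and the attempt would be encrypted, and the anomaly score computed homomorphically, so the server never sees the raw timing vectors.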

14:15-16:15 Session 8C: NOKOBIT 2: IT-development and architectures
Location: KE E-101
Operational Backbone Work: Modernization Activities in the Migration of Monolith-Oriented IT Architectures

ABSTRACT. To cope with new digital markets, incumbent financial organizations need to modernize their monolithic system portfolio to a more flexible and efficient form. This has proven to be quite challenging since the existing systems are tightly integrated with historical practices. We frame the portfolio as the operational backbone (OB), and ask what are the key activities of OB work in monolithic systems migration, and what is the outcome of such a migration process? Our empirical case is from a big financial institution embarking on a digital journey to modernize its core systems. Our contribution is a migration model that describes the key activities attributed to OB Work, and the role of these activities in modernizing the IT portfolio from a fragmented OB to a more coherent and flexible OB.

Development of a Toolbox on Sustainable ICT across Industry and Academia : The goforIT project

ABSTRACT. Climate change and its consequences will pose enormous challenges to society over the next decades. Society needs to address these challenges, both by mitigating the changes and by adapting to them. At the same time, we need to ensure that the resulting society is both economically viable and socially desirable.

ICT plays an important role in ensuring environmental, economic, individual, technical and social sustainability. While it is commonly known what sustainability is on a high level, and why we need to change our ways, it was realized at a joint academia-industry panel at NIKT (the Norwegian ICT conference) in November 2019 that ICT professionals did not necessarily know how they should change their ways. This also applied at the time to academia: Lecturers and those responsible for study programs did not know what should be taught in the different subjects.

Against this background, goforIT (Grønn omstilling for IT-bransjen) was established in February 2020 by a small group of companies and universities. It has since grown to a national network with around 10 universities, 45 private organizations and 4 interest organizations.

The development and use of the Sustainability Competence Toolkit is one of the major undertakings of goforIT, intended to be important both for practice and for education. The ambition of the authors is to solve the systemic problems of operational sustainability in the industry and in society at large, advancing knowledge development and application in parallel in industry and academia. Developing the toolkit can be looked upon as a type of action design research, given that the developers of the artifact are also among its main users in their day-to-day activities. To understand how to best serve our audience, a group of design professionals have, through a service design process, undertaken interviews with people in various target audiences in the workforce and academia.

The role of contextual conditions in systems development: The impact of design context on participation in Norwegian Welfare Services

ABSTRACT. Human-Computer Interaction and adjacent fields agree that citizen participation is vital in designing digital public services. However, a gap remains between recommendations and how participation is facilitated in practice in the public sector. As challenges to participation remain even in the face of established design standards and best practices, contextual conditions warrant more investigation. Based on this discrepancy, we must clarify how the design context impacts participatory activities. This paper presents an exploratory case study of how designers and caseworkers seek to involve vulnerable persons in a public service project's digital solution development. We identified three interconnected contextual conditions that impact participation in the design process: 1) organizational complexity, 2) recruitment and representation, and 3) power imbalances. This paper contributes to a more nuanced understanding of the role of context as a determinant of participatory outcomes in digital public system design.

Sustainability Design in Mobile Augmented Reality

ABSTRACT. This paper describes the sustainability design process of a mobile augmented reality (MAR) application called AudioNear. Through a four-step process and a dedicated workshop with developers, the Sustainability Awareness Framework (SusAF) is applied to capture and connect sustainability issues into dimensions and levels of effects. Among twenty sustainability issues in MAR applications, eight functional issues that are essential to creating a sustainable MAR travel guide experience were identified and developed for AudioNear. First, a comprehensive list of design suggestions was formulated to facilitate sustainability design in MAR applications. Then, high-fidelity mock-ups of AudioNear were developed based on design suggestions, indicating promising results in terms of the sustainability design process. This work contributes to the field of sustainability design and MAR.

14:15-15:45 Session 8D: UDIT 2A: The first year of study
Location: KE A-101
Mandatory or Voluntary Course Work in Introductory Programming Courses?

ABSTRACT. Which approach, mandatory or voluntary weekly labs, is more conducive to student learning? We conducted a quasi-experiment in a bachelor level programming course (CS2) at the Department of Informatics at the University of Bergen. The course had maintained consistent structure, content, and faculty for the past two iterations, with one key distinction: in 2022, the weekly labs were made mandatory (n=265), whereas in 2021 they were voluntary (n=311). We compared student performance, retention, stress levels, and satisfaction between the two iterations.

Our findings revealed that in the semester with mandatory labs, students demonstrated significantly better performance on an end-of-term assignment that was nearly identical for both iterations of the course. We also observed a slight increase in the retention rate for students who participated in the final exam, but this difference did not reach statistical significance.

Regarding stress and workload, we employed a mixed-methods approach, utilizing both qualitative and quantitative data collected through surveys to gauge students' experience with mandatory versus voluntary assignments and how it impacts their workload. The responses revealed that students who had not experienced mandatory assignments expressed concerns about being overwhelmed with workload. However, students who had actually gone through the mandatory workload found it manageable and even viewed it as a positive aspect of their overall learning experience in the course.

Finally, we compared the end-of-term anonymous course evaluations between the two years, and found no statistically significant difference in course satisfaction between the two iterations.

Developing First-Year Students' Feedback Competence through Self-Assessment

ABSTRACT. This study focuses on strengthening students' feedback skills through self-assessment. We discuss how students assess their own assignments, to what extent they make use of this assessment, and how it affects their learning outcomes. In this study, an online structured assessment platform was used as a tool for self-assessment, identical for teachers and students. The results showed that almost 80% of the students chose to carry out the self-assessment, even though it was voluntary. A comparison of grades between students who completed the self-assessment and those who did not revealed that the former group scored, on average, 20% higher on the mandatory assignment. The study also revealed clear signs of the Dunning-Kruger effect. Overall, the study shows that self-assessment can be a valuable method for developing students' feedback competence and giving them a better understanding of their own academic level. By completing this task, students became more active and engaged in the course. Self-assessment was perceived as useful by the students and increased the benefit they derived from the teachers' feedback. The results also suggest that students gained a better understanding of the learning objectives by having access to the same assessment form that the teachers used.

Nudging in Higher Education: Text Message Interventions and Study Habits in Mathematics

ABSTRACT. In this field experiment, we explore the connection between study habits and academic achievement among undergraduates in an introductory mathematics course at a Norwegian college. Using a procrastination scale based on self-reported behavior, we examine how students' study habits influence their performance.

Our findings reveal a negative correlation between self-identified procrastination and the number of problem sets submitted. Moreover, there is a significant correlation between procrastination tendencies and the final course grade, but only for two of the four dimensions we use to measure procrastination. Notably, 43% of the variation in the final grade can be accounted for by prior competence, the number of homework assignments submitted, and the student's age.

Furthermore, to establish causality, we randomly divided the students into two groups: one received a text message on their mobile devices and the other did not. The text message emphasized the positive link between the number of completed problem sets and improved academic performance in the final exam. Through this controlled approach, we assess the impact of the text message on problem set submission and final exam performance.

Our results indicate that the text message exerts no discernible influence on either the quantity of problem sets submitted or the performance in the final exam.

14:15-15:45 Session 8E: UDIT 2B: School ++
Location: KE C-101
Assessment strategies for programming integrated in upper secondary education subjects

ABSTRACT. The increasing integration of computer science and programming into formal school education is a commendable endeavor that has seen different implementation solutions. Sweden and Norway have opted for a cross-curricular model, incorporating the task of teaching and learning computing into already existing subjects, mainly within STEM modules. In-service teachers often struggle with teaching programming effectively and integrating acquired programming knowledge into their educational settings. Additionally, instructors need to understand and evaluate programming learning outcomes, taking into account the new curriculum requirements. There is a lack of clear guidance regarding how teachers could assess students' knowledge and skills when programming becomes a part of their subject. This study investigates the assessment approaches of in-service teachers who have undergone a university-level professional development program.

The qualitative analysis of the teachers' assessment plans reveals that traditional assessment strategies are adjusted for the sake of programming, leaning towards formative initiatives featuring discussions, presentations, and student projects, and to a lesser extent, tests and exams. With respect to programming, teachers' assessment initiatives cover a broad spectrum of knowledge with different degrees of abstraction and granularity, from the particularities of coding and debugging to more abstract issues of algorithmic thinking and even program quality such as robustness or reliability. Higher Education courses addressing teacher professional development in programming might therefore integrate these strategies to support teachers' assessment in programming.
Exploring Scratch to Python Transfer in Norwegian Lower Secondary Schools

ABSTRACT. With an increased focus on programming in schools, more and more pupils are introduced to Scratch in primary school before learning Python in secondary school. This paper draws on an intervention approach associated with the Model of Programming Language Transfer (MPLT), and on hugging and bridging transfer techniques more generally, to consider how to ease the transition between the two contexts and to deepen conceptual learning.

An initial guess quiz, part of the MPLT intervention approach, taken by 97 pupils from mathematics classes in three lower secondary schools in Norway, indicates that in the current context, where the pupils have had some, but not much, exposure to Scratch, many of the basic programming concepts seem to be abstract true carry-over concepts: the pupils are familiar with the concepts in Scratch but do not automatically transfer that understanding to the corresponding Python concepts.

An MPLT-based teaching intervention was designed in order to help the pupils see the connections between the Scratch and Python concepts. The intervention used the Python turtle library in an attempt to make the two programming environments more similar. The intervention was well-received by pupils and teachers, but follow-up interviews revealed that the teachers do not currently have enough programming knowledge themselves to be able to adapt the intervention into their regular teaching. In practice, the situation is also complicated by the two programming languages being taught by different teachers in different schools, with no communication and little curricular continuity.

Teaching Programming Blockchain Applications to undergraduate students

ABSTRACT. Blockchain is a technology with potential for supporting industrial applications with traceability and transparency. Unfortunately, the understanding necessary to make decisions regarding the adoption of the technology and to implement blockchain-based applications is still in short supply. We have designed and taught, for two years, an undergraduate course to help fill the talent gap that is undermining blockchain adoption. The course is conceived as a hands-on course in which students gain initial skills in developing supply management applications for the BSV blockchain. This paper describes the motivations and challenges of the course, discusses its main achievements and issues, and suggests guidelines based on our observations and the student feedback.

16:00-17:00 Session 9A: UDIT 2C: Gamification
Location: KE A-101
A review of taxonomies of cybersecurity educational games

ABSTRACT. Presently, cybersecurity awareness is an essential skill expected from all audiences, ranging from small children to older adults. Moreover, serious games and gamification have been used in cybersecurity education for years with interesting results. Currently, there exists a multitude of cybersecurity games that target different cybersecurity skills and topics using a wide range of game genres and approaches. With such a variety of games, a classification taxonomy is of the utmost importance: a proper classification taxonomy allows researchers to gauge existing cybersecurity games as well as to place their own proposed games in the pantheon of existing ones. Thus, this research work conducted a literature review covering 2018 to 2023 to examine the existing taxonomies for cybersecurity educational games. It was observed that several taxonomies for cybersecurity games exist, but comprehensive taxonomies aimed specifically at cybersecurity serious games are lacking. Moreover, the review also identified some potential taxonomy candidates that are used in educational games in general but have yet to be used in the scope of cybersecurity. Lastly, the review suggests an extended and unified cybersecurity educational game taxonomy by merging some of the available educational game taxonomies.

Gamification for Increased Engagement, Flexibility, and an Alternative Learning Path

ABSTRACT. Motivated and engaged students have the potential for increased learning. Many teaching methods motivate and engage to varying degrees. Gamification is one such method, in which game elements such as points and levels are used in non-game contexts. While the concept has roots going back to the early 2010s, it differs from game-based learning in that it focuses on game-like elements rather than actual games. This article describes the implementation of a gamification pilot project introduced in the course IN1020 - 'Introduction to Computer Technology'. The goal was to increase flexibility, collaboration, and student engagement. As part of the project, students took part in game-like tasks and concluding 'endgame' exams. The feedback indicated overwhelming success, especially among self-organized student groups. While gamification in higher education is an emerging field, our findings indicate a significant opportunity to renew pedagogical approaches, create deeper learning engagement, and prepare students for future career paths. The student response confirmed that a more hands-on approach increased their motivation and that the group collaboration was particularly enriching. Further research should concentrate on a thorough evaluation of the effectiveness of gamification compared with traditional learning methods.

16:00-17:00 Session 9B: UDIT 2D: COVID-19 og fjernstudium
Location: KE C-101
Experiences after Covid-19 and digital teaching: IT students ask for streaming and recordings, but say they learn best on campus!

ABSTRACT. In this paper, we investigate experiences among IT bachelor students one year after the lifting of the last Covid-19 restrictions. We conducted an online survey among Norwegian IT students enrolled in a bachelor program (n=420), seeking to answer the following question: To what extent and why do students want streaming and recordings of physical lectures after the pandemic? The results of the statistical analysis show that most of the respondents prefer physical teaching and say that although their motivation for learning is highest on campus, they also want the option to follow a stream or watch a recording. Third-year students are more positive about digital teaching, as are those who work a lot alongside their studies. We conclude that the use of technology during the Covid-19 pandemic has led students to have new expectations and requirements that need to be taken into account.

Back to campus after the Covid-19 pandemic: A qualitative study from the students' perspective

ABSTRACT. In this paper, we investigate the reopening of society after the Covid-19 pandemic, looking especially for long-lasting effects of the lockdowns on teaching in higher education. We conducted a survey among Bachelor's students within the field of Information Technology (IT). This paper presents the results of a thematic analysis of the qualitative data from this survey. The importance of student activity and interaction within social constructivism is used as a foundation for our analysis and discussion of the post-pandemic situation. We found that students have come to expect both streaming and recording of lectures, valuing their flexibility and usefulness as learning resources. On the other hand, students are very clear that they learn better during physical lectures and that digital teaching is seen as inferior in many ways. It seems that, going forward, as lecturers we should find ways of combining online and traditional teaching.