ISD_2024: 32ND INTERNATIONAL CONFERENCE ON INFORMATION SYSTEMS DEVELOPMENT
PROGRAM FOR MONDAY, AUGUST 26TH

10:00-11:00 Session 3: Keynote: Xiaofeng Wang

Title: The importance of asking the right questions

Abstract: After over 20 years of research in agile software development and software startups, I've come to one important realization: asking the right questions is crucial both in the endeavors I have researched and in my own research work. Whether it's developing valuable software, building successful startups, or conducting impactful research, the importance of asking the right questions cannot be overstated, yet it is often overlooked by both practitioners and researchers. The recent surge of Generative AI has further underscored the necessity of asking the right questions, particularly through prompt engineering. In this keynote talk, I will present various cases to illustrate the critical importance of this skill in the fields that I know. I will also argue that the ability to ask the right questions is essential for everyone to truly harness the power of Generative AI.

Location: Room C-09
11:00-11:40 Session 4: Poster Session over Coffee I
Innovative information system approach for robust multi-criteria decision making with unknown weights

ABSTRACT. In an era where informed decision-making depends on reliable and robust information systems, this paper presents an innovative approach to the evaluation of multi-criteria decision-making problems with unknown criteria weights. Focused on selecting a city electric vehicle for personal use, the study extensively explores criteria weight scenarios within the Stable Preference Ordering Towards Ideal Solution (SPOTIS) method. A novel fuzzy ranking concept for defining rankings under multiple evaluation scenarios enhances decision reliability under varied input conditions. Emphasizing the significance of reliable information systems and customer support, this study aims to empower decision-makers with comprehensive insights into complex decision problems.
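
For readers unfamiliar with SPOTIS, the sketch below shows the method's core scoring step (weighted normalized distances to an ideal solution point) on a made-up two-criterion city-EV example. The data, weights, and bounds are illustrative assumptions, and the paper's fuzzy ranking extension for multiple weight scenarios is not reproduced here.

```python
import numpy as np

def spotis(matrix, weights, bounds, types):
    """Core SPOTIS scoring step (sketch).

    matrix : (m, n) decision matrix, alternatives x criteria
    weights: (n,) criteria weights summing to 1
    bounds : (n, 2) [min, max] data bounds per criterion
    types  : (n,) +1 for profit criteria, -1 for cost criteria
    Returns preference values; lower means a better alternative.
    """
    bounds = np.asarray(bounds, dtype=float)
    # Ideal solution point: max bound for profit, min bound for cost criteria.
    isp = np.where(np.asarray(types) == 1, bounds[:, 1], bounds[:, 0])
    # Normalized distance of every alternative to the ideal solution point.
    dist = np.abs(np.asarray(matrix) - isp) / (bounds[:, 1] - bounds[:, 0])
    return dist @ np.asarray(weights)

# Hypothetical city-EV example: criteria = [price (cost), range (profit)].
ev = np.array([[120_000, 300], [95_000, 230], [150_000, 410]])
scores = spotis(ev, weights=[0.5, 0.5],
                bounds=[[80_000, 160_000], [200, 450]], types=[-1, 1])
print(scores.argsort())  # ranking of alternatives, best (lowest score) first
```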

Social Sustainability and Large-Scale Agile Software Development

ABSTRACT. Large Scale Agile (LSA) projects present unique challenges in maintaining a positive and sustainable work environment. This study examines the social sustainability factors that influence LSA projects. A multiple case study approach was employed, involving semi-structured interviews with software professionals working in three Swedish IT companies. Thematic analysis revealed a network of eight interrelated factors that impact social sustainability in LSA projects, including trust and communication, learning culture, self-organisation, decision-making, leadership behaviour, and psychological safety. This study emphasises the human-centred aspects that are crucial for the successful implementation of LSA projects and the enhancement of social sustainability.

Characteristics of the learning data of a session-based recommendation system and their impact on the performance of the system

ABSTRACT. Recommendation systems are an effective solution for personalising e-commerce services. They are able to provide customers with relevant and useful products. Their performance is determined by the quality of the methods employed. However, it is also influenced by the input data. Session-based (SB) techniques are highly effective in real-world scenarios for generating recommendations that focus on short-term user activities. This study aims to investigate the relationship between data statistics and the performance of SB algorithms, measured by accuracy and coverage.

A Machine Learning Approach for Estimating Overtime Allocation in Software Development Projects

ABSTRACT. Overtime planning in software projects has traditionally been approached with search-based multi-objective optimization algorithms. However, the explicit solutions produced by these algorithms often lack applicability and acceptance in the software industry due to their disregard for project managers' intuitive knowledge. This study presents a machine learning model that learns the preferred overtime allocation patterns from solutions annotated by project managers (PMs) and applied to four publicly available software development projects. The model was trained using 1092 instances of annotated solutions gathered from software houses, and the Random Forest Regression (RFR) algorithm was used to estimate the PMs' preferences. The evaluation results, using MAE, RMSE, and R², revealed that RFR exhibits excellent predictive power in this domain with minimal error. RFR also outperformed the baseline regression models in all the performance measures. The proposed machine learning approach provides a reliable and effective tool for estimating project managers' preferences for overtime plans.
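
A minimal sketch of the evaluation pipeline the abstract describes (Random Forest Regression scored with MAE, RMSE, and R²) is given below; synthetic data stands in for the 1092 annotated overtime solutions, so all feature shapes are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Stand-in for the annotated solutions: features would encode a candidate
# overtime plan, the target a project manager's preference score.
X, y = make_regression(n_samples=1092, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
print("R2  :", r2_score(y_te, pred))
```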

Application of Generative Adversarial Network for Data Augmentation and Multiplication to Automated Cell Segmentation of the Corneal Endothelium

ABSTRACT. Considering the automatic segmentation of the endothelial layer, the available data on the corneal endothelium is still limited to a few datasets, typically containing an average of only about 30 images. To fill this gap, this paper introduces the use of Generative Adversarial Networks (GANs) to augment and multiply data. Using the "Alizarine" dataset, we train a model to generate a new synthetic dataset with over 513k images. A portion of this artificial dataset is then used to train a semantic segmentation model for endothelial layer segmentation, and its performance is evaluated, showing that on average the mean intersection over union across all datasets is equal to 81%. In our opinion, the images of the endothelial layer, together with the corresponding masks generated by the GAN, effectively represent the desired data. The obtained results seem optimistic after visual inspection, since the segmentation is very precise.
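
The mean intersection over union quoted above is a standard mask-overlap measure; a short sketch of how it is typically computed for binary segmentation masks follows (toy arrays, not the paper's data).

```python
import numpy as np

def mean_iou(pred_masks, true_masks):
    """Mean intersection over union for paired binary segmentation masks."""
    ious = []
    for p, t in zip(pred_masks, true_masks):
        union = np.logical_or(p, t).sum()
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union if union else 1.0)  # two empty masks agree
    return float(np.mean(ious))

# Toy 4x4 example: one predicted mask against its ground truth.
pred = np.array([[0, 1, 1, 0]] * 4, dtype=bool)
true = np.array([[0, 1, 0, 0]] * 4, dtype=bool)
print(mean_iou([pred], [true]))  # 0.5
```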

Calcium activity in a myocardium-on-chip model analysed with computer vision techniques for the assessment of tissue contractile properties

ABSTRACT. Models of cardiac tissue that accurately simulate the structure and function of the myocardium might allow us to better understand and treat cardiac disorders such as arrhythmia. One of the indicators of contraction, a key feature of cardiac tissue, is calcium (Ca) activity, which can be detected and recorded with fluorescence microscopy imaging. Here we use a range of computer vision techniques to analyse Ca activity in a myocardium-on-chip model, in tissues of GCaMP6-infected myocytes grown in a microphysiological environment. We analyse and quantify various aspects of Ca activity, both local and global properties alongside temporal dynamics, considering aspects such as the instantaneous and average occurrence rate of Ca waves, the size of areas covered by individual waves, or the degree of regularity of Ca activity. The simple summary statistics computed allow for a comparison of Ca activity across tissues recorded in different experimental conditions.

Comparison of Deep Neural Network Learning Algorithms for Mars Terrain Image Segmentation

ABSTRACT. With the advancement of technology and space exploration, research on the application of artificial intelligence in planetary data analysis becomes increasingly significant. Autonomous control of space rovers allows for faster and more reliable exploration of Mars. This paper is dedicated to the topic of terrain recognition on Mars using advanced techniques based on convolutional neural networks (CNNs). The work was conducted on a set of 18K images collected by the Curiosity, Opportunity, and Spirit rovers. The model benefits from pretrained backbones trained for the analysis of RGB images. The project achieves an accuracy of 83.5% and extends the scope of classification to unknown objects when compared with related projects of Zooniverse and NASA's Jet Propulsion Laboratory scientific group.

Ontology-driven Process-based Laboratory Information Management System

ABSTRACT. In this study, the authors focus on the development of a Laboratory Information Management System (LIMS). Traditionally, a LIMS is implemented to organize medical laboratory processes, from specimen access through laboratory analysis to issuing reports to physicians. The LIMS automates laboratory processes and is compatible with laboratory tools and quality control instruments. The authors present the enterprise architecture (EA) of a pathomorphology diagnosis unit (PDU), covering the LIMS applications and processes. Further, they formulate the system's knowledge model and propose an OWL ontology for the IoTDT-BPMN (Internet of Things Digital Twin – Business Process Model and Notation) metamodel for the PDU. The authors conclude that ontology elements related to the processes of the PDU facilitate LIMS development and usage, particularly in supporting the federated learning realized by selected labs.

Ontologically Founded Design Patterns for Situation Modeling

ABSTRACT. Situation modeling is a common challenge in modeling systems in non-trivial domains. The General Formal Ontology (GFO) is a top-level ontological theory that includes various notions for representing complex, temporally extended situations. It has been used for situation modeling in diverse biomedical domains. We have analyzed and compared such GFO-based application cases. Based on this analysis, we present derived ontology-based design patterns as a conceptual toolset for situation modeling.

Enhancing Personalized Travel Recommendations: Integrating User Behavior and Content Analysis

ABSTRACT. In the research presented in this paper, we focus on overcoming the obstacles of delivering personalized travel recommendations in the tourism sector. The paper makes a three-part contribution: first, it delves into the distinctive challenges of making recommendations in tourism and presents a framework to improve the ranking of trip options in tour operators' search engines. Second, we propose an innovative method that utilizes the behaviors of tourists and the descriptive content of travel offers to compile a dataset rich in insights about the travel industry. Third, we show that enhancing listwise learning-to-rank algorithms with an attention mechanism for feature selection significantly boosts the effectiveness of the model beyond traditional probabilistic ranking methods. The research concludes by assessing these ranking models and shedding light on the intricacies of recommending travel offers in the tourism industry.

The Reconstruction of Blowing Pressure in Pipe Organ Using Machine Learning

ABSTRACT. The reconstruction of a pipe organ involves determining the blowing pressure. The lack of information about the pressure value may even result in irreversible damage to the pipes, as the adjustment of the sound parameters that depend on the pressure requires changing the physical structure of the pipes. In this paper, we provide a methodology for determining the blowing pressure in a pipe organ and present a formula describing the air pressure in the pipe foot that depends only on the height of the pipe's cut-up and the fundamental frequency. We apply machine learning to determine the blowing pressure based on the parameters of only a fraction of the pipes. We found that the height of the cut-up and the fundamental frequency suffice to determine the blowing pressure. The more pipes, the higher the accuracy, but even 10% of the pipes can be sufficient.

New Concept to Multi-Criteria Model Automatization - Machine Learning Based Approach

ABSTRACT. Information systems based on machine learning (ML) models and multi-criteria decision analysis (MCDA) methods are becoming increasingly common due to the growing dimensions of the data that require processing. This paper presents a hybrid framework combining an MCDA method with ML models to predict the rankings of countries with respect to the fulfillment of Sustainable Development Goal 7, based on the identified preferences of decision-makers. The results proved that the proposed approach can be regarded as a functional tool for multi-criteria assessment when experts' knowledge is inaccessible. The proposed approach makes it possible to mitigate the shortcomings of MCDA methods arising from the necessity to engage decision-makers.

Improving the evaluation of Defensive Player Values with advanced machine learning techniques

ABSTRACT. Quantifying defensive actions, historically overshadowed by offensive indicators, is challenging in football analysis. This study presents a novel approach using XGBoost and neural networks to evaluate defensive play using the On-Ball Value (OBV), Valuing Actions by Estimating Probabilities (VAEP), and eXpected Threat (xT) indicators. We propose an evaluation of Defensive Player Value based on machine learning techniques. A comparative assessment against expert ratings and market values in a case study of the Polish PKO BP Ekstraklasa highlights the method's effectiveness. The research contributes to the development of sports analytics by addressing the long-standing challenge of evaluating the defensive play of football players.

Leveraging Generative AI Tools for UX Design in Lean and Agile Projects

ABSTRACT. Recent advancements in Generative AI (GenAI) open new opportunities to improve User Experience (UX) practitioners' efficiency in their projects. Due to intensive teamwork driven by time pressure and readiness for rapid change, Lean and Agile project management seems particularly predisposed to easy adoption of GenAI-supported UX design methods. However, hasty and spontaneous application of GenAI tools to UX design bears the risk that results may differ from what is expected, causing delays that harm speedy IT project management. This paper identifies issues relevant to UX practitioners' dilemmas when considering GenAI tools for user interface projects, and proposes a fast-and-frugal decision-making framework for IT project managers and UX professionals on whether or not to use GenAI tools in Agile and Lean IT projects.

System for Monitoring Forests with Context-Aware Capabilities

ABSTRACT. A forest fire protection system exemplifies context-aware and proactive responsiveness to potential threats during routine monitoring of forested areas and firefighting operations. We advocate for the development of a context-driven system, which entails deploying a network of sensors across the different sectors of forests. Additionally, we incorporate context-driven and automated negotiation techniques to mitigate forest fire threats. The intelligent decisions facilitated by the system, aimed at supporting users, are the outcome of the proposed context processing.

The Impact of Foreign Accents on the Performance of Whisper Family Models Using Medical Speech in Polish

ABSTRACT. The article presents preliminary experiments investigating the impact of accent on the performance of the Whisper automatic speech recognition (ASR) system, specifically for the Polish language and medical data. The literature review revealed a scarcity of studies on the influence of accents on speech recognition systems in Polish, especially concerning medical terminology. The experiments involved voice cloning of selected individuals and adding prosodic contours with Russian and German accents, followed by transcription of these samples using all available models from the Whisper family and comparison with the original transcription. The results of these initial experiments suggest that the Whisper model struggles with foreign accents in the context of the Polish language and medical terminology. This highlights the need for further research aimed at improving ASR systems for transcribing the speech of medical personnel.

Reidentifying the Compromise Model in the Analytical Decision Process: Application of the SITW and S-TFN Approaches

ABSTRACT. The paper presents a novel approach to re-identifying the trade-off model in analytical decision-making processes, using the SITW and S-TFN techniques and the TOPSIS method. Focusing on the importance of criteria weights and benchmarks, the study compares the proposed methods with traditional approaches such as CRITIC-TOPSIS. The results show the higher efficiency of SITW and S-TFN, as measured by correlation with the trade-off ranking. The proposed techniques open up new possibilities in re-identifying decision-making models, emphasizing the importance of precise determination of weights and reference points.

Innovative Sales Forecasting: Utilizing Fuzzy Neural Networks for Enhanced Sales Prediction

ABSTRACT. This study aims to improve retail sales forecasting using fuzzy neural networks (FNNs). Traditional methods often miss complex sales patterns. We apply FNNs to the Walmart sales dataset, using accuracy and loss metrics to compare them with conventional time series models and advanced techniques like LightGBM and LSTM. Comprehensive data preprocessing ensures data quality. FNNs handle uncertainties and complex relationships better, outperforming traditional methods. The findings suggest that FNNs enhance forecasting accuracy, supporting informed decision-making in retail.

A new symbolic time series representation method based on data fuzzification

ABSTRACT. Time series classification is an essential data processing task that relies on assigning class labels to sequences of temporal data. A fundamental component of any time series classification method is data representation. There exist several approaches to that task, ranging from straightforward sequence distance-based methods to neural networks. We focus on symbolic time series representation-based methods; the literature of the domain repeatedly underlines their flexibility and good classification quality. We propose a new approach to converting numeric time series into symbolic ones based on fuzzy clustering, with the goal of reducing noise in the data. The proposed method utilizes cluster membership values to determine the symbols that characterize the time series. The new approach was tested in an empirical procedure to validate its correctness, achieving satisfactory results.
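
A minimal sketch of the general idea, fuzzy memberships turned into symbols, is shown below. It assumes one-dimensional, pre-computed cluster centers and the standard fuzzy c-means membership formula, whereas the paper learns the clusters from the data itself.

```python
import numpy as np

def fuzzify_series(series, centers, m=2.0):
    """Convert a numeric series into symbols via fuzzy cluster memberships.

    Each value receives membership degrees to the given 1-D cluster centers
    (standard fuzzy c-means membership formula); the symbol is the index of
    the strongest membership, which softens the effect of noise near cluster
    borders compared to hard thresholding.
    """
    x = np.asarray(series, dtype=float)[:, None]          # shape (T, 1)
    d = np.abs(x - np.asarray(centers)[None, :]) + 1e-12  # (T, k) distances
    u = 1.0 / d ** (2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)                     # memberships sum to 1
    return u.argmax(axis=1), u

symbols, memberships = fuzzify_series([0.1, 0.4, 2.1, 1.9, 0.2], centers=[0.0, 2.0])
print(symbols)  # [0 0 1 1 0]
```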

11:40-12:40 Session 5A: New Topics in IS Development I
Location: Room C-20
11:40
Tamper-proof blockchain-based contracts for the carriage of goods by road

ABSTRACT. We propose an architecture for a decentralized electronic consignment note (eCMR) management system leveraging blockchain technology. We used Design Science Research (DSR) to guide our work, in cooperation with a leading logistics and transportation management solution provider. We started with a multi-vocal analysis of the literature on eCMR, regulations, and technical documentation. Then, we held several rounds of discussions with road carriage business stakeholders to establish the core software requirements and architecture. Finally, we implemented and tested a proof of concept. The resulting artifact enables logistics stakeholders to track consignment notes of interest in real time, relying on a tamper-proof database with complete eCMR lifecycle traceability. The proposed architecture and proof of concept can guide the design of future decentralized eCMR services, helping to implement reliable and transparent logistics processes.

12:00
User acceptance of leisure and hobby subscription services – a systematic literature review

ABSTRACT. Digital subscription services have become a ubiquitous presence in various sectors, including entertainment, news, music, gaming, and software. Despite their growing significance and impact, particularly within leisure and hobby areas, the problem of user acceptance of these services has not yet received a comprehensive explanation. To address this research gap, a systematic literature review has been carried out. The literature items were extracted from the largest academic databases, i.e., Scopus and Web of Science. Then, using Bibliometrix and the AI-driven ASReview software, 35 items concerning leisure and hobby subscription services were selected for full-text analysis. The research reveals a noticeable concentration of academic discourse in the last five years, coinciding with the onset and aftermath of the COVID-19 pandemic. The analyzed papers investigated user acceptance of Internet subscriptions, with particular emphasis on the leisure and hobby realm, exploring, e.g., willingness to pay for subscription services and the influence of IT infrastructure.

12:20
Challenges for making use of welfare technology generated data from a system innovation perspective

ABSTRACT. European governments have pushed the digitalization of elderly care to meet the challenges of an increasingly ageing population. As a result, welfare technologies (WT), such as digital safety alarms, are now implemented, generating huge amounts of data which could be used to improve the quality of care. The aim of this exploratory study is to analyze and describe the challenges of making use of WT data in Swedish elderly care. Our qualitative study revealed that making use of WT data is not merely a technical issue, and suggests that framing the utilization of WT data from a system innovation perspective is a promising initial move to synergize efforts and lay the groundwork for more efficient, high-quality, data-driven elderly care.

11:40-12:40 Session 5B: Data Science and Machine Learning I
Location: Room C-09
11:40
Dual-Level Decision Tree-Based Model for Dispersed Data Classification

ABSTRACT. The paper proposes a decision tree-based model for dispersed data classification. The dispersed data are stored in tabular form and are collected independently. The tables may have different objects as well as attributes, but some of them may be common among the tables. The proposed model has a two-level hierarchical architecture that uses decision trees at each level. At the lower level, bagging is used with decision trees for each table. For a classified object, prediction vectors are generated for each table, showing the probabilities that the object belongs to the various decision classes. A global tree is trained on the vectors generated for a validation set, and it makes the final classification for a test object. This paper outlines experimental findings for our proposed approach and contrasts them with established methodologies from the literature. Statistical analysis, based on 16 dispersed data sets, confirms that our model improves classification quality for dispersed data.
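
The two-level architecture can be pictured with off-the-shelf components. The sketch below uses scikit-learn bagging and decision trees with an assumed split of attributes into two local tables; it illustrates the data flow, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-ins for two independently collected local tables sharing the label.
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
tables = [X[:, :4], X[:, 4:]]                 # different attribute subsets

tr, va = train_test_split(np.arange(len(y)), test_size=0.3, random_state=0)

# Lower level: bagged decision trees, one ensemble per local table.
local = [BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                           random_state=0).fit(t[tr], y[tr]) for t in tables]

# Prediction vectors (class probabilities) on the validation set ...
stacked = np.hstack([m.predict_proba(t[va]) for m, t in zip(local, tables)])
# ... train the global tree that makes the final decision.
global_tree = DecisionTreeClassifier(random_state=0).fit(stacked, y[va])

# Classify "new" objects (here: the first five validation objects, for demo).
new = np.hstack([m.predict_proba(t[va][:5]) for m, t in zip(local, tables)])
print(global_tree.predict(new))
```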

12:00
Equal Criteria Influence Approach (ECIA): Balancing Criteria Impact in Multi-Criteria Decision Analysis

ABSTRACT. In multi-criteria decision-making, determining the importance of individual criteria remains a crucial problem. Traditional approaches to weighting criteria often result in insufficient consideration of the varying influence of criteria on decision outcomes. This paper introduces a novel iterative method, the Equal Criteria Influence Approach (ECIA), to tackle this problem. ECIA aims to equalize the influence of criteria by iteratively adjusting their weights. Unlike conventional methods, ECIA emphasizes the impact of criteria on the preference of alternatives. The approach is comprehensively analyzed through a small simulation study and an analysis of the decision problem of assessing crisis management systems. The studies show that ECIA offers a unique solution for dealing with variability in the impact of criteria, leading to a more balanced and stable decision-making model.

12:20
Relative Relation in KNN Classification for Gene Expression Data. A Preliminary Study

ABSTRACT. This paper introduces an innovative approach to the classification of gene expression data using the k-nearest neighbors (KNN) algorithm. High dimensionality and limited sample sizes continue to present significant challenges for conventional classification techniques, including KNN. In response, we propose the Relative Relation Metric (RRM), a novel metric that diverges from traditional distances which typically rely on direct numerical or spatial comparisons. RRM instead focuses on the count of relational changes between pairs of data points, drawing conceptual inspiration from Relative Expression Analysis, which identifies the most discriminating gene pairs between classes, and Kendall's Tau. Applied to real gene expression datasets for disease classification and compared with established metrics, our preliminary study suggests that RRM has potential as an effective alternative for high-dimensional data classification, especially in contexts requiring resistance to methodological variations and the transformational aspects of biological data.
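
One plausible reading of such a metric is a count of discordant feature pairs between two samples. The sketch below implements that reading (an assumption, not the paper's exact RRM) and plugs it into a k-nearest-neighbors classifier as a custom distance.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def relational_distance(x, y):
    """Count feature pairs whose ordering differs between two samples.

    For every pair (i, j), check whether x_i < x_j agrees with y_i < y_j;
    disagreements ("relational changes") are counted, in the spirit of
    Kendall's tau and Relative Expression Analysis.
    """
    xo = np.subtract.outer(x, x) > 0        # pairwise orderings in sample x
    yo = np.subtract.outer(y, y) > 0        # pairwise orderings in sample y
    return float(np.triu(xo != yo, k=1).sum())

# Toy "expression" data: class 1 reverses the order of the last two genes.
X = np.array([[1., 2., 3.], [1., 2., 4.], [1., 3., 2.], [2., 4., 3.]])
y = np.array([0, 0, 1, 1])

knn = KNeighborsClassifier(n_neighbors=1, metric=relational_distance).fit(X, y)
print(knn.predict([[0.5, 3.0, 2.5]]))  # ordering matches class 1 -> [1]
```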

11:40-12:40 Session 5C: Information Systems Modelling
Location: Room C-21
11:40
An Approach to Assess Operational Business-IT Alignment

ABSTRACT. Business-IT alignment (BITA) remains a challenging topic for enterprise architects, especially operational BITA, which focuses on the alignment between business processes and IT applications. A major challenge is determining the level of alignment between the Business and IT layers. Some approaches propose to assess this alignment, but they are often context-specific. Moreover, no approach intends to combine various assessment means for a more in-depth alignment evaluation. Thus, we propose an alignment assessment approach combining different assessment means (metrics, consistency rules, anti-patterns). We also provide a methodology that integrates this approach by relying on an established cartography expressing the current state of alignment. This cartography is composed of Business and IT models, and of explicit links between them. The proposed methodology and assessment approach are illustrated on the SoftSlate case study, an open-source Java e-commerce solution. In this practical experiment, we considered four metrics, two consistency rules, and four anti-patterns.

12:00
E-Government Interoperability Enterprise Architecture. Systematic Literature Review

ABSTRACT. Public administrations have been introducing innovations such as digital initiatives, and those initiatives are related to interoperability between systems managed by different government agencies. Despite those efforts, citizens and businesses are still calling for better digital public services. To understand interoperability challenges, this paper presents a systematic literature review addressing 1) the levels of interoperability that must be considered in government services, 2) the key motivations for interoperability, and 3) the challenges in the e-government ecosystem. From 680 papers, we selected 28 for in-depth analysis. As a result, we identified three core interoperability layers: technical, semantic, and organizational. We also present e-government interoperability project challenges related to strategic, policy, and technological barriers, and to a common modeling language. Finally, using ArchiMate, we identified the elements of the e-government interoperability motivation layer and used them to test how Enterprise Architecture can manage e-government interoperability.

12:20
A Use Case Grammar for Requirements Specification

ABSTRACT. This paper proposes a use case grammar for the specification of functional requirements. The proposed grammar is defined in EBNF, tested in ANTLR, and provides syntactic and semantic rules for writing use case specifications in semi-formal natural language. Such formalization not only helps to make the expression of requirements more disciplined, understandable, well-structured, and validated, but also eases their conversion into diagrammatic notations, such as use case and sequence diagrams. It also helps to reduce the time needed to identify and specify requirements, to diminish redundancies, inconsistencies, and omissions, and, generally, to produce better requirements.

12:40-13:40 Lunch Break
13:40-15:00 Session 6A: New Topics in IS Development II
Location: Room C-20
13:40
Blockchain-Based Self-Sovereign Identities: Current Landscape and Research Opportunities

ABSTRACT. Managing identities has become increasingly crucial for various organizations, including banks, government agencies, and healthcare providers. Furthermore, the privacy of personal data has gained increasing importance and concern. In this context, Self-Sovereign Identity (SSI) systems have emerged as a cutting-edge solution that builds on traditional identity management. They empower individuals with unprecedented control over their data, allowing them to selectively share information with authorized entities while retaining their privacy. This paper presents a Systematic Literature Review (SLR) of blockchain-based Self-Sovereign Identities. We analyzed ninety-four articles from eight academic databases and categorized the findings into six major groups, identifying eighteen distinct areas of interest. Our results underscore the growing importance of SSI in digital identity research and its implications for privacy, security, and regulatory compliance. Furthermore, we provide a comprehensive map of the literature, elucidating areas of substantial scholarly attention and those necessitating further exploration.

14:00
A Heritage Digital Twin for Serra da Estrela Cheese Production

ABSTRACT. This paper presents the design of a heritage digital twin of the Serra da Estrela cheesemaking process. The proposed solution integrates traditional knowledge with digital capabilities to ensure the preservation and sustainable practices of this intangible cultural heritage. Design science research is the selected approach to create a digital replica of the cheesemaking process, refined through iterative design and development phases, stakeholder interviews, and field demonstrations. The findings reveal that a heritage digital twin enables (1) memorization of local practices, (2) real-time monitoring and decision support, and (3) audit traces. It upholds traditional methods while enhancing resource efficiency and compliance with health standards. This work pioneers the application of digital twins to the cultural heritage of local food production, adhering to the requirements of protected designation of origin.

14:20
Towards Universal Visualisation of Emotional States for Information Systems

ABSTRACT. The paper concerns affective information systems that represent and visualize human emotional states. The goal of the study was to find typical representations of discrete and dimensional emotion models in terms of color, size, speed, shape, and animation type. A total of 419 participants were asked about their preferences for emotion visualization. We found that color, speed, and size correlated with selected discrete emotion labels, while speed correlated with arousal in a dimensional model. This study is a first step towards defining a universal emotion representation for use in information systems.

14:40
Heterogeneous Technology Acceptance Models as a Tool for Product Analysis: Example from Comparison of Handheld Gaming Consoles

ABSTRACT. While technology acceptance models (TAM) are widely used, questions remain concerning their practical utility. Recent research suggests that TAM applicability requires model predictors to exhibit heterogeneity across products and contexts. This study explored TAM's application to the comparative analysis of handheld gaming consoles by examining heterogeneity in the effect sizes of Perceived Usefulness and Ease of Use coefficients across different console products. User reviews were collected from Amazon.com for four major consoles and annotated using an NLP-LLM system to obtain numerical scores for TAM variables. Separate TAM regressions were fitted for each console, and models were compared using coefficient difference tests and ANOVA. Results supported the heterogeneity hypotheses, revealing significant differences in TAM coefficients across consoles. The observed heterogeneity enables utilizing TAM for practical applications such as product comparison and design improvement. Companies can identify product strengths, weaknesses, and user priorities by jointly examining model coefficients and mean user sentiment scores on TAM variables.

13:40-15:00 Session 6B: Data Science and Machine Learning II
Location: Room C-09
13:40
Voting Classifier Using Discretisation in Aggregating Decisions

ABSTRACT. In popular approaches to classification by aggregating decisions, there are two main trends. One path leads to the construction of a classifier ensemble, where a group of diversified inducers votes on the label to be assigned to a sample. The second direction is to obtain a decision based on dispersed data, through some form of information fusion. The paper proposes a new mode of operation for a voting classifier, where one and the same inducer can reach a final decision relying on labels assigned through partially dispersed data, but also on different forms of the same data resulting from discretisation. The experiments were carried out on several datasets, classifiers, and algorithms for aggregating decisions. They revealed cases and scenarios with improved predictions, showing the merits of the presented research methodology.
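
One way to picture the discretisation-based voting: fit one and the same inducer to several discretised views of the data and take a majority vote over the predicted labels. The sketch below does this with scikit-learn; the inducer, bin counts, and dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The same inducer fitted to differently discretised forms of the same data.
voters = [make_pipeline(KBinsDiscretizer(n_bins=b, encode="ordinal",
                                         strategy="quantile"),
                        GaussianNB()).fit(X_tr, y_tr)
          for b in (3, 5, 7)]

votes = np.stack([v.predict(X_te) for v in voters])  # (n_voters, n_samples)
final = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
print("majority-vote accuracy:", (final == y_te).mean())
```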

14:00
The influence of loss function on oblique survival tree induction

ABSTRACT. Survival trees are a common machine learning tool designed to handle censored data, where only partial information about failure events is available. Most survival tree models work by recursively dividing the feature space using splits defined by single attributes in internal nodes. However, there is a less common type known as oblique survival trees, which use more attributes to create splits in the form of hyperplanes. In this paper, we depart from the typical top-down approach and focus on globally induced oblique survival trees, aiming to optimize both prediction accuracy and model complexity. We propose using two different loss functions, the integrated Brier score and a likelihood-based loss, in the process of oblique survival tree induction. We then compare the resulting models in terms of their predictive performance and complexity.
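
For context, the integrated Brier score mentioned above is commonly defined through the IPCW-weighted time-dependent Brier score (after Graf et al.); whether the paper uses exactly this weighting is not stated in the abstract. A standard formulation:

```latex
\mathrm{BS}(t) = \frac{1}{N}\sum_{i=1}^{N}\left[
  \frac{\hat{S}(t \mid x_i)^{2}\,\mathbf{1}(T_i \le t,\ \delta_i = 1)}{\hat{G}(T_i)}
  + \frac{\bigl(1 - \hat{S}(t \mid x_i)\bigr)^{2}\,\mathbf{1}(T_i > t)}{\hat{G}(t)}
\right],
\qquad
\mathrm{IBS} = \frac{1}{t_{\max}} \int_{0}^{t_{\max}} \mathrm{BS}(t)\,\mathrm{d}t
```

where \hat{S} is the predicted survival function, \hat{G} the Kaplan-Meier estimate of the censoring distribution, and \delta_i the event indicator.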

14:20
Customer Churn Prediction by Rough Neuro-Fuzzy Classifier with CA Defuzzification

ABSTRACT. Given that churn management is a crucial endeavour for firms aiming to retain valuable customers, the capacity to forecast customer churn is indispensable. We use a rough and fuzzy set based classifier to predict customer churn on the example of the Bank Customer Churn dataset. Rough set theory offers techniques for handling incomplete or missing data. By utilizing the lower and upper approximation concepts, the system can still perform prediction even when certain feature values are missing, which we demonstrate in the paper for every combination of missing features. Moreover, we determine a feature importance coefficient evaluated in two different ways: directly from the data and from the working classifier. Rough set based systems can be integrated with other machine learning and data mining techniques, and we use the LEM-2 rule induction algorithm to create a rule base for the rough-fuzzy classifier.

14:40
Artificial Intelligence in Optimizing the Selection of Incoterms Rules in Cross-Border Trade. State of Knowledge and Needs for Further Research

ABSTRACT. Decisions regarding the choice of Incoterms rules in cross-border trade are an area of decision-making ideally suited to AI (Artificial Intelligence) support. AI makes it possible to analyze huge datasets of historical transactions, considering all relevant decision factors in the choice of Incoterms. Based on this data, AI might recommend Incoterms that maximize control and provide clear, real-time landed cost estimation. Consequently, using AI to model Incoterms decisions can streamline buying and selling processes. The article aims to assess the current state of knowledge and identify directions for future research on optimizing decision-making processes related to the choice of Incoterms in cross-border trade based on AI solutions. The study used the Scoping Review method and the VOSviewer IT tool. The keyword co-occurrence analysis showed that there is a lack of in-depth research relating AI issues to the choice of Incoterms and to modeling and optimizing these decision processes in supply chains.

13:40-15:00 Session 6C: Lean and Agile Software Development I
Location: Room C-21
13:40
Unlocking Feedback in Remote Retrospectives: Games, Anonymity, and Continuous Reflection in Action

ABSTRACT. Conducting engaging and productive sprint retrospectives has been a long-standing challenge, further complicated by the shift to remote work due to the COVID-19 pandemic. This transition has introduced new complexities, such as diminished team trust, loss of non-verbal communication, and reduced effectiveness of brainstorming activities. This paper aims to explore strategies to enhance remote retrospectives for Scrum teams struggling with low engagement and reluctance to offer critical feedback during these meetings. Our study involved three Action Research cycles, which sequentially introduced retrospective games, anonymous feedback, and continuous issue documentation throughout the sprint. The use of retrospective games resulted in increased meeting engagement and active participation, while anonymity created a more secure environment for more comprehensive and truthful feedback. Additionally, continuous reflection ensured no crucial matters were overlooked and promoted proactive problem-solving in real time. This research adds to the existing knowledge on agile software development in remote settings, providing agile practitioners with actionable strategies to enhance their continuous improvement practices.

14:00
On the Business Analyst's Responsibilities in an Agile Software Project - a Multi-Method Study

ABSTRACT. [Context] Agile methods are now used in the majority of software projects, but the definitions of such methods rarely include the role of a business analyst (BA). [Objective] This paper investigates the responsibilities assigned to BAs participating in agile software projects. [Method] We identified potential responsibilities through a systematic literature review (3 databases) and interviews with 6 practitioners. The most commonly mentioned responsibilities were further evaluated in a questionnaire survey study with 72 respondents. [Results] The combined findings from the SLR and interviews resulted in 89 unique responsibilities grouped into 7 areas. 49 of these were ranked according to the frequency with which they were assigned in the survey respondents' organizations. [Conclusions] Our findings show that BAs typically support Product Owners (rather than taking on that role) and focus on requirements engineering, business needs, and working closely with development teams.

14:20
Waste and Its Elimination in Software Development Projects in Europe

ABSTRACT. The article presents the concept of waste and its elimination in software development projects. Empirical research in 142 European computer programming companies shows the types of waste and their causes, as well as the techniques used to minimise them. The most important types of waste are delays, unnecessary meetings and switching between tasks. The most common causes of waste are overly ambitious client requirements and multi-tasking. The techniques used by companies to reduce waste include in particular agile project management, proper planning and better communication. Some differences were found between companies from Central Eastern and Western European countries and between companies of different sizes. Statistical analysis showed that delays are associated with unnecessary processes and movements. Poor communication in particular causes various types of waste, while software bugs especially have multiple causes. Logistic regressions showed that waste elimination is correlated with better competitive position and development prospects of firms.

14:40
Impact of Work from Home on Agile Software Project Execution -- the Empirical Study

ABSTRACT. Background: The outbreak of the COVID-19 pandemic changed the working patterns of software project delivery. Aim: The study examines how work from home (WFH) impacted software project execution and whether differentiating patterns emerged. Method: Data on project execution in two country locations was examined. A population of 3711 projects across 52 months (26 pre- and 26 post-pandemic) is analyzed, and the paper identifies the changed patterns of execution. Results: WFH resulted in more frequent reporting of project status, significantly higher granularity of reporting, small changes in the statuses reported, and significant changes in the duration of a project in a given status. Conclusion: The study concludes that WFH has had an overall positive impact on software project execution, but notes that this was achieved with an increase in reporting frequency and granularity.

15:00-15:40 Session 7: Poster Session over Coffee II

All posters from Session 4 (Poster Session over Coffee I) should be presented once more during this session.

15:40-16:40 Session 8A: New Topics in IS Development III
Location: Room C-20
15:40
A Maturity Model for Data Governance in Decentralized Business Operations: Architecture and Assessment Archetypes

ABSTRACT. Organizations increasingly participate in inter-organizational partnerships that exploit business opportunities supported by shared data assets. Hence, data governance is required to establish collaborative operations between the partners, ensure accountability for shared data assets, define data ownership, identify data provenance, and comply with data-related regulations. This paper presents (1) the structure of a data governance maturity model for inter-organizational operations and (2) a set of maturity assessment archetypes for data governance. These results emerge from a research partnership with a major European technology and service provider involved in data collaboration ecosystems for digital and green logistics. Our contribution extends the state-of-the-art on distributed data governance, specifically for increasingly common business ecosystems built on shared data processing, and provides practical tools for organizations to conduct a data governance maturity assessment tailored to their role in such collaborative operations.

16:00
Scaling Technology Acceptance Analysis with Large Language Model (LLM) Annotation Systems: A Validation Study

ABSTRACT. Technology acceptance models effectively predict how users will adopt new technology products. Traditional surveys, often expensive and cumbersome, are commonly used for this assessment. As an alternative to surveys, we explore the use of large language models for annotating online user-generated content, like digital reviews and comments. Our research involved designing an LLM annotation system that transforms reviews into structured data based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model. We conducted two studies to validate the consistency and accuracy of the annotations. Results showed moderate-to-strong consistency of LLM annotation systems, which improved further when the model temperature was lowered. LLM annotations achieved close agreement with human expert annotations and outperformed the agreement between experts for UTAUT variables. These results suggest that LLMs can be an effective tool for analyzing user sentiment, offering a practical alternative to traditional survey methods and enabling deeper insights into technology design and adoption.

16:20
Preliminary Eye Tracking Scale for Cognitive Load

ABSTRACT. The article examines the role of technology in social campaigns and cognitive overload in advertising. It critiques traditional cognitive load measurement methods and suggests using eye tracking for more accurate assessments. The authors recommend a cognitive load assessment scale that considers differences between static and dynamic presentations and eye tracking correlations. Differences in correlations based on stimulus type led to identifying a condensed set of measures for videos and images. These findings refine eye tracking methodologies and enhance cognitive load assessment tools, positioning eye tracking as a reliable method for measuring cognitive load in advertising and improving communication strategies in social campaigns.

15:40-16:40 Session 8B: Data Science and Machine Learning III
Location: Room C-09
15:40
Combining Deep Learning and GARCH Models for Financial Volatility and Risk Forecasting

ABSTRACT. In this paper, we develop a hybrid approach to forecasting the volatility and risk of financial instruments by combining econometric GARCH models with deep learning networks. For the latter, we employ Gated Recurrent Unit (GRU) networks, while four different specifications are used for GARCH: standard GARCH, EGARCH, GJR-GARCH, and APARCH. Models are tested using daily returns on the S&P 500 index and Bitcoin prices. As the main volatility estimator, which also serves as the target function of our hybrid models, we use the modified Garman-Klass estimator. Volatility forecasts resulting from the hybrid models are employed to evaluate the assets' risk using Value-at-Risk (VaR) and Expected Shortfall (ES). Gains from combining the GARCH and GRU approaches are discussed in the contexts of both volatility and risk forecasts. It can be concluded that the hybrid solutions produce more accurate point volatility forecasts, although this does not necessarily translate into superior risk forecasts.
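
The classical Garman-Klass estimator behind the target variable is compact enough to state in code; the snippet below implements the textbook formula, while the modified variant used in the paper is not detailed in the abstract.

```python
import numpy as np

def garman_klass_var(o, h, l, c):
    """Classical Garman-Klass daily variance estimate from OHLC prices.

    sigma^2 = 0.5 * ln(H/L)^2 - (2*ln(2) - 1) * ln(C/O)^2
    """
    o, h, l, c = map(np.asarray, (o, h, l, c))
    return 0.5 * np.log(h / l) ** 2 - (2 * np.log(2) - 1) * np.log(c / o) ** 2

# One hypothetical S&P 500 trading day; print the implied daily volatility.
print(np.sqrt(garman_klass_var(4500.0, 4550.0, 4480.0, 4530.0)))
```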

16:00
Hedging Properties of Algorithmic Investment Strategies using Long Short-Term Memory and Time Series models for Equity Indices

ABSTRACT. This paper proposes a novel approach to hedging portfolios of risky assets when financial markets are affected by financial turmoil. We introduce diversification at the level of ensemble algorithmic investment strategies (AIS) built on the prices of these assets. We employ four types of diverse models (LSTM, ARIMA-GARCH, momentum, contrarian) to generate price forecasts, which are used to produce investment signals in single and complex AIS. We verify the diversification potential of different types of investment strategies consisting of various asset classes in hedging ensemble AIS built for equity indices (S&P 500). Our conclusion is that LSTM-based strategies outperform the other models and that the best diversifier for the AIS built for the S&P 500 index is the AIS built for Bitcoin. Finally, we test the LSTM model on 1-hour frequency data and conclude that it outperforms the results obtained using daily data.

16:20
Predicting Prices of S&P 500 Index Using Classical Methods and Recurrent Neural Networks

ABSTRACT. This study implements algorithmic investment strategies based on classical methods and a recurrent neural network model. The research compares the performance of investment algorithms on time series of the S&P 500 Index covering 20 years of data from 2000 to 2020. We present an approach for the dynamic optimization of parameters during the backtesting process by using a rolling training-testing window. Each method was tested in terms of robustness to changes in parameters and evaluated by appropriate performance statistics, such as the Information Ratio and Maximum Drawdown. The combination of signals from different methods was stable and outperformed the benchmark of the Buy&Hold strategy, doubling its returns while maintaining the same level of risk. Detailed sensitivity analysis revealed that classical methods utilizing a rolling training-testing window were significantly more robust to changes in parameters than the LSTM model.
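
The rolling training-testing window can be sketched generically: re-optimise a strategy parameter on each training window, then apply it unchanged to the following out-of-sample window. The toy example below uses a hypothetical moving-average rule on simulated prices, not the paper's strategy set or data.

```python
import numpy as np

def causal_ma(x, w):
    """Trailing moving average; the first w-1 entries are NaN (no lookahead)."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    ma = np.full(len(x), np.nan)
    ma[w - 1:] = (c[w:] - c[:-w]) / w
    return ma

def rolling_backtest(prices, train_len=500, test_len=21, grid=(5, 20, 60)):
    """Re-fit the MA lookback on each training window, trade it out-of-sample."""
    rets = np.diff(np.log(prices))
    # Signal at day t uses data up to t only, applied to the return t -> t+1.
    sig = {w: (prices[:-1] > causal_ma(prices, w)[:-1]).astype(float)
           for w in grid}
    oos = []
    for s in range(train_len, len(rets) - test_len, test_len):
        tr, te = slice(s - train_len, s), slice(s, s + test_len)
        best = max(grid, key=lambda w: (sig[w][tr] * rets[tr]).sum())
        oos.append(sig[best][te] * rets[te])      # out-of-sample returns only
    return np.concatenate(oos)

prices = np.cumprod(1 + np.random.default_rng(0).normal(3e-4, 0.01, 3000))
oos = rolling_backtest(prices)
print("annualised IR:", oos.mean() / oos.std() * np.sqrt(252))
```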

15:40-16:20 Session 8C: Learning, Education, and Training I
Location: Room C-21
15:40
Designing Trainee Performance Assessment System for Hands-on Exercises

ABSTRACT. Practical hands-on exercises for trainees in information technologies and information systems provide the tools needed to develop, operate, and test cloud-based infrastructures. Exercises, frequently carried out in simulated settings, offer a practical approach to highlighting the significance of skills related to structured decision-making and detailed configuration in the command line. Therefore, the paper proposes a unique data analysis solution that encompasses safe sandboxing and examines user-provided command-line data throughout the exercises. The development of the solution focuses on command-line input parsing, tokenization, and structured analysis, providing a viewpoint on the knowledge level in simulated scenarios. The work provides insight into structured command-line data analysis and the complexities of command execution. The prototype is based on Bourne-Again Shell and Python modules and asynchronous data collection. The paper contributes to educational processes by improving training performance assessment techniques and offering insights into the exemplified field of penetration testing of information systems.

16:00
A Game-like Online Student Assessment System

ABSTRACT. The paper introduces a web application for student assessment in any subject area, which implements an original game-like scheme to improve students' engagement and fun, as well as to reduce their examination stress. The proposed scheme capitalizes more on the fear of failure than on reward collection and player status, thus resembling video games more than known educational systems featuring gamified assessment. The first results of the survey-based evaluation of the tool show that it has met its goals of instilling fun and engagement and can be considered applicable to various forms of assessment, providing grounds for future work on analyzing the tool's effects on learning.

17:00-19:00 Welcome Reception
  • Informal Concert
  • Mead and Polish Donuts
Location: Room C-09