ISD 2025: 33RD INTERNATIONAL CONFERENCE ON INFORMATION SYSTEMS DEVELOPMENT
PROGRAM FOR THURSDAY, SEPTEMBER 4TH

08:00-09:30 Registration
09:30-10:30 Session 7: Keynote Talk 2

Jelena Zdravković, Stockholm University, Sweden

Title: Designing Three Rs of Digital Business Ecosystems: Roles, Responsibilities, and Resilience

Abstract: A Digital Business Ecosystem implies several distinctive features, including the heterogeneity of involved actors, their interdependence in the exchange of resources, the dynamic nature of their relationships, and the need for self-organization. To successfully design and develop such ecosystems, it is essential to clearly define the business scope, delineate the roles and responsibilities of each participating company, organization, and individual, map out their interactions and dependencies, and leverage a range of underlying technologies and diverse data. Furthermore, this process must include an assessment of the ecosystem’s resilience, gauging its ability to achieve its objectives in the face of challenges. This involves identifying and measuring resilience indicators to ensure the ecosystem’s capacity to adapt and thrive under changing conditions.

10:30-11:30 Session 8: Poster Session P2
A novel approach: continuous cascade model for assessing security and resilience in IIoT

ABSTRACT. The paper presents a novel methodology for a continuous cascade model that defines the current state of security and resilience of an Industrial Internet of Things (IIoT) system. The approach integrates system objective definition, critical process and asset identification, and hybrid threat modelling (STRIDE/LINDDUN). Identified threats are correlated with attack techniques using the MITRE ATT&CK for Industrial Control Systems (ICS) framework, while the Common Vulnerability Scoring System (CVSS) is employed for vulnerability assessment. Risk quantification adheres to ISO/IEC 27005 guidelines. The paper concludes by discussing the methodology's strengths and limitations, alongside avenues for future research.
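
As a rough illustration of the final quantification step, the sketch below combines a CVSS base score with a likelihood rating in an ISO/IEC 27005-style risk matrix; the threat names, bucketing, and thresholds are illustrative assumptions, not the paper's actual model.

```python
def risk_level(likelihood: int, cvss_base: float) -> str:
    """Map a 1-5 likelihood and a 0-10 CVSS base score to a risk class."""
    impact = 1 + round(cvss_base / 2.5)  # bucket the CVSS score into 1-5
    score = likelihood * impact          # simple likelihood x impact matrix
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Illustrative IIoT threats: (name, likelihood 1-5, CVSS base score)
threats = [
    ("Spoofed PLC firmware update", 4, 9.1),  # STRIDE: spoofing
    ("Telemetry linkability leak", 2, 5.3),   # LINDDUN: linkability
]
for name, likelihood, cvss in threats:
    print(f"{name}: {risk_level(likelihood, cvss)}")
```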

Advanced Data Processing Algorithms and Structures for Technical Debt Management with Generative Artificial Intelligence

ABSTRACT. Technical debt management is increasingly critical in modern software systems, where organizations grapple with complex digital infrastructure. This paper aims to explore innovative data processing algorithms and frameworks that leverage generative AI to improve the diagnosis and management of technical debt problems. We conduct a literature review and synthesize findings on the application of generative AI in technical debt management, focusing on algorithmic approaches and frameworks designed for this purpose. Analysis reveals that generative AI-driven methods show promise in enabling more accurate diagnosis of technical debt, particularly in automating the identification of complex patterns and generating targeted remediation strategies.

AI-Assisted HCI Design and Sprint Cadence in Scrum Software Development

ABSTRACT. This article explores how AI-assisted prototyping impacts the cadence of Scrum software development sprints. Through a qualitative pilot study across ten case studies (cross-sector organizations), we observed that integrating AI-assisted generative co-design tools into Scrum teams significantly shortened sprint feedback loops, enabling UX designers and product owners to rapidly generate and refine prototypes. However, AI’s inherent opacity introduced process debt, potentially slowing down later iterations. To address these challenges, we propose two practical guidelines: adding a “model state” checkpoint to daily meetings and including explainability criteria in the Scrum definition of done. Our findings underscore the critical balance between speed gains and the need for transparency and user trust in AI-assisted human-computer interaction (HCI) design. This study serves as a precursor to further extended research in this area.

Digital Transformation of the Insurance Sector - Development Guidelines

ABSTRACT. Changing economic conditions and technological challenges are shaping the future of the insurance industry. Traditionally associated with a conservative approach to risk, the industry is faced with the need to make bold decisions that will allow it not only to manage new risks, but also to take advantage of the opportunities presented by the digitalisation of processes. Thus, the insurance market needs to adapt quickly to technological change and changing consumer expectations. The aim of this article is to analyse the areas and tools related to the digital transformation of the insurance sector. The research question posed is: how does digital transformation affect the development of the insurance market globally? The research presented in the article can contribute to the theoretical knowledge of digital transformation in the insurance sector. The research is based on data available in international reports published by Swiss Re, GlobalData, among others.

Feedback-enabled anonymous deliveries

ABSTRACT. Anonymity of the sender plays an important role in modern delivery models such as discreet shipping or drop shipping. At the same time, existing anonymous delivery systems focus on the protection of recipients and employ elements that weaken senders' security. This paper introduces FEAdelivery – a system that takes advantage of unique delivery identifiers to protect senders' anonymity while retaining the option to respond to each delivery. The solution was developed through empirical design research with prototyping and a conceptual study.

Context aware evolution of emoji sentiment reactions in a large Telegram community

ABSTRACT. This study examines how reaction emojis evolve beyond their traditional emotional associations within a large Russian-language Telegram community. Using lexicon-based sentiment scoring, temporal frequency charts, and cluster analysis, we analyzed 220,972 reacted comments from August 2021 to April 2025, focusing on four high-frequency reactions: "thumbs-up", "thumbs-down", "red heart", and "clown face". Our findings revealed that emoji meaning is not constant and can change over time. Spikes in the use of certain emoji reactions coincide with periods of social turbulence, signalling such events in real time and potentially enabling their prediction. These findings expose community-driven semantic shift and demonstrate that reaction patterns provide weak supervision cues for identifying sentiment-context mismatches, which aids moderation and crisis detection. The results are also important for training neural network models on community-annotated messages.
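
As a minimal illustration of lexicon-based sentiment scoring over reactions, the snippet below averages hand-assigned emoji weights across one comment's reaction counts; the weights are illustrative assumptions, not the study's lexicon.

```python
# Illustrative emoji sentiment weights; the study's actual lexicon differs.
LEXICON = {"thumbs-up": 1.0, "red heart": 1.0,
           "thumbs-down": -1.0, "clown face": -0.8}

def reaction_sentiment(reactions: dict[str, int]) -> float:
    """Weighted mean sentiment of one comment's reaction counts."""
    total = sum(reactions.values())
    return sum(LEXICON.get(e, 0.0) * n for e, n in reactions.items()) / total

print(reaction_sentiment({"thumbs-up": 12, "clown face": 5}))  # ~0.47
```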

What Seniors Need vs What Providers Deliver: A Cross-Country Perspective on Value (Mis)Alignment in Digital Transformation of Elderly Care in Poland and Sweden

ABSTRACT. This explorative study examined how technology providers are meeting seniors’ expectations when developing ICT solutions enabling the digital transformation of elderly care in two contrasting socioeconomic settings: Poland and Sweden. To this end, we analyzed the characteristics emphasized as important by technology providers on their websites and compared these factors with seniors’ needs identified in previous research. Our preliminary findings suggest that technology providers in Poland and Sweden fall short of fully meeting the diversity of seniors’ needs or addressing their expectations. In this respect, it appears that technology providers focus mainly on those needs of seniors that are most strongly recognized and unmet in the context of the prevailing socio-economic conditions of a given country.

Radiomic Medical Data Transformation for Radiologists Support

ABSTRACT. We present a process of transforming medical data into a system that supports radiologists' interpretation and understanding of Computed Tomography (CT) images. The system is based on a pipeline that includes image conversion, organ segmentation, feature extraction, and report rendering. The final report presents organ visualisations and information about organ measurements, with marked outliers, to the radiologist. The system was created using data from a database containing over 40,000 CT scans and a pre-trained Swin UNETR architecture. The system obtained a DICE score of 89.09% for five segmented organs. The created solution completes the whole pipeline in less than five and a half minutes, and its usability was confirmed by radiologists.
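
For reference, the reported DICE score is the standard overlap metric 2|A∩B|/(|A|+|B|) between predicted and ground-truth segmentation masks; a minimal computation looks like this (a textbook definition, not the paper's code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
print(dice(pred, truth))  # 2*2 / (3 + 3) = 0.667
```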

Academic Entrepreneurship Opportunities in Digital Transformation: Case Study of Conference Management System Development

ABSTRACT. This paper investigates the concept of academic entrepreneurship and its promotion through strategically designed institutional programs, which is an increasingly common practice. Emphasis is placed on a case study of the development of a digital platform intended to support and transform the conference management process. The proposed conference management system is compared to existing solutions to identify areas for future development and differentiation. Additionally, commercialization potential is assessed using a Business Model Canvas framework. The main contribution of this study is the identification and dissemination of good practices for fostering entrepreneurial thinking within academic environments. The paper may serve as a valuable reference point for institutions seeking to leverage academic entrepreneurship and digital transformation for their strategic development goals.

The Application of Generative AI in Public Administration Units – Big Data Analysis

ABSTRACT. This paper explores the potential application of Generative Artificial Intelligence (GenAI) in enhancing citizen communication within public administration units. A big data analysis was conducted based on the content of 108 official municipal websites from Poland and Romania. The study focused on identifying references to GenAI through relevant features, aiming to assess the current level of adoption and future prospects. Three potential application areas of GenAI in public administration were identified: citizen communication, process optimization, and strategic planning. However, this study specifically addresses the first area—communication with residents. The analysis reveals a growing awareness among municipalities regarding the potential of AI technologies. Notably, current implementations of AI primarily involve the generation of visual content, indicating an early but promising stage of GenAI integration. These findings suggest a foundation upon which more advanced GenAI-based communication tools can be developed in the future, contributing to more efficient and interactive governance.

AI in Local Government Practice - interview-based study from Poland

ABSTRACT. The application of Artificial Intelligence (AI) mechanisms in public administration is gaining increasing attention globally. However, empirical research in the Polish context remains limited. This study addresses a notable research gap by exploring the level of AI-related knowledge among public administration officials through semi-structured interviews. The primary goal was to assess public officials’ knowledge regarding the use of AI in support of daily tasks and to identify key functional areas where AI holds development potential. Additionally, the research aimed to assess awareness of the potential of AI to optimize administrative processes. The research method was semi-structured interviews with managers and IT specialists from a dynamically developing municipal office in Poland. The study explores the potential for integrating Generative AI (GenAI) solutions into local government structures in the future. These insights contribute to the discourse on AI readiness in public administration and inform future strategies for digital transformation in the public sector.

Initial Search Point Tunneling – A New User Experience Perception Factor in Web Development

ABSTRACT. This research initially aimed to evaluate the usability of a university website. A qualitative approach was employed, involving a short website questionnaire and eye-tracking recordings. The findings revealed significant and unexpected results, leading to the formulation of new research questions - specifically, regarding (1) the credibility of questionnaire-based data and (2) the observed phenomenon of varying information search effectiveness depending on the user's initial search point (i.e., starting from the website’s homepage versus an external entry point). The second phenomenon has been termed Initial Search Point Tunneling. The results suggest potential for faster and more effective information retrieval, which is a foundational activity in Digital Transformation. These insights may have implications beyond the academic context, particularly in the design of websites and internet services, with relevance to User Experience (UX), Customer Experience (CX), and Consumer Behaviour Studies.

Digital innovation in the United States: Spatial determinants of transformation as exemplified by Kickstarter campaigns
PRESENTER: Krzysztof Lorenz

ABSTRACT. Digital transformation is reshaping innovation processes and capital allocation models, fostering the emergence of alternative financing mechanisms such as crowdfunding platforms. This study investigates the spatial determinants of digital innovation development using Kickstarter campaigns in the United States as a case study. Empirical data were preprocessed and classified into digital and traditional categories. Advanced AI methods, including Deep Autoencoders and Self-Organizing Maps (SOM), revealed spatial clusters of digital innovation in crowdfunding. Cluster visualizations exposed geographic concentration patterns and links to local infrastructure. AI uncovered latent ties between campaign structure and regional context, underscoring the role of AI and crowdfunding in decentralized, localized digital transformation.

Influence of Augmentation of UAV Collected Data on Deep Learning Based Facade Segmentation Task

ABSTRACT. Data augmentation is crucial for image segmentation, especially in transfer learning with limited data; however, it can be costly. This study examines the cost-benefit of augmentation in facade segmentation using unmanned aerial vehicle (UAV) data. We analysed how dataset size and offline augmentation impact classification quality and computation using the DeepLabV3+ architecture. Expanding the dataset from 5 to 480 thousand tiles improved segmentation efficiency by an average of 3.7%. Beyond a certain point, further dataset expansion yields minimal gains, in our case just 1% on average after segmentation accuracy plateaued at around 76%. These findings help avoid the computational and time costs of ineffective data augmentation.

Fuzzy scalable neural network for IPv6 network security

ABSTRACT. This publication focuses on the use of a fuzzy neural network for data classification in the context of IPv6 routing attack detection. The research methodology includes a comparison of the proposed scalable fuzzy neural network, utilizing Ordered Fuzzy Numbers, with well-known solutions, such as Artificial Neural Networks. A portion of the ROUT-4-2023 dataset was used in the experiment. The results demonstrate that this implementation could be effectively utilized for data classification in small IoT solutions. The conclusions provide a discussion on the limitations, future research prospects, and recommendations for further work.

Enhancing the Identification of Corrosion in Reinforced Concrete Structures Using Association Rules Analysis and the Non-Destructive M5 Method

ABSTRACT. This study aims to develop a novel AI-driven approach for detecting and evaluating corrosion in reinforced concrete (RC) structures, addressing one of the most significant challenges in the construction industry. The research aims to overcome the difficulties of identifying corrosion with limited data. Gathering representative learning databases is challenging due to problems obtaining adequate samples and the high diversity in rebar, concrete, and structural parameters. The research quantitatively analyzes measurements obtained through Magnetic Force Induced Vibration Evaluation (M5), a nondestructive testing (NDT) method. The process is enhanced by employing specialized Association Rules Analysis (ARA) with a dedicated feature extraction technique. The findings suggest that utilizing a variety of patterns and features enhances the method's identification effectiveness.

Determining Multi-Class Trading Signals for Bitcoin: A Comparative Study of XGBoost, LightGBM, and Random Forest

ABSTRACT. We investigate a multi-class machine learning (ML) framework to generate daily Bitcoin trading signals—Buy, Sell, or Hold. Three algorithms—XGBoost, LightGBM, and Random Forest—are compared with a naive buy-and-hold strategy. Using BTC/USD daily data (2015–2024), we apply a range of technical indicators across trend, momentum, volatility, and volume, later pruned by correlation analysis. A ±1% threshold defines the "Hold" zone to avoid minor fluctuations. Empirical tests show that LightGBM outperforms other models and even surpasses buy-and-hold in final portfolio value. Our findings support the design of tri-class ML strategies tailored for high-volatility markets like cryptocurrency.
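
The ±1% Hold zone suggests a labeling rule along the following lines (a sketch assuming signals are derived from next-day close-to-close returns; the paper's exact horizon and return definition may differ):

```python
import pandas as pd

def label_signals(close: pd.Series, threshold: float = 0.01) -> pd.Series:
    """Assign Buy/Sell/Hold from the next day's return vs a +/-1% band."""
    next_ret = close.shift(-1) / close - 1.0
    labels = pd.Series("Hold", index=close.index)  # default: inside the band
    labels[next_ret > threshold] = "Buy"
    labels[next_ret < -threshold] = "Sell"
    return labels
```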

Interoperable Agritech Data Pipelines with NGSI-LD and Smart Data Models

ABSTRACT. The increasing use of drones, robotic platforms, and IoT sensors in agriculture has resulted in a growing volume of heterogeneous data that is difficult to integrate due to a lack of interoperability. This paper presents three data pipelines designed within the Norwegian research project SMARAGD, targeting the transformation of siloed agritech data into interoperable NGSI-LD-compliant entities using Smart Data Models and the FIWARE framework. The pipelines cover aerial imagery, robotic imagery from ROS-based systems, and IoT sensor measurements, enriching the data with temporal and geospatial context and integrating it into a shared FIWARE-powered ecosystem. This architecture provides a foundation for decision-support tools and interoperability in land-based food production systems.

11:30-12:00 Coffee Break
12:00-14:00 Session 9A: T4: Data Science and Machine Learning 3
Location: Room 1
12:00
Using SSP-TOPSIS in Sustainable Resource Selection for Mobile Crowd Computing

ABSTRACT. The widespread adoption of smart mobile devices (SMDs) with advanced computing capabilities presents a valuable resource for mobile crowd computing (MCC). Efficient task scheduling in MCC relies on selecting the right SMDs, which poses a complex multi-criteria decision-making challenge due to the diverse hardware specifications of the devices and the presence of non-compensatory parameters. Traditional multi-criteria decision analysis (MCDA) methods, such as the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), typically assume full compensability between criteria. However, this assumption may conflict with strong sustainability principles. To tackle this issue, the authors introduce the Strong Sustainability Paradigm based Technique for Order Preference by Similarity to Ideal Solution (SSP-TOPSIS) method, an extended version of TOPSIS that incorporates linear compensation reduction. This enhancement allows for a more accurate reflection of sustainability requirements in the decision-making process. The SSP-TOPSIS method demonstrates improved analytical capabilities compared to classical TOPSIS and provides a framework that supports sustainability-driven decisions.
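
One plausible reading of the linear compensation reduction is to shrink above-mean performance toward each criterion's mean by a sustainability coefficient before the usual TOPSIS steps; the sketch below makes that assumption explicit and treats all criteria as benefit-type, which may differ from the authors' exact formulation.

```python
import numpy as np

def ssp_topsis(X, w, s):
    """TOPSIS with a linear compensation-reduction step (sketch).

    X: alternatives x criteria matrix (benefit criteria assumed),
    w: criterion weights summing to 1,
    s: per-criterion sustainability coefficients in [0, 1].
    """
    R = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalisation
    mean = R.mean(axis=0)
    R = R - s * (R - mean) * (R > mean)        # reduce above-mean surplus
    V = R * w                                  # weighted normalised matrix
    d_best = np.sqrt(((V - V.max(axis=0)) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - V.min(axis=0)) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)        # closeness; higher is better

X = np.array([[8.0, 2.5, 4.0], [6.0, 3.0, 5.0], [9.0, 1.0, 3.5]])
print(ssp_topsis(X, w=np.array([0.5, 0.3, 0.2]), s=np.array([0.5, 0.5, 0.5])))
```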

12:20
Comparing Speech Synthesis Models for Polish Medical Speech Naturalness

ABSTRACT. This research investigates the perceived naturalness of synthesized speech in the context of Polish medical terminology, a critical factor for applications such as voice-enabled medical dialogue systems. We conducted a comparative analysis of three speech synthesis models: SpeechGen, ElevenLabs, and a version of ToucanTTS fine-tuned on a specialized corpus of Polish medical recordings. The evaluation employed objective measures, the NISQA metric, and subjective assessments through Mean Opinion Score (MOS) surveys. Our findings indicate that SpeechGen and ElevenLabs produce synthesized speech that closely rivals the naturalness of human speech, as evidenced by both NISQA scores and MOS ratings. In contrast, despite improvements, the fine-tuned ToucanTTS model did not achieve comparable levels of perceived naturalness. Notably, participants occasionally rated the advanced synthesized speech as more natural than human speech recorded in non-studio environments, underscoring the potential of these technologies in real-world applications. This study emphasizes the significance of naturalness in enhancing user experience, particularly in specialized linguistic domains. It provides insights into speech synthesis's current capabilities and limitations for less-resourced languages like Polish.

12:40
Time Series Classification with MuRBE: The Multiple Representation-Based Ensembles

ABSTRACT. Time series classification has emerged as a pivotal endeavor in the realm of machine learning applications. The task is a form of supervised learning aimed at categorizing distinct classes within time series data. The present study introduces MuRBE (Multiple Representation-Based Ensembles), an innovative meta-ensemble structure explicitly designed for time series classification. MuRBE leverages the power of diverse representation domains, including feature-based, dictionary-based, interval-based, and shapelet-based methods. Exploiting complementary information from different representations makes it particularly effective at improving classification performance. A total of thirty distinguished benchmark datasets were utilized to evaluate the effectiveness of the proposed method, leading to competitive performance results. Notably, our approach secures second rank among current state-of-the-art techniques.

13:00
Expert Versus Metric-Based Evaluation: Testing the Reliability of Evaluation Metrics in Large Language Models Assessment

ABSTRACT. This study examines the reliability of automatic evaluation metrics in assessing responses generated by large language models (LLMs) in the context of university recruitment. A total of 113 domain-specific questions were used to prompt five prominent LLMs, each in three configurations: basic, document-context, and internet-context. The generated responses were evaluated using three categories of metrics: lexical, semantic, and LLM-as-a-Judge. These metric-based assessments were subsequently compared with expert evaluations conducted using a 5-point Likert scale. The findings indicate that although automatic metrics offer considerable efficiency, their consistency with expert judgments varies substantially. Moreover, the results suggest that both the model configuration and its underlying architecture significantly affect evaluation outcomes. Among the metric categories, LLM-as-a-Judge appears to yield the highest alignment with expert assessments, suggesting greater reliability in this approach.

13:20
Gradient or Not? Predicting Football Action Sequences Using Boosting vs Neural Networks

ABSTRACT. This research compares gradient boosting methods and neural network architectures for predicting football action sequences. Using detailed event annotations and spatial-temporal positional data, we evaluate the models’ ability to forecast goal-scoring opportunities several actions in advance. Through feature engineering and ensemble strategies, our results reveal key contextual and spatial factors that influence goal probabilities. Ensemble models combining CatBoost, LightGBM, and XGBoost outperform individual models, achieving an F1 Score of 0.707 and PR AUC of 0.734. These findings can provide valuable insights for real-time match analysis and player evaluation.

13:40
Building and Enriching an Ontology on the basis of a Labeled Corpora of Opinions

ABSTRACT. This paper presents a methodology for building and enriching an ontology from opinionated text, developed in collaboration between industry and academia. The focus of this work is on semantic alignment with Wikidata. Key contributions include a leading category-based scoring approach and LLM-assisted refinement. Experimental results show that our leading category-based approach significantly improved alignment accuracy, reaching 86.5%. Furthermore, the incorporation of LLM-based refinement further increased accuracy to 90.6%, indicating the potential of this approach for automated ontology enrichment.

12:00-14:00 Session 9B: T3: Lean and Agile Software Development 3
Location: Room 2
12:00
AI-based Functionalities for Project Communication Management

ABSTRACT. The paper explores the integration of artificial intelligence (AI) functionalities in project communication management (PCM), highlighting its application areas and associated risks. It outlines how AI technologies support key PCM activities such as communication planning, execution, and monitoring, while also addressing key challenges such as data privacy, bias, and over-reliance on automation. The paper defines PCM and presents a comprehensive list of its tasks and processes. A content analysis of AI tool webpages is conducted to identify applications offering AI-based functionalities supporting specific PCM tasks. The empirical section presents findings from a survey of project professionals, revealing which communication management tasks are currently supported by AI in practice and highlighting existing gaps. Notably, over 40% of respondents reported working primarily in agile project environments, providing insights into how AI tools are used in adaptive, fast-paced contexts. The study offers a grounded perspective on the evolving role of AI in PCM across various project management approaches.

12:20
Determinants of IT Project Management. The Experts Comparative Study between Poland and Serbia

ABSTRACT. The aim of the article is to identify success factors and barriers in IT project management, as perceived by experts from Poland and Serbia in 2023. Data was obtained using the CAWI method, with a convenience sample of experts gathered at industry conferences. The collected data were subjected to comparative analysis, and the obtained results were discussed. The results indicated differences in opinions on success factors and barriers in IT project management in Poland and Serbia. The reasons for this were the age structure of the experts, their habituation to existing standards, and the state of IT development in Poland and Serbia. Statistically significant differences occurred in views on the additional knowledge or skills in business process management needed in an organization to manage IT projects, as well as on the areas where the introduction of IT systems would translate into the greatest increase in revenues. The originality of the work lies in comparing views on IT project management between two countries with different cultures and histories. The analysis can be used by both experts at universities and business practitioners. Its limitation is that the analysis was conducted on a sample of experts associated with universities, in only two selected countries.

12:40
Non-functional Requirements Documentation Techniques in Agile Software Development: A Focus Group Study

ABSTRACT. Non-functional requirements are an essential part of any IT project, including agile projects. Addressing these requirements is a well-known challenge of agile projects because the common documentation techniques used in agile projects may be insufficient. This study examines which non-functional requirements documentation techniques industry practitioners working on agile projects use in different contexts. To this end, we conducted a literature review to identify documentation techniques proposed for agile projects. Based on the results, we organized focus groups with industry practitioners to determine when and to what extent they use these techniques in their projects. These were followed by a validation interview with a domain expert. We present our findings in the form of a list of recommendations. The list includes the conditions under which a given technique can be selected.

13:00
Effects of Remote Work on Communication in Agile Software Development: Is the Focus Still on People and Collaboration?

ABSTRACT. The COVID-19 pandemic has changed the way people work, leading to an increase in remote work. Although remote work brings numerous benefits, it also comes with novel challenges. For instance, remote work has been shown to affect communication, collaboration, and interactions. These are key aspects of agile methodologies, which are frequently used in software development. Against this background, we conducted a literature review to gain an overview of how collaboration is affected in remote agile teams. Our results demonstrate that, in fact, there is a loss of communication, collaboration, and social interaction, potentially leading to harmful long-term effects. Among them are reduced trust, psychological safety, and engagement. In summary, we contribute to the nascent research on remote agile settings in two ways. First, we provide a consolidated overview of the intricate effects of remote work that introduce novel challenges for organizations and leaders. Second, based on our findings, we question whether the focus in remote agile work is still on people and collaboration.

13:20
Comparing Code Generation Capabilities of ChatGPT-4o and DeepSeek V3 in Solving TypeScript Programming Problems

ABSTRACT. The rapid development of large language models significantly impacts software development, particularly in code generation. This paper focuses on the analysis of the performance and features of ChatGPT and DeepSeek chatbots, based on their GPT-4o and V3 models, respectively, with an emphasis on code generation. Particular attention is given to the architecture of the models, multimodality, open-source status, and token limits. Through experimental evaluation of 60 TypeScript LeetCode problems across different difficulty levels, we evaluated accuracy, debugging ability, and the number of attempts needed for correct solutions. The results show that DeepSeek achieved an accuracy of 68.3%, while ChatGPT achieved 61.7%. The paper highlights the advantages of DeepSeek as an open-source option and points to the potential to improve generated code, contributing to the understanding of the application of large language models in programming.

12:00-14:00 Session 9C: T2: Information Systems Modelling 1
Location: Room 3
12:00
Formal Modelling of Information System Evolution using B

ABSTRACT. In this paper, we explore the problem of the predictable evolution of Information Systems (IS). In this context, a part of the Information System's evolution is modelled through phases and interphases in order to specify evolution rules. A formal model is provided using the B language to check the effectiveness of the rules.

12:30
Data Science meets BPMN: A Taxonomy of Data Objects Modeling

ABSTRACT. Data science is transforming business processes. Therefore, process modeling approaches must evolve to capture the entire lifecycle of data transformations. Business Process Model and Notation (BPMN) offers a possible solution. However, data objects in BPMN are usually relegated to a secondary role in process flows, missing the complex interactions between data sources and pipelines. This paper presents a systematic literature review and a taxonomy of five types of modeling approaches for data objects in BPMN. The work is conducted in the context of a green and digital transformation project in ports and logistics. Data scientists and process owners may find our proposals useful for adopting BPMN in their data-driven projects, detailing in a transparent way how data inputs are (1) obtained, (2) processed, and (3) used at a process level of analysis. Theoretically, our work contributes to the BPMN literature, comparing five types of modeling approaches for data objects.

13:00
Towards the Enrichment of Conceptual Models with Multimodal Data

ABSTRACT. Conceptual models are essential for designing, analyzing, and communicating complex systems. However, traditional modeling languages such as UML, BPMN, and ArchiMate primarily rely on textual and symbolic representations, which can limit their expressiveness and accessibility, especially for non-expert stakeholders. To address this challenge, we introduce a framework for Multimodal-Enriched Conceptual Modeling (MMeCM) that integrates videos, images, and audio directly into model elements. Our approach enables modelers to attach contextual multimedia references to processes, entities, and relationships, effectively grounding abstract concepts in tangible real-world artifacts. We make three key contributions: (1) a quantitative analysis of concept enrichability using the OntoUML/UFO Catalog, identifying which elements benefit from multimodal representation; (2) the design and implementation of a generalizable framework for embedding multimodal data across different modeling languages; and (3) a qualitative user study, grounded in the Technology Acceptance Model, evaluating the perceived usefulness and usability of multimodal-enriched models, together with a dataset of more than 12K multimodal-enriched natural language elements found in conceptual models. Our evaluation shows that a majority of natural language elements in conceptual models can be effectively augmented with multimedia, and user feedback indicates a strong positive reception of MMeCM.

13:30
SpeeD: An Online Multilingual Speech-based Database Design Tool
PRESENTER: Dejan Keserovic

ABSTRACT. The paper presents SpeeD - the first online speech-based tool for automated database design. SpeeD enables automatic derivation of conceptual database models from offline (previously recorded) speech, as well as from online (real-time) speech, whereby several different natural languages are supported. Once conceptual design is finished, SpeeD enables the automated subsequent steps of forward database engineering for several contemporary database management systems.

12:00-14:00 Session 9D: T5: Digital Transformation 3
Location: Room 4
12:00
From Post-Disaster Support to Educational Equity: Conceptualizing a Volunteer-Driven Online Peer-to-Peer Learning Ecosystem at Scale

ABSTRACT. This study examines a grassroots, volunteer-driven peer-to-peer (P2P) educational initiative that emerged following a catastrophic earthquake and evolved into a sustainable educational program lasting over two years. Employing an interpretive case study approach—including participant observation, focus groups, and questionnaires—we explore the motivations and experiences of both tutors and learners, the perceived effectiveness of the online P2P model, and the barriers and enablers to scaling such initiatives. Our key findings indicate that while age proximity fosters trust and effective communication, it also poses authority challenges for tutors. Tutor engagement was fueled mainly by intrinsic motivators such as pursuing educational impact and community belonging. Learners reported significant gains in confidence, self-expression, and accelerated comprehension, attributing this to personalized, interactive sessions. Both cohorts called for a dedicated platform with enhanced features, such as built-in scheduling to overcome logistical hurdles. Finally, we articulate transferable principles for scaling P2P models, including flexible micro-volunteering pathways, strategic recruitment, and diversified funding.

12:30
Role of Innovative Technologies in Supply Chain Management. Analysis based on BERTopic and Bipartite Graph Models
PRESENTER: Paweł Lula

ABSTRACT. The supply chain represents a complex network managing the flow of goods, services, and information. Revolutionary technologies including the Internet of Things (IoT), blockchain, and artificial intelligence (AI) are fundamentally reshaping supply chain segments, accelerating digital transformation while enhancing operational efficiency. This study examines over 70 thousand publications related to SCM indexed in the SCOPUS database (analyzing titles, abstracts, and keywords) from 2010 to March 2025, employing BERTopic modeling and bipartite graph analysis to identify emerging patterns. The research uncovers eleven distinct innovation technologies and systematically maps their relationships with various supply chain management (SCM) domains. The results highlight that innovations in SCM are not isolated but broadly integrated across domains, suggesting the importance of cross-departmental collaboration to maximize system-wide benefits. These findings offer strategic insights for practitioners and establish an analytical framework for scholars investigating the dynamic intersection of technological innovation and modern supply chain management.

13:00
Impact of Hybrid Work on Project Management System in an ERP Implementation Company

ABSTRACT. The hybrid work model, which integrates remote and on-site work practices, has become a standard in many IT implementation companies, particularly in the wake of the COVID-19 pandemic. This paper examines how such a model influences the functioning of the project management system within these organizations. Developed over the years within individual ERP implementation companies, the project management system had to be adapted to the new operational reality. The study is based on a case analysis of the ERP implementation company Anonymized, covering 70 projects carried out between 2013 and 2025, implemented using both traditional and hybrid models. The research methods included employee surveys and an analysis of data from internal systems. The findings offer practical insights that can support implementation firms in optimizing their project management systems within distributed work environments, particularly in the context of ERP system deployments.

13:30
A Method for Implementing Engaging Digital Workplaces Using Design Science Research

ABSTRACT. Technological advancements and the COVID-19 pandemic have accelerated digital transformation, bringing employees from different generations into online work environments. This shift highlights the need to understand the experiences of Generation Y (Millennials) and Z in digital workplaces. However, existing studies often overlook the diverse needs of a multigenerational workforce, leading to disengagement. In 2022, only 23% of employees were engaged, while 59% were "quietly quitting." This paper presents a framework to help organizations create engaging digital workplaces for a multigenerational workforce. Using Design Science Research (DSR), the study combined two Systematic Literature Reviews (SLRs), a phenomenological study, and a confirmatory study. Findings revealed the effectiveness of the proposed framework and method to support organizations implementing engaging digital workplaces, tailored for a multigenerational workforce, with future research recommended to explore broader factors affecting digital workplace engagement and different organizational contexts.

14:00-15:00 Lunch
15:00-15:30 Session 10A: Journal First Paper 1
Location: Room 1
15:00
Production processes modelling within digital product manufacturing in the context of Industry 4.0
PRESENTER: Marko Vještica

ABSTRACT. Industry 4.0 aims to establish highly flexible production, enabling effective and efficient mass customisation of products. Modelling techniques and simulation of production processes are among the core techniques of the manufacturing industry that facilitate flexibility and automation of a shop floor in the era of Industry 4.0. In this paper, we present an approach to support production process modelling and process model management. The approach is based on Model-Driven (MD) principles and comprises a Domain-Specific Modelling Language (DSML) named Multi-Level Production Process Modelling Language (MultiProLan). MultiProLan uses a set of concepts to specify production process models suitable for automatic instruction generation and execution of the instructions in a simulation or on a shop floor. By using MultiProLan, process designers may create process models independent of the specific production system. Such process models can either be automatically enriched by matching and scheduling algorithms or manually enriched by a process designer via MultiProLan’s modelling tool. In this paper, we also present an application of our approach in the assembly industry to showcase its dynamic resource management, generation of production documentation, error handling and process monitoring.

15:00-15:30 Session 10B: Journal First Paper 2
Location: Room 2
15:00
Quality assurance strategies for machine learning applications in big data analytics: an overview

ABSTRACT. Machine learning (ML) models have gained significant attention in a variety of applications, from computer vision to natural language processing, and are almost always based on big data. There is a growing number of applications and products with built-in machine learning models, and this is the area where software engineering, artificial intelligence, and data science meet. The requirement for a system to operate in a real-world environment poses many challenges: how to design for the wrong predictions a model may make; how to assure safety and security despite possible mistakes; which qualities matter beyond a model’s prediction accuracy; and how to identify and measure important quality requirements, including learning and inference latency, scalability, explainability, fairness, privacy, robustness, and safety. It has become crucial to test these models thoroughly to assess their capabilities and potential errors. Existing software testing methods have been adapted and refined to discover faults in machine learning and deep learning models. This paper provides a taxonomy, a methodologically uniform presentation of solutions to the aforementioned issues, and conclusions about possible future development trends.

15:00-15:30 Session 10C: Journal First Paper 3
Location: Room 3
15:00
Deep Learning-Based Recognition of Unsafe Acts in Manufacturing Industry

ABSTRACT. Despite technological progress and the tendency for automation, the majority of manufacturing workplaces still rely on human labor. Although industrial tasks are frequently composed of simple operator actions, non-ergonomic execution of such repetitive tasks has been reported as the primary cause of musculoskeletal disorders. Considering the sizes of manufacturing halls and large numbers of employees, there is an increasing need for tools that can improve the recognition of unsafe acts. Herein, a deep learning-based procedure for pose safety assessment is proposed and validated using monocular videos captured with a conventional IP camera. The two key composing components of the proposed pipeline are the three-dimensional (3D) pose estimator and mesh classifier. The proposed method was validated experimentally by considering three different methodologically selected industrial tasks: a laborious task that requires all-body effort (pushing and pulling), a task that requires an upper-limb action comprising intensive interaction and motion control (drilling), and a typical collaborative task (polishing with a collaborative robot with variable mechanical impedance). Accuracies of 84.67%, 92%, and 98%, respectively, were achieved. Besides higher accuracy, the proposed method has shown practical advantages over existing alternatives based on analyzing the parameters derived from the human poses.

15:30-17:00 Session 11A: T4: Data Science and Machine Learning 4
Location: Room 1
15:30
Feature Evaluation Through Decision Trees Structure

ABSTRACT. Feature selection plays a significant role in the development of decision-support information systems, such as diagnostic or recommendation systems. Such systems should make it possible to identify the most important features as well as to analyse data from different locations, taking into account the specificity and characteristics of the local data sources. In the process of data analysis, the data preparation stage, including the transformation of attribute domains from continuous form into intervals, plays an important role, as the outcome of this process influences the subsequent stages of the analysis. In the paper, an approach to creating a global feature ranking that takes into account the specifics and characteristics of different discretisation algorithms is proposed. A new weight for the estimation of attribute importance was defined and compared with a measure implemented in a Python programming language library. Both types of weights were used to create a hierarchical structure of the global ranking of features. The experiments were carried out on datasets from the stylometry domain dedicated to the task of authorship attribution.

16:00
Multi-Valued Dependency Analysis Methodology. A Novel Approach to Modeling Uncertainty in Data

ABSTRACT. This paper presents a novel approach to the analysis of data with uncertain information. The classic approach of bivalent logic does not allow for the modeling of intermediate states, which are particularly important when studying phenomena characterized by uncertain information. The proposed methodology, based on the assumptions of Łukasiewicz's logic, introduces a transformation of logical values to the set {-1, 0, 1}, which allows for an intuitive interpretation of truth, falsity, and lack of information. A distinctive aspect of the approach is the use of a contiguity matrix, determined from the implication operator, to assess the degree of dependence between variables. The method was applied in the analysis of real-world data. The results confirm its effectiveness and efficiency in analyzing dependencies while accounting for uncertainty due to missing or ambiguous data, with linear time complexity.
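
For readers unfamiliar with the notation: on the set {-1, 0, 1} (false, no information, true), the Łukasiewicz implication from which such a matrix can be derived evaluates as below. This is a textbook sketch of the operator only, not the paper's full methodology.

```python
def lukasiewicz_impl(p: int, q: int) -> int:
    """Łukasiewicz implication on {-1, 0, 1} (false, unknown, true).

    Rescales to the unit interval, applies v(p -> q) = min(1, 1 - p + q),
    then maps the result back to {-1, 0, 1}.
    """
    a, b = (p + 1) / 2, (q + 1) / 2
    return round(2 * min(1.0, 1.0 - a + b) - 1)

for p in (-1, 0, 1):
    print([lukasiewicz_impl(p, q) for q in (-1, 0, 1)])
# [1, 1, 1]    false implies anything
# [0, 1, 1]    unknown -> unknown is true
# [-1, 0, 1]   true -> q takes the value of q
```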

16:30
Large Language Models for Structuring and Integration of Heterogeneous Data

ABSTRACT. The implementation of artificial intelligence (AI) in the public sector offers great potential. Repetitive and labor-intensive tasks can be automated to improve overall efficiency. Generative AI, in particular, opens up new possibilities for structuring and integrating heterogeneous data sources. At the same time, AI introduces challenges such as technical complexity and ethical issues that must be addressed during development and implementation. This paper investigates the potential and challenges of using AI in the extract, transform, load (ETL) process in a public sector study. Our findings demonstrate that open-source large language models (LLMs) can efficiently transform over 5,000 unstructured documents into the structured format of a relational database, achieving a success rate of approximately 96%. The quality of the results was significantly improved through optimization measures, particularly prompt engineering and post-processing. While the results are encouraging, challenges remain, including processing extensive documents and adapting the data model to greater complexity.
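
At its core, the transform step described here can be approached by prompting the model for schema-shaped JSON and validating the result; the sketch below assumes a generic `generate` callable wrapping a local open-source LLM, since the abstract does not name the inference stack, and the field names are illustrative.

```python
import json

SCHEMA_HINT = '{"applicant": "...", "date": "YYYY-MM-DD", "amount": 0.0}'

def transform(doc: str, generate) -> dict:
    """Turn one unstructured document into a relational-ready record.

    `generate` is any text-in/text-out callable around a local LLM
    (an assumption; the study's actual prompt and model differ).
    """
    prompt = ("Extract applicant, date and amount from the document below. "
              "Answer with JSON only, shaped like: " + SCHEMA_HINT +
              "\n\nDocument:\n" + doc)
    record = json.loads(generate(prompt))   # post-processing: strict parse
    assert set(record) == {"applicant", "date", "amount"}
    return record
```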

15:30-17:00 Session 11B: T1: Managing IS Development and Operations 1
Location: Room 2
15:30
An Empirical Study in Lithuania: Perceptions of Individual and Social Factors Influencing Software Product Quality

ABSTRACT. Software product quality is a multidimensional concept that depends directly on the software development process, the competencies and skills of software engineers, and the knowledge of product quality assurance. Assessing the quality of software products is challenging due to the complexity of systems, the different expectations of stakeholders, and perceptions of quality characteristics in the development process. Furthermore, the perception of quality is inherently subjective and varies significantly between different stakeholder groups involved in the software lifecycle. Developers, testers, project managers, end users, and clients often have distinct views on which quality characteristics are most critical, shaped by their roles, experiences, interactions with the system, and individual backgrounds. This research investigates the influence of social and individual factors on the perceptions of software engineers about the quality characteristics of software products using ISO/IEC 25010:2023. The findings reveal that social factors are crucial in shaping perceptions of software product quality characteristics. Understanding these influences can help software teams enhance development processes, improve product quality assurance, and effectively achieve customer satisfaction.

16:00
Automation of Selected Processes in IT Project Management with Natural Language Processing

ABSTRACT. The paper presents the use of deep learning models to support the automation of IT project management, using task assignment as an example. Managing IT projects is a complex process that requires the coordination of multiple tasks, resources, and individuals involved in the project. For this purpose, datasets were created to simulate various project environments, and models based on the GraphSAGE architecture were trained, enabling efficient modeling of relationships between tasks and programmers. It was observed that improving data quality could significantly enhance the performance of the models, suggesting the potential for further development and improvements in this area.

16:30
Exploring the Impact of Generative Artificial Intelligence on Software Development in the IT Sector: Preliminary Findings on Productivity, Efficiency and Job Security

ABSTRACT. This study investigates the impact of Generative AI on software development within the IT sector through a mixed-method approach, utilizing a survey developed based on expert interviews. The preliminary results of an ongoing survey offer early insights into how Generative AI reshapes personal productivity, organizational efficiency, adoption, business strategies and job insecurity. The findings reveal that 97% of IT workers use Generative AI tools, mainly ChatGPT. Participants report significant personal productivity gain and perceive organizational efficiency improvements that correlate positively with Generative AI adoption by their organizations (r = .363, p < .05). However, increased organizational adoption of AI strongly correlates with heightened employee job security concerns (r = .591, p < .001). Key adoption challenges include inaccurate outputs (64.2%), regulatory compliance issues (58.2%) and ethical concerns (52.2%). This research offers early empirical insights into Generative AI's economic and organizational implications.

15:30-17:00 Session 11C: Tutorial Session

Organizers: Wilfrid Utz, OMiLAB NPO, Berlin, Germany, Iulia Vaidian, OMiLAB NPO, Berlin, Germany

Title: Enhancing Agility in Business Model Design using Digital Twins: The Scene2Model Approach to Design Thinking

Goal: The tutorial will introduce participants to storyboards as a selected design thinking method. A use case is chosen for the participants to work on during the tutorial. Haptic paper figures (SAP Scenes) are used to develop innovative ideas and build a visual storyboard of the identified challenge and proposed innovative solution/offering. Participants will observe the end-to-end process of a tool-supported transformation from haptic scenes into digital twins of the realized design artifacts. The representation of digital twins as conceptual models from a haptic design is input for a detailed analysis on different levels of abstraction: business, organizational, and technological aspects to be assessed on the design level before experimental validation.

Location: Room 3
17:00-17:30 Coffee Break
17:30-19:30 Session 12A: T4: Data Science and Machine Learning 5
Location: Room 1
17:30
A Lightweight Approach to Table Recognition in Digital Invoices

ABSTRACT. This study investigates table recognition techniques for digital documents, focusing on the challenges posed by diverse invoice layouts. A comparative evaluation of traditional pattern recognition and deep learning approaches highlighted their respective strengths and limitations. Special attention was given to ProjectionP, a proprietary lightweight method for resource-constrained environments, which combines morphological line extraction with pixel-based thresholding. A modified evaluation procedure adapted from ICDAR2013 was introduced to better balance over- and under-segmentation errors.

Comparative analysis showed that while Camelot leverages PDF metadata, it struggles with visual segmentation. Nanonets achieves high grid-detection accuracy but can misplace text in complex tables. ProjectionP, optimized for desktop hardware, delivered competitive results, outperforming Camelot and matching Nanonets in specific cases.
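
Since ProjectionP itself is proprietary, only the generic projection-profile idea it builds on can be sketched: rows (or columns) that are almost entirely background are taken as separators between table rows. The threshold value below is an illustrative assumption.

```python
import numpy as np

def row_separators(binary_img: np.ndarray, thresh: float = 0.98):
    """Candidate row boundaries from a horizontal projection profile.

    binary_img: 2-D array with 1 = background, 0 = ink. Rows whose share
    of background pixels exceeds `thresh` are treated as inter-row gaps.
    """
    profile = binary_img.mean(axis=1)   # per-row background share
    return np.where(profile > thresh)[0]
```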

17:50
Impact of Conflict Set Resolution Strategies on Inference Efficiency in Rule-Based Systems

ABSTRACT. This study explores the impact of four conflict set resolution strategies—random, recency, textual order, and specificity—on the efficiency of forward reasoning in rule-based expert systems. Experiments were conducted on seven diverse datasets, with knowledge bases ranging from over 100 to 150,000 rules. We evaluated inference time, success rate, and the number of new facts generated. The recency strategy proved most efficient, yielding the shortest inference time and fewest new facts, while the specificity strategy was the slowest. Inference failures occurred only with minimal input data (1% of facts), affecting less than 5% of cases.
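
As a minimal sketch of one of the four strategies, recency can be implemented by firing the rule whose matched premises were asserted most recently; the rule and fact representations here are illustrative, not the study's implementation.

```python
def recency_pick(conflict_set, fact_time):
    """Recency strategy: prefer the rule matching the newest facts.

    conflict_set: list of (premises, conclusion) pairs whose premises all
    hold in working memory; fact_time: fact -> assertion step.
    """
    return max(conflict_set,
               key=lambda rule: max(fact_time[f] for f in rule[0]))

rules = [({"a", "b"}, "c"), ({"a", "d"}, "e")]
fact_time = {"a": 0, "b": 3, "d": 1}
print(recency_pick(rules, fact_time))  # ({'a', 'b'}, 'c'): fact b is newest
```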

18:10
Evaluating the Chronos Foundation Model for Daily Stock Index Forecasting

ABSTRACT. This study empirically evaluates the performance of Chronos, a recent foundation model pre-trained on a large corpus of time series data, for the task of daily stock index forecasting. Using a rolling window framework on historical Nasdaq-100 and S&P 500 data from 1995 to early 2025, we compare zero-shot and fine-tuned Chronos variants against a diverse set of established forecasting methods, including statistical benchmarks (AutoARIMA, ETS), standard deep learning models (DeepAR, DLinear, SimpleFeedForward), other Transformer-based architectures (PatchTST), and ensemble approaches. Our results, based on standard forecasting metrics and simulated trading performance, indicate that zero-shot Chronos provides competitive forecasting accuracy. It is statistically comparable to the best traditional methods, but its derived trading performance lags top benchmarks. The fine-tuned Chronos variant statistically underperformed the zero-shot version in forecast accuracy. These findings highlight the potential of foundation models and underline the significant challenges of effective fine-tuning.

18:30
Hybrid Symbolic-Neural Domain Adaptation via SymbSteer. Markov-Guided Prompting and Decoding for Resource-Efficient Language Model Steering

ABSTRACT. Adapting large language models (LLMs) to formal, low-resource domains, such as public procurement or regulatory writing, remains a significant challenge, particularly in non-English contexts. We present a lightweight hybrid framework that combines symbolic 3-gram Markov models with neural generation using DistilGPT2. The approach introduces symbolic guidance in two stages: domain-specific few-shot prompting and decoding-time probability adjustment. This enables domain-consistent generation without model retraining. Evaluated on Polish public procurement documents and deployed on CPU-only infrastructure, the method improves domain fidelity, structure, and semantics, as measured by BLEU, ROUGE-L, and BERTScore. The proposed framework offers a scalable, inference-only alternative to fine-tuning for generating formal texts under strict resource constraints.
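
Decoding-time probability adjustment can be pictured as interpolating the neural next-token distribution with trigram probabilities before sampling; this is a sketch of the general technique under our own mixing rule, not necessarily the paper's exact adjustment.

```python
import torch

def steer_logits(lm_logits: torch.Tensor, trigram_probs: torch.Tensor,
                 alpha: float = 0.3) -> torch.Tensor:
    """Blend the LM's next-token distribution with a symbolic 3-gram model.

    lm_logits: (vocab,) logits from the neural LM; trigram_probs: (vocab,)
    probabilities conditioned on the last two tokens (zeros where the
    trigram table is empty); alpha sets the symbolic guidance strength.
    """
    p_lm = torch.softmax(lm_logits, dim=-1)
    p_mix = (1.0 - alpha) * p_lm + alpha * trigram_probs
    return torch.log(p_mix + 1e-12)   # back to log space for the sampler
```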

18:50
Generative artificial intelligence applying an adaptive algorithm with real-time dynamics allocation for ecological monitoring

ABSTRACT. This article presents an innovative approach to monitoring river water quality in real time by generating estimates of difficult-to-measure signals such as biochemical oxygen demand. Laboratory tests take too long for real-time monitoring. Therefore, an adaptive PDALM algorithm (Proportional Differential Algorithm with a Latch Mechanism) was developed, integrating mathematical modelling with measurement data to enable instantaneous estimation of water quality signals using a special latch mechanism. The forced eigenvalue distribution guarantees the desired system dynamics and ensures stability and robustness to disturbances. In the proposed RTMS system, the PDALM algorithm functions as an adaptive soft sensor generating high-quality training data. This data is then used by a generative neural network for anomaly detection and forecasting of atypical scenarios in dynamic environmental systems. The system can function as an intelligent environmental monitoring module capable of learning, predicting, and responding to changing environmental conditions.

19:10
Decentralized Neural Network Modeling from Heterogeneous Data Sources: A Feature Mapping Approach

ABSTRACT. This paper presents a privacy-preserving framework for distributed neural network modeling across heterogeneous data sources, where local datasets differ in both objects and attributes. To enable collaborative learning without sharing raw data or model parameters, each local decision table is independently transformed into a unified feature space using multiple dimensionality reduction techniques – Principal Component Analysis (PCA), Singular Value Decomposition (SVD), and Uniform Manifold Approximation and Projection (UMAP). Various types of neural networks – Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Simple Recurrent Network (SIMPLE), Multilayer Perceptron (MLP) and the Radial Basis Function Network (RBF) – are trained locally, and their outputs are aggregated using soft voting (simple average) to generate final predictions. Experimental results on benchmark datasets confirm the approach’s effectiveness, scalability, and robustness in decentralized learning settings.
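
A minimal sketch of the two ends of this pipeline, assuming each site projects its own attribute space to a shared dimensionality with PCA and that class probabilities are aggregated by simple averaging; the helper names are ours, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA

def to_shared_space(X_local: np.ndarray, k: int = 16) -> np.ndarray:
    """Independently map one site's attributes into a k-dim shared space."""
    return PCA(n_components=k).fit_transform(X_local)

def soft_vote(probability_list: list[np.ndarray]) -> np.ndarray:
    """Average per-model class probabilities and pick the argmax class."""
    return np.mean(np.stack(probability_list), axis=0).argmax(axis=1)
```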

17:30-19:30 Session 12B: T1: Managing IS Development and Operations 2
Location: Room 2
17:30
Skills Shortages as a Driver of AI Adoption: Evidence from Developed Countries

ABSTRACT. The aim of the paper is to explore the relationship between skills shortages and the adoption of artificial intelligence (AI) in organizational contexts. While existing literature often considers AI as a substitute for labor, this study addresses a less examined perspective – AI adoption as a strategic response to persistent difficulties in recruiting highly educated and skilled workers. Drawing on theoretical frameworks from human capital theory and labor economics, and using empirical data from small and medium-sized enterprises (SMEs) across 34 developed countries, the analysis demonstrates that labor shortages – particularly at the bachelor’s and master’s education levels – significantly increase the probability of AI implementation. The paper shows that AI is not merely a labor-saving technology, but a means of maintaining competitiveness and scalability under conditions of human capital constraints. However, the adoption of AI is not uniform: it depends on internal readiness and sectoral dynamics, pointing to a paradox: while AI can mitigate talent shortages, it also introduces new demands for technical capabilities. These insights suggest that addressing the talent gap requires integrated strategies across organizational design, workforce development, and public policy.

18:00
Benefits and Challenges of Generative AI in Software Development: A Survey-Based Study

ABSTRACT. This paper investigates how generative artificial intelligence (GenAI) tools affect software development from the perspective of 62 software professionals. Using a mixed-methods approach, the study combines survey-based quantitative data with thematic analysis of open-ended responses. Results show that experienced GenAI users report enhanced efficiency and creativity, but also raise concerns about code quality, overreliance, and increased expectations. Younger and less experienced developers feel more job insecurity. Organizational support appears to have limited influence on perceived pressure. The paper offers practical recommendations for the adaptive integration of GenAI tools and highlights directions for future research.

20:00-23:30 Banquet Dinner (Hotel M)

Hotel M, Bulevar Oslobođenja 56a, Belgrade (550 m from FON, a 7-minute walk heading south)