ICAICTA 2025: 2025 12TH INTERNATIONAL CONFERENCE ON ADVANCED INFORMATICS: CONCEPT, THEORY AND APPLICATION (ICAICTA)
PROGRAM FOR SUNDAY, SEPTEMBER 21ST

09:15-10:15 Session 8A: Security
Location: Room 7603
09:15
Steganalysis for Secret Message Length Estimation by Using GBRAS Net Regressor

ABSTRACT. The main objective of steganalysis is to predict whether a suspect image is a cover image or a stego image. After predicting the presence of a secret message, further steganalysis research continues by estimating the length of the secret message. Research on estimating the length of secret messages aims to validate their existence by providing measurable evidence that a digital medium, particularly an image, contains a secret message of a certain length. Estimating the length of secret messages embedded with the S-UNIWARD adaptive steganography algorithm in previous works, which utilized a pretrained ResNet-50, shows high MAE values. This performance indicates the need for improvements in the deep learning regressor architecture. Therefore, this study proposes the development of GBRAS-Net for estimating the length of secret messages by modifying the classification layer into a regression layer. The modification involves replacing the softmax layer and its loss function with Mean Squared Error (MSE) and using continuous values in place of payload class labels. This study aims to develop a predictive model to estimate the length of secret messages using the GBRAS-Net regressor on the BOSSbase 1.01 dataset. The proposed model shows the lowest MSE (0.0182), RMSE (0.1349), and MAE (0.1064) values among the ResNet-50, VGG-16, and Ye-Net regressors.
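
A minimal sketch of the head swap the abstract describes — replacing a softmax classification output with a single linear unit trained with MSE. The backbone below is a stand-in, not the actual GBRAS-Net layers:

```python
# Hypothetical sketch: converting a CNN classifier head into a
# payload-length regressor; the backbone is a placeholder, not GBRAS-Net.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_regressor(input_shape=(256, 256, 1)):
    backbone = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(30, 3, padding="same", activation="relu"),
        layers.Conv2D(30, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
    ])
    # Regression head: one linear unit replaces Dense(n_classes, softmax)
    out = layers.Dense(1, activation="linear")(backbone.output)
    model = models.Model(backbone.input, out)
    # MSE replaces cross-entropy; targets are continuous payload lengths
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```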

09:30
Enhancing Career Alignment Prediction Models by Mitigating Data Leakage and Addressing Temporal Concept Drift

ABSTRACT. Tracer studies play a crucial role in evaluating educational outcomes and graduate employability. However, predictive models in this domain face significant challenges from data leakage and temporal concept drift. This paper presents a machine learning approach to predict job-education alignment using Institut Teknologi Bandung (ITB) alumni data while systematically addressing these critical issues. We implement a "without_leaky" scenario to mitigate data leakage and analyze temporal concept drift through multiple experimental trials. Our ensemble methodology combines XGBoost, CatBoost, Random Forest, and MLP models using soft voting. Experimental results across seven trials reveal significant prediction variability (43.5% to 96.4% positive predictions) and model agreement ranging from 31.5% to 88.7%, indicating substantial temporal drift. Despite these challenges, our approach maintains reasonable confidence levels (67.4% to 84.3%) and demonstrates the importance of robust model validation in educational data mining. Feature importance analysis using SHAP reveals that Grade Point Average (GPA), business sector, and major are the most predictive factors. This research contributes methodologies for handling data leakage and temporal drift in educational studies.
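
A minimal sketch of the soft-voting ensemble named in the abstract, assuming sklearn-compatible wrappers; the hyperparameters and training data are placeholders:

```python
# Soft-voting ensemble of the four model families listed in the abstract.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("cat", CatBoostClassifier(iterations=200, verbose=0)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
    ],
    voting="soft",  # average predicted probabilities across members
)
# ensemble.fit(X_train, y_train); ensemble.predict_proba(X_test)
```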

09:45
Deep Reinforcement Learning for Intrusion Detection System on the Internet of Things

ABSTRACT. The exponential growth of the Internet of Things (IoT) has significantly increased system security risks. Given the resource constraints of IoT devices, security solutions such as an Intrusion Detection System (IDS) must not only be accurate but also efficient and lightweight for practical deployment. This research proposes an IDS specifically designed for IoT environments, comprising a traffic collector module, an analysis engine, and a user interface. At its core, the system utilizes a Deep Reinforcement Learning (DRL) model based on the Deep Q-Network (DQN) algorithm, trained on the CICIoT2023 dataset. The entire system was implemented and tested on a Raspberry Pi 4 Model B device. Evaluation results show that the developed DRL model achieves an F1-Score of 0.81, balancing a precision of 0.87 and a recall of 0.78. From a lightweight perspective, the model demonstrates high efficiency with a file size of only 100.40 KB and a fast average inference latency of 11.53 ms. Although static memory (RAM) usage was identified as a primary challenge for future optimization, the user interface successfully visualized detection results in real time, validating the system's feasibility as an edge-device security solution.
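
An illustrative sketch (not the paper's code) of how intrusion detection is commonly framed for a DQN agent: state = flow feature vector, actions = {benign, attack}, reward = correctness of the decision:

```python
# Hypothetical framing of IDS as a one-step RL problem for a DQN agent.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_features, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),   # Q-values for {benign, attack}
        )
    def forward(self, x):
        return self.net(x)

def reward(action, label):
    return 1.0 if action == label else -1.0

# At inference on the edge device, detection is a single forward pass:
# action = qnet(flow_features).argmax().item()
```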

09:15-10:15 Session 8B: IoT and Microservice
Location: Room 7604
09:15
Design and Implementation of IoT-Driven Full Stack Fleet Management System Leveraging DevOps Approach

ABSTRACT. The Indonesian B2B construction material distribution sector faces significant operational challenges due to inefficient route planning, particularly for heavy vehicles, inadequate tracking of inaccessible routes, a lack of timely alerts for deviations and idling, and unreliable fuel consumption monitoring. This paper presents the design and implementation of an Internet of Things (IoT) driven full-stack fleet management system (FMS) leveraging a DevOps approach to address these issues. The system delivers optimized route planning considering vehicle constraints, automated alerts, and enhanced fuel tracking and validation capabilities. Key results include the successful implementation and validation of all designed functionalities for its defined user roles (planner, management, and driver), and demonstrated system stability under load and stress testing, achieving average response times of 215 ms with 28 concurrent virtual users and 2.82 s with 100 concurrent virtual users (handling mixed WebSocket and HTTP traffic), all with no errors. The successfully deployed and monitored FMS offers a cost-effective, scalable, and maintainable solution for medium-scale companies.

09:30
Comparative Analysis of API Development in Microservices Architecture

ABSTRACT. The widespread adoption of microservices architecture necessitates efficient API design to optimize interservice communication. This study conducts a comparative performance analysis of REST and GraphQL APIs within a microservices-based thesis management system, focusing on response time, throughput, error rate, CPU usage, and memory consumption. A controlled experiment was designed using Docker containers to isolate services and Apache JMeter as a testing tool to simulate simultaneous user loads (10, 50, and 100 threads). Both APIs were implemented in the Go language following Clean Architecture principles and tested under identical conditions. The results indicate that REST achieves slightly lower latency for simple requests (6–12 ms vs. 7–14 ms for GraphQL), while GraphQL demonstrates superior resource efficiency, maintaining stable CPU usage (0.32–0.39% vs. 0.46–1.4% for REST) and memory (≈265 MB vs. ≈413–452 MB for REST) across all scenarios. Throughput and error rates were comparable, with no statistically significant deviations. These findings suggest that REST remains more suitable for simple, low-latency data operations, whereas GraphQL excels in resource-constrained environments and complex data aggregation tasks. This empirical analysis provides practical recommendations for developers to select APIs based on workload characteristics, balancing latency, scalability, and infrastructure costs. Future research should explore multi-region implementations and dynamic caching strategies to generalize these findings to large-scale production systems.
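
A sketch contrasting the two call styles compared in the study; the endpoint URLs and field names are illustrative, not the paper's API:

```python
# REST fetches a fixed resource shape; GraphQL lets the client select
# fields through one endpoint (which can reduce over-fetching).
import requests

BASE = "http://localhost:8080"  # hypothetical service address

# REST: resource shape fixed by the endpoint
thesis = requests.get(f"{BASE}/theses/42").json()

# GraphQL: single endpoint, client-chosen fields
query = """
query {
  thesis(id: 42) { title status supervisor { name } }
}
"""
resp = requests.post(f"{BASE}/graphql", json={"query": query}).json()
```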

09:45
A QoS Evaluation of HTTP and CoAP for Image Data Transfer from Edge to Cloud in IoT Systems

ABSTRACT. This study investigates the Quality of Service (QoS) performance of the HTTP and CoAP protocols for image data transfer within an IoT edge-to-cloud architecture. Our setup involved ESP32 CAM/WROOM devices sending image data to a local PC intermediary, which then transmitted it to a cloud server. We thoroughly analyzed latency, throughput, packet loss, and jitter. Each scenario (10, 30, and 50 image batches, approx. 75 KB each) was rigorously tested 10 times to ensure statistical reliability. Key findings reveal that CoAP significantly outperformed HTTP in latency (20.67–24.03 ms vs. 33.93–35.52 ms) and throughput (44,618–48,972 B/s vs. 4,391–5,721 B/s), largely due to its lightweight design. However, HTTP demonstrated superior jitter performance (1.32–2.49 ms) compared to CoAP (1.71–39.90 ms). CoAP's jitter notably increased with larger datasets and exhibited distinct outliers, primarily attributed to its UDP-based nature. Packet loss remained 0% for both protocols. Ultimately, selecting the optimal protocol for IoT edge-to-cloud systems hinges on balancing CoAP's efficiency and speed against HTTP's consistency and reliability, depending on specific QoS priorities.
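
A small harness for two of the QoS metrics reported above — mean latency, and jitter taken as the mean absolute difference of consecutive latencies. The send function is a placeholder for either an HTTP or a CoAP transfer:

```python
# Generic latency/jitter measurement; send_fn is a stand-in for one
# protocol-specific image upload (e.g., an HTTP POST or a CoAP PUT).
import time
import statistics

def measure(send_fn, n=10):
    latencies = []
    for _ in range(n):
        t0 = time.perf_counter()
        send_fn()  # transfer one ~75 KB image
        latencies.append((time.perf_counter() - t0) * 1000)  # ms
    jitter = statistics.mean(
        abs(a - b) for a, b in zip(latencies, latencies[1:])
    )
    return statistics.mean(latencies), jitter
```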

10:30-12:00 Session 9A: Classification, Forecasting, and Deep Learning
Location: Room 7603
10:30
Deep Learning Approach for Classifying Indonesian Traditional Houses Using Fully Connected Layers

ABSTRACT. Indonesia is a culturally rich country with diverse ethnic groups, each represented by unique traditional houses that differ in shape, color, size, and philosophical meaning. This study proposes a method for identifying Indonesian traditional houses using a Convolutional Neural Network (CNN) with fully connected layers for feature extraction. The model processes image data to extract visual features, focusing on combinations of color, shape, and size. The effectiveness of different feature combinations was evaluated using the Average Variance Extracted (AVE) metric to determine validity. Combinations such as color and shape, size and shape, and color-size-shape yielded AVE scores above 0.5, indicating high validity, while other combinations scored below 0.5. The training process was conducted for 30 epochs, resulting in a training accuracy of 0.961 and a final test accuracy of 0.925. These results demonstrate that the proposed CNN-based approach effectively classifies traditional houses and highlight the importance of using multiple visual features for improved validity and accuracy.
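
A hedged sketch of the AVE check mentioned above: AVE is the mean of squared standardized loadings, with 0.5 as the usual convergent-validity threshold. The loading values here are invented for illustration:

```python
# Average Variance Extracted: mean of squared standardized loadings.
import numpy as np

def ave(loadings):
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

color_shape = [0.81, 0.74, 0.69]  # hypothetical feature loadings
print(ave(color_shape), ave(color_shape) >= 0.5)  # valid if >= 0.5
```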

10:45
Evaluating CNN and SVM Models in Smart Agriculture: A Case Study on Bell Pepper Leaf Disease Classification

ABSTRACT. This research aims to develop a classification model for bell pepper leaf disease images to support modern agriculture in a Smart Green House (SGH). Diseases on bell pepper leaves caused by pests, fungi, and viruses have reduced crop yields. To overcome these problems, two image classification approaches are used: a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM). The datasets are grouped into four classes: healthy (healthy leaf), pest (leaf disease caused by pests), fungus (leaf disease caused by fungi), and virus (leaf disease caused by viruses). The dataset is split into two subsets: 80% for training and 20% for testing. To increase data diversity, an augmentation process was performed using horizontal and vertical flips and rotation (15° to 270°). The dataset consists of 1,390 training images and 34 testing images. CNN models were developed and tested under three training scenarios: early stopping, 50 epochs, and 100 epochs. CNNs can automatically extract features from images. Meanwhile, the SVM model was developed using manually extracted features from color (RGB, HSL) and texture (Sobel) components, and tested using three types of kernels: linear, polynomial, and RBF. The evaluation results show that the CNN model with the early stopping scenario gives the highest accuracy of 97%, followed by the SVM model with a polynomial kernel, which achieves 94% accuracy. These findings show that the CNN is superior in leaf disease classification, and the results of this study are expected to contribute to the development of a bell pepper plant disease detection system.
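
A sketch of the augmentations the abstract lists (flips plus rotation), assuming a Keras pipeline; the directory layout and image size are placeholders:

```python
# Flips and rotation augmentation over a four-class directory structure.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=270,  # rotations drawn up to the stated maximum
)
train_flow = augmenter.flow_from_directory(
    "bell_pepper/train",          # healthy / pest / fungus / virus
    target_size=(224, 224),
    class_mode="categorical",
)
```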

11:00
Test-Time Training in CALF for Time Series Forecasting via Decomposition and Adaptive Freezing

ABSTRACT. Time series forecasting is a critical task in various domains, including finance, energy, and climate monitoring. Although Transformer-based models have shown strong results, they still struggle with adapting to dynamic and previously unseen data distributions. This paper presents an enhancement to the CALF framework by integrating Test-Time Training (TTT), Time Series Decomposition, and Adaptive Freezing to improve the accuracy and computational efficiency of the forecasting model. TTT allows the model to adjust its parameters during testing, helping it to adapt to new data distributions that were not present during training. Time Series Decomposition separates trend, seasonal, and residual components, enabling the model to focus on the most relevant features of the time series. Adaptive Freezing reduces computational costs by selectively freezing certain layers during fine-tuning, thus optimizing memory usage and training time. The combination of these techniques not only improves forecasting accuracy but also reduces computational burden, particularly in large-scale datasets. Extensive experiments demonstrate that the integrated approach significantly outperforms traditional models like ARIMA, LSTM, and state-of-the-art Transformer-based models in both forecasting accuracy and computational efficiency. The proposed method offers a robust solution for real-world applications, providing high adaptability to unseen data while maintaining efficiency in resource-constrained environments.
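
An illustrative decomposition step like the one described above, using statsmodels; the series and period are placeholders, and the actual CALF pipeline may decompose differently:

```python
# Split a series into trend, seasonal, and residual components so each
# can be modeled (or attended to) separately.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

series = pd.Series(range(48), dtype=float)  # stand-in for a real signal
parts = seasonal_decompose(series, model="additive", period=12)
trend, seasonal, resid = parts.trend, parts.seasonal, parts.resid
```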

11:15
Improving CNN Performance through Trainable Linear Combination of Fixed and Learnable Filters for CNNs

ABSTRACT. Convolutional Neural Networks are known for their remarkable performance in image processing and computer vision, primarily due to their use of convolutional layers that specialize in extracting local spatial features. This notable performance, however, generally demands deep architectures and substantial computational resources, making such networks less suitable for lightweight or resource-constrained environments. To alleviate these constraints, we propose a learning method that utilizes certain fixed filters in the convolutional layers to extract general features. The method is applied to the initial convolutional layers of the network by partially integrating fixed filters alongside existing learnable filters. This represents a novel attempt to guide the training process through partial kernel fixation, aiming to improve efficiency and performance simultaneously. Our approach reduces training time while enhancing training accuracy. For validation, the CIFAR-10 and CIFAR-100 datasets were used with the LeNet-5 and AlexNet architectures. Experimental results showed that, across all datasets, classification accuracy increased by up to 3.04 percentage points compared with the baseline without the proposed method, while training time was reduced by up to 24.4%.
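
A hedged PyTorch sketch of one way to realize this idea: fix some kernels (simple edge filters here) and learn the rest, with a trainable scalar mixing the two groups. The paper's exact scheme may differ:

```python
# Conv layer combining frozen general-purpose filters with learnable ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyFixedConv(nn.Module):
    def __init__(self, in_ch=3, learn_out=28):
        super().__init__()
        sobel = torch.tensor([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])
        bank = torch.stack([sobel, sobel.t(), -sobel, -sobel.t()])
        fixed = bank.unsqueeze(1).repeat(1, in_ch, 1, 1) / in_ch
        self.register_buffer("fixed_w", fixed)        # frozen filters
        self.learned = nn.Conv2d(in_ch, learn_out, 3, padding=1)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # trainable mix weight

    def forward(self, x):
        f = F.conv2d(x, self.fixed_w, padding=1)
        g = self.learned(x)
        return torch.cat([self.alpha * f, (1 - self.alpha) * g], dim=1)
```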

11:30
Value-Gradient-Based Subgoal Discovery for Deep Reinforcement Learning

ABSTRACT. Reinforcement learning agents often struggle with inefficient exploration and slow convergence in environments with large state spaces and sparse rewards. Hierarchical Reinforcement Learning (HRL) addresses this by leveraging temporally extended actions, or options, to navigate between critical subgoal states. However, the automatic discovery of these subgoals remains a significant challenge. This paper introduces a novel and computationally lightweight method, Value-Gradient Subgoal Discovery (VGSD), for identifying bottleneck states by directly analyzing the learned value function. We treat the state-action value function, Q(s,·), as a vector that defines a local policy representation. We posit that the state's Q-values in the environment map carry valuable information about the utility of strategic regions. Our method quantifies this by calculating a dissimilarity metric between the Q-vectors of a state and its immediate neighbors. We validate our approach in the classic Four Rooms domain, demonstrating that our method successfully identifies the doorway states as the primary bottlenecks. Furthermore, we show that a hierarchical agent utilizing these discovered subgoals as options learns a successful policy significantly faster than a standard DQN agent restricted to primitive actions, confirming the efficacy and utility of our discovery method.
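
A sketch of the VGSD scoring idea as described: compare a state's Q-vector with its neighbors' via a dissimilarity metric (cosine distance here, which is an assumption) and flag high-scoring states as bottleneck candidates:

```python
# Score each state by the mean dissimilarity between its Q-vector and
# its neighbors' Q-vectors; local maxima suggest bottlenecks (doorways).
import numpy as np

def cosine_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def subgoal_score(Q, state, neighbors):
    """Q: dict mapping state -> Q-vector over actions."""
    return float(np.mean([cosine_dist(Q[state], Q[n]) for n in neighbors]))
```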

10:30-12:00 Session 9B: Miscellaneous AI Application
Location: Room 7604
10:30
AI Service Quality and Customer Loyalty in the Online Food Delivery Industry

ABSTRACT. This study investigates the determinants of customer loyalty in Indonesia's online food delivery industry, emphasizing AI service quality, e-service quality, food quality, perceived value, customer satisfaction, and customer loyalty. Analysis of survey data from 400 consumers using structural equation modeling (SEM) indicates that e-service quality has a considerable effect on customer satisfaction; however, it does not directly influence loyalty. Perceived value increases satisfaction but does not directly affect loyalty. AI service quality plays a key role by directly improving both satisfaction and loyalty, while also strengthening the relationship between satisfaction and loyalty. Customer satisfaction is confirmed as the strongest driver of loyalty. These findings help online food delivery platforms understand how to combine technology and service quality to retain customers in a competitive market.

10:45
AI in Customer Service: The Impact of Human Involvement Disclosure on Customer Trust and Communication Style with Hybrid Agent Services in E-commerce

ABSTRACT. Artificial intelligence (AI) and human capabilities increasingly blur the boundaries between technology and human interaction in online services, enhancing customer service interactions. While current studies largely focus on the identity of the AI itself, such as chatbots, research on whether the involvement of human employees behind the scenes should be disclosed has received less attention. Therefore, this study addresses this gap by examining the impact of human involvement disclosure (HID) on customer interactions with hybrid service agents, which consist of AI-powered chatbots and human employees. This study uses a quantitative explanatory design to investigate the relationship between HID, impression management concerns, customer communication style, customer trust, and customer retention in e-commerce customer service. The instrument used in this study is a questionnaire. The findings show that when companies reveal human involvement, customers tend to communicate in a more human-oriented style, influenced by their desire to manage impressions once they realize there are real people alongside the chatbot. Additionally, this transparency boosts customer trust; businesses that are open about human roles are seen as more reliable. This trust, in turn, plays a significant role in customer retention, as those who feel confident in the hybrid service agent are more likely to remain loyal to the service in the long run. These results offer valuable perspectives for service providers developing hybrid customer service approaches. Clarity regarding the involvement of human agents not only improves the quality of interactions but also bolsters customer trust and loyalty within AI-assisted service contexts.

11:00
Leveraging Deep Neural Networks and Data Augmentation to Identify Hoax Indonesian News

ABSTRACT. The spread of misleading information threatens the reputation of individuals and organizations. While previous studies (Huang et al., 2023) demonstrated that sentiment neutralization improves fake news detection in English and Mandarin datasets, its effectiveness on Indonesian-language data remains unexplored, despite Indonesia’s high internet usage and disinformation vulnerability (Data Reportal, 2024). This study investigates the impact of text neutralization and data augmentation on fake news detection performance. Five models—RNN, LSTM, LSTM with Dropout, Bidirectional LSTM, and Transformer—were trained on original and augmented datasets, including neutralized texts produced by a Large Language Model (LLM). To assess model accuracy, K-fold cross-validation was applied with varying hyperparameters. The results indicate that the Transformer model trained solely on the original data, without neutralization, achieved the highest accuracy in fake news detection. These findings highlight that, in the Indonesian-language context, text neutralization and augmentation do not consistently improve detection performance, and that a model’s effectiveness heavily depends on the dataset used.
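
A generic sketch of the evaluation protocol named above (K-fold cross-validation over a text classifier); the vectorizer/classifier pair below is a simple stand-in, not the paper's RNN/LSTM/Transformer models:

```python
# K-fold cross-validation over a text-classification pipeline.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = np.array(["berita asli ...", "berita hoaks ..."] * 50)  # placeholders
labels = np.array([0, 1] * 50)

scores = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(texts):
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts[tr], labels[tr])
    scores.append(clf.score(texts[te], labels[te]))
print(f"mean accuracy: {np.mean(scores):.3f}")
```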

11:30
A Transformer Encoder-Based Approach for Cross-Regional Water Quality Prediction with Fine-Tuning

ABSTRACT. Estimating water quality is a fundamental and challenging problem in developing countries. Unlike existing works that adopt conventional machine learning and deep learning models, in this paper we propose a Transformer-encoder-based model. Our model can address missing values without imputation. Our study also introduces cross-regional data use: base-model training on larger datasets and fine-tuning on a small dataset in a target region. Eight features are adopted as inputs to the model. We evaluated the model and achieved 97% accuracy with four encoder layers. Our research results demonstrate the potential of the Transformer architecture in dealing with diverse environmental data for the management of water resources under different climate and regional conditions.
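
A hedged sketch of one way to handle missing values without imputation: treat each feature as a token and mask absent ones through the encoder's padding mask. The eight-feature, four-layer setup follows the abstract; everything else is an assumption:

```python
# Feature-as-token Transformer encoder that masks missing (NaN) inputs.
import torch
import torch.nn as nn

class WaterQualityEncoder(nn.Module):
    def __init__(self, n_features=8, d_model=32, n_layers=4, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, 8), NaN = missing
        missing = torch.isnan(x)
        tokens = self.embed(torch.nan_to_num(x).unsqueeze(-1))
        h = self.encoder(tokens, src_key_padding_mask=missing)
        h = h.masked_fill(missing.unsqueeze(-1), 0.0).sum(1)
        h = h / (~missing).sum(1, keepdim=True).clamp(min=1)
        return self.head(h)                  # mean-pool observed tokens
```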

11:45
AI-Driven Font Generation: A Qualitative-Method Approach for Multilingual Typeface Design

ABSTRACT. This study explores the use of machine learning to generate fonts that maintain a consistent style across different writing systems. The model was trained on a diverse set of multilingual characters and refined using multiple techniques to balance structure and style. To understand how well the generated fonts performed, a user study was carried out in which participants evaluated the results based on accuracy, legibility, and overall visual quality. Feedback from users, ranging from those with no design background to others with some creative experience, praised the system's ability to keep a coherent style across scripts. However, they also pointed out issues like rough edges and uneven stroke detail. While the current output is not yet suitable for professional use, it shows clear promise for artistic and decorative purposes. The results highlight the importance of user-centered testing in creative tool development. This work provides a foundation for improving automated font design by focusing on visual polish, cleaner output, and broader stylistic flexibility, with the long-term goal of supporting designers in faster, more accessible typeface creation.

13:00-14:15 Session 10A: Optimization and Efficiency
Location: Room 7603
13:00
A Multi-Objective Location Model for Kitchen Planning in a Centralized School Lunch Program Considering Cost, Environmental, and Social Impacts

ABSTRACT. School lunch programs in low- and middle-income countries are increasingly required to meet nutritional, environmental, and social goals. However, existing planning approaches often emphasize cost minimization while neglecting sustainability trade-offs. This study introduces a multi-objective optimization model for determining central kitchen locations and assigning schools under cost, environmental, and social objectives. A lexicographic optimization approach is applied to six policy scenarios using a case study in Balikpapan, Indonesia, due to its rapid urban growth and persistent challenges in establishing equitable and efficient centralized kitchen systems. Results indicate that cost implications remain relatively stable across scenarios, while emissions and employment are highly sensitive to delivery radius and budget constraints. Socially prioritized scenarios increase job opportunities but raise emissions, whereas environmental-focused strategies reduce emissions at the expense of reduced employment impact. Budget reductions exceeding 15% significantly limit service coverage and reduce social impact. The model provides a decision-support tool for policymakers to design cost-effective, inclusive, and environmentally sustainable centralized school lunch systems.
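
A toy lexicographic pass in the spirit of the model described: minimize cost first, then re-minimize emissions subject to near-optimal cost. The sites, schools, and coefficients below are invented for illustration:

```python
# Two-stage lexicographic optimization of a tiny assignment model (PuLP).
import pulp

sites, schools = ["k1", "k2"], ["s1", "s2", "s3"]
cost = {("k1","s1"): 4, ("k1","s2"): 6, ("k1","s3"): 9,
        ("k2","s1"): 7, ("k2","s2"): 3, ("k2","s3"): 5}
emis = {k: v * 0.8 for k, v in cost.items()}   # stand-in emission factors

x = pulp.LpVariable.dicts("assign", list(cost), cat="Binary")

def base_problem():
    prob = pulp.LpProblem("kitchen", pulp.LpMinimize)
    for s in schools:                  # every school served exactly once
        prob += pulp.lpSum(x[(k, s)] for k in sites) == 1
    return prob

# Stage 1: minimize cost
p1 = base_problem()
p1 += pulp.lpSum(cost[a] * x[a] for a in cost)
p1.solve(pulp.PULP_CBC_CMD(msg=0))
best_cost = pulp.value(p1.objective)

# Stage 2: minimize emissions while keeping cost within 5% of the optimum
p2 = base_problem()
p2 += pulp.lpSum(emis[a] * x[a] for a in cost)
p2 += pulp.lpSum(cost[a] * x[a] for a in cost) <= 1.05 * best_cost
p2.solve(pulp.PULP_CBC_CMD(msg=0))
```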

13:15
Toward Efficient LoRa CRAN Deployments: IQ Data Compression and Gateway Collaboration

ABSTRACT. The Cloud Radio Access Network (CRAN) architecture for Long Range (LoRa) networks has recently emerged as a promising solution for detecting ultra-low-SNR signals. Its core principle involves aggregating and processing multiple copies of signals from distributed LoRa gateways in the cloud. When individual gateways fail to decode a signal due to poor channel conditions, the cloud server can jointly process these weak signals to recover the transmitted message. However, this approach faces a significant scalability challenge: transmitting raw IQ samples from each gateway to the cloud demands substantial uplink bandwidth, making real-world deployment difficult. This paper represents an early stage of research focused on identifying key challenges in CRAN-based LoRa systems and presenting preliminary solution ideas. Specifically, we introduce a collaborative IQ-sharing approach combined with data compression to reduce bandwidth usage while preserving the signal integrity required for effective joint decoding. This work serves as a foundational step toward the realization of scalable CRAN systems capable of detecting even weak signals in constrained network environments.
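
A preliminary-style sketch of reducing gateway uplink load: quantize complex IQ samples to int8 and compress them before upload. The scale factor and zlib choice are assumptions, not the paper's codec:

```python
# Quantize + compress IQ samples on the gateway; restore in the cloud.
import zlib
import numpy as np

rng = np.random.default_rng(0)
iq = (rng.normal(size=4096) + 1j * rng.normal(size=4096)).astype(np.complex64)

scale = np.max(np.abs(iq.view(np.float32)))            # per-block scaling
q = np.round(iq.view(np.float32) / scale * 127).astype(np.int8)
payload = zlib.compress(q.tobytes(), level=6)
print(f"raw: {iq.nbytes} B, compressed: {len(payload)} B")

# Cloud side: dequantize and reinterpret as complex for joint decoding
restored = (np.frombuffer(zlib.decompress(payload), np.int8)
            .astype(np.float32) / 127 * scale).view(np.complex64)
```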

13:30
Memory Efficient Quantization-Aware Fine-Tuning Diffusion Models through Implementation of L4Q in EfficientDM Framework

ABSTRACT. The rapid evolution of large-scale machine learning models has created unprecedented demands on computational resources, particularly GPU memory, which has emerged as a critical bottleneck in both research and production environments. While recent advances in memory-efficient training techniques have shown promise in addressing efficiency concerns, the fundamental GPU memory bottleneck during training remains unresolved, limiting applicability in resource-constrained environments and hindering exploration of efficient diffusion models for complex tasks such as video or 3D generation. This work proposes L4Q-EfficientDM, a quantization-aware training framework that integrates an immediate gradient-flushing mechanism with a temporal calibration strategy. The integration aims to make efficient diffusion-model training accessible to researchers with limited computational resources and to democratize access to state-of-the-art generative modeling capabilities, enabling broader exploration of diffusion models across diverse application domains. Our experiments demonstrate that this integration yields substantial improvements in both training efficiency and generative quality. Specifically, the proposed method achieves a 1.38x speedup in training time and a 37% reduction in peak GPU memory usage, while also improving FID by 9-19 points.
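
A generic quantization-aware-training sketch (a straight-through estimator for a 4-bit weight fake-quantizer). This only illustrates the QAT principle; L4Q's LoRA coupling and EfficientDM's calibration are not reproduced here:

```python
# Fake 4-bit quantization with a straight-through gradient estimator.
import torch

class FakeQuant4(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, scale):
        q = torch.clamp(torch.round(w / scale), -8, 7)  # 4-bit signed grid
        return q * scale
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None        # STE: pass gradients through round()

w = torch.randn(64, 64, requires_grad=True)
scale = w.detach().abs().max() / 7
loss = FakeQuant4.apply(w, scale).pow(2).mean()
loss.backward()                      # w.grad exists despite the rounding
```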

13:45
Enhancing cloud block storage volumes throughput provisioned for virtual server instances via I/O thread

ABSTRACT. In a Kubernetes cluster environment, cloud volume provision and placement controllers provision cloud block storage volumes from the storage platform. The provision controller service (PCS) determines the storage node on which the storage device and storage volumes will be provisioned, while the placement controller service places the storage device and provisions the virtual image (storage volume) to the Kubernetes cluster as a persistent volume. This approach benefits clients and customers by minimizing infrastructure cost, whereas the cloud provider incurs increased expenses to host more virtual machines and attach more data volumes to them. I/O performance is a significant component of overall application performance. When multiple volumes are attached to a virtual server instance, volume bandwidth is prorated based on each volume's unattached bandwidth allocation. Problems arise when I/O operations span volumes: disk I/O performance can degrade because of threading models and CPU constraints. It is therefore important to consider all block volumes attached to the virtual server instance when determining whether one is reaching the expected bandwidth while I/O is happening across volumes. In this paper, we experiment with a mechanism that helped us overcome the threading constraint that surfaced on the hypervisor node.
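
An illustrative harness for the cross-volume contention described above: write to several volumes concurrently (one thread per mount point) and report aggregate throughput. The paths and sizes are placeholders:

```python
# Concurrent per-volume writers to expose cross-volume I/O contention.
import os
import time
import threading

MB = 64  # data written per volume, in MiB

def writer(path, chunk=1 << 20):
    buf = os.urandom(chunk)
    with open(path, "wb") as f:
        for _ in range(MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())

volumes = ["/mnt/vol1/test.bin", "/mnt/vol2/test.bin"]  # hypothetical mounts
t0 = time.perf_counter()
threads = [threading.Thread(target=writer, args=(p,)) for p in volumes]
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.perf_counter() - t0
print(f"aggregate: {MB * len(volumes) / elapsed:.1f} MB/s")
```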

13:00-14:15 Session 10B: Recommender, Software Testing, and Usability
Location: Room 7604
13:00
Towards a Machine Learning-Based Career and Education Recommendation System Using Content-Based Filtering

ABSTRACT. This study presents a prototype web-based career and education recommendation system based on content-based filtering, machine learning, and natural language processing (NLP). The system analyzes user input, such as interests and skills, and generates tailored career suggestions. It employs a Decision Tree classifier and text-based similarity techniques to offer personalized recommendations. The system comprises two modules: career suggestions using TF-IDF-based text classification and education guidance using categorical classification. For career recommendations, user inputs are vectorized using TF-IDF and classified using a decision tree model trained on labeled career data. For education recommendations, a separate Decision Tree model trained on structured data suggests suitable options based on field of study, education level, skills, career goals, and learning style. The backend is built with Flask and supports RESTful API endpoints, while a chatbot interface enables interactive communication. The evaluation results show that the career recommendation module achieved 81.58% accuracy, 83.13% precision, 87.50% recall, and an F1 score of 84.13%. The education recommendation module performed better, with the decision tree model achieving 90.90% accuracy and an F1 score of 82.77%. Gradient Boosting and Random Forest models followed with lower F1 scores of 74.06% and 63.75%, respectively, suggesting that simpler models may generalize better in education-related tasks. The chatbot responded to 90% of the ten representative queries, with nine rated as fully relevant. These findings indicate that the system can provide accurate, context-aware guidance and holds promise as a practical tool for personalized career and education support.
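
A minimal sketch of the career module's pipeline as described (TF-IDF vectorization feeding a Decision Tree); the training texts and labels are invented:

```python
# TF-IDF features into a Decision Tree classifier for career suggestions.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier

texts = ["enjoys statistics and coding", "likes teaching children",
         "builds web apps in spare time", "passionate about biology labs"]
careers = ["data_scientist", "teacher", "software_engineer", "researcher"]

model = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(max_depth=10))
model.fit(texts, careers)
print(model.predict(["loves programming and math"]))
```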

13:15
Identifying User Bias as Hidden State in A/B Testing with Hidden Markov Model Approach

ABSTRACT. Along with software development, testing continues to be performed, including A/B testing. A/B testing is a method that compares two software designs to determine which one is more effective. A commonly used metric is click rate, the number of users who click on a target in the design. The initial hypothesis is that variation A (control) has a higher click rate than variation B (experimental). A drawback of A/B testing is user bias, which can lead to inaccurate results. A Hidden Markov Model (HMM), commonly used to detect hidden patterns, is implemented here to identify user behavior bias as a hidden state affecting A/B testing results. The HMM algorithm adjusts the display frequency of each design variation unequally to test the initial hypothesis and produce results affected by user bias as little as possible. The results show that the HMM successfully detects user bias and improves the validity of A/B testing results. This research provides a new solution for overcoming user bias in A/B testing by changing the display frequency of each design variation based on click rate, and opens further research opportunities on applying HMMs to software testing optimization.
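
A hedged sketch of fitting a two-state HMM to click/no-click sequences, where the hidden state is meant to capture user bias; hmmlearn's CategoricalHMM is one possible tool, and the paper's implementation may differ:

```python
# Two-state HMM over categorical click observations (0 = no click, 1 = click).
import numpy as np
from hmmlearn import hmm

obs = np.array([[0], [1], [1], [0], [1], [0], [0], [1], [1], [1]])
lengths = [5, 5]                      # two user sessions of five events each

model = hmm.CategoricalHMM(n_components=2, random_state=0, n_iter=100)
model.fit(obs, lengths)
hidden = model.predict(obs)           # inferred bias state per event
print(hidden, model.emissionprob_)
```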

13:30
Interaction Design of a Mobile Application for Shuttle Service Booking Using a User-Centered Design Approach

ABSTRACT. Shuttle bus transportation services have become a vital alternative for public mobility, particularly in major Indonesian cities. However, the increase in demand for these services has not been matched by an improvement in service quality. Although numerous shuttle bus booking applications currently exist, they do not fully prioritize user needs and experience, thus failing to enhance the overall transportation service quality. This research focuses on the development of an interaction design to improve the booking process for shuttle bus services. The methodology employed is user-centered design, which emphasizes user needs throughout the development process. The interaction design was crafted to achieve the usability goals of being efficient to use and effective to use, alongside the user experience goal of being helpful. Testing results showed that participants completed tasks 19.08% faster on average compared to the previous iteration, indicating the prototype met the 'efficient to use' goal. Furthermore, evaluation using the Single Ease Question (SEQ) method resulted in a 100% task completion rate, proving the prototype fulfills the 'effective to use' goal. The System Usability Scale (SUS) assessment yielded an average score of 93.125, signifying that the prototype has excellent usability and successfully helps users achieve their objectives within the booking application.

13:45
The Utilization of N8N Agent Platform and AI for Job Applicant Candidate Selection Automation

ABSTRACT. Indonesia's Information Technology (IT) sector is facing a talent crisis caused by the gap between industry needs and graduate qualifications, exacerbated by traditional recruitment processes that are slow, inefficient, and subjective. This research proposes an automated candidate selection system based on the N8N low-code platform as a workflow orchestrator to resolve these challenges. The system is integrated with a Large Language Model (LLM)-based Artificial Intelligence (AI) agent to extract skills from unstructured data, such as resumes and job vacancies, and transform them into a structured JSON format. Candidates' matching scores are objectively calculated through a customizable weighting formula, focusing on must-have skills. The system's main output is a transparent analytical report that displays a list of matching and non-matching skills, enabling an evidence-based selection process that improves the accuracy and objectivity of decisions. The strategic implication is the transformation of the recruiter's role from an administrative function to a strategic partner capable of conducting data-driven interviews and building long-term data assets for trend analysis and strategic workforce planning.
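
A toy sketch of the weighted matching score described above: compare LLM-extracted skills (structured JSON) against a vacancy's requirements, with extra weight on must-have skills. The weights and field names are invented:

```python
# Weighted skill matching with a transparent matched/missing breakdown.
candidate = {"skills": ["python", "sql", "docker"]}
vacancy = {"must_have": ["python", "sql"], "nice_to_have": ["kubernetes"]}

W_MUST, W_NICE = 0.8, 0.2  # hypothetical, recruiter-customizable weights

def match_score(cand, job):
    have = set(cand["skills"])
    must = [s for s in job["must_have"] if s in have]
    nice = [s for s in job["nice_to_have"] if s in have]
    score = (W_MUST * len(must) / max(len(job["must_have"]), 1)
             + W_NICE * len(nice) / max(len(job["nice_to_have"]), 1))
    return score, {"matched": must + nice,
                   "missing": [s for s in job["must_have"] if s not in have]}

print(match_score(candidate, vacancy))  # evidence for the analytical report
```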