EVO*2025: EVOSTAR 2025
PROGRAM FOR THURSDAY, APRIL 24TH

09:00-11:00 Session 7A: EvoCOP: Applications
Location: Room A
09:00
Instance Space Analysis and Algorithm Selection for a Parallel Batch Scheduling Problem

ABSTRACT. This paper addresses the Oven Scheduling Problem (OSP), a parallel batch scheduling problem in semiconductor manufacturing, and identifies strengths and weaknesses of solution methods using the Instance Space Analysis (ISA) methodology. We propose a comprehensive feature set to effectively characterize OSP instances and generate more diverse instances compared to the literature. The performance of two state-of-the-art algorithms for the OSP -- Simulated Annealing (SA) and Large Neighborhood Search (LNS) -- is analyzed using ISA, revealing distinct regions of superior or inferior performance for each, as well as areas of equal performance. Finally, we propose an automated algorithm selection approach that outperforms any single algorithm.
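
The per-instance selection step can be pictured with a minimal sketch. This is not the paper's implementation: the features, labels, and the nearest-neighbour rule below are illustrative stand-ins for a trained selector over ISA features.

```python
# Hypothetical sketch of per-instance algorithm selection:
# pick the solver (SA or LNS) that won on the most similar training instance.
# Feature values and labels are invented for illustration.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_algorithm(instance_features, training_set):
    """training_set: list of (feature_vector, best_algorithm) pairs."""
    _, best = min(training_set,
                  key=lambda pair: euclidean(pair[0], instance_features))
    return best

training = [
    ((10, 0.2), "SA"),    # small, sparse instance where SA won
    ((200, 0.8), "LNS"),  # large, dense instance where LNS won
]
print(select_algorithm((180, 0.7), training))  # -> LNS
```

In practice the selector would be a classifier trained on the full ISA feature set, but the interface is the same: instance features in, algorithm choice out.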

09:25
Generating (Semi-)Active Schedules for Dynamic Multi-mode Project Scheduling Using Genetic Programming Hyper-heuristics

ABSTRACT. The dynamic multi-mode resource-constrained project scheduling problem (DMRCPSP) is crucial for effectively managing complex projects where activities have multiple resource-demand options and durations are uncertain. Efficient solutions to this problem are vital for industrial applications, where optimal scheduling can significantly impact project costs and timelines. Genetic programming hyper-heuristics (GPHH) have been applied to evolve effective scheduling heuristics that make real-time decisions. However, current decision-making procedures for solving the DMRCPSP primarily generate non-delay schedules, which are often suboptimal. This paper proposes two new decision-making procedures that generate (semi-)active schedules. The core idea is to expand the decision set and allow activities that cannot immediately start to reserve resources for future execution. We further develop GPHH to evolve scheduling heuristics based on these procedures and compare them with existing decision-making procedures. The results demonstrate that the proposed GP approaches evolve rules that generate, in real time, (semi-)active schedules that are more effective than the best-known non-delay schedules.

09:50
A Genetic Approach to the Operational Freight-on-Transit problem

ABSTRACT. Last-mile delivery problems have become a subject of increasing academic study in recent years. This paper examines the operational-level Three-Tier Delivery Problem with Public Transportation, which concerns the conveyance of customers' parcels from a warehouse, typically situated outside the city, to the customers in the city center, using public transport vehicles as an intermediate leg. The parcels are conveyed from the depot to public transport stops, then taken into the city center by public transportation vehicles, and finally delivered to the customers by freighters using green and lightweight means (or even walking). In this paper, we introduce a genetic algorithm (GA) to solve large-scale instances, which cannot be tackled by exact approaches in practice. We provide a detailed account of its encoding and the distinct genetic operators tailored for it. We undertake a comparative analysis of its performance against that of a compact mixed-integer linear programming formulation on a diverse array of instances of various sizes. The outcomes underscore the efficacy and robustness of the GA approach across different instance sizes, yielding solutions that are near optimal in a relatively short span of time.

10:15
Healthcare Facility Location Problem and Fitness Landscape Analysis

ABSTRACT. This paper presents an initial study of a Healthcare Facility Location Problem, the Two-Level P-Median Location Problem (TLPMLP). The main goal is to analyze problem instances by studying their associated fitness landscape characteristics, with a focus on ruggedness, neutrality, and the number of local optima. For each instance, we consider two fitness landscapes defined by the following two functions: the direct neighborhood function (O(n) complexity) and the two-step neighborhood function (O(n^2) complexity). We also conduct a preliminary exploration of the correlation between the fitness landscape characteristics and the solving difficulty by using an exact method (binary linear program) and two straightforward local search methods: a hill-climber and a sampled walk. Experiments are conducted on 40 instances from the literature and 27 randomly generated ones. We discuss our results and point out several future research directions on these initial approaches to analyzing the TLPMLP.
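
Ruggedness along a walk is commonly summarized by the lag-1 autocorrelation of the sampled fitness values (values near 1 indicate a smooth landscape, values near 0 a rugged one). A generic sketch on a toy bit-flip landscape, not the TLPMLP neighborhoods themselves:

```python
import random

def random_walk_fitnesses(fitness, start, neighbor, steps, seed=0):
    """Collect fitness values along a random walk through the landscape."""
    rng = random.Random(seed)
    x, values = start, []
    for _ in range(steps):
        values.append(fitness(x))
        x = neighbor(x, rng)
    return values

def lag1_autocorrelation(values):
    """Lag-1 autocorrelation of the walk; a standard ruggedness proxy."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    cov = sum((values[i] - mean) * (values[i + 1] - mean)
              for i in range(n - 1)) / (n - 1)
    return cov / var if var else 0.0

# Toy landscape: fitness = number of ones in a bit string,
# neighborhood = flip one random bit (a direct-neighborhood analogue).
def fitness(bits):
    return sum(bits)

def neighbor(bits, rng):
    idx = rng.randrange(len(bits))
    return [b ^ (i == idx) for i, b in enumerate(bits)]

walk = random_walk_fitnesses(fitness, [0] * 32, neighbor, 200)
print(round(lag1_autocorrelation(walk), 2))  # smooth toy landscape: near 1
```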

10:40
A Selective Vehicle Routing Problem for the Bloodmobile System

ABSTRACT. Mobile blood collection has the advantage of greater reach compared to blood drives at fixed donation sites and is preferable for individuals with limited time or means of transportation. Bloodmobiles are widely used in healthcare logistics to increase the number of donors, donation frequency, and to better match blood demand with collection. The bloodmobiles are stationed at predetermined locations, while shuttles are assigned to visit these locations to collect the donated blood. This problem is formulated as the Selective Vehicle Routing Problem under the Bloodmobile System (SVRP-BM). This paper extends the Selective Vehicle Routing Problem with Integrated Tours (SVRPwIT) by considering: (i) multiple shuttles, (ii) multiple blood types, (iii) multiple trips for the bloodmobiles, and (iv) the visiting availability of the donation sites. The proposed Adaptive Large Neighbourhood Search (ALNS) algorithm, with a Simulated Annealing (SA) acceptance criterion, is tested on generated instances adopted from a real-life case of the Surabaya Red Cross. SVRP-BM provides a strategic solution to Surabaya's blood shortage by optimizing blood collection.

09:00-11:00 Session 7B: EvoApplications: Engineering
Location: Room B
09:00
Optimizing Camera Placement for Chicken Farm Monitoring

ABSTRACT. Animal farming has transitioned from small-scale operations to large commercial ventures, raising concerns about animal welfare alongside productivity and profitability. Artificial intelligence technologies offer significant potential for enhancing welfare through improved monitoring. However, practical solutions for optimizing farm management decisions are still limited. This paper tackles the challenge of optimizing camera placement in chicken farms to achieve maximum coverage for effective monitoring. We employ the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and two Quality Diversity (QD) algorithms, MAP-Elites (ME) and CMA-ME, using two behavior descriptors to identify optimal camera positions. Our findings show that algorithm-derived camera placements outperform human-designed configurations. Importantly, the QD algorithms offer a diverse set of high-quality solutions that can be selected without extra computation, in case the CMA-ES solution does not meet unforeseen constraints. Using our modeling, we have optimized the camera placement in a commercial farm in Cyprus, achieving maximum-coverage surveillance.
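
At the core of MAP-Elites is a simple archive-insertion rule: keep one elite per cell of a behavior-descriptor grid. A schematic sketch (descriptors, candidate solutions, and fitness values below are invented for illustration, not the paper's camera model):

```python
def map_elites_insert(archive, descriptor, solution, fitness, bins=10):
    """Insert into a MAP-Elites archive keyed by discretized descriptors.
    Keeps the best solution per cell; descriptors are assumed in [0, 1)."""
    cell = tuple(int(d * bins) for d in descriptor)
    if cell not in archive or fitness > archive[cell][1]:
        archive[cell] = (solution, fitness)
    return archive

archive = {}
map_elites_insert(archive, (0.15, 0.90), "layout-A", 0.7)
map_elites_insert(archive, (0.17, 0.93), "layout-B", 0.9)  # same cell, better
map_elites_insert(archive, (0.55, 0.20), "layout-C", 0.4)  # new cell
print(len(archive), archive[(1, 9)][0])  # -> 2 layout-B
```

Because every cell keeps its own elite, the final archive is exactly the "diverse set of high-quality solutions" the abstract refers to: a fallback layout can be read off a neighboring cell without re-running the search.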

09:25
Evolving Dynamic Fault Mitigation Strategies in a Robot Swarm for Collective Transport

ABSTRACT. As robot swarms move to real-world deployment, safety will be a key factor in improving adoption and trust [39]. Robot swarms are composed of many robots: during real-world operation, each individual may be susceptible to failure, resulting in potentially degraded performance of the swarm overall. A necessary component for safety is then the ability to detect and mitigate faults in the swarm. In this paper, we present a novel approach to learning dynamic fault mitigation via neuroevolution, where mitigation actions are implemented by both faulty and non-faulty robots in a collective transport scenario. In particular, there is no explicit fault detection step, and the evolved “mitigation module” maps between a set of locally observed metrics as input and mitigation actions as output. Our approach is able to learn effective mitigation for six types of fault independently. We show that by allowing robots of any state to freely apply actions, “loosely-coordinated” mitigation emerges, improving on the baseline where no mitigation is applied.

09:50
Designing Hardware-Friendly Hash Functions for Network Security Using Cartesian Genetic Programming

ABSTRACT. In this study, we propose a novel, hardware-efficient non-cryptographic (NC) hash function, developed using Cartesian Genetic Programming (CGP), to optimize the processing of network flows (e.g., 96-bit vectors in IPv4). With the advent of high-speed terabit Ethernet technologies such as 800G, cybercriminals are increasingly exploiting these networks to launch various attacks, including distributed denial-of-service (DDoS) attacks. To counter these threats, network security applications often employ probabilistic data structures (PDS), such as Bloom filters and Count Min sketches, to monitor network flows. These applications are often deployed on Field Programmable Gate Arrays (FPGAs) to meet real-time processing demands. The performance of PDS depends significantly on the efficiency of NC-hash functions. By utilizing avalanche metrics (avalanche dependence, avalanche weight, and entropy) as the fitness function, CGP-hash ensures robust performance without the need for dataset-specific training. While Genetic Programming (GP)-based NC-hash functions have demonstrated superior computational efficiency on FPGAs in terms of operating frequency, throughput, and latency, they are often less resource-efficient compared to state-of-the-art NC-hash functions. Thus, our hypothesis in this paper is that CGP, with its compact representation, leads to NC hashes with high computational efficiency and fewer resources on an FPGA. Our experimental results confirm that CGP-hash improves computational efficiency by at least 7.3% while improving area efficiency by 4.5× compared to state-of-the-art NC-hash functions. This makes CGP-hash more suitable for FPGA implementations than other bio-inspired and handcrafted hash functions. Moreover, the 48-bit hash output can be extended by evolving additional hash functions and concatenating their outputs, all while maintaining superior resource efficiency and computational speed.
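
The avalanche-weight component of such a fitness function can be illustrated in a few lines. The hash below is a generic 32-bit integer mixer chosen for illustration, not the evolved CGP hash:

```python
def toy_hash(x):
    # Illustrative 32-bit mixer (NOT the paper's evolved CGP hash).
    x = (x ^ (x >> 16)) * 0x45d9f3b & 0xFFFFFFFF
    x = (x ^ (x >> 16)) * 0x45d9f3b & 0xFFFFFFFF
    return x ^ (x >> 16)

def avalanche_weights(h, key, in_bits=32):
    """Hamming distance between h(key) and h(key with bit i flipped),
    for each input bit i. An ideal 32-bit hash flips ~16 output bits."""
    base = h(key)
    return [bin(base ^ h(key ^ (1 << i))).count("1") for i in range(in_bits)]

weights = avalanche_weights(toy_hash, 0x12345678)
print(min(weights), max(weights))  # ideally both near 16
```

In a CGP setting, statistics over these per-bit weights (together with avalanche dependence and entropy) would be aggregated into the fitness score that drives evolution.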

10:15
The Importance of Being Earnest: Multiple Heterogeneous Container Loading with a Simple Genetic Algorithm

ABSTRACT. In this study we address the complex practical problem of multiple heterogeneous container loading with a simple genetic algorithm. We demonstrate that, with a well-chosen representation including a heuristic and a suitable fitness function, the other aspects of the genetic algorithm do not need extensive work to achieve good results. Following a systematic study of our method on synthetically generated data, we visually showcase the solution for a company-based problem instance.

09:00-11:00 Session 7C: EvoApplications: Bioinformatics and Health
Location: Room C
09:00
Methodology for Designing Injection Molds: Data Mining and Multi-Objective Optimization

ABSTRACT. Injection molding is a complex process where effective mold design is essential to avoid defects like incomplete parts, flash, sink marks, weld lines, air bubbles, warping, and shrinkage. Proper mold design helps minimize these defects, but it is challenging due to the number of variables across stages such as plasticization, filling, packing, and cooling. This study focuses on optimizing the cooling phase of injection molding using numerical simulation tools (Moldex3D) to evaluate the impact of design variables on process performance. Due to the complexity and multi-objective nature of the problem, Artificial Intelligence (AI) techniques, including data mining, Artificial Neural Networks (ANN), and Multi-Objective Evolutionary Algorithms (MOEAs), were used to explore solutions effectively. The optimization aimed to improve the thermal efficiency of conformal cooling systems, critical for ensuring part quality and minimizing cycle time. A case study with a cylindrical part and multiple cooling systems initially defined 34 objectives, including temperature gradients, cycle time, and defects. These objectives were reduced to four using Principal Component Analysis (PCA) to facilitate optimization. The results showed significant improvements in temperature uniformity and defect reduction, confirming the effectiveness of integrating AI-based optimization techniques with numerical modeling for advanced cooling system design in injection molds.

09:25
Analysis of Illicit Drug Mixtures at Festivals Using Portable Near-Infrared Spectroscopy with Genetic Programming

ABSTRACT. Illicit drugs are often mixtures containing cutting agents and other adulterants. The adulterants can cause harm on their own or when consumed in conjunction with the illicit drug. Drug checking services mitigate the risks associated with illicit drugs by providing substance composition and other harm prevention information. A widely embraced delivery model for drug checking involves offering the service at events and festivals. To help reduce drug-related harm at festivals, it is beneficial to have substance identification methods that are rapid, accurate, non-destructive, safe, portable, convenient and easy to use. Near-infrared spectroscopy (NIRS) is a well-known method for identifying and quantifying a range of substances, including the analysis of illicit drug mixtures, particularly when using high-end bench-top instruments. Providing mixture analysis with a portable NIRS device is advantageous; however, there are still challenges when using a portable NIRS solution due to low sensitivity, limited wavelength range, low resolution, etc. Consequently, a range of artificial intelligence techniques have been utilised to analyse portable NIRS data of various mixtures. Nevertheless, evolutionary computation methods, known to be robust for solving complex combinatorial and optimisation problems, are not as frequently reported in NIRS-based analysis of mixtures. This paper proposes a genetic programming-based approach to develop explainable models for analysing illicit drug mixtures using portable NIRS. The experimental results indicate that the proposed approach can effectively produce a model for mixture analysis that identifies the components within a drug sample with accuracy comparable to benchmark linear-model approaches.

09:50
A Symbolic Regression Screening Approach within Peptide Optimisation

ABSTRACT. The Protein Optimization Evolving Tool is a genetic programming based peptide generation tool which has successfully created novel peptides with improved performance for MRI imaging. However, like all supervised machine learning techniques, it may overfit to its library of training peptides and create peptides which do not improve functionality. To overcome this problem we create symbolic regression models to act as another predictor of peptide function. We create a set of 76 features of physicochemical, theoretical and composite properties for each peptide and evolve the models using Grammatical Evolution on two datasets, one containing 74 peptides and the other 100 peptides. Models trained using these 76 features can successfully predict peptide functionality with a median MSE of 0.427 on the first dataset and 0.179 on the larger dataset, achieving state-of-the-art results on both. We next investigate whether a reduced set of 8 real-world features, which could result in more interpretable models, can accurately predict peptide functionality. The models created on this reduced set were outperformed by the models that used the full set of features on the first dataset, but were statistically equivalent on the second dataset. Finally, we downsample the data to 10%, 33% and 50% to evaluate the robustness of this approach. Our results show that models trained on as few as 7 peptides can be used as an additional measure of functionality within the Protein Optimization Evolving Tool.

09:00-11:00 Session 7D: EvoApplications: Real World Systems
Location: Room D
09:00
Adaptive Local Search for Real-World Multi-Echelon Inventory Control

ABSTRACT. This paper proposes an efficient solving method for a multi-echelon inventory control optimization problem applicable to real-world contexts. We implement a metaheuristic based on a local search algorithm and develop custom neighborhood operators. To validate the proposed method and assess its performance in a realistic setting, we develop a simulation tool that reproduces the inventory control process over several time periods. This simulation tool is used to evaluate the performance of the multi-echelon approach and to compare it to existing single-echelon order engines used in production. The results of the experiments show the significant benefits of the proposed multi-echelon inventory control optimization method, including an improvement of between 13% and 40% in average service rate and a profit increase of between 23% and 37% depending on the type of instance considered.
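
The skeleton of a local search with pluggable neighborhood operators can be sketched as follows. The operator and cost function below are toy stand-ins, not the paper's inventory model:

```python
import random

def local_search(solution, cost, neighborhoods, max_no_improve=50, seed=0):
    """First-improvement local search over a list of neighborhood operators.
    Each operator maps (solution, rng) to a candidate solution."""
    rng = random.Random(seed)
    best, best_cost, stall = solution, cost(solution), 0
    while stall < max_no_improve:
        candidate = rng.choice(neighborhoods)(best, rng)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost, stall = candidate, c, 0
        else:
            stall += 1
    return best, best_cost

# Toy stand-in: minimize total stock while keeping every echelon >= 1 unit
# (a crude "service level" penalty), with a single decrement operator.
def cost(s):
    return sum(s) + (1000 if min(s) < 1 else 0)

def decrement_one(s, rng):
    i = rng.randrange(len(s))
    return [q - 1 if j == i else q for j, q in enumerate(s)]

best, best_cost = local_search([5, 8, 3], cost, [decrement_one])
print(best, best_cost)
```

Custom operators for a real multi-echelon model (shifting stock between echelons, re-timing orders, and so on) would simply be appended to the `neighborhoods` list.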

09:25
Optimizing Dietary Plans Using Evolutionary Algorithms

ABSTRACT. This paper presents a novel application of Computational Intelligence to optimize dietary plans tailored to athletes’ unique nutritional needs across diverse sports disciplines. Leveraging Evolutionary Algorithms (EAs), the system dynamically generates personalized meal plans based on multi-factorial inputs, including age, gender, sport type, and individual food preferences. By integrating multi-objective optimization, the approach ensures precise macronutrient balance for enhanced performance, recovery, and long-term health. The system draws from a structured food database, offering real-time adaptability to dietary restrictions and preferences, and delivering nutrient-dense meal plans optimized for metabolic efficiency. Unlike traditional methods, this solution provides a practical, cost-effective alternative to professional diet planning. The research underscores the transformative potential of EAs in sports nutrition, combining tailored recommendations with scientific accuracy to set a new standard for athlete-focused dietary optimization.

09:50
FedGP: Genetic Programming for Evolutionary Aggregation in Federated Learning with Non-IID data

ABSTRACT. Federated Learning (FL) represents a distributed, privacy-preserving machine learning paradigm that enables decentralized model training across multiple clients. While traditional aggregation techniques, such as Federated Averaging (FedAVG), have demonstrated effectiveness, they often struggle in non-independent and identically distributed (non-IID) scenarios, where data distributions vary significantly among clients. To address these limitations, this study introduces FedGP, a novel aggregation strategy based on genetic programming (GP). FedGP dynamically evolves aggregation functions, enabling adaptive and personalized model updates that better capture the heterogeneity inherent in distributed data. The proposed method is evaluated on the PathMNIST dataset, employing a comprehensive experimental design comprising 24 configurations, including 8 setups with FedAVG and 16 with FedGP. The comparative analysis highlights FedGP's superior generalization capabilities and reduced biases, outperforming FedAVG in terms of accuracy. These results position FedGP as a robust and scalable solution for real-world FL applications, particularly in environments characterized by data heterogeneity.
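
As a baseline for what FedGP replaces, FedAVG takes a data-size-weighted average of client model parameters. A minimal sketch over flat parameter vectors (values are illustrative):

```python
def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors (the FedAVG baseline).
    FedGP would instead evolve this aggregation function with GP."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# Three clients with different data volumes (a non-IID setting).
params = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sizes = [100, 300, 600]
print(fedavg(params, sizes))  # -> [0.4, 0.6]
```

The fixed weighted-mean form is exactly what can go wrong under non-IID data: a GP-evolved aggregator is free to discover non-linear, per-client combinations instead.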

10:15
Greater AI Design Control Aids Evolution of Computational Materials

ABSTRACT. Unconventional computing may overcome some of the limitations of traditional silicon-based systems using alternative materials and computational mechanisms. However, due to their complex underlying dynamics and high-dimensional parameter space, the design of these materials such that they instantiate computation is non-intuitive, making AI-driven design attractive. It has been shown that evolutionary algorithms can tune the structural properties of grains within a granular material such that it computes logical functions. In recent years, programmable granular metamaterials have been developed so that multiple physical properties of individual grains can be altered independently. This raises the question of whether allowing evolutionary algorithms to tune more grain features within a granular material frustrates or facilitates its ability to embed computation. In this work, we show that the latter is the case when grain sizes and stiffnesses are co-evolved to embed Boolean logic gates, compared to evolving sizes or stiffnesses alone. We report physical verification of evolved designs, taking a further step toward the provision of alternatives to electronic computing.

11:20-12:20 Session 8A: EvoApplications: Games
Location: Room A
11:20
Injecting Combinatorial Optimization into MCTS: Application to the Board Game boop.

ABSTRACT. Games, including abstract board games, constitute a convenient ground to create, design, and improve new AI methods. In this field, Monte Carlo Tree Search is a popular algorithm family, aiming to build game trees and explore them efficiently. Combinatorial Optimization, on the other hand, aims to model and solve problems with an objective to optimize and constraints to satisfy, and is less common in Game AI. We believe, however, that both methods can be combined efficiently, by injecting Combinatorial Optimization into Monte Carlo Tree Search to help the tree search, leading to a novel combination of these two techniques. Tested on the board game boop., our method beats the Monte Carlo Tree Search baseline 96% of the time. We conducted an ablation study to isolate and analyze which injections and combinations of injections lead to this performance. Finally, we pitted our AI method against human players on the Board Game Arena platform, reaching a 373 Elo rating after 51 boop. games, with a 69% win rate, and finishing ranked 56th worldwide out of the platform's 5,316 boop. players.
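
At the heart of the MCTS family is the selection rule used to descend the tree; UCB1 is the classic choice. A generic sketch (not the paper's CO-augmented variant; the visit counts and rewards are invented):

```python
import math

def ucb1_select(children, exploration=1.41):
    """Return the index of the child maximizing mean reward plus an
    exploration bonus. children: list of (visits, total_reward) pairs,
    each visited at least once."""
    parent_visits = sum(v for v, _ in children)
    def score(child):
        visits, reward = child
        return reward / visits + exploration * math.sqrt(
            math.log(parent_visits) / visits)
    return max(range(len(children)), key=lambda i: score(children[i]))

# Three candidate moves as (visits, total reward): the under-explored
# third move wins thanks to its large exploration bonus.
print(ucb1_select([(100, 60.0), (100, 55.0), (2, 1.0)]))  # -> 2
```

Injecting Combinatorial Optimization, as the paper does, amounts to biasing or pruning this selection and expansion machinery with information from a constraint model.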

11:45
Robust search for the underlying objectives in black-box games with binary outcomes

ABSTRACT. Real-world interactions, such as web system access, tournaments, social communication, etc., can be modeled as a game with binary outcomes. Previous works study extracting the game's underlying structure into a coordinate system of the most prominent pairwise Pareto non-dominated strategies, i.e., the underlying game objectives. The objective search has also been considered in a coevolutionary context. Our work, however, targets a scenario in which mutual adjustment cannot occur: while the first player performs moves from nature, the second learns the underlying objectives. Inference of the black-box game structure occurs during the simulation. In this regard, we propose the coordinate system extraction (CSE) algorithm, capable of working with sparse interactions. Additionally, we consider a benchmark to probe the search algorithm's robustness on predefined game coordinate system shapes. We compare coevolutionary approaches such as parallel hill climbing and Pareto layering to the proposed CSE method on the benchmark and conclude that CSE infers the underlying coordinate system better in terms of both performance and robustness.

11:20-12:20 Session 8B: EvoMUSART: Image Making and Visual Art (ii)
Location: Room B
11:20
Balancing Indeterminacy and Structure: Neural Text Generation for Artistic Inspiration

ABSTRACT. This paper explores the potential of neural text generative models to produce poetic lines that can serve as seeds for artistic inspiration. Drawing on theories of aesthetic perception and creativity, we hypothesize that lines characterized by indeterminacy and occupying an intermediate space between randomness and predictability are most effective at inspiring creative work. We compare two architectures: Long Short-Term Memory Variational Autoencoders (LSTM-VAE) and Transformer-based Large Language Models (LLMs), analyzing their outputs using metrics that capture linguistic, stylistic, and poetic qualities. Our analysis reveals that LSTM-VAE generates lines with higher global entropy and more variable syntactic patterns while maintaining lower levels of pretentiousness compared to LLMs. While LLM-generated lines demonstrate richer conventional poetic imagery, they often present overly finished forms that may inhibit rather than stimulate creative exploration. The study's findings suggest that smaller, more focused models trained on curated datasets might be more effective at generating semantically open lines than larger, general-purpose language models. This is particularly relevant during the Seed phase of creativity, where the goal is not to produce polished artistic output but to help artists enter a state of heightened perception and creative possibility. Our work contributes to understanding how computational systems can effectively support human creativity while providing a framework for evaluating AI systems designed to bring artists into the creative state.
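
One of the simplest of the linguistic metrics mentioned above, global (character-level) entropy, can be computed in a few lines. This is a generic illustration, not necessarily the paper's exact formulation:

```python
import math
from collections import Counter

def global_entropy(text):
    """Shannon entropy (bits per character) of a line's character
    distribution; higher values indicate less predictable text."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(round(global_entropy("aaaa"), 2))  # -> 0.0  (fully predictable)
print(round(global_entropy("abab"), 2))  # -> 1.0  (two equiprobable symbols)
```

Under the paper's hypothesis, lines useful as creative seeds would sit in an intermediate entropy band: neither degenerate repetition nor noise.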

11:45
Exploring Bridges Between Creative Coding and Visual Generative AI

ABSTRACT. How to bridge generative procedural art and visual generative artificial intelligence (AI) for visual content creation is an under-explored topic.

In this paper, we explore how to bridge generative procedural art creation and visual generative AI (specifically diffusion models) by programming functionalities integrated into the creative environment. Specifically, we want to explore methodologies to condition/stylize art content and perform style learning upon procedural art via accessible interactions for artists and programmers.

We propose two methods: GenP5, a novel p5.js library that enables generative procedural art creation with flexible stylization of canvas content and convenient conditioning of art creation on pre-determined patterns; and P52Style, an extended library built upon p5.gui that allows flexible adjustment of art content and leverages visual generative AI for style-learning tasks, creating new artworks with the style of generative procedural art.

11:20-12:20 Session 8C: EvoApplications: 30 years of PSO (i)
Location: Room C
11:20
A Survey of Modern Hybrid Particle Swarm Optimization Algorithms

ABSTRACT. Bio-inspired, population-based meta-heuristics for global optimization are very popular algorithms for addressing complex computational problems that traditional methods struggle to solve. Among the existing algorithms, the swarm intelligence algorithm Particle Swarm Optimization (PSO) is one of the most popular, thanks to its simplicity and effectiveness in multiple scenarios. This article focuses on recent hybrid optimization methods that extend the basic functioning of PSO. Hybridization, in this context, is defined as the integration of PSO with a different technique, to take advantage of the strengths of both algorithms. According to our findings, many variants have been proposed. The most frequent solutions consist of the hybridization of PSO with evolutionary operators (e.g., Genetic Algorithms and Differential Evolution); such strategies usually maintain a high degree of diversity in the population, enhancing global search capability while reducing the risk of stagnation. Meanwhile, the most widespread applications are in the areas of energy optimization, structural engineering, and machine learning, demonstrating the versatility of these hybrid approaches.
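
For reference, the canonical PSO update that most hybrids build on: inertia weight w plus cognitive (c1) and social (c2) attraction terms. The coefficient values and the demo vectors below are illustrative:

```python
import random

def pso_step(pos, vel, pbest, gbest, w=0.72, c1=1.49, c2=1.49, rng=random):
    """One canonical PSO velocity/position update for a single particle.
    Hybrids typically interleave this update with GA or DE operators."""
    new_vel = [w * v
               + c1 * rng.random() * (pb - x)   # pull toward personal best
               + c2 * rng.random() * (gb - x)   # pull toward global best
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

random.seed(0)
new_pos, new_vel = pso_step([1.0, -2.0], [0.0, 0.0], [0.5, 0.0], [0.0, 0.5])
print(new_pos)
```

A GA-hybrid, for instance, would periodically apply crossover/mutation to the particle positions between such steps, restoring diversity when the swarm contracts.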

11:45
We are Sending you Back... to the Optimum! Fuzzy Time Travel Particle Swarm Optimization

ABSTRACT. Particle Swarm Optimization (PSO) is a swarm intelligence meta-heuristic whose performance highly depends on the selection of its hyper-parameters, which control the particles' exploration and exploitation of the search space. Since the tuning of the hyper-parameters is problem-dependent, settings-free methods are preferable. Fuzzy Self-Tuning PSO (FST-PSO) is a settings-free variant of PSO that exploits a Fuzzy Rule-Based System to dynamically determine the optimal hyper-parameter values for each particle. Despite this advantage, the optimization process might be stuck in local optima. Here, we propose a uchronic-time strategy to explore alternate swarm histories, whereby one particle "was" different at the beginning of the optimization. Specifically, whenever the global best particle does not improve for a number of iterations, it is terminated and re-initialized, while the rest of the swarm is sent back to the initial configuration. This approach, called "time travel" FST-PSO (FTT-PSO), works under the assumption that the convergence of any particle towards an optimum is strongly related to the influence of the global best particle. We compare the performance of FTT-PSO against FST-PSO on the benchmark suites used in IEEE CEC and GECCO competitions. Our results show that time traveling allows for outperforming both the standard and multistart versions of FST-PSO.
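
The time-travel mechanism can be pictured as a stagnation counter wrapped around the optimizer. A simplified schematic (the full method re-initializes the stagnant global best and rewinds the rest of the swarm; here the whole swarm is rewound for brevity, and all names are illustrative, not the FST-PSO internals):

```python
import random

def time_travel_optimize(step, initial_swarm, patience=20, iterations=200):
    """Schematic of the FTT-PSO idea: when the global best has not improved
    for `patience` iterations, send the swarm back to its t = 0 state.
    `step(swarm)` must return (new_swarm, best_fitness_this_iteration)."""
    start = [list(p) for p in initial_swarm]    # remember t = 0
    swarm = [list(p) for p in initial_swarm]
    best, stall = float("inf"), 0
    for _ in range(iterations):
        swarm, fit = step(swarm)
        if fit < best:
            best, stall = fit, 0
        else:
            stall += 1
        if stall >= patience:                   # "time travel": rewind history
            swarm = [list(p) for p in start]
            stall = 0
    return best

# Toy stand-in for the optimizer step: jitter particles, score on a sphere.
rng = random.Random(1)

def toy_step(swarm):
    swarm = [[x + rng.uniform(-0.1, 0.1) for x in p] for p in swarm]
    return swarm, min(sum(x * x for x in p) for p in swarm)

result = time_travel_optimize(toy_step, [[2.0, 2.0], [-2.0, 1.0]])
print(result)
```

Because the random draws differ after each rewind, every restart explores an alternate history of the same initial swarm, which is exactly the uchronic intuition behind the method.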

11:20-12:20 Session 8D: EvoApplications: Misc applications (ii)
Location: Room D
11:20
Scalable Evolution of Logically Independent Polycomputational Materials

ABSTRACT. Substrates that compute using vibration rather than electricity offer the potential of creating and deploying computers into electronics-denying environments. Another advantage of these materials is that some of them can compute multiple functions in the same place and at the same time, providing computational results at different frequencies. These so-called polycomputational materials may eventually compete with more traditional computers in terms of computational density because there is no currently known upper bound on how many functions can be simultaneously computed by a vibrational substrate. However, three challenges remain for polycomputational materials: how to ensure that the different functions are computed independently; developing evolutionary algorithms that allow for embedding increasingly more functions into these materials in silico; and validating the evolved in silico materials as physical materials. Here we report progress on all three of these issues.

11:45
Search Trajectory Networks Applied to a Real-world Parallel Batch Scheduling Problem

ABSTRACT. We investigate solution methods for the Oven Scheduling Problem (OSP), a parallel batch scheduling optimization problem in semiconductor manufacturing, using Search Trajectory Networks (STNs). STNs are a recently introduced tool to analyze and compare the behavior of metaheuristic algorithms concerning their exploration ability w.r.t. single problem instances. We consider two state-of-the-art algorithms for the OSP, a Simulated Annealing (SA) and a Large Neighborhood Search (LNS), and instances from the literature. The STNs enable us to draw the following conclusions: (i) The two algorithms’ trajectories overlap especially at the beginning of the trajectories, as revealed by a search space partitioning based on Hierarchical Agglomerative Clustering; (ii) The fitness landscape of many instances is multi-modal, with several high-quality solutions scattered in the search space; (iii) SA trajectories are longer, but the number of locations visited by SA and LNS is similar.

14:20-16:00 Session 10A: EvoApplications Best Paper nominations
Location: Room A
14:20
Genetic Programming with Co-operative Co-evolution for Feature Manipulation in Basal Cell Carcinoma Identification

ABSTRACT. As global mortality rates rise alongside an increasing incidence of skin cancer, particularly Basal Cell Carcinoma (BCC), the development of effective automated detection strategies gains urgency. Traditional diagnosis of BCC relies heavily on manual inspection of skin lesions, an approach limited by subjectivity, time constraints, and the invasive nature of biopsy procedures. To address these challenges, this study introduces the two-stage cooperative co-evolution (2SCC)-Criptor, a novel two-stage feature manipulation method that employs genetic programming and co-operative co-evolution for automated BCC identification. The first stage generates an ensemble of models that collaboratively extract discriminative features from decomposed colour channels of skin lesion images, while the second stage constructs additional features to enhance classification performance. The efficacy of the method is evaluated using standard machine learning classifiers, demonstrating statistically significant superiority over traditional and contemporary approaches in the field. Further analysis reveals that the model's success stems from the synergistic integration of colour channels rather than individual channel contributions, with remarkably uniform utilisation of LAB colour space components. The findings underscore 2SCC-Criptor's potential in enhancing BCC diagnostic processes while maintaining interpretability through evolved genetic programming individuals.

14:45
Emergent kin selection of altruistic feeding behaviour via non-episodic neuroevolution

ABSTRACT. Kin selection theory has proven to be a popular and widely accepted account of how altruistic behaviour can evolve under natural selection. Hamilton's rule, first published in 1964, has since been experimentally validated across a range of different species and social behaviours. In contrast to this large body of work in natural populations, however, there has been relatively little study of kin selection in silico. In the current work, we offer what is to our knowledge the first demonstration of kin selection emerging naturally within a population of agents undergoing continuous neuroevolution. Specifically, we find that zero-sum transfer of resources from parents to their infant offspring evolves through kin selection in environments where it is hard for offspring to survive alone. In an additional experiment, we show that kin selection in our simulations relies on a combination of kin recognition and population viscosity. We believe that our work may contribute to the understanding of kin selection in minimal evolutionary systems, without explicit notions of genes and fitness maximisation.
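As background for the abstract above: Hamilton's rule (a standard result from kin selection theory, not a contribution of this paper) predicts that an altruistic trait is favoured by selection when

```latex
r\,b > c,
```

where $r$ is the genetic relatedness between actor and recipient, $b$ is the fitness benefit conferred on the recipient, and $c$ is the fitness cost to the actor.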

15:10
GPBus: Genetic Programming based Automated Machine Learning for Bus Delay Prediction

ABSTRACT. Urban bus services are a vital component of modern cities’ mobility. However, challenges such as traffic congestion, unpredictable weather, and operational incidents often cause delays, negatively impacting service quality and passenger experience. This study introduces GPBus, a genetic programming-based automated machine learning technique designed to predict bus delays. GPBus automates the selection and composition of predictive models while optimizing their hyperparameters. Additionally, the approach integrates transfer learning to reduce computational costs by reusing pretrained models. The proposed model was evaluated using real-world data from the public bus system in Málaga, Spain, spanning several months of operation. Experimental results demonstrate that GPBus significantly outperforms traditional machine learning methods, including Random Forest, Multi-Layer Perceptron, and Support Vector Machines, in terms of prediction accuracy. These results underscore the potential of AutoML approaches to enhance public transport systems by providing accurate and reliable delay predictions, ultimately improving user satisfaction and promoting sustainable urban mobility.

14:20-16:00 Session 10B: EvoMUSART: Music and Sound (ii)
Location: Room B
14:20
Foundations of LLCM: Labelled Lambek Calculus for Music Analysis

ABSTRACT. This paper presents an application of Lambek Calculus, a sequent calculus for categorial grammar, to music analysis. To this end, we propose a Labelled Lambek Calculus for Music (LLCM), where a label represents tonality information. In LLCM, each adjacent category represents a chord interpretation. When combined, they form a cadential category. We have enhanced the system with a rigorous and internally consistent framework that clarifies long-distance dependencies and provides a more explicit representation of relationships across different tonalities, including tonal shifts. A key innovation in LLCM is the introduction of a method for calculating the “depth” of a harmonic analysis. This measure corresponds to the complexity of chord progressions, enabling analysts to objectively compare different harmonic sequences based on their structural intricacy.

14:45
Combining local search and directed mutation in evolutionary approaches to 4-part harmony

ABSTRACT. Artificial Intelligence assisted music composition has gained popularity during the last decade, but still faces problems and difficulties. This paper approaches the 4-part harmonization problem in the context of evolutionary computation, a topic that has been discussed for more than forty years but still represents an open challenge. This research proposes local search to improve directed mutation and guide the algorithm towards qualitatively better solutions. Human learning and Evolutionary Machine Teaching are also used: data from music conservatory students are exploited to guide the algorithm. The results show that local search significantly improves the quality of the mutation performed. Moreover, a series of longer runs are able to find error-free scores.

15:10
Cellular Au-Tonnetz: A Unified Audio-Visual MIDI Generator Using Tonnetz, Cellular Automata, and IoT

ABSTRACT. This paper presents a detailed description of an innovative tool for music creation that merges sound and light through a unified system. By leveraging the Tonnetz approach, cellular automata, and embedded electronics, the project offers users an intuitive platform to explore musical harmony and synesthetic experiences. Key features include real-time synchronization of sound and light, the implementation of scales and chord progressions, and the integration of web-based control for multi-unit management. This tool aims to democratize music creation, making complex musical concepts accessible through an interactive, multi-sensory interface.

15:35
The Importance of Context in Image Generation: A Case Study for Video Game Sprites

ABSTRACT. In recent years text-to-image generative models have found a wide range of applications, including the generation of video game assets such as sprites. However, sprites are often reused throughout a game, leading to repetitive visuals. Creating custom sprites tailored to different in-game environments can be a resource-intensive task when done manually. In this paper, we address the challenge of generating visually appealing sprites that not only align with the overall theme of the game but also adapt to the varying environments within it. We investigate how modifying contextual details of these environments, provided as prompts to a text-to-image model, influences the resulting sprites. Our approach is demonstrated through sprite generation for the video game design tool LLMaker. Experiments using objective performance metrics confirm the effectiveness of our method, while a user study shows that the generated sprites resonate with players.

14:20-16:00 Session 10C: EuroGP (ii)
Location: Room C
14:20
Exploring the Integration of Cellular Structures in Genetic Programming-based Methods

ABSTRACT. Introducing a Cellular Automata (CA)-like structure on the population of Evolutionary Algorithms (EAs) has been shown to improve solution quality. However, the study of CA-like structures for Genetic Programming (GP) has so far been limited. In this work, we focus on the effect of introducing these structures into Geometric Semantic variants of GP, focusing on the well-known Geometric Semantic GP (GSGP) and its recently introduced variant SLIM-GSGP, which emphasizes producing smaller and more interpretable individuals. We provide guidance on how CA-like structures can impact the quality and size of the solutions for GSGP and SLIM-GSGP, giving a clear understanding of the trade-offs involved in applying these methods.

14:45
Ghost Swarms: Learning Swarm Rules from Environmental Changes Alone

ABSTRACT. Swarm behaviours result from agents interacting locally with their environment, often encoded through simple rules. While directly observing individual agents can be challenging, their collective behaviour often leaves detectable environmental traces that offer insights into the underlying swarm dynamics. However, this task is complex due to the hidden and interconnected relationships between the rules governing agent interactions, the emergent swarm behaviour, and the environmental changes this behaviour generates. In this work, we propose a method for extracting human-readable controllers from demonstrations showing only observable alterations in the environment caused by the swarm. This approach explores whether these environmental changes can reveal the swarm's actions, even when the individual agents are difficult to track. Our approach eliminates the need for prior knowledge about the controller or its structure, enabling the successful learning of controllers from a single demonstration. By focusing on the environmental impact of swarm behaviour, we provide a novel framework for understanding and managing both natural and engineered swarms, even in situations where direct observation of the swarm's actions is not feasible.

15:10
On the Effectiveness of Crossover Operators in Cartesian Genetic Programming

ABSTRACT. The crossover operator is often reported to hinder the search capabilities of Cartesian Genetic Programming (CGP) when compared to mutation-only strategies. This study investigates the effectiveness of crossover in CGP by analyzing numerous indicators of evolutionary dynamics under different crossover operators and the canonical mutation-only (1+4) strategy. Specifically, we divide our analysis between traditional operators based on the random selection of parental genes; Subgraph Crossover, where points in the range of active nodes are considered; and the recently proposed Deep Neural Crossover (DNC) approach, which utilizes a transformer network to learn correlations between genes and predict potentially beneficial crossover points. The performance of the different crossovers is evaluated on 11 standard and one real-world regression problem. The results indicate that DNC is not a suitable crossover operator for CGP, since it carries over the traditional assumption that nodes close together are functionally related and should therefore be exchanged between parents together. Methods that account for this, such as Subgraph Crossover, are much more effective in comparison.

15:35
Ant-based Metaheuristics Struggle to Solve the Cartesian Genetic Programming Learning Task

ABSTRACT. Ant-based metaheuristics have successfully been applied to a variety of graph-based problems. However, for Cartesian Genetic Programming (CGP), only the impact of Max-Min Ant Systems has been tested. In this work, we try to fill this gap by applying four popular ant-based metaheuristics as the optimizer (and therefore training algorithm) of CGP. The idea of combining CGP with ant-based metaheuristics is not novel, but older works’ experimental designs may not meet today’s standards. To compare these metaheuristics to the Evolution Strategies (ESs) commonly used in CGP, we benchmark against a standard CGP variant that uses a simplistic (1 + 4)-ES, mutation, and no crossover. Additionally, we include (μ + λ)-ES and (μ, λ)-ES in our experiments. We analyse performance on datasets from the symbolic regression, regression, and classification domains. After tuning and evaluating various configurations, we cannot affirm a significant improvement from using ant-based methods with CGP, as we encounter premature convergence—even with those ant-based metaheuristics that were originally proposed to overcome such problems. Although our results are negative, this work still gives important and interesting insights into the training of CGP models. The key contribution of our work is thus a more thorough benchmarking of these optimizers than has been done before, which should clear up doubts about the capabilities of ant-based metaheuristics in CGP. Furthermore, we include a roadmap on how these issues can be addressed to solve this complex optimization problem from the model-building domain of machine learning.

14:20-16:00 Session 10D: EvoCOP: Seeds of Inspiration
Location: Room D
14:20
Price-and-branch Heuristic for Vector Bin Packing
PRESENTER: Ze Wang

ABSTRACT. Vector bin packing is an NP-hard problem in which a set of item vectors must be packed into a minimum number of bins such that, in each bin, the sum of the vectors does not exceed the bin’s vector capacity. Vector bin packing has many applications such as scheduling virtual machines in cloud computing. In this paper, we introduce the Pattern Heuristic, a price-and-branch heuristic which applies column generation. It creates a large pool of valid packings called patterns by running fast heuristics. Then it optimally solves the integer linear program of finding the minimum set of patterns that covers the complete set of items. Despite its complexity, it finds optimal or near-optimal solutions for benchmark instances with 200 items in a matter of seconds. The pattern pool is created by a Local Search Heuristic which, while being very fast, achieves surprisingly good results. As a stand-alone heuristic, it is suitable for scenarios with real-time demands. The new heuristics are compared with algorithms from the literature in experiments using established benchmarks. They demonstrate a good trade-off between running time and packing quality. The Pattern Heuristic computes new optimal solutions for a 20-dimensional benchmark.
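As a rough illustration of the kind of fast heuristic that could populate such a pattern pool, here is a generic first-fit-decreasing sketch for vector bin packing (this is not the paper's Local Search Heuristic; all names and the example data are illustrative):

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing for vector bin packing: an item fits in a bin only
    if every dimension of the bin's load stays within the capacity vector.
    Returns a list of bins, each a list of item indices (a candidate 'pattern')."""
    order = sorted(range(len(items)), key=lambda i: sum(items[i]), reverse=True)
    bins, loads = [], []
    for i in order:
        for b, load in enumerate(loads):
            if all(l + v <= c for l, v, c in zip(load, items[i], capacity)):
                bins[b].append(i)
                loads[b] = [l + v for l, v in zip(load, items[i])]
                break
        else:  # no existing bin fits: open a new one
            bins.append([i])
            loads.append(list(items[i]))
    return bins

# four 2-dimensional items packed into bins of capacity (10, 10)
items = [(6, 2), (5, 5), (4, 7), (3, 3)]
packing = first_fit_decreasing(items, capacity=(10, 10))  # → [[2, 0], [1, 3]]
```

A price-and-branch approach in the spirit described above would collect many such packings from varied heuristics and orderings, then solve a set-covering integer program over the resulting pattern pool.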

14:45
Visualizing Pseudo-Boolean Functions: Feature Selection and Regularization for Machine Learning

ABSTRACT. The concept of evolutionary fitness landscapes, along with their visualizations, was introduced in biology by Sewall Wright in the 1930s. The study of fitness landscapes is also important in artificial intelligence, evolutionary computation, and machine learning. In this paper, we discuss the difficulty of visualizing pseudo-Boolean fitness landscapes in the context of feature selection for machine learning. Visualization techniques for fitness landscapes, specifically hinged bitstring maps and local optima networks, are used to derive information from the landscapes. Specifically, we consider the problem of feature selection for machine learning with random forests on several real-world datasets. Using these techniques, we carefully evaluate the influence of regularization on the multimodal structure of the feature selection fitness landscapes. This work improves the understanding of feature selection for regularized random forest machine learning and promises to lead to improved feature selection methods.

15:10
Mixed-Binary Problems Optimized with Fast Discrete Solver

ABSTRACT. Tackling optimization in mixed domains (continuous and discrete decision variables) has recently gained attention, causing the development of various extensions of continuous optimization algorithms. In order to more accurately address the combinatorial nature of the discrete aspects of the search space, we go a different way and combine two algorithms, one for each aspect of the search space. Focusing, in the discrete part, on binary search spaces, we use Population-Based Incremental Learning (PBIL) for discrete optimization. We combine this algorithm with the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) to address the continuous part.

We compare our CMA-ES-PBIL with two leading variants of CMA-ES from the literature: CMA-ES with Margin (CMA-ESwM) and CMA-ES with Probability Distribution Model (CMA-ES-PDM). We conduct a run time analysis on several separable and one non-separable benchmark function and show that our hybrid algorithm significantly outperforms both CMA-ESwM and CMA-ES-PDM. Our results also show that the binary part is optimized much earlier than the continuous part, raising the question: is run time evaluation the right measure for analyzing mixed-binary algorithms?
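To make the discrete half of such a hybrid concrete: PBIL maintains a probability vector over the binary variables and shifts it toward the best sampled solution each generation. The following is a minimal generic sketch with a toy OneMax objective and illustrative parameter values, not the paper's CMA-ES-PBIL implementation:

```python
import random

def pbil_step(prob, population, fitness, lr=0.1):
    """One PBIL update: move each probability toward the best sample's bit."""
    best = max(population, key=fitness)
    return [p + lr * (b - p) for p, b in zip(prob, best)]

def sample(prob, rng):
    """Draw one bitstring from the probability vector."""
    return [1 if rng.random() < p else 0 for p in prob]

rng = random.Random(0)
prob = [0.5] * 8                 # one probability per binary decision variable
onemax = lambda bits: sum(bits)  # toy objective on the binary part
for _ in range(200):
    pop = [sample(prob, rng) for _ in range(20)]
    prob = pbil_step(prob, pop, onemax)
# after enough generations the vector concentrates near 1.0 on this problem
```

In a hybrid of the kind described, the continuous variables would be sampled by CMA-ES in parallel and both parts evaluated jointly.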

15:35
Studies on Survival Strategies to Protect Expert Knowledge in Evolutionary Algorithms for Interactive Role Mining

ABSTRACT. To maintain the integrity of information technology infrastructures within enterprises and organizations, it is imperative to implement robust and dependable access control mechanisms. A prevalent method is Role-Based Access Control (RBAC), wherein permissions are grouped into roles, which are subsequently assigned to the users of an IT system. Such assignments are referred to as role concepts. The objective of the NP-complete Role Mining Problem (RMP) is to find a role concept with a minimal number of roles. For this purpose, evolutionary algorithms have been applied as effective meta-heuristic techniques to approximate optimal solutions. Recent studies have demonstrated that integrating expert knowledge through user interaction with running evolutionary algorithms can substantially enhance the optimization process. In this paper, expert knowledge is integrated into an evolutionary role mining algorithm by injecting favorable roles into the individuals representing role concepts. The impact of such role injections on the optimization progress is investigated, and several survival strategies are presented to ensure that individuals with injected roles survive long enough to exert their beneficial effect on the optimization process. The proposed survival strategies are evaluated in a series of experiments.

16:20-18:00 Session 11A: EvoCOP Best Paper nominations
Location: Room A
16:20
Diversification through Candidate Sampling for a Non-Iterated Lin-Kernighan-Helsgaun Algorithm

ABSTRACT. The Traveling Salesman Problem (TSP) is a famous NP-hard problem that has been extensively studied. The heuristic first introduced by Lin and Kernighan is considered one of the most effective methods for solving the TSP and has undergone numerous modifications and extensions. This study examines the fundamental version of Helsgaun’s adjustments (LKH-1) due to the significant improvements it has brought to the heuristic.

A key adjustment is the assignment of a candidate set to each city using the α-measure, recognized for its efficacy in approximating the lower bound of the tour cost derived from the minimum 1-tree. To enhance diversification and accelerate the process, Helsgaun applied two main techniques across different trials: a judicious initial tour and dynamic candidate set reordering.

In this paper, we propose a candidate selection strategy designed to replace these techniques. Our approach employs a randomized sampling selection strategy integrated directly into the k-opt search in the sense of a Partial Neighborhood Local Strategy (PNLS). Compared with LKH-1.3, which is limited to the 2-opt operator, our approach shows improved results for large instances and comparable results for others.

The goal of this work is to maintain a high level of solution quality while simplifying the overall algorithm.

16:45
Evolutionary Anytime Algorithms

ABSTRACT. Evolutionary algorithms can be considered to be anytime algorithms, as they can produce a viable solution at any time and, moreover, the quality of the solution improves over time. However, it may be hoped that more information can be gained when a solution is presented, even if the algorithm is stopped before the optimum has been found. For example, one might want to know which of the bit values in the current best solution are necessary for a good result, and which are still uncertain. We propose heuristics for efficiently gaining such information about bit values, in the context of the $(1+1)$EA. Along the way, we prove two useful general results. The first allows us to show that the runtime for $m$ parallel copies of a process to complete is $O(\tau \log m)$, where $\tau$ is the expected runtime of a single process. The second shows that the $(1+1)$EA, with any mutation scheme, has probability at most $1/2$ of having an incorrect bit at any time step, when optimising a weakly monotonic function. Using these results, we prove bounds for the time it takes for our proposed heuristics to correctly identify bit values when the $(1+1)$EA runs on monotonic and hidden subset problems.
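For readers unfamiliar with the algorithm under analysis, a minimal $(1+1)$EA with standard bit mutation looks like the following generic sketch (the paper's heuristics for identifying certain bit values are not reproduced here; names and parameters are illustrative):

```python
import random

def one_plus_one_ea(fitness, n, steps, seed=0):
    """(1+1) EA: flip each bit independently with rate 1/n and keep the
    offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
        if fitness(y) >= fitness(x):
            x = y
    return x

# on OneMax (fitness = number of ones) the optimum is the all-ones string
best = one_plus_one_ea(sum, n=20, steps=3000)
```

On monotonic functions such as OneMax, one can watch how individual bit values stabilise over time, which is the kind of information the proposed heuristics aim to extract.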

17:10
LON/D — Sub-problem Landscapes in Multi-objective Decomposition

ABSTRACT. We explore the underlying difficulties of sub-problems arising from decomposition in multi-objective optimization. Decomposition algorithms, such as MOEA/D, split the original multi-objective problem into a set of single-objective sub-problems using a scalarizing function. A weighting coefficient vector defines each sub-problem. We examine the relative difficulty of these sub-problems based on their weight vector and the chosen scalar function—either weighted sum or weighted Tchebycheff. Our approach involves creating a landscape for each sub-problem and analyzing its local optima network (LON). We contribute by visualizing the LONs of sub-problems, defining LON features for decomposition, and examining their interaction with problem properties and their impact on algorithm performance. An extensive experimental analysis of bi-objective NK-landscapes reveals that landscape properties depend not only on the weight vector and scalar function but also on the objectives' intrinsic difficulty and their degree of conflict. These factors directly affect the relative performance of MOEA/D for each sub-problem. Among the landscape features explored, the size of each sub-problem's global optimum basin of attraction showed the strongest impact on the performance of decomposition-based multi-objective optimization.

17:35
A Runtime Analysis of the Multi-Valued Compact Genetic Algorithm on Generalized LeadingOnes

ABSTRACT. In the literature on runtime analyses of estimation-of-distribution algorithms (EDAs), researchers have recently explored univariate EDAs for multi-valued decision variables. Particularly, Jedidia et al. gave the first runtime analysis of the multi-valued UMDA on the r-valued LeadingOnes (r-LeadingOnes) functions and Adak et al. gave the first runtime analysis of the multi-valued cGA (r-cGA) on the r-valued OneMax function. We utilize their framework to conduct an analysis of the multi-valued cGA on the r-valued LeadingOnes function. Even for the binary case, a runtime analysis of the classical cGA on LeadingOnes was not yet available. In this work, we show that the runtime of the r-cGA on r-LeadingOnes is O(n^2 r^2 log^2 n log^2 r(log n+log r)) with high probability.

16:20-18:00 Session 11B: EvoApplications: Digital Healthcare and Personalised Medicine + Analysis of EC Methods
Location: Room B
16:20
Addressing Radiotherapy Scheduling with a Bin Packing Problem Formulation: A Comparative Study of Exact Solvers and Genetic Algorithms

ABSTRACT. Radiotherapy is one of the most widely used treatments in cancer care. One of the main problems in radiotherapy is finding an optimal schedule for the radiation delivered to each patient. This is a complex problem with a major impact on patient outcomes and the use of healthcare resources. This paper tackles the radiotherapy scheduling problem by (i) proposing a modified 1D Bin Packing Problem formulation, (ii) solving it with an Integer Linear Programming model using two different solvers (SCIP and a SAT solver), and with a Constraint Programming model using the CP-SAT solver, (iii) solving the same problem using two Grouping Genetic Algorithms (GGAs) with custom crossover and mutation methods, and finally (iv) comparing all these approaches. Results for all methods are presented, highlighting how the GGAs compare to the exact solvers in terms of execution time and solution quality.

16:45
Grammatical Feature Construction for Enhanced Interpretability in Breast Cancer Classification

ABSTRACT. Recent advances in Artificial Intelligence have yielded significant progress in developing medical and clinical diagnosis techniques. Machine learning algorithms are among the most promising methods for detection and classification problems. Despite their inherent robustness, the primary challenge in employing these approaches lies in their opaque behaviour, a critical factor in medical diagnosis. Establishing trust between clinicians and patients requires an explainable model. This paper presents a two-stage approach to improving explainability: the first stage, Grammatical Feature Construction (GFC), uses Grammatical Evolution (GE) to perform feature construction. These features are interpretable, as they are generated from the original features by applying simple arithmetic operations to the original data, and they are independent of the model/algorithm subsequently used for classification. While any classification algorithm could be used, we focus here on Linear Discriminant Analysis (LDA) to create GFC/LDA, which provides greater explainability than using LDA alone while maintaining comparable performance. To evaluate the effectiveness of GFC/LDA, we conducted a comprehensive comparative analysis against methods including GE as a classifier and LDA using all original features, on two Breast Cancer datasets: the Digital Database for Screening Mammography and the Wisconsin Breast Cancer dataset. The results demonstrate that GFC/LDA yields results comparable with the other methods while producing more interpretable models.

17:10
Multi-Tree Genetic Programming for Dynamic Tugboat Scheduling

ABSTRACT. The tugboat scheduling task aims to efficiently allocate tugboat resources to assist ships entering and leaving port in maritime transportation. Many existing scheduling methods focus on finding complete scheduling solutions. However, they are not suited to the large-scale, dynamic character of maritime transportation systems and cannot be directly applied to tugboat scheduling tasks. This paper designs a dynamic tugboat scheduling problem (DTug-sp) model based on scheduling rules, including two core tasks: allocating ships to tugboats (i.e., the allocation rule) and determining the execution order of ships assigned to a particular tugboat (i.e., the order rule). To solve this problem, we propose a multi-tree genetic programming method (MTGP-DTsp), which uses a dual-tree encoding strategy to represent the ship allocation rule and the tugboat execution order rule, respectively. Additionally, a new crossover operator is introduced in the genetic operations to fully utilize the global information in both rules, enhancing the effectiveness of the generated solutions. Experimental results demonstrate that MTGP-DTsp can effectively evolve scheduling rules suitable for DTug-sp, minimizing the tugboat assisting time and the weighted average tugboat assisting time, each considered separately.

17:35
Estimation of Total Body Fat Using Symbolic Regression and Evolutionary Algorithms

ABSTRACT. Body fat percentage is an increasingly popular alternative to Body Mass Index for measuring overweight and obesity, offering a more accurate representation of body composition. In this work, we evaluate three evolutionary computation techniques, Grammatical Evolution, Context-Free Grammar Genetic Programming, and Dynamic Structured Grammatical Evolution, to derive an interpretable mathematical expression for estimating the percentage of body fat. Our primary objective is to obtain a model that balances accuracy with explainability, making it useful for clinical and health applications. We compare the performance of the three variants on a public anthropometric dataset and contrast the results with those obtained with the QLattice framework. Experimental results show that grammatical evolutionary techniques can obtain competitive results in both performance and interpretability.

16:20-18:00 Session 11C: EvoApplications: Advances in EC
Location: Room C
16:20
The More the Merrier: On Evolving Five-valued Spectra Boolean Functions

ABSTRACT. Evolving Boolean functions with specific properties is an interesting optimization problem since, depending on the combination of properties and Boolean function size, the problem can range from very simple to (almost) impossible to solve. Moreover, some problems are more interesting as there may be only a few options for generating the required Boolean functions. This paper investigates one such problem: evolving five-valued spectra Boolean functions, which are the functions whose Walsh-Hadamard coefficients can only take five distinct values. We experimented with three solution encodings, two fitness functions, and 12 Boolean function sizes and showed that the tree encoding is superior to other choices, as we can obtain five-valued Boolean functions with high nonlinearity.
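The Walsh-Hadamard spectrum that defines this problem class can be computed with the standard fast transform; a minimal sketch (illustrative, not the authors' evaluation code):

```python
def walsh_hadamard_spectrum(truth_table):
    """Walsh-Hadamard coefficients of a Boolean function given as a 0/1
    truth table of length 2^n, via the fast butterfly transform."""
    f = [1 - 2 * b for b in truth_table]  # polarity form: 0 -> +1, 1 -> -1
    n, h = len(f), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

# two-variable AND: every coefficient has absolute value 2
spectrum = walsh_hadamard_spectrum([0, 0, 0, 1])  # → [2, 2, 2, -2]
values = sorted(set(spectrum))                    # distinct spectral values
```

A fitness function for the kind of evolution described above could, for instance, penalize candidate functions whose set of distinct spectral values exceeds five.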

16:45
Multi-Objective Evolutionary Optimization of Virtualized Fast Feedforward Networks

ABSTRACT. Many embedded applications have challenging energy, memory, and time constraints. Recently, a novel neural network architecture called Fast Feedforward Networks (FFFs) has been proposed to address extremely strict time-constrained applications. Yet, the memory footprint of such networks remains a challenge. In this paper, we attempt to overcome this challenge by using a weight-sharing technique called weight virtualization, proposing different virtualization methods that take advantage of the peculiarities of the FFFs' tree-based architecture. We further optimize the model's size (resulting from the virtualization configuration) and performance via multi-objective evolutionary optimization based on NSGA-II. Our experiments show that, in different benchmarks, leaf virtualization can reduce the memory footprint by up to 13x with negligible accuracy loss. (The codebase will be made publicly available after the review process.)

17:10
Real Application Challenges in Evolutionary Optimization? People!

ABSTRACT. The application of evolutionary optimization methods to real-world problems is often far less straightforward than expected. One of the main challenges is the people involved in real business situations, who decide whether optimization projects are successful or not. In this work we present a few insights from 20+ years of applying Evolutionary Algorithms (mostly multi- and many-objective) with in-house customers, and point to some key psychological insights that can explain several of the major issues that we encountered in our work but are also reported repeatedly in application sessions at major conferences. We argue that more convincing sales messages are needed; and explain why trust, not better objective values, might be the ultimate goal of any optimization process. We can't provide numbers or new algorithms, but maybe a different perspective on some of the most persistent practical challenges.

17:35
Adjacent Distance Matrix-based Competitive Swarm Optimizer

ABSTRACT. This paper improves the global exploration capability of the Competitive Swarm Optimizer (CSO) by introducing an adjacent distance matrix selection strategy, yielding the Adjacent distance Matrix-based CSO (AMCSO). The traditional CSO updates offspring individuals by randomly selecting two particles for competition, which ignores the differences and similarities between particles. To fully utilize knowledge of the fitness landscape and the particle swarm to guide the selection mechanism in competition, our proposed AMCSO adopts an adjacent distance matrix based on the Euclidean distance between every pair of particles, which allows particles that are closer together to be selected for competition and further ensures population diversity. Comprehensive numerical experiments were conducted on the 10-D and 20-D CEC2022 benchmark functions to compare AMCSO against the original CSO and four well-known metaheuristic algorithms (MAs); the experimental results and statistical analysis confirm the effectiveness of integrating the adjacent distance matrix and demonstrate a statistically significant improvement for AMCSO.
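The pairwise distance matrix at the heart of this selection strategy is straightforward to form; a minimal sketch follows (illustrative names only; the competition and velocity-update details of AMCSO are not shown):

```python
import math

def adjacent_distance_matrix(swarm):
    """Symmetric matrix of Euclidean distances between all particle positions."""
    n = len(swarm)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = math.dist(swarm[i], swarm[j])
    return d

def nearest_opponent(d, i):
    """Index of the particle closest to particle i, e.g. a competition partner."""
    return min((j for j in range(len(d)) if j != i), key=lambda j: d[i][j])

swarm = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
d = adjacent_distance_matrix(swarm)  # d[0][1] == 1.0
```

Selecting competitors by proximity in this way is what lets such a strategy pair similar particles while leaving distant ones to preserve diversity.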

16:20-18:00 Session 11D: EvoApplications: 30 years of PSO (ii)
Location: Room D
16:20
An Investigation of Structural Bias in Particle Swarm Optimization

ABSTRACT. Many complex optimization problems, for which standard methods do not provide good-enough solutions, require the utilization of efficient metaheuristics. However, it has been found that several metaheuristics suffer from different types of structural bias that may undermine their performance. In this paper, we investigate one of the most used evolutionary computation methods, the Particle Swarm Optimization algorithm. We utilize a recently developed modular framework to construct 619,500 parameterizations and investigate the effect of the different parameter choices on the identified types of structural bias. We also analyze and discuss the impact of the different types of structural bias found in these parameterizations on their performance on functions from a standard test set.

16:45
Proposal of Efficient Particle Swarm Optimization for Constrained Optimization Problems

ABSTRACT. Engineering design problems often involve various requirements and complexities, which are treated as constraints in optimization problems. Constrained optimization problems (COPs) are widely studied, and particle swarm optimization (PSO) has become a popular method for solving them due to its simplicity and effective convergence. However, PSOs still face limitations, particularly in terms of search efficiency and high computation costs when optimal solutions lie in complex feasible regions or near constraint boundaries. This paper presents a novel particle swarm optimization algorithm named independent 2-group particle swarm optimization (I2GPSO). I2GPSO is based on three ideas: a constraint handling method, a novel structure of particles, and a novel local search operator. The constraint handling method uses the existing penalty function method. The particle structure defines two particle groups with distinct roles, enabling PSO to search globally and locally in an efficient manner. The local search operator is introduced into one group and enables particles to search efficiently near constraint boundaries or around candidate optimal solutions. These novel approaches effectively reinforce the optimization efficiency of the PSO algorithm. The optimization capability and characteristics of I2GPSO are illustrated on engineering design problems. The results are compared with other state-of-the-art PSOs, and it is shown that the proposed algorithm possesses competitive search efficiency.
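The penalty function method that I2GPSO adopts for constraint handling can be sketched in a few lines: infeasibility is folded into the objective so that an unconstrained optimizer (here, any PSO) can be applied directly. The static quadratic penalty and the weight `rho` below are generic textbook choices, not the paper's specific formulation.

```python
def penalized(f, constraints, rho=1e3):
    """Wrap objective f with a static quadratic penalty.
    Each g in `constraints` is satisfied when g(x) <= 0;
    violations are charged rho * violation^2."""
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * violation
    return fp

# usage: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x[0] ** 2
g = lambda x: 1.0 - x[0]
fp = penalized(f, [g], rho=100.0)
```

Feasible points are scored by `f` alone, so the penalized landscape agrees with the original one inside the feasible region and rises steeply outside it.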

17:10
Geometric Particle Swarm Optimization in Program Trace Optimization

ABSTRACT. On the 30th anniversary of Particle Swarm Optimization (PSO), we present a novel integration of Geometric PSO (GPSO) into the Program Trace Optimization (PTO) framework. GPSO extends PSO to diverse representations in a principled manner, while PTO provides automatic representation design for arbitrary problem structures. By specializing GPSO to PTO’s trace representation, we achieve a universal PSO variant applicable out-of-the-box to any representation and problem. We detail the theoretical foundations of this integration and evaluate our approach on diverse optimization problems across multiple representations. Our work demonstrates the power of combining geometric generalizations with automatic representation design, here specifically obtaining a truly general form of PSO that applies to any representation while retaining its original essence.
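In GPSO, the particle move is a geometric (mask-based) crossover of the current position, personal best, and global best, which generalizes the PSO update to any metric space. For binary strings under Hamming distance this reduces to inheriting each bit from one of the three parents according to weights; the sketch below is that special case, with illustrative names and weights (the specialization to PTO traces in the paper is more general).

```python
import random

def gpso_move(x, pbest, gbest, weights=(0.3, 0.4, 0.3), rng=random.Random(0)):
    """Three-parent mask-based crossover in Hamming space: each position
    is inherited from the current solution, the personal best, or the
    global best, chosen independently with the given weights. The
    offspring lies in the metric triangle spanned by the parents."""
    parents = (x, pbest, gbest)
    return [parents[rng.choices((0, 1, 2), weights=weights)[0]][i]
            for i in range(len(x))]
```

A useful sanity property of geometric crossover: wherever all parents agree, the offspring must agree too, so shared schemata are preserved.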

17:35
Analyzing the Effects of Memetic Variations on Convergence in Overlapping Swarm Intelligence

ABSTRACT. Evolutionary algorithms often struggle when faced with a high degree of interdependence between variables (epistasis), non-separability, discontinuity, high dimensionality, or sparsity. These complexities can lead to hitchhiking, where poor parameters are associated with good schemata, and ``two steps forward and one step back,'' where near-optimal parameters are lost in favor of lower-quality parameters that immediately improve fitness; both phenomena contribute to premature convergence. Overlapping Swarm Intelligence (OSI) has been introduced as a cooperative coevolutionary algorithm that utilizes overlap of variables between subswarms to encourage sharing and competition among variables across the subswarms. OSI has shown success in handling epistasis; however, it can still suffer from premature convergence when subswarms get stuck in ``pseudo-optima,'' i.e., when a subset of variables is at a minimum with respect to the reduced search space but not the entire space. We investigate convergence of memetic variations of OSI, CPSO (OSI with no overlap), and Particle Swarm Optimization using block coordinate descent, applied to the CEC2010 benchmark problems. The memetic algorithms show some success in helping subpopulations find more optimal solutions than their non-memetic counterparts. We also use the Gini coefficient of a solution's derivative to estimate the proportion of variables stuck in pseudo-optima, to help understand what contributes to premature convergence.