EVOSTAR2026
PROGRAM FOR THURSDAY, APRIL 9TH


09:00-10:40 Session 8A: EvoCOP 1: Best Paper nominations
Location: Room A
09:00
Heuristic Methods for Top-k List Aggregation under the Generalized Kendall Tau Distance

ABSTRACT. Top-k lists are a form of incomplete rankings in which only the best k items are ordered and a larger set of inferior items is unordered. Top-k list aggregation is the problem of combining several ranked top-k lists into a single consensus top-k ranking that best reflects all the input lists. This problem has wide-ranging applications such as information retrieval and recommendation systems. This work introduces heuristic algorithms to perform top-k aggregation under the generalized Kendall tau distance. In order to further improve the solutions obtained by these heuristics, it employs a local search post-processing algorithm. Furthermore, it develops a data reduction technique to facilitate the solution of large-scale instances.
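The distance being aggregated over can be made concrete with a small sketch. This is not the authors' implementation; it is a minimal version in the spirit of Fagin et al.'s K^(p) family, under the assumption that items absent from a top-k list are treated as tied just below the cut-off and that a pair ordered in one list but unresolved in the other incurs a partial penalty p:

```python
from itertools import combinations

def gen_kendall_tau(list_a, list_b, p=0.5):
    """Generalized Kendall tau distance between two top-k lists.

    Absent items share a sentinel rank below the top-k cut-off; a pair
    ordered oppositely in the two lists costs 1, and a pair resolved in
    one list but tied in the other costs the partial penalty p.
    """
    def rank(lst, x):
        return lst.index(x) if x in lst else len(lst)  # sentinel: below top-k
    dist = 0.0
    for i, j in combinations(sorted(set(list_a) | set(list_b)), 2):
        da = rank(list_a, i) - rank(list_a, j)
        db = rank(list_b, i) - rank(list_b, j)
        if da * db < 0:                # the two lists order the pair oppositely
            dist += 1.0
        elif (da == 0) != (db == 0):   # resolved in one list, tied in the other
            dist += p
    return dist
```

A consensus top-k list is then one minimizing the sum of this distance to all input lists, which is the objective the heuristics above approximate.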

09:25
Methods for Finding Paths of a Prescribed Length in Weighted Graphs

ABSTRACT. This paper tackles the $\mathcal{NP}$-hard problem of finding paths of a prescribed length between a source and a target node in weighted directed and undirected graphs. To address this problem, we propose and analyse two algorithms: a heuristic method based on local search, and an exact backtracking algorithm that utilises several problem-specific operators to improve runtimes. We also present several general methods for reducing problem size. These involve removing nodes that can never be in any path from the source to the target, smoothing nodes that can be replaced with an arc, and removing nodes whose shortest path sum from the source and the target is greater than the prescribed length. Analysis on real-world street networks shows that the local search algorithm achieves solutions within a few metres of the prescribed length and consistently yields lower costs than the backtracking algorithm, even under a more restricted time limit. In contrast, some of our backtracking operators achieve lower-cost solutions on random graphs and dense planar graphs.
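The third reduction mentioned above, discarding any node whose shortest-path distance from the source plus its shortest-path distance to the target exceeds the prescribed length, can be sketched directly. This is an illustrative reimplementation (not the authors' code), assuming an undirected graph stored as an adjacency dict; a directed graph would need a second Dijkstra run on the reversed arcs:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src on a weighted graph given as
    {node: [(neighbour, weight), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def prune_by_length(adj, src, dst, length):
    """Keep only nodes that can lie on some src-dst path of at most the
    prescribed length (shortest-path-sum test from the abstract)."""
    ds, dt = dijkstra(adj, src), dijkstra(adj, dst)
    keep = {v for v in adj
            if ds.get(v, float("inf")) + dt.get(v, float("inf")) <= length}
    return {u: [(v, w) for v, w in nbrs if v in keep]
            for u, nbrs in adj.items() if u in keep}
```

Any node failing the test cannot appear on a feasible path, so removing it shrinks the instance without excluding any solution of the prescribed length.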

09:50
Gray-Box Enhanced Decomposition-based Local Search for Multi-Objective NK-Landscapes

ABSTRACT. We introduce a gray-box enhanced local search for $k$-bounded multi-objective optimization that combines decomposition and cooperation through a flexible, plug-and-play design. We study the management of the gray-box score vector and investigate strategies for designing cooperation, assessing their impact on both approximation quality and computational efficiency. Our experimental evaluation considers large-scale random and adjacent NK landscapes with varying ruggedness, providing a comprehensive comparison of the cooperative modes in terms of CPU time and solution quality. Beyond substantially speeding up the search compared to a black-box approach, our empirical analysis demonstrates the critical role of effective score vector management and provides insights on which cooperative designs achieve the best trade-off between efficiency and quality in gray-box multi-objective optimization.
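The gray-box speed-up exploited here comes from the k-bounded structure of NK landscapes: flipping one bit touches only the few subfunctions whose variable masks contain that bit, so a fitness delta (one entry of a score vector) is computable without re-evaluating the whole solution. A minimal single-objective sketch, with hypothetical helper names not taken from the paper:

```python
import random

def make_nk(n, k, seed=0):
    """Random NK landscape: each bit i owns a subfunction over i plus k
    random other bits, with lazily drawn lookup-table values."""
    rng = random.Random(seed)
    masks = [tuple(sorted({i} | set(rng.sample(
                 [j for j in range(n) if j != i], k)))) for i in range(n)]
    tables = [{} for _ in range(n)]
    def contrib(i, x):
        key = tuple(x[j] for j in masks[i])
        if key not in tables[i]:
            tables[i][key] = rng.random()
        return tables[i][key]
    return masks, contrib

def full_fitness(x, n, contrib):
    return sum(contrib(i, x) for i in range(n)) / n

def flip_delta(x, b, n, masks, contrib):
    """Gray-box delta for flipping bit b: only subfunctions whose mask
    contains b are re-evaluated (at most k+1-variable lookups each)."""
    y = list(x)
    y[b] ^= 1
    return sum(contrib(i, y) - contrib(i, x)
               for i in range(n) if b in masks[i]) / n
```

In the multi-objective setting of the paper this delta becomes a vector, one component per objective, which is exactly the score vector whose management the abstract studies.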

10:15
Weight Adaptation for Improving Parallel Performance of Adaptive Stochastic Natural Gradient

ABSTRACT. Probabilistic model-based evolutionary algorithms are promising for black-box optimization. In particular, the adaptive stochastic natural gradient (ASNG) adaptively updates its learning rate, a typical hyperparameter in such algorithms, thereby realizing efficient and robust optimization. Weight parameters are another common hyperparameter, yet with the increasing demand for parallel evaluation of time-consuming tasks, it remains unclear how to set suitable weights for larger population sizes. In this paper, we propose Weight Adaptation ASNG (WA-ASNG), which incorporates a weight adaptation mechanism into ASNG. WA-ASNG estimates a signal of the update direction from accumulations of the natural gradient, and then adaptively updates its weight parameters by gradient ascent over the course of optimization so as to maximize this signal. While learning rate adaptation plays a role in satisfying a sufficient condition for monotonic improvement of the expected objective value, the weight adaptation mechanism is intended to maximize this improvement. The experimental results demonstrate that WA-ASNG outperforms PBIL and ASNG across various settings with population sizes ranging from 25 to 100 on binary optimization problems. Furthermore, WA-ASNG performs efficiently in the presence of strong noise.

09:00-10:40 Session 8B: EuroGP 3: GP Encodings
09:00
New Perspectives on Cartesian Genetic Programming: A Survey

ABSTRACT. Over the past twenty-five years, certain practices and assumptions in Cartesian Genetic Programming (CGP) have become conventional wisdom, yet recent research challenges their validity. This position paper critically examines these long-standing beliefs and proposes evidence-based alternatives for the CGP community. We address four misconceptions: The purported ineffectiveness of crossover operators; the overlooked impact of positional bias; problems with tournament selection on Boolean benchmarks; and the limitations of single-domain analysis. Through a review of recent literature, we identify a key principle underlying successful CGP operators—the preservation of node structural integrity during genetic operations. We discuss current best practices including rigorous hyperparameter tuning, cross-domain benchmarking, node-preserving operators, and modern fitness function design. Our analysis reveals that many accepted CGP practices, including the ubiquitous (1+4) evolution strategy, lack generalizability across problem domains. We believe that by reconsidering these assumptions and adopting the recommendations presented here, researchers and practitioners can develop more effective and robust CGP implementations.

09:25
Revisiting SLIM: Improved Learning Dynamics and Model Compactness in Symbolic Regression

ABSTRACT. Geometric Semantic Genetic Programming (GSGP) induces unimodal error surfaces for supervised learning problems, enabling efficient gradient-like search in the semantic space. However, its conventional operators cause rapid model growth, limiting models' interpretability. The Semantic Learning algorithm based on Inflate and deflate Mutations (SLIM) addresses this limitation through size-reducing deflate operations, while preserving GSGP's theoretical properties. This work presents a systematic enhancement of SLIM through two sets of algorithmic improvements: (i) theoretically grounded refinements including optimal mutation step, and linear scaling; and (ii) heuristic extensions including Pareto-based tournament selection, multi-objective model identification, bounded mutation steps for implicit regularization and automatic algebraic simplification. Comprehensive evaluation on regression tasks demonstrates that all enhanced SLIM variants, when considered separately, match or exceed baseline performance on test data, while achieving significant model size reductions. However, the best results were achieved when all the variants were employed together. These achievements position the enhanced SLIM as a promising step toward more accurate, efficient, and interpretable symbolic regression methods, reinforcing its practical relevance for real-world data modeling.
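Linear scaling, one of the theoretically grounded refinements listed above, replaces a model's raw output y with the affine transform a + b·y that is optimal in the least-squares sense, so the evolved tree only needs to capture the shape of the target. A minimal sketch (the helper name is illustrative, not the paper's code):

```python
def linear_scale(y_pred, y_true):
    """Optimal affine rescaling a + b*y_pred minimising squared error
    (Keijzer-style linear scaling): b = cov(t, y)/var(y), a = mean(t) - b*mean(y)."""
    n = len(y_pred)
    my = sum(y_pred) / n
    mt = sum(y_true) / n
    var = sum((y - my) ** 2 for y in y_pred)
    cov = sum((y - my) * (t - mt) for y, t in zip(y_pred, y_true))
    b = cov / var if var > 0 else 0.0   # guard against constant predictions
    a = mt - b * my
    return a, b
```

Because a and b have this closed form, the rescaling costs one pass over the data per fitness evaluation.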

09:50
Syntactic Flexibility Enables Compact Solutions in Transformer Semantic GP

ABSTRACT. In genetic programming, semantic variation operators aim to produce new solutions that are semantically similar to their parents. Geometric semantic operators (GSOs) use linear combinations to generate semantically similar offspring; however, this easily leads to bloated solutions. In contrast, Transformer Semantic Genetic Programming (TSGP) uses a pre-trained transformer model as a semantic mutation operator to generate offspring with similar semantics. Experiments for four symbolic regression problems reveal that TSGP’s variation operator exhibits significantly greater syntactic flexibility: while still producing semantically similar solutions, it performs a greater number and more diverse edit operations, frequently replacing nodes and modifying multiple parts of the program tree simultaneously. Unlike GSO-based methods, which rely on local additive or reductive changes, TSGP is able to perform more holistic changes of a program while preserving semantics, which explains its ability to evolve effective and compact solutions.

10:15
Dynamic Vector and Matrix Memory for Tangled Program Graphs

ABSTRACT. This paper investigates dynamic vector and matrix-memory allocation in Tangled Program Graphs (TPG) for reinforcement learning in continuous control tasks. TPG is a genetic programming framework that evolves agents composed of multiple programs which compete to predict actions. We give each program its own memory capacity that can be initialized and mutated during evolution, and we hypothesize that adaptive capacity improves performance on higher dimensional tasks. We evaluate on target-seeking robot control environments in the MuJoCo simulator: Reacher and Ant-Goal. Our results demonstrate that adapting the agent's memory size through interaction with the task produces significantly more proficient controllers in both tasks. Moreover, supporting diverse memory sizes in a single policy results in lower computational cost measured by the approximate number of FLOPs executed per action.

09:00-10:40 Session 8C: EvoApplications: Problem modeling
09:00
A Constructive Method to Build Many Valid Initial Solutions for the Traveling Tournament Problem

ABSTRACT. The Traveling Tournament Problem is infamous for lacking fast, high-quality methods for mutation, crossover, and the creation of valid random initial solutions, making it particularly difficult for metaheuristic search. As a step forward, we propose a constructive method capable of creating many valid Traveling Tournament Problem solutions. This method is a modification of the circle method for scheduling single round-robin tournaments, originally proposed by Reverend Kirkman in 1847. It runs in time linear in the number of teams. The parametrized version of the method can produce up to 2n·n! different valid solutions, and we can increase this number to (2n-1)·2n·n! valid solutions with a simple permutation of rounds in the produced solutions. These results can contribute significantly to the adaptation of population-based metaheuristics to this problem.
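The circle method the abstract builds on is easy to state: fix one team and rotate the remaining teams around a circle, one step per round. A plain reimplementation of the classical method (not the paper's parametrized variant), assuming an even number of teams:

```python
def circle_method(teams):
    """Single round-robin schedule via the circle (Kirkman) method.
    Returns a list of rounds, each a list of (team, team) pairings."""
    m = len(teams)
    assert m % 2 == 0, "need an even number of teams"
    rounds = []
    for r in range(m - 1):
        # the fixed team meets the slot currently rotated into position r
        pairs = [(teams[m - 1], teams[r])]
        for i in range(1, m // 2):
            a = teams[(r + i) % (m - 1)]
            b = teams[(r - i) % (m - 1)]
            pairs.append((a, b))
        rounds.append(pairs)
    return rounds
```

Permuting the input order of `teams` (and, as the abstract notes, the order of the produced rounds) yields further distinct valid schedules, which is the source of the solution diversity useful for seeding a population.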

09:25
Structure-Aware Penalty Weights Scaling for Quadratic Unconstrained Binary Optimization Formulation

ABSTRACT. Formulating constrained combinatorial problems as Quadratic Unconstrained Binary Optimisation (QUBO) models is essential for solving them on quantum computers. A key challenge in this process is calibrating penalty weights, especially when multiple constraints are involved. These constraints often differ in magnitude and coupling strength, and the search space of feasible penalty combinations grows exponentially with the number of constraints. This work introduces a structure-aware penalty scaling method that first determines a common initial penalty weight for all constraints and then automatically adjusts each penalty weight using the structural features of QUBO expressions, removing the need for additional search or manual tuning. We test the proposed method on ten p-Median Facility Location Problem (FLP) instances using three annealing paradigms: simulated quantum annealing (a simulator of quantum annealers) and two types of simulated annealing implemented in D-Wave and OpenJij. The method consistently improves feasibility rates and solution quality compared with both the Verma–Lewis method and a magnitude-controlled averaged variant. The results demonstrate that incorporating structural information into penalty weight setting yields better-conditioned QUBO landscapes and solver-robust performance, providing a pathway for multi-constraint QUBO modelling.
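To make the penalty-weight discussion concrete, here is how a single equality constraint is typically folded into a QUBO. The sketch below is illustrative and not the paper's structure-aware method: it adds P·(Σx − 1)² for a one-hot constraint, using x² = x for binary variables; choosing P well is exactly the calibration problem the paper addresses, since too small a P admits infeasible minima and too large a P flattens the objective.

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy of assignment x under a QUBO given as {(i, j): weight}."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

def add_one_hot_penalty(Q, variables, P):
    """Add P * (sum(x_v) - 1)^2 to Q in place. Expanding with x^2 = x:
    the diagonal gets -P, each off-diagonal pair gets +2P (constant dropped)."""
    for i, u in enumerate(variables):
        Q[(u, u)] = Q.get((u, u), 0.0) - P
        for v in variables[i + 1:]:
            Q[(u, v)] = Q.get((u, v), 0.0) + 2 * P
    return Q
```

With a sufficiently large P, the minimum-energy assignment of the penalized QUBO satisfies the one-hot constraint even when the raw objective prefers an infeasible state.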

09:50
Investigating the Interplay of Parameterization and Optimizer in Gradient-Free Topology Optimization: A Cantilever Beam Case Study

ABSTRACT. Gradient-free black-box optimization (BBO) is widely used in engineering design, and provides in particular a flexible framework for topology optimization (TO), enabling the discovery of high-performing structural designs without requiring gradient information from simulations. Yet, its success critically depends on two intertwined choices: the geometric parameterization defining the design space and the optimizer exploring it. This study investigates this interplay through a compliance minimization problem for a cantilever beam subject to a connectivity constraint that enforces physically meaningful, connected structures. We benchmark three geometric parameterizations, each combined with three representative BBO algorithms: differential evolution, covariance matrix adaptation evolution strategy, and heteroscedastic evolutionary Bayesian optimization, across 10D, 20D, and 50D design spaces.

Results reveal that parameterization quality has a stronger influence on optimization performance than optimizer choice: a well-structured parameterization enables robust and competitive performance across all algorithms, whereas weaker representations increase optimizer dependency. Overall, this study highlights the dominant role of geometric parameterization in practical BBO-based TO and underscores the need for the optimization community to place greater emphasis on problem representation. Our findings show that algorithm performance, and consequently algorithm selection, can be strongly influenced by the chosen parameterization, warning that performance cannot be fairly assessed without accounting for the design space in which the algorithms operate.

10:15
Helios: A Co-Designed Landscape-Aware Optimization System Bridging Serial Intelligence and GPU Parallelism

ABSTRACT. The efficacy of state-of-the-art optimization algorithms often stems from complex, serial decision making logic that is fundamentally incompatible with the throughput oriented nature of modern GPU architectures. This conflict presents a major bottleneck for solving large-scale problems in scientific computing. We present Helios (Heterogeneous, Evolutionary, and Landscape-aware Interacting Optimization System) to resolve this algorithm–architecture challenge. We present the co-design of two distinct implementations: Helios-AS (Adaptive Serial), a novel CPU-based algorithm that employs heterogeneous agent roles and a state machine that dynamically triggers specialized search procedures based on real-time analysis of the fitness landscape and Helios-MP (Massively Parallel), its GPU-native counterpart. Our core contribution is a principled methodology for translating the intent of the serial adaptive logic into novel, parallel-friendly operators, effectively distilling complex state-based control into a high-throughput swarm intelligence. Validated on a suite of challenging benchmarks and the Lennard-Jones atomic cluster problem, we demonstrate that Helios-AS achieves performance statistically comparable to state-of-the-art methods like CMA-ES. Furthermore, we show that Helios-MP matches this high solution quality while delivering speedups of up to 15×. Helios provides both a powerful, validated tool for computational science and a successful blueprint for porting algorithmic intelligence to massively parallel hardware.

09:00-10:40 Session 8D: EvoApplications: Misc applications (ii)
09:00
Feasibility-Preserving Multi-Objective Evolutionary Algorithms with Local Search for the Bi-Objective Maximal Covering Location Problem with Compactness

ABSTRACT. This paper addresses the Bi-objective Maximal Covering Location Problem with Compactness (BOMCLP-C), a new variant of the classical maximal covering location problem that simultaneously maximizes total demand coverage and minimizes the spatial dispersion of selected facilities. The problem is NP-hard and exhibits a highly multimodal search space due to the combinatorial interaction between coverage and compactness objectives under cardinality constraints. To effectively approximate its Pareto front, two evolutionary multi-objective optimization paradigms are investigated: NSGA-II, representing dominance-based search, and MOEA/D, representing decomposition-based search. For each paradigm, two variants are implemented: a penalty-based formulation that relaxes the facility-count constraint through additive penalties, and a customized constraint-handling variant enhanced with local search (LS) that maintains feasibility and refines solutions in the neighborhood structure. Computational experiments on real-world instances drawn from the literature demonstrate that LS-based variants consistently achieve higher-quality Pareto fronts, attaining full feasibility and superior hypervolume values. A Wilcoxon signed-rank analysis confirms the significant performance difference between NSGA-II-LS and MOEA/D-LS. The study shows the effectiveness of integrating problem-specific constraint handling and local improvement in evolutionary multi-objective frameworks for large-scale discrete location optimization. It emphasizes the need for concise, feasible space representation when dealing with integer or combinatorial constraints.

09:25
Unconventional Hexacopters via Evolution and Learning: Performance Gains and New Insights

ABSTRACT. Evolution and learning have historically been interrelated topics, and their interplay is attracting increased interest lately. The emerging new factor in this trend is morphological evolution, the evolution of physical forms within embodied AI systems such as robots. In this study, we investigate a system of hexacopter-type drones with evolvable morphologies and learnable controllers and make contributions to two fields. For aerial robotics, we demonstrate that the combination of evolution and learning can deliver non-conventional drones that significantly outperform the traditional hexacopter on several tasks that are more complex than previously considered in the literature. For the field of Evolutionary Computing, we introduce novel metrics and perform new analyses into the interaction of morphological evolution and learning, uncovering hitherto unidentified effects. Our analysis tools are domain-agnostic, making a methodological contribution towards building solid foundations for embodied AI systems that integrate evolution and learning.

09:50
Evolutionary Design of Specialized Image Compression Operators

ABSTRACT. Image compression is a fundamental component of digital communication and storage. General-purpose codecs such as JPEG, PNG, and WebP are optimized for average performance across diverse images, while neural-network-based approaches can improve compression ratios but often incur high computational cost and low throughput, limiting their practical use. Domain-specific compression, which exploits repeated patterns and redundancies in homogeneous datasets, offers an attractive alternative. This paper presents an evolutionary framework for automatically designing lossless image codecs specialized for domain-specific data, such as static-camera footage, industrial scans, or satellite imagery. The framework co-evolves pixel scan orders and Cartesian Genetic Programming (CGP) predictors to minimize residual entropy, achieving a balance between compression efficiency and throughput. Experiments on astronomical, medical, natural, and synthetic image datasets demonstrate that the evolved codecs can outperform baseline and standard codecs in bits per pixel and processing speed, highlighting their potential for fast, adaptive, and interpretable compression in embedded and edge computing applications.
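The fitness signal in such a framework, residual entropy, can be computed in a few lines: predict each pixel, collect prediction errors, and measure their empirical entropy, since lower-entropy residuals compress better under an entropy coder. A sketch with a trivial left-neighbour predictor standing in for an evolved CGP predictor (all names are illustrative):

```python
import math
from collections import Counter

def residual_entropy(image, predictor):
    """Empirical entropy (bits per pixel) of prediction residuals over a
    2-D image given as a list of rows of integer pixel values."""
    residuals = []
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            residuals.append(px - predictor(image, x, y))
    n = len(residuals)
    counts = Counter(residuals)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def left_neighbour(image, x, y):
    """Trivial hand-written predictor standing in for an evolved one."""
    return image[y][x - 1] if x > 0 else 0
```

On an image with a constant horizontal gradient, this predictor concentrates the residual distribution on a couple of values, so the residual entropy falls well below the entropy of the raw pixels.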

11:00-12:15 Session 9A: EvoMusArt: Creative AI for Sound, Embodiment, and Media (includes BPA)
Location: Room A
11:00
Asɛmpayɛtsia: An Afrocentric Framework for Computational Creativity in Sound and Image

ABSTRACT. This paper introduces Asɛmpayɛtsia, an Afrocentric compositional framework derived from Ghanaian-Akan-Mfantse folklore (Kodzi), as a new model for computational creativity. Developed through artistic research in music and audiovisual composition, the framework proposes a Triadic Process of Artistic Translation—cultural excavation, compositional translation, and audiovisual reinscription—as a generative system through which intangible heritage can inform artificial intelligence–based creative processes. Inspired by UNESCO’s 2003 Convention on the Safeguarding of Intangible Cultural Heritage, the Fourth Industrial Revolution, and the African Union’s Agenda 2063 (Aspiration 5), Asɛmpayɛtsia redefines African cultural forms as dynamic creative algorithms rather than archival artifacts. The paper maps the logic of oral reasoning, cyclical temporality, and communal improvisation found in Akan artistic reasoning onto computational principles of iteration, recursion, and emergence. By reframing indigenous artistic cognition as a computational aesthetic model, this work contributes to current discourses on Decolonial AI and Cultural Computation, offering an alternative to Eurocentric paradigms of generative art and music. Ultimately, Asɛmpayɛtsia positions artistic research as a method for designing AI systems that think with—rather than about—African heritage, advancing new possibilities for intercultural creativity and algorithmic imagination.

11:25
Fluid Body: An Adaptive Embodied Sonification System for Cross-Cultural Performance

ABSTRACT. Fluid Body is an experimental, body-driven instrument that maps the classical ballet movement vocabulary onto the playing of the Guzheng. An interactive machine-learning framework that combines classification and regression models to form an embodied sonification system is used. It learns correspondences between bodily dynamics and sound synthesis parameters, allowing participants to “play” a digital Guzheng through ballet postures. Based on granular synthesis, six ballet arm positions are associated with Guzheng pitch phrases, allowing continuous motion features to modulate sound texture in real time. The multi-modal feedback loop connects bodily movement and sound, transforming the body’s shape into soundscapes. Rather than viewing culture as a fixed heritage, Fluid Body presents performance as a process of intercultural co-adaptation. This work shows how machine learning can serve as a creative translator between different artistic traditions. It encourages a dialogue among bodily movement, sound synthesis, and cultural meaning, using common ideas of rhythm, modulation, and flow.

11:50
A Novel Diffusion Model based Approach for Sleep Music Generation

ABSTRACT. Sleep disorders, particularly insomnia, and mental health conditions affect a significant fraction of adults worldwide, posing serious mental and physical health risks. Music therapy offers a promising, low-cost, and non-invasive treatment, but current approaches rely heavily on expert-curated playlists, limiting scalability and personalization. We propose a low-cost generative system leveraging recent advances in diffusion models to synthesize music for therapy. We focus on insomnia and curate a dataset of waveform sleep music to generate audio tailored to sleep. To ensure real-world feasibility, we optimize our system for training and use on a single GPU, balancing quality and efficiency through extensive ablation studies. We show through subjective human evaluations that our generated music matches or outperforms existing baselines in both perceived quality and relevance to sleep therapy, while using only a fraction of the computational cost.

11:00-12:15 Session 9B: EvoApplications: Real World Systems
11:00
Multi-Constrained Evolutionary Molecular Design Framework: An Interpretable Drug Design Method Combining Rule-Based Evolution and Molecular Crossover

ABSTRACT. This study proposes MCEMOL (Multi-Constrained Evolutionary Molecular Design Framework), a dual-layer evolutionary approach for drug design that integrates rule-based evolution with molecular crossover. Unlike deep learning methods requiring extensive datasets and computational resources, MCEMOL efficiently evolves from minimal starting molecules through two key mechanisms: (1) evolving transformation rules at the rule level, and (2) applying crossover and mutation at the molecular level. The framework employs message-passing neural networks for property prediction and comprehensive chemical constraints to ensure valid, drug-like molecules. Experimental results on ZINC250K demonstrate that MCEMOL achieves 100% molecular validity while maintaining high structural diversity (0.834) and excellent drug-likeness compliance (100% Lipinski, 98% Ghose, 99.6% Veber). Most importantly, MCEMOL provides interpretable transformation pathways that chemists can understand and trust, addressing the critical need for explainable AI in drug discovery. The framework delivers dual value: interpretable design rules for mechanistic understanding and a high-quality molecular library for practical applications, bridging the gap between computational innovation and real-world drug development needs.

11:25
Novel Explainable Graph–Decision Forest Framework for Interpretable Pancreatic Tumor Detection on CT Images

ABSTRACT. Early and accurate detection of pancreatic tumors remains a critical challenge because of the organ’s deep anatomical location, the low contrast of CT images, and the subtle morphological differences between malignant and normal tissues. To overcome these limitations, this work presents Graph-ForestXAI, an innovative and interpretable hybrid framework that combines neural decision forests, convolutional texture learning, and region-based graph representation for transparent tumor classification. Each CT image is first divided into homogeneous superpixel regions using the SLIC algorithm, and a Region Adjacency Graph (RAG) is then constructed to describe the structural continuity and regional interdependence within pancreatic tissue. A lightweight CNN learns deep texture embeddings, which are combined with descriptive graph characteristics. The combined feature space is processed by a Neural Decision Forest (NDF) classifier that integrates rule-path reasoning and prediction. The proposed model is unique in that it combines topological and deep descriptors in an interpretable way, representing CT data as relational structures rather than raw pixels. Furthermore, its multi-layer explainability module produces human-readable evidence through rule-based decision paths, Grad-CAM saliency maps, and concept-level indicators such as texture irregularity, boundary asymmetry, and regional contrast. Experiments on pancreatic CT datasets demonstrate that Graph-ForestXAI achieves 91.3% accuracy and outperforms conventional CNN and handcrafted-feature methods, while maintaining full interpretability. The proposed approach thus bridges data-driven precision and clinical transparency, providing radiologists with a region-aware and trustworthy decision-support tool for early pancreatic tumor detection.
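The RAG construction step described above is straightforward once a superpixel label map exists: two regions share an edge whenever their pixels touch. A minimal 4-connectivity sketch (illustrative only; the paper pairs this with SLIC segmentation and a CNN):

```python
def region_adjacency_graph(labels):
    """Undirected region-adjacency edges from a 2-D label map given as a
    list of rows: two region labels are linked when they meet along a
    4-connected pixel boundary."""
    h, w = len(labels), len(labels[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours suffice
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    edges.add(frozenset((labels[y][x], labels[ny][nx])))
    return edges
```

Checking only the right and down neighbours visits every adjacent pixel pair exactly once, so the pass is linear in the number of pixels.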

11:50
Adaptive Curriculum Learning in Genetic Programming–Guided Local Search for Large-Scale Vehicle Routing Problems

ABSTRACT. Algorithm design for large-scale Vehicle Routing Problems (VRPs) is computationally expensive, making it crucial to improve training efficiency without degrading solution quality. Recent work has shown that embedding curriculum learning into a Genetic Programming Guided Local Search (GPGLS) framework can reduce training time. However, existing methods rely on manually specified curriculum schedules with fixed phase lengths, which require extensive trial-and-error tuning and limit transfer to new problem settings. We propose an Adaptive Curriculum Learning GPGLS that automatically controls curriculum progression by monitoring the structural stability of the generation-best GP tree and the saturation of population-level fitness improvement, triggering stage transitions when further progress in the current phase is unlikely. Experimental results on large-scale VRP benchmarks show that the proposed strategy reduces training time by about 10% on average while maintaining comparable or better final solution quality than both fixed-schedule curriculum and non-curriculum baselines.
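The two transition signals described (structural stability of the generation-best tree, and saturated fitness improvement) can be combined in a small controller. The sketch below is hypothetical: the class name, patience window, and tolerance are mine, not the paper's, and the tree is compared via its string representation as a stand-in for a structural check:

```python
class CurriculumTrigger:
    """Advance to the next curriculum stage once the generation-best tree
    has been unchanged for `patience` generations and best fitness has
    improved by less than `eps` over the same window."""
    def __init__(self, patience=5, eps=1e-3):
        self.patience, self.eps = patience, eps
        self.last_tree, self.stable, self.history = None, 0, []

    def update(self, best_tree_repr, best_fitness):
        # structural stability: count consecutive identical best trees
        self.stable = self.stable + 1 if best_tree_repr == self.last_tree else 0
        self.last_tree = best_tree_repr
        # fitness saturation: improvement over the last `patience` generations
        self.history.append(best_fitness)
        window = self.history[-(self.patience + 1):]
        saturated = (len(window) > self.patience
                     and abs(window[0] - window[-1]) < self.eps)
        if self.stable >= self.patience and saturated:
            self.stable, self.history = 0, []   # reset for the next stage
            return True                         # signal a stage transition
        return False
```

Each generation, the training loop would call `update(...)` and switch to the next curriculum stage whenever it returns True, replacing a fixed phase-length schedule.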

11:00-12:15 Session 9C: EML 3
11:00
PITL-DE: Problem-Independent Transfer Learning in Differential Evolution for Continuous Optimization

ABSTRACT. Evolutionary Transfer Learning (ETL) aims to accelerate optimization by reusing knowledge. While transferring knowledge between similar problems is straightforward, it becomes challenging when the source and target tasks are disparate. To address this, this study introduces the Problem-Independent Transfer Learning framework for Differential Evolution (PITL-DE). The PITL-DE framework consists of two main phases: offline training and online deployment. In the offline phase, a standard differential evolution (DE) algorithm is used to solve a variety of simple source problems. During these runs, the displacement (the move from a current to an updated position) of ‘successful’ individuals in the population is recorded along with the gradient information at their position. A Neural Network (NN) is then trained on this data to learn a search predictor that maps gradient information to effective population displacements. In the online phase, this learned search predictor is integrated into DE by probabilistically replacing the standard crossover operator with the NN’s prediction. Extensive validation is performed on 21 classical functions and the IEEE CEC 2017 benchmark suite, where the results are also compared against other variants of DE (JADE, SHADE) and four IEEE competition winners (L-SHADE, jSO, NL-SHADE-RSP, MadDE). The proposed PITL-DE is shown to statistically significantly outperform these competing methods. The main contribution is a novel ETL framework for transferring search experience to entirely new continuous, same-dimensional problems by using a learned ML predictor to enhance a fundamental genetic operator.
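The online phase can be pictured as an ordinary DE generation in which binomial crossover is probabilistically swapped for a learned move. In the sketch below `predictor` is a stand-in callable (the paper trains an NN mapping gradient information to displacements); all names and the probability parameter are illustrative:

```python
import random

def de_step(pop, fitness, f=0.5, cr=0.9, predictor=None, p_nn=0.3, rng=random):
    """One DE/rand/1/bin generation with greedy selection. With probability
    p_nn the trial vector comes from a learned displacement predictor
    instead of binomial crossover (PITL-DE-style hook, simplified)."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + f * (b[d] - c[d]) for d in range(dim)]
        if predictor is not None and rng.random() < p_nn:
            disp = predictor(x)                   # learned move replaces crossover
            trial = [x[d] + disp[d] for d in range(dim)]
        else:
            jr = rng.randrange(dim)               # guarantee one mutant gene
            trial = [mutant[d] if rng.random() < cr or d == jr else x[d]
                     for d in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop
```

Greedy selection keeps the better of parent and trial, so a useless predictor degrades the search no worse than rejected crossover trials would.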

11:25
A Comparative Study on Robustness in Evolved Image Classifiers

ABSTRACT. Deep Learning (DL) models, despite high accuracy, often degrade under domain shifts, common in medical imaging. This brittleness hinders reliable Artificial Intelligence deployment in real-world applications. Lipschitz-constrained networks offer formal robustness guarantees but add architectural complexity. We explore program synthesis as an alternative, examining the inherent robustness of image classification programs evolved via Multi-Modal Adaptive Graph Evolution (MAGE). Using the PatchCamelyon histopathology dataset, we compare evolved programs against a standard ResNet-18 and a 1-Lipschitz ResNet-18 under Gaussian noise, Poisson noise, and brightness perturbations. On unperturbed test data, all models achieve competitive balanced accuracy. Remarkably, MAGE programs, evolved without data augmentation or explicit robustness objectives, exhibit inherent resilience comparable to, and often surpassing, the 1-Lipschitz ResNet-18 model. Moreover, as the evolutionary search yields a diverse population, it enables the selection of highly robust programs suited to specific perturbations.

11:50
Exploring the impact of fairness-aware criteria in AutoML

ABSTRACT. Machine Learning (ML) systems are increasingly used to support decision-making processes that affect individuals. However, these systems often rely on biased data, which can lead to unfair outcomes against specific groups. With the growing adoption of Automated Machine Learning (AutoML), the risk of intensifying discriminatory behaviours increases, as most frameworks primarily focus on model selection to maximise predictive performance. Previous research on fairness in AutoML has largely followed this trend, integrating fairness awareness only into model selection or hyperparameter tuning, while neglecting other critical stages of the ML pipeline. This paper studies the impact of integrating fairness directly into the optimisation component of an AutoML framework that constructs complete ML pipelines, from data selection and transformations to model selection and tuning. As selecting appropriate fairness metrics remains a key challenge, our work incorporates complementary fairness metrics to capture different dimensions of fairness during optimisation. Their integration within AutoML resulted in measurable differences compared to a baseline focused solely on predictive performance. Despite a 9.4% decrease in predictive power, average fairness improved by 14.5%, accompanied by a 35.7% reduction in data usage. Furthermore, fairness integration produced complete yet simpler final solutions, suggesting that model complexity is not always required to achieve balanced and fair ML solutions.
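One example of a fairness metric such an optimiser can track is demographic parity, the gap in positive-prediction rates across protected groups. The paper does not specify its exact metrics, so the sketch below is a generic illustration:

```python
def demographic_parity_diff(y_pred, groups):
    """Largest gap in positive-prediction rates between any two groups.
    y_pred holds 0/1 predictions; groups holds the protected attribute
    value for each instance. 0.0 means perfectly equal rates."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())
```

A fairness-aware AutoML objective would minimise such a gap alongside, or traded off against, predictive performance.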

11:00-12:15 Session 9D: EvoApplications: Cloud, edge and fog computing
11:00
Self-Organized Criticality for Green Distributed Computing: A Sandpile-Inspired Model of Energy-Efficient Load Balancing

ABSTRACT. Achieving sustainability in distributed computing requires adaptive algorithms that reduce energy consumption without centralized control. We present a nature-inspired sandpile automaton that exhibits self-organized criticality (SOC) for green load balancing. Each node redistributes excess tasks locally once a critical threshold is exceeded, triggering cascades that dynamically activate and deactivate resources and maintain energy proportionality to demand. Through purely local interactions, the network shows emergent behavior that (1) automatically turns resources on and off, (2) yields scale-invariant load shifts, and (3) attains near-optimal energy efficiency with minimal overhead. We validate the model on synthetic ramp-up workloads and real traces from the Grid Workloads Archive, demonstrating robust, scalable self-organization with competitive makespan. This work bridges distributed systems and complex-systems practice, showing how critical dynamics can address practical scheduling challenges without global coordination.

11:25
Genetic Programming for Energy-Efficient Device–Edge Collaborative Inference

ABSTRACT. The growing demand for intelligent mobile and embedded applications has intensified the need for real-time and energy-efficient inference at the network edge. Device–edge collaborative inference (co-inference) enables computation to be partitioned between a resource-constrained device and a nearby edge server, improving responsiveness and reducing device energy compared to fully local or cloud-only execution. However, selecting effective partition configurations under dynamic network, device, and input conditions remains challenging. This paper introduces GENCO, a grammar-guided genetic programming framework that automatically evolves symbolic and interpretable co-inference policies. GENCO learns compact rule-based controllers that adapt to runtime context—including bandwidth, channel quality, device battery, edge load, and input difficulty—to determine partition location, early-exit depth, and quantization precision. The evolved policies minimize device energy consumption while enforcing latency and accuracy constraints. Experiments under stochastic network and device conditions show that GENCO achieves up to 73% lower device energy consumption than fully on-device inference while maintaining deadline compliance. These results demonstrate the effectiveness of evolutionary optimization in developing adaptive and energy-efficient edge intelligence systems.

11:50
FedGP Resilience: A Comparative Study with Standard Federated Aggregation Methods under Adversarial Scenarios

ABSTRACT. Federated Learning (FL) enables decentralized model training while preserving data privacy. However, classical aggregation strategies such as FedAVG, FedPROX, and FedNOVA show significant performance degradation when client data distributions are highly non-IID. They are also vulnerable to adversarial perturbations that corrupt client updates. To overcome these limitations, FedGP, a Genetic Programming (GP)-based aggregation approach that evolves symbolic aggregation functions instead of relying on fixed rules, has been introduced in the literature. This study investigates the resilience of FedGP in challenging FL scenarios. Thanks to its adaptive mechanism, the server can dynamically combine, reweight, or discard client contributions according to their reliability, improving robustness against noisy or malicious updates. We conduct a comparative evaluation on PathMNIST and FashionMNIST under standard and adversarial conditions, including Gaussian noise, label-flipping, and sign-flipping attacks. All methods are tested under controlled non-IID settings with an imbalance rate of 0.8 and identical training hyperparameters to ensure fairness. Experimental results show that FedGP consistently outperforms traditional aggregation methods in terms of robustness and accuracy, particularly in the presence of corrupted clients.

16:20-18:00 Session 12A: EvoApplications Best Paper nominations
Location: Room A
16:20
Toward Reliable Uncertainty Quantification in Surrogate-Assisted Evolutionary Algorithms via Temporal Conformal Prediction

ABSTRACT. Surrogate-assisted evolutionary algorithms approximate expensive objective functions, but they introduce prediction uncertainty that can misdirect the search. We propose uncertainty quantification using conformal prediction with temporal weighting for non-stationary population dynamics. As populations concentrate around promising regions, older training data becomes less representative. Our temporal weighting adaptively prioritises recent observations when calibrating prediction intervals. We evaluate on BBOB and CEC2013 benchmarks (2–50 dimensions) with varying budgets. Temporal weighting achieves empirical coverage of 0.71–0.89 (CMA-ES) and 0.63–0.84 (GA), improving 20–45 percentage points over non-adaptive methods. Coverage varies by structure: unimodal functions achieve 0.85–0.91, compositional functions 0.54–0.76. Compared to purely surrogate-based selection, our approach reduces evaluations by 8–15% while maintaining comparable fitness. The framework provides practical uncertainty estimates for efficient surrogate-assisted optimisation under computational constraints.

16:45
On Efficient Binarization of Scanned Historical Documents by Training Local Rules of Neural Cellular Automata

ABSTRACT. In this paper we propose a new architecture based on Neural Cellular Automata and evaluate it on the task of binarization of scanned historical documents. We show that this approach allows us to obtain neural models that outperform existing Neural Network-based solutions while exhibiting substantially lower complexity, measured as the number of parameters of the underlying neural network. The proposed model is evaluated over several settings, including different forms of neighborhood, numbers of steps, and initialization strategies. On the basis of this evaluation, we select the most suitable setup to produce models exhibiting high performance and quality of results.

17:10
Reinforcement Learning-Based Adaptive Boundary Constraint Handling for Particle Swarm Optimization

ABSTRACT. Boundary constraint violations during particle swarm optimization require correction strategies. Existing adaptive schemes rely on probability-based method selection with hand-crafted update rules, limiting their ability to learn problem-specific patterns. This work formulates adaptive boundary constraint-handling method selection as a Markov Decision Process and integrates Deep Q-Network learning for policy optimization. Following comprehensive empirical evaluation of hybrid methods on CEC2006 benchmark, a data-driven pool of top-performing strategies is constructed. The reinforcement learning agent observes population-level state features to select methods applicable to all violating particles. Experimental validation demonstrates effective generalization from training to unseen test problems, achieving statistically significant improvements.

16:20-18:00 Session 12B: EvoMusArt 5 - AI for Music
16:20
EvoLiveDJ: An LLM-Based Agentic System for Interactive Evolutionary Live Music Performance

ABSTRACT. We present EvoLiveDJ, a framework that unifies large language model (LLM)-driven music generation with interactive evolutionary feedback in a live coding context. The system uses an LLM to generate and iteratively refine Strudel code, while audience listening behaviour serves as a real-time fitness signal that guides selection. Each generation produces multiple musical variants that are played in parallel and selectively bred through LLM-guided semantic crossover and multi-scale mutation informed by audience preference and the model's own critique. Analysis of captured generations reveals coherent evolutionary dynamics in symbolic musical code, including the emergence of stable and diverging lineages, multi-level mutation processes, and role-aware inheritance across rhythmic, melodic, and timbral structures. This hybrid approach combines the knowledge-driven musical competence of LLMs with the exploratory power of evolutionary search, enabling a continuous creative dialogue between AI and the audience. The results point toward a new agentic paradigm of AI in music, which actively collaborates, self-evaluates, and evolves musical ideas in real time with human co-creators.

16:45
Artificial Intelligence in Music: Towards an Aesthetics of Co-Creation

ABSTRACT. The rapid rise of generative artificial intelligence (GAI) is reshaping the paradigms of artistic creation. In music, this transformation is particularly striking: new systems capable of composing, performing, and producing original works challenge traditional boundaries between author, instrument, and machine. Platforms such as SUNO and UDIO, capable of generating complete songs with vocals, instrumentation, and mixing from simple text prompts, represent an unprecedented technical and aesthetic revolution. This paper presents a research-creation project exploring the aesthetic and practical potentials of co-creation between humans and AI systems in recording studio settings. Based on ethnographic observation of music creators working with GAI tools, we analyze how AI transforms collaborative dynamics, creative processes, and the very notion of musical authorship. Our findings reveal that rather than replacing creators, AI acts primarily as an inspiration tool and collaborative catalyst, while exposing fundamental questions about creativity, agency, and artistic value in the age of algorithmic intelligence.

17:10
Addressing Dataset Scarcity in Music Emotion Recognition with LLMs

ABSTRACT. Music Emotion Recognition is an important task in the field of music information retrieval, with applications in music generation, song recommendation, and playlist generation. However, given copyright concerns and the expense of human subject studies, researchers lack large datasets annotated with emotions. This dataset scarcity significantly limits the ability of researchers to train machine learning models that recognize emotion directly from audio. To address this need, we introduce a novel approach that leverages language models to generate emotion annotations. Specifically, we task ChatGPT to provide (1) numeric estimations, (2) a set of emotion words, and (3) long-form descriptions that characterize the emotive qualities of a piece of music. We consider 22,968 songs across five public datasets to facilitate a comparison of our synthetic annotations against human and algorithmic annotations. Although indirect, these annotations have the potential to provide insight into the emotional content of music, opening new possibilities for research and applications in the area of music emotion recognition.

17:35
Algorithms for Collaborative Harmonization

ABSTRACT. We consider a specific scenario of text aggregation, in the realm of musical harmonization. Musical harmonization shares similarities with text aggregation; however, the language of harmony is more structured than general text. Concretely, given a set of harmonization suggestions for a given musical melody, our interest lies in devising aggregation algorithms that yield a harmonization sequence satisfying two key criteria: (1) an effective representation of the collective suggestions; and (2) a harmonization that is musically coherent. We present different algorithms for the aggregation of harmonies given by a group of agents and analyze their complexities. The results indicate that the Kemeny and plurality-based algorithms are most effective in assessing representation and maintaining musical coherence.

16:20-18:00 Session 12C: EuroGP 4: GP Evolution
16:20
On the Effects of Down-Sampling for Tournament and Lexicase Selection in Program Synthesis

ABSTRACT. In recent years, random and informed down-sampling have been used to increase the performance of genetic programming using both tournament and lexicase selection. In the domain of symbolic regression, these down-sampling approaches have closed the gap between both selection methods. Building on this prior work, we evaluate whether these findings transfer to the program synthesis domain, a problem domain mostly consisting of complex modal problems, a property that lexicase was specifically designed for. We conduct experiments for a diverse set of program synthesis benchmark problems, using grammar-guided genetic programming, and investigate the effects of different down-sampling schemes on both tournament and lexicase selection. We find that while tournament selection does indeed benefit more from down-sampling in terms of performance gains, generalization, code growth, and diversity, lexicase still outperforms tournament selection due to its superior promotion and preservation of specialists in more complex problems.

16:45
Sinking the Bloat in Genetic Programming Using Equality Saturation

ABSTRACT. Program Synthesis (PS) aims to automatically generate computer programs from high-level specifications. Genetic Programming (GP) is a prominent metaheuristic for PS, treating synthesis as a search problem over the vast space of all possible programs. A recent approach, Higher-Order Typed Genetic Programming (HOTGP), uses a purely functional and typed paradigm to constrain this search space and improve performance. However, like many search techniques, the evolved programs often suffer from "bloat", an unnecessary growth of code that makes them complex and difficult to interpret. This paper investigates the integration of Equality Saturation, a term rewriting optimization technique, into the HOTGP framework. The objective of this integration is to simplify generated programs and reduce bloat, thereby enhancing both computational efficiency and program readability. Experimental results on a set of benchmarks indicate that equality saturation has a varied impact on the final synthesis success rate. Conversely, qualitative analysis confirmed a reduction in code growth (bloat), which manifested as simpler solutions in some benchmarks and a more frequent convergence toward minimal program forms.

17:10
A Hybrid LLM-Coevolution Framework to Generate Abusive Tax Strategies

ABSTRACT. We present a new framework to automatically generate abusive tax strategies. The first design of its kind, it combines a competitive coevolutionary algorithm, CompCoevAlg, and a large language model (LLM). Its CompCoevAlg iteratively adapts a population of strategies towards abuse and concealment by adversarially competing it against a population of audit patterns that seek to expose abusive strategies. While, per convention, the CompCoevAlg oversees selection, replacement, and fitness evaluation, it uses the LLM to employ natural-language representations for strategies and patterns. The LLM also performs similarity-based matching between audit patterns and transactions in each strategy, and it mutates the transactions and pattern at the natural-language level. We demonstrate the framework by using the Installment Bogus Optional Basis (iBOB) scheme. It is able to generate variants of a prototypical iBOB strategy. In addition to its novel integration of an LLM, results reveal the dual value of combining an LLM and Evolutionary Algorithm. Genetic operators and a search approach can make up for the LLM's shallow knowledge of a problem domain, while the LLM can eliminate the need to laboriously translate and encode the semantic rules and behavior of the problem domain.

16:20-18:00 Session 12D: EvoCOP 3: Learning-enhanced approaches
16:20
Synergistic Adaptive Tabu Search: Multi-Agent Learning with Shared Solution Patterns for the K-Traveling Repairman Problem

ABSTRACT. The K-Traveling Repairman Problem (K-TRP) is a challenging NP-hard optimization problem focused on minimizing total customer waiting time. Existing solution methods, however, frequently lack the adaptive mechanisms and sophisticated AI integration necessary to achieve state-of-the-art performance. This paper introduces Co-MARL-ATS, a hybrid solving framework founded on four synergistic pillars: a Multi-Agent System (MAS), Adaptive Tabu Search (ATS), Cooperative Reinforcement Learning (CRL), and an intelligent shared memory. A population of autonomous agents explores the search space in parallel, each guided by a DQN policy that controls six high-level actions: adjusting tabu tenure (increase/decrease), choosing intra- or inter-route operators, triggering diversification, and integrating shared solution patterns. Agents share solution fragments (patterns) via a smart repository that enables collective learning. Each agent's policy learns not only how to search but also when to leverage the population's discoveries. Rigorous computational experiments demonstrate clear superiority over all ablated versions (non-adaptive, non-cooperative, single-agent). Compared to state-of-the-art heuristics, Co-MARL-ATS establishes 8 new Best Known Solutions on large-scale instances, achieving an average gap of -2.02% while maintaining runtimes competitive with existing methods (e.g., 3.20s vs. QPSO 3.17s). This synergistic design emphasizes the significant value of integrating adaptive learning, parallel search mechanisms, and coordinated collaboration to effectively address challenging combinatorial optimization problems.

16:45
On vehicle routing problems with loading constraints

ABSTRACT. This work studies the problem of vehicle routing with two-dimensional loading constraints (2L-CVRP), which integrates two of the most important and complex problems in distribution logistics. We propose a hybrid GLS-ALNS methodology to address the 2|UO|L version of the problem. Guided Local Search (GLS) handles the routing optimization, while Adaptive Large Neighborhood Search (ALNS) performs solution repair after the packing phase. The packing phase is executed through parallel packing heuristics. In addition, we introduce an adaptive packing heuristic designed to minimize both the number of unserved customers and the total routing cost. The proposed GLS-ALNS method was evaluated on established benchmark instances from the literature and successfully achieved best-known solutions for the majority of test cases. In particular, our approach identified 8 new best-known solutions, demonstrating its effectiveness in solving this complex optimization problem.

17:10
A Denoising Diffusion Adaptive Search for the Alpha-Domination Problem on Social Graphs

ABSTRACT. The alpha-domination problem is a generalization of the classical dominating set problem and serves as a way to model influence structures in social networks. In this work, we build on previous research and explore the use of denoising diffusion models to generate high-quality solutions for this problem. Our main contribution is the introduction of a novel variation of the Greedy Randomized Adaptive Search Procedure (GRASP), utilizing a denoising diffusion model for the parallel construction of a diverse set of initial solutions, which we refer to as the Denoising Diffusion Adaptive Search Procedure (DDASP). By focusing our efforts on real-world social network graphs, we present a superior alternative to conventional metaheuristic algorithms, as prior work has shown that such algorithms often struggle with the unique structural properties of these graphs. Furthermore, we address the challenge of generating viable training data for our models, since the graphs under consideration are typically too large to be practical for direct use. Our results on a benchmark set of Facebook graphs show that DDASP consistently outperforms the leading metaheuristic approach given the same time budget and also surpasses a mixed-integer linear program solved by Gurobi on all but the smallest graphs, despite having a substantially smaller time budget.

18:00-18:30 Go to pick up point

The bus pick up point is 21 Bd Armand Duportal, 31000 Toulouse (the exact number might vary)

https://maps.app.goo.gl/5uc2sDum4aThU6ah9