ABSTRACT. Formal verification using computer algebra has emerged as a powerful approach for circuit verification. This technique encodes a circuit, represented as an and-inverter graph, into a set of polynomials that forms a Gröbner basis. Verification then relies on computing the polynomial remainder of the specification modulo this basis. But there is a catch: a monomial blow-up often occurs during specification rewriting, a problem that necessitates dedicated heuristics to manage the complexity. In the first part of the talk, I will provide an introduction to arithmetic circuit verification and discuss how computer algebra can be used in this setting. In the second part, I will present a novel approach that shifts the computational effort to rewriting the Gröbner basis itself rather than the specification. By restructuring the basis to include linear polynomials, we tame the monomial blow-up and simplify the rewriting process.
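The remainder computation the abstract refers to can be illustrated on a toy half-adder, sketched here with SymPy. The gate encoding, variable order, and specification below are invented for illustration and need not match the talk's actual setup:

```python
from sympy import symbols, groebner, reduced

a, b, s, c = symbols('a b s c')
# Gate polynomials for a half-adder, plus Boolean value constraints
gates = [s - (a + b - 2*a*b),   # sum bit:   s = a XOR b
         c - a*b,               # carry bit: c = a AND b
         a**2 - a, b**2 - b]    # inputs take values in {0, 1}
G = groebner(gates, c, s, a, b, order='lex')

spec = 2*c + s - (a + b)        # half-adder specification: 2c + s = a + b
_, r = reduced(spec, G.exprs, c, s, a, b, order='lex')
print(r)  # 0 -> the circuit matches the specification
```

A zero remainder certifies that the specification lies in the ideal generated by the gate polynomials; the blow-up problem mentioned above arises because intermediate remainders in larger circuits can have many more monomials than this toy case suggests.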
Data-Locality-Aware Scheduling for Serverless Containerized Workflows across the Edge-Cloud Continuum
ABSTRACT. Modern serverless workflows deployed across the
edge-cloud continuum are facing placement challenges where
functions with varying data and computational requirements are
often placed far from required data or on insufficiently capable
hardware. The stateless execution model forces functions to
exchange data through high-latency storage service APIs rather
than direct communication, creating performance bottlenecks
where data movement dominates execution time in data-intensive
domains such as smart cities, healthcare, machine learning,
and earth observation. Current container orchestration platforms
like Kubernetes and serverless platforms like Knative mainly
consider CPU and memory resources while neglecting data
locality, network topology, and specialized hardware capabilities.
Current scheduling approaches in this direction are limited
to static weight configurations that cannot adapt to changing
workload patterns, manual data dependency annotations forcing
developers to predict access patterns at development time, and
simplified network models ignoring actual topology relationships.
We develop a data-locality-aware scheduler extending Kubernetes
with a Multi-Criteria Decision-Making (MCDM) framework
that integrates with Knative for serverless workloads and that
dynamically weighs workload characteristics, cluster topology,
bandwidth constraints, data placement, and node capabilities
while adapting MCDM priority weights in real-time based on
function requirements. Our system allows for dynamic per-
invocation data dependency specification through CloudEvents
and implements automatic storage discovery with topology-aware
network modeling. Experimental evaluation against the default
Kubernetes scheduler demonstrates an improvement from 25.1%
to 73.3% in local data access, a 46.7-79.4% reduction in network
transfers, and an 87.2% reduction in cross-region transfers, with
minimal scheduling overhead averaging 3.1 seconds across our
tested workflows.
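The weighted-sum flavour of MCDM scoring described above can be sketched in a few lines. The criteria names, weights, and node values here are invented for illustration, not taken from the system:

```python
# Hypothetical sketch of weighted-sum MCDM node scoring: each candidate
# node gets a score from normalized per-criterion metrics and per-function
# priority weights; the scheduler places the function on the best node.
def score_node(metrics, weights):
    """metrics and weights: dicts keyed by criterion, values in [0, 1]."""
    return sum(weights[k] * metrics[k] for k in weights)

nodes = {
    "edge-1":  {"data_locality": 0.9, "cpu": 0.4, "bandwidth": 0.7},
    "cloud-1": {"data_locality": 0.2, "cpu": 0.9, "bandwidth": 0.9},
}
# A data-intensive function weights locality heavily.
w = {"data_locality": 0.6, "cpu": 0.2, "bandwidth": 0.2}
best = max(nodes, key=lambda n: score_node(nodes[n], w))
print(best)  # edge-1
```

Adapting the weight vector per invocation, rather than fixing it statically, is what distinguishes the approach from the static-weight schedulers criticized above.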
Parametric Actors Networks to Efficient Hardware: CORDIC-Based QR Decomposition
ABSTRACT. In this work, we present an actors-to-hardware design flow where applications are described as networks of actors and compiled to hardware. Specifically, we use the CAL actor language to describe these applications. CAL has previously been used to specify applications such as the MPEG-RVC decoder and is well suited for compilation to hardware.
A key limitation of CAL is that the applications described have a fixed structure, limiting its applicability. In this work, we show how recent additions to the CAL language and upgrades to the StreamBlocks CAL compiler have overcome these challenges, enabling parametric networks to be specified and compiled to hardware.
To illustrate our approach, we implement a CORDIC-based QR Decomposition application as a systolic array in CAL, targeting Massive-MIMO systems used in telecommunications. We describe how this toolchain converts the CAL description into a combination of Verilog and C++ HLS code and demonstrate how this format allows for performance tuning.
Our approach expands the capabilities of CAL and CAL toolchains, allowing them to be used to compile a broader range of applications to hardware.
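The CORDIC kernel at the heart of such a QR array can be sketched in plain Python. The vectoring-mode iteration below (iteration count and test vector chosen arbitrarily) rotates a 2-vector onto the x-axis using only shift-and-add style updates, which is what makes CORDIC attractive for hardware:

```python
import math

def cordic_vectoring(x, y, iters=24):
    """Drive y to 0 by micro-rotations; return (magnitude, angle)."""
    angle = 0.0
    for i in range(iters):
        d = -1 if y >= 0 else 1               # rotate toward the x-axis
        x, y = x - d * y * 2**-i, y + d * x * 2**-i
        angle -= d * math.atan(2**-i)
    # Undo the accumulated CORDIC gain to recover the true magnitude.
    K = math.prod(math.cos(math.atan(2**-i)) for i in range(iters))
    return x * K, angle

mag, ang = cordic_vectoring(3.0, 4.0)
print(round(mag, 4), round(ang, 4))  # 5.0 0.9273  (= atan2(4, 3))
```

In the QR decomposition, one such vectoring cell computes the Givens rotation angle that zeroes a matrix entry, and rotation-mode cells downstream apply that angle to the remaining columns, which maps naturally onto a systolic array.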
Metric Continuation-Passing Semantics for Multiparty Interactions
ABSTRACT. We explore the abstractness of semantic models for concurrent languages designed using continuation-passing style. By employing techniques from metric semantics and continuations for concurrency, we establish a formal relationship between denotational semantics and operational semantics for a concurrent language based on Hoare’s CSP. We extend CSP by introducing a mechanism for communication and synchronization across multiple channels, which supports multiparty synchronous interactions among an arbitrarily large (finite) number of concurrent processes. It is demonstrated that the denotational semantics is weakly abstract with respect to the corresponding operational semantics.
Effective Translation of Algebraic PNs with Active Tokens in Maude: a Model for Adaptive Systems
ABSTRACT. Reisig’s Algebraic (or SPEC-inscribed) Petri nets
are considered among the most robust and advanced high-level
Petri net formalisms ever proposed. However, few rigorous
implementations based on an algebraic framework have been
derived from this model. We present an effective
implementation of algebraic Petri nets (APN) utilizing Maude, a
declarative language characterized by rewriting logic semantics.
We contend that our approach is the first systematic utilization
of Maude as an effective rewriting engine for APNs. By explor-
ing two distinct definitions, we address the nuanced modeling
challenges arising from Maude’s operational semantics based
on pattern-matching. We further demonstrate the advantage of
utilizing rewritable terms as active tokens, a feature naturally
supported by Maude. The paper presents simple examples
throughout, with an advanced example of adaptive Multilevel
Feedback Queue scheduling.
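The token game that such a Maude encoding rewrites can be sketched (in Python rather than Maude, with an invented marking) as multiset rewriting: a transition consumes tokens from its input places and produces tokens in its output places.

```python
from collections import Counter

def fire(marking, consume, produce):
    """Fire one transition on a marking (a multiset of place -> count)."""
    if any(marking[p] < n for p, n in consume.items()):
        return None                      # transition not enabled
    m = Counter(marking)
    m.subtract(consume)                  # remove input tokens
    m.update(produce)                    # add output tokens
    return m

m0 = Counter({"ready": 2, "cpu": 1})
m1 = fire(m0, {"ready": 1, "cpu": 1}, {"running": 1})
print(dict(m1))  # {'ready': 1, 'cpu': 0, 'running': 1}
```

In the paper's setting the tokens are not plain counters but rewritable Maude terms ("active tokens"), so firing can also rewrite the token's own internal state, which this stdlib sketch does not capture.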
ABSTRACT. Challenges in digital forensics often arise from the need for local analysis due to highly sensitive data and the examination of malicious software. GView is an open-source, text-based user interface (TUI) framework that emphasizes plugin-driven analysis and visualization. In this paper, we propose an architecture for GView that enables secure remote execution and display by transmitting only the character-based output, while maintaining the TUI during interaction. Our approach offers a low-latency, encrypted alternative to SSH or remote desktop systems, optimized for forensic workflows. We present the architecture, implementation, and evaluation of the system.
ABSTRACT. A vast and ever-growing volume of Earth observation data is made freely available through initiatives like the ESA Copernicus programme. While these datasets—such as Sentinel-1 and Sentinel-2 imagery—offer immense potential for environmental monitoring, land use assessment, and climate analysis, efficiently accessing and processing them at scale remains a technical challenge. In this tutorial, we demonstrate how STAC (SpatioTemporal Asset Catalog) can serve as a powerful interface for organizing and discovering geospatial assets, and how it can be combined with modern open-source tools such as Dask, Xarray, and Zarr to enable scalable data processing. We will walk participants through workflows that range from querying STAC APIs and loading analysis-ready EO data to performing cloud masking, mosaicking, and time series analysis, all within distributed computing environments (i.e. Dask). The session concludes by showcasing how this pipeline can be extended to machine learning tasks such as land cover classification and deforestation detection, offering participants a robust foundation for building reproducible geospatial analytics workflows.
Metacognitive AI Model for Target Selection in Military Operations
ABSTRACT. In modern conflict environments, the responsible development and integration of AI (Artificial Intelligence) into targeting workflows demands both rigorous legal and ethical safeguards and transparent, intelligent decision making. Along these lines, this research proposes a metacognitive AI model, built as a computational ontology, for military target selection in human-AI teaming settings within military operations contexts. This entails building a knowledge representation model that encodes cognitive reasoning and metacognitive self-governance within a unified formal framework. At the cognitive layer, aspects such as agents’ belief states, effect assessment, and selection decisions are captured and modelled to enforce the principle of distinction through its four criteria (i.e., nature, location, purpose, and use), thereby guaranteeing that every selection decision selects only lawful military targets. At the metacognitive layer, belief thresholds for capturing and handling uncertainty, self-monitoring modules, and human-review triggers are established to ensure that high-uncertainty inferences automatically invoke human oversight and that AI autonomy is adaptively regulated. To this end, a series of core axioms are proposed and, for demonstration purposes, a Cyber Operation use case is hypothetically conducted in which an AI agent identifies, classifies, and autonomously selects a hostile SCADA command node while reflexively enlisting a human agent when the uncertainty belief threshold is exceeded. In conclusion, the proposed model contributes to ongoing efforts to build metacognitive, human-AI-teaming, and responsible AI decision-support systems that bind legal and ethical dimensions through intelligent modelling.
Zonal Estimation Method for Offline Travel Time Prediction in Sparse Data Environments
ABSTRACT. Accurate travel time estimation plays a pivotal role in modern navigation, logistics, and urban transportation planning. Nevertheless, many practical environments face constraints in data availability, leading to sparse or incomplete historical records that are insufficient for large-scale machine learning approaches. In such contexts, conventional travel time estimation models tend to underperform or fail, given their reliance on dense, long-term datasets or continuous sensor inputs.
In this paper, we present a Zonal Estimation Method designed to operate effectively in sparse data environments. The method organizes the urban space into concentric zones derived from a Gaussian Mixture Model (GMM) applied to the spatial distribution of points of interest (POIs), ensuring that the city’s concentric developmental patterns are captured. Each zone is then discretized into a grid of hexagonal cells, an approach that facilitates coherent spatial representation and local averaging of speed profiles. By incorporating time-based aggregation procedures, particularly a scheme that distinguishes weekdays from weekends, the method accommodates temporal fluctuations without over-segmenting the data. This combination of zonal and temporal structuring makes it feasible to obtain stable speed estimates and provide reasonably accurate travel time predictions using datasets that extend over short periods of only a few days.
We evaluated our method on a set of approximately 23,000 route traces collected within a ten-day interval in a mid-sized European city. Although the available data are comparatively limited, our approach achieves a mean absolute percentage error (MAPE) as low as 12.91%, achieving competitive accuracy compared to widely-used commercial APIs like Google Maps. The architecture of the method is simple to implement and is especially well-suited for early-stage transportation analytics, data-constrained urban environments, or fallback situations in which real-time data streams cannot be maintained.
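A minimal sketch of the zonal idea follows; the zone radii, per-zone speeds, and route are invented, and the paper's zones come from a GMM over POIs discretized into hexagonal cells rather than fixed radii:

```python
# Hypothetical concentric zones around a city center with per-zone,
# per-period average speeds; travel time is summed per route segment.
ZONE_RADII_KM = [2.0, 5.0, 10.0]            # outer boundary of each zone
SPEEDS_KMH = {"weekday": [18, 30, 50],      # avg speed per zone
              "weekend": [25, 38, 55]}

def zone_of(dist_km):
    for i, r in enumerate(ZONE_RADII_KM):
        if dist_km <= r:
            return i
    return len(ZONE_RADII_KM) - 1           # beyond last ring: outermost zone

def travel_time_h(segments, day_type):
    """segments: list of (distance_from_center_km, segment_length_km)."""
    return sum(length / SPEEDS_KMH[day_type][zone_of(d)]
               for d, length in segments)

route = [(1.0, 1.5), (3.0, 2.0), (7.0, 4.0)]
print(round(travel_time_h(route, "weekday") * 60, 1))  # 13.8 (minutes)
```

The weekday/weekend split mirrors the time-based aggregation described above: coarse enough to keep estimates stable on only a few days of data, yet fine enough to capture the main temporal fluctuation.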
Age-Related Differences and Machine Learning-Based Insights in Patients with Non-Functioning Adrenal Adenomas
ABSTRACT. Adrenal incidentalomas, although frequently non-functional in hormonal terms, have been increasingly associated with subtle metabolic alterations and low-grade systemic inflammation. This study proposes a machine learning-based predictive framework for identifying patients with elevated high-sensitivity C-reactive protein (hsCRP), a surrogate marker of systemic inflammation, using only routine clinical and laboratory data from patients diagnosed with non-functioning adrenal adenomas (NFAs). The dataset, provided by the Faculty of Medicine and Pharmacy of Timișoara, comprises 150 patients admitted between 2018 and 2024. After stratifying patients by hsCRP levels, we addressed the class imbalance using Borderline-SMOTE, which selectively synthesizes new data points near the decision boundary to improve model sensitivity. We trained a lightweight multilayer perceptron (MLP) neural network and applied leave-one-out cross-validation (LOOCV) to ensure robust evaluation, achieving an accuracy improvement of over 14% compared to classical algorithms such as Logistic Regression, Random Forest, and Support Vector Machine. To enhance interpretability and support clinical decision-making, we employed SHAP (SHapley Additive exPlanations), a model-agnostic technique that quantifies the individual contribution of each input feature to the model’s predictions. The SHAP analysis identified GENDER, LDL_CHOLESTEROL>100, and EOSINOPHIL_PERCENTAGE among the most influential predictors. These findings align with known clinical mechanisms and suggest that systemic inflammation in NFAs may arise independently of obesity alone, reflecting a more complex interplay between lipid metabolism and adrenal function. Our approach not only achieves strong predictive performance but also provides visual and quantitative insights that can be easily interpreted by clinicians. 
The integration of explainable AI into endocrine risk modeling offers a promising step toward personalized patient management and supports future investigations into metabolic phenotyping in adrenal pathology.
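The leave-one-out protocol used for evaluation can be sketched in a few lines, with toy data and a 1-nearest-neighbour stand-in for the MLP; nothing here is from the study's dataset:

```python
# Leave-one-out cross-validation: each sample is held out once and
# predicted from a model fit on all remaining samples.
def loocv_accuracy(X, y, predict):
    hits = 0
    for i in range(len(X)):
        train = [(x, t) for j, (x, t) in enumerate(zip(X, y)) if j != i]
        hits += predict(train, X[i]) == y[i]
    return hits / len(X)

def nn_predict(train, x):
    # 1-nearest-neighbour by squared Euclidean distance (MLP stand-in).
    return min(train, key=lambda p: sum((a - b) ** 2
                                        for a, b in zip(p[0], x)))[1]

X = [(1, 1), (1, 2), (8, 8), (9, 8)]
y = [0, 0, 1, 1]
print(loocv_accuracy(X, y, nn_predict))  # 1.0
```

With only 150 patients, LOOCV makes maximal use of the data at the cost of fitting the model once per sample, which is affordable for a lightweight MLP.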
Using GIS and AI for large-scale orientation analysis of remote heritage sites
ABSTRACT. Around the world there are remote heritage sites that cannot be investigated in person due to their location and size. Examples include stone structures in Africa or the Middle East. Some of these structures exhibit main axes that point in certain directions, and by analysing their orientation with respect to the skyscape, we can gain insights into the customs of past cultures. In this tutorial, we present how QGIS can be used together with AI models and image processing to automate the large-scale analysis of such sites.
Symbolic Computation, Standardisation, and the EU AI Act
ABSTRACT. The EU AI Act is one of the most significant pieces of AI legislation in the world. This is accepted, even if reluctantly, by all in AI. What is not often realised is that this is written as product safety legislation, a point of view that is unfamiliar to many in AI.
Like other product safety legislation, it is written in fairly general terms, using words such as “unbiased”, leaving it to standards to describe what this actually means in detail. Again, this is unfamiliar to many in AI. We will therefore first look at the Act, and the European standardisation process. In particular, we will look at the various actors in this process.
Then we will ask what this has to do with Symbolic Computation: it is the general view that “Artificial Intelligence” as commonly understood, i.e. Machine Learning, has nothing to do with symbolic computation. “General”, but not “universal”. In fact, there is a community applying some of the tools of symbolic computation to prove (as opposed to claim, occasionally supported by statistics) facts about certain AI applications. In particular, this community asks, and partially answers, the “who shall verify the verifications” conundrum that the standard symbolic computation paradigm is also facing.
ABSTRACT. In 1988 Davenport-Heintz showed that Real (polynomial) quantifier elimination had a doubly-exponential (in the number of variables) worst case.
In 2007, Brown-Davenport had a simpler, in particular purely linear, explanation of how Real (polynomial) quantifier elimination had a doubly-exponential (in the number of variables) worst case.
Here we explore how a linear construction can actually produce examples that are that complicated.
If the double exponent is cn+O(1), where n is the number of variables, Davenport-Heintz had c=1/6, improved by a cunning trick to c=1/5. Brown-Davenport had c=1/3, which we improve to c=2/5 by a different technique.
Scaling Up Reachability Analysis for Rectangular Automata with Random Clocks
ABSTRACT. This paper presents optimizations to improve the scalability of reachability analysis on a subclass of hybrid automata extended with stochasticity.
The optimizations target different components of the analysis, such as quantifier elimination for state set projection, and automated parameter selection during the numerical integration. Most importantly, whereas the original method combines forward and backward reachability, we show that the usage of backward reachability is optional for computing maximal reachability probabilities.
Fraction-Free Comprehensive LU Decomposition and Variants
ABSTRACT. We study the fraction-free comprehensive PLU decomposition of rectangular matrices with polynomial coefficients, depending on parameters, possibly subject to algebraic constraints. Computing such a parametric decomposition typically requires splitting the computation into various cases.
This yields several computational challenges: in particular, more cases than necessary may be generated, and the output may be more or less legible to an end-user, depending on possible simplifications.
In this paper, we present variants of comprehensive LU decomposition
and evaluate their benefits in a comparative experimentation,
using a number of test problems coming from the literature.
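Fraction-free elimination, the building block of such a decomposition, can be sketched with the classic Bareiss recurrence on a toy integer matrix; the paper's parametric, case-splitting version is far more general:

```python
# Bareiss fraction-free elimination: every division is exact, so all
# intermediate entries stay in the coefficient ring (here: integers).
def bareiss(M):
    M = [row[:] for row in M]
    n, prev = len(M), 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact division by the previous pivot (Bareiss recurrence).
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M  # for a square matrix, M[n-1][n-1] is the exact determinant

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
print(bareiss(A)[2][2])  # 4 == det(A)
```

With polynomial entries depending on parameters, each pivot may vanish on part of the parameter space, which is exactly what forces the case splits discussed above.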
Study on Symbolic Regression Approaches: Balancing Accuracy, Interpretability, and Runtime
ABSTRACT. Symbolic regression (SR) is a machine learning technique used to uncover mathematical relationships in data by generating interpretable expressions. While several SR methods have been developed—ranging from evolutionary algorithms to neural-symbolic models—there remains a lack of standardized benchmarks that assess both prediction accuracy and interpretability. In this study, we present a comparative evaluation of four SR tools: GPlearn, PySR, Fast Function Extraction (FFX), and a neural network-based symbolic regressor. These methods are tested on three synthetic and two real-world datasets. Evaluation criteria include the coefficient of determination (R²), runtime, and expression interpretability. Results reveal that PySR consistently achieves higher accuracy, while FFX and GPlearn often produce more interpretable expressions. We also highlight challenges such as inconsistent input feature usage, runtime variability, and the need for clearer interpretability metrics. This work aims to support practitioners in selecting SR tools based on practical trade-offs and encourages further development of standardized evaluation frameworks for SR.
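The accuracy criterion R² used in the comparison is easy to state concretely; the values below are toy data, not from the benchmark:

```python
# Coefficient of determination: 1 minus the ratio of residual error to
# the variance of the targets; 1.0 is a perfect fit, 0.0 matches the mean.
def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(round(r2(y_true, y_pred), 3))  # 0.98
```

Unlike R², the study's other two criteria (runtime and expression interpretability) have no single agreed formula, which is the standardization gap the abstract points to.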
Leveraging a Small Gemma Model to Predict Movie Ratings from MovieLens Dataset: Preliminary Results
ABSTRACT. In this paper we evaluate the recommendation capabilities of Google’s Gemma 2B model on GroupLens’s MovieLens 32M Dataset. We are specifically interested in assessing the in-context learning capabilities of the model, since the prompt is designed such that it requires the user’s rating history, the genres the user likes and dislikes, and the list of movies used for the prediction of the user’s rating. As evaluation measures, we used Normalized Discounted Cumulative Gain (NDCG), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE) to compare the predicted ratings with the actual ratings. Due to the large size of the dataset, so far we have only selected users that provided exactly 20 ratings. This facilitates the inclusion of the top 15 rated movies in the suitably designed prompt for in-context learning, enabling the prediction of the ratings of the remaining 5 movies that the user watched.
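Of the three measures, NDCG is the least standard to implement by hand; a minimal sketch with toy ratings and invented movie IDs:

```python
import math

def dcg(rels):
    """Discounted cumulative gain of a list of relevances, best first."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(actual_by_item, predicted_order):
    """Compare a predicted ranking against the ideal rating-sorted one."""
    ideal = sorted(actual_by_item.values(), reverse=True)
    gained = [actual_by_item[i] for i in predicted_order]
    return dcg(gained) / dcg(ideal)

actual = {"m1": 5, "m2": 3, "m3": 4}
print(round(ndcg(actual, ["m1", "m3", "m2"]), 3))  # 1.0 (perfect ranking)
```

RMSE and MAE, by contrast, score the predicted rating values directly, so together the three measures cover both ranking quality and pointwise accuracy.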
Visualization of Training and Testing Results for Solving Mountain Car Problem Using Q-learning
ABSTRACT. In this paper, we focus on exploring various visualizations for solving the Mountain-Car Problem. We provide insights into how the visualization of training, testing, and comparative analysis of models can help gather more information about the efficiency of the chosen parameters used to train agents. The experimental results of the configurations tested are presented in visually intuitive ways, from simple plots to 3D Action Maps highlighting the certainty of agent movements.
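The tabular Q-learning rule behind the trained agents can be sketched on a toy two-state chain; the hyperparameters and environment here are invented, whereas the paper's environment is Mountain-Car:

```python
import random

# Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
ALPHA, GAMMA = 0.5, 0.9
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(s, a):                    # action 1 moves right; state 1 is terminal
    if s == 0 and a == 1:
        return 1, 1.0, True        # next_state, reward, done
    return 0, 0.0, False

for _ in range(50):                # 50 training episodes, random exploration
    s, done = 0, False
    while not done:
        a = random.choice([0, 1])
        s2, r, done = step(s, a)
        target = r + (0 if done else GAMMA * max(Q[(s2, b)] for b in (0, 1)))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
print(Q[(0, 1)] > Q[(0, 0)])  # True: the agent learned moving right is better
```

The 3D Action Maps mentioned above visualize exactly this learned table: for each discretized (position, velocity) state, the action with the largest Q-value and how far it exceeds the alternatives.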
An Agent-based Architecture for Modeling and Analyzing IoT-Edge-Cloud Continuum Systems
ABSTRACT. The explosive growth of the Internet of Things (IoT) has given rise to systems that are increasingly heterogeneous, resource-constrained, and latency-sensitive. Traditional cloud-centric architectures struggle to meet the real-time demands of such systems, motivating a shift toward edge computing. However, edge systems themselves require greater autonomy, adaptability, and intelligence to manage decentralized tasks efficiently. In this paper, we propose a layered agent-based architecture for Edge–Cloud–IoT systems that integrates resource-aware control loops and multi-agent coordination across sensing, edge, and cloud nodes. Each node embeds an autonomous agent executing a MAPE-K control loop, enabling dynamic task offloading, resource monitoring, and decentralized decision-making. We formalize system topology as a layered graph and model key resources such as energy, memory, and bandwidth. A multi-objective cost function guides task delegation based on feasibility and performance trade-offs. The proposed model is applied to a localization-based use case involving realistic hardware profiles and task generation dynamics.
Free Energy Landscape and Markov State Model Analyses as Benchmarks for Molecular Dynamics Large Language Models of Protein-Ligand Interactions
ABSTRACT. All-atom explicit solvent molecular dynamics simulation (MDS) is a very important methodology for quantitatively elucidating the interactions of biomolecular systems. However, the time scale of MDS is still limited to milliseconds or even microseconds, due to the tremendous computational cost of such simulations. The recent rapid development of large language models in the field of artificial intelligence has allowed us to see not only their wide application in the field of natural language processing, but also significant breakthroughs in structural biology. In this work, we depicted the free energy landscape and Markov state model analyses of protein conformations generated by all-atom explicit solvent MDS of the adenosine A2A receptor, a ligand-activated G protein-coupled receptor, in complex with different ligands in the 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid bilayer. With the constructed free energy landscapes and Markov state models, the quality of the large language models of the molecular dynamics of protein-ligand interactions could be quantitatively benchmarked.
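At its core, the Markov-state-model side of such an analysis reduces to estimating a transition matrix from a state-discretized trajectory; a stdlib sketch with a toy two-state trajectory (not MDS data) follows:

```python
# Estimate a row-stochastic transition matrix by counting observed
# state-to-state transitions at lag 1 and normalizing each row.
def transition_matrix(traj, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj, traj[1:]):
        counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

traj = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
T = transition_matrix(traj, 2)
print(T)  # [[0.6, 0.4], [0.5, 0.5]]
```

Real MSM pipelines additionally cluster conformations into states, choose a lag time, and enforce detailed balance, but the benchmark idea is the same: generated trajectories should reproduce the reference transition probabilities and free energy basins.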
MTL-LungNet: A Unified Model for Nodule Segmentation and Malignancy Classification
ABSTRACT. Lung cancer is the leading cause of cancer-related deaths globally, with early and accurate detection being crucial for patient survival. While computed tomography (CT) scans are instrumental in identifying pulmonary nodules, current deep learning models typically address segmentation and malignancy classification as separate tasks, potentially overlooking the mutual benefits of joint optimization. In this work, we propose MTL-LungNet, a novel multi-task learning framework that simultaneously performs nodule segmentation and malignancy classification using CT data. By leveraging shared representations and a segmentation-guided classification strategy, our model captures both spatial precision and semantic context, enhancing diagnostic accuracy. We validate our approach on the LIDC-IDRI dataset, employing standardized preprocessing techniques and balanced data splits. The proposed architecture integrates SwinUNETR for segmentation and a ResNet-based classifier with an attention-like mechanism that emphasizes clinically relevant regions. Experimental results demonstrate that MTL-LungNet achieves better segmentation performance than single-task baselines and provides malignancy prediction capabilities, emphasizing the promise of multi-task learning for robust, interpretable, and efficient lung cancer diagnosis.