SYNASC 2025: 27TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING
PROGRAM FOR MONDAY, SEPTEMBER 22ND

08:00-09:00 Session 1: Registration

The registration desk will be on the ground floor of ICAM (Oituz, 4)

09:20-10:10 Session 3: Invited talk 1
Location: A102
09:20
The Making of Machine Minds: A Retrospective on AI

ABSTRACT. Artificial Intelligence has traveled a remarkable path since its conceptual roots in the 1950s, evolving from symbolic reasoning and rule-based systems to the data-driven, learning-centric approaches that power today’s cutting-edge technologies. This talk traces the major milestones and paradigm shifts that have shaped the development of AI over the past seven decades. We’ll explore the early era of logic and expert systems, the “AI winters” that tested the field’s resilience, and the emergence of machine learning and neural networks that redefined what machines can do. Along the way, we’ll highlight key breakthroughs, the individuals and institutions that propelled progress, and the social and technological forces that influenced each stage. By understanding this historical trajectory, we gain deeper insight into where AI stands today – and where it may be headed next.

10:30-11:20 Session 4: Invited talk 2
Location: A102
10:30
Explainability in AI-Driven Medical Diagnostics

ABSTRACT. As artificial intelligence becomes increasingly integrated into medical diagnostics, ensuring transparency and trust in AI-driven decisions is crucial. This talk explores the challenges and advancements in explainability for AI models analyzing medical images. We will discuss methods for interpreting AI predictions, the role of explainability in clinical decision-making, and strategies to bridge the gap between AI systems and healthcare professionals. By enhancing the interpretability of AI diagnostics, we can improve physician confidence, patient trust, and the overall reliability of AI-assisted medical imaging.

11:40-13:00 Session 5A: Artificial Intelligence track (1)
Location: A102
11:40
Exploring Compression as a Proxy for Mineability in LLM-Generated Text

ABSTRACT. Recent advancements in large language models (LLMs) have led to a surge of interest in evaluating the quality and usability of their outputs, particularly for information extraction and downstream analytics. However, existing evaluation methods often rely on costly human judgments or complex task-specific metrics. This paper investigates whether compression rate—a simple, model-agnostic measure—can serve as a proxy for the mineability of LLM-generated text. We generate text using varying sampling parameters (temperature and top-p) across two prompt types: product reviews and artifact descriptions. We compute compression rates using ZIP and Huffman algorithms, and evaluate mineability through perplexity based on n-gram language models (n = 2 to 8), both with and without stop-word removal. Our results reveal a consistent inverse correlation between compression rate and perplexity. This relationship strengthens with higher-order n-grams and with stop-word removal, suggesting that compression captures underlying structure and predictability aligned with mineability. We further observe that ZIP compression is more sensitive to parameter changes than Huffman, and that artifact prompts yield more consistent patterns than product reviews. These findings support the use of compression as a lightweight indicator of structure and mineability in generated text.
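
As a concrete illustration, here is a minimal sketch (toy strings, not the paper's data) of the two quantities being correlated: a ZIP-family compression rate via zlib, and an add-one-smoothed bigram perplexity, a simplification of the paper's n = 2 to 8 setup.

```python
import math
import zlib
from collections import Counter

def compression_rate(text: str) -> float:
    """Compressed size over original size; lower means more redundancy/structure."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

def bigram_perplexity(text: str) -> float:
    """Add-one-smoothed bigram perplexity over whitespace tokens."""
    tokens = text.split()
    unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    vocab = len(unigrams)
    log_prob = sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        for a, b in zip(tokens, tokens[1:])
    )
    return math.exp(-log_prob / max(len(tokens) - 1, 1))

# A repetitive string compresses better and is more predictable than a novel one.
for t in ["the cat sat on the mat the cat sat", "colorless green ideas sleep furiously"]:
    print(f"rate={compression_rate(t):.2f}  perplexity={bigram_perplexity(t):.1f}")
```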

12:00
OCWhy: Retrieval-Augmented Question Answering over Open CourseWare Lectures
PRESENTER: Mihai Dascalu

ABSTRACT. Students frequently seek learning materials for tests, exams, and assignments. To facilitate this process, we developed a Retrieval Augmented Generation (RAG) system that combines a Large Language Model with a Knowledge Base, enabling efficient access to course-related information. This study evaluates the utility of information sources available through the Open CourseWare lectures from the Faculty of Automatics and Computer Science, POLITEHNICA Bucharest. Various retrieval system design choices were refined, including document chunking strategies and reranking approaches, while highlighting corpus limitations. Our experiments indicate that larger token window sizes, header-level reranking, and course-specific retrieval improve retrieval performance. The adequacy of the collected information was evaluated using multiple benchmarks, namely True/False statements and multiple-choice questions. Results show consistent performance improvements across most model and dataset combinations when using retrieval augmentation, with the highest gains observed in domain-specific technical content.
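
To illustrate one of the design choices mentioned above (this is a sketch, not the authors' code), the snippet below chunks a tokenized lecture into overlapping fixed-size token windows; the window and overlap sizes are arbitrary placeholders.

```python
def chunk_by_window(tokens: list[str], window: int = 512, overlap: int = 64) -> list[list[str]]:
    """Slide a token window across the document; larger windows keep more context."""
    step = window - overlap
    return [tokens[i:i + window] for i in range(0, max(len(tokens) - overlap, 1), step)]

doc = ("lecture text " * 1000).split()       # stand-in for a tokenized lecture
chunks = chunk_by_window(doc, window=256, overlap=32)
print(len(chunks), "chunks, first chunk holds", len(chunks[0]), "tokens")
```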

12:20
CrossRead: An NLP Pipeline for Identifying Similar News Articles Across Multiple Sources

ABSTRACT. The rate at which fake news articles are written and disseminated on social media is alarming, posing a significant threat to both national security and individual well-being. This paper helps mitigate the propagation of fake news by empowering individuals to fact-check articles against those from trustworthy news organizations, on a platform that helps them find alternative sources and identify discrepancies between the articles. The backbone of the solution is a three-stage processing pipeline that processes the article, fetches alternative sources, and generates the final similarity report. For fine-tuning the models in the pipeline, two new corpora were created, as no existing datasets were available for Romanian. One corpus consisted of synthetically generated search queries for a given article, whereas the second consisted of human annotations of pairs of news articles, labeled as either similar or not similar. Our pipeline, named CrossRead, enables users to easily compare sources and quickly fact-check articles while working reliably with articles in Romanian. The presented platform also constitutes an excellent base for a more feature-rich solution, with numerous improvements possible to assist its users in their search for the truth.

12:40
LLMic: Building a Romanian Foundation Language Model

ABSTRACT. Recent advances in Large Language Models (LLMs) have shown remarkable capabilities across various tasks, with commercial models leading the way. While open models usually operate at smaller scales due to constraints on available corpora and hardware resources, they maintain competitiveness through specialization and fine-tuning. However, a significant challenge persists: the under-representation of low-resource languages in open datasets results in weak model capabilities in many languages. In this paper, we document the complete process of pretraining a foundation model for Romanian, a low-resource language, including corpus construction, architecture selection, and hyperparameter optimization. As part of this work, we introduce FuLG, a hundred-fifty-billion-token Romanian corpus extracted from CommonCrawl, alongside a 3-billion-parameter bilingual model, LLMic. Our evaluation shows that it is worthwhile to train language-specific models for specialized tasks, achieving results comparable to other much larger open and closed models. We show that fine-tuning LLMic for language translation after the initial pretraining phase outperforms existing solutions in the English-to-Romanian translation task. We hope through this work to advance the standing of the Romanian language in the world of LLMs.

11:40-13:00 Session 5B: IAFP workshop (1)
Location: B022
11:40
Unsaturated versus saturated classes of contractive type mappings

ABSTRACT. We revisit some of the main results in our paper [Berinde, V.; Păcurar, M. Fixed point theorems for unsaturated and saturated classes of contractive mappings in Banach spaces. Symmetry 13 (2021), Article Number 713, https://doi.org/10.3390/sym13040713] in the light of the recent developments on enriched classes of mappings.

12:00
Covers of fractal interpolation surfaces with finite families of octahedrons

ABSTRACT. In a previous work (Chaos Solitons Fractals, 173 (2023), 113674), we presented a method for finding a finite family of closed balls whose union contains the attractor of a given iterated function system. In this paper, for the particular framework of fractal interpolation surfaces, we provide an improved version of it. This approach is computationally more efficient, as it is based on finding the maximum of certain sets, in contrast to the previous method, which uses a sorting algorithm.

12:20
On the continuous dependence of the attractors generated by mixed possibly infinite iterated function systems

ABSTRACT. In a previous paper, [On the fractal operator of a mixed possibly infinite iterated function system, Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat., 119 (2025), 31], we introduced the concept of a mixed possibly infinite iterated function system (mIIFS). Such a system comprises a possibly infinite family of Banach contractions and a possibly infinite family of nonexpansive functions which are Banach contractions, not on the whole space, but only on the orbits of the space's elements. As a consequence, the associated fractal operator turns out to be weakly Picard. Therefore, to every bounded and closed set C there corresponds a fixed point A. In this paper we prove the continuous dependence of the attractors of an mIIFS not only with respect to the associated set, but also with respect to the constitutive functions. In addition, we give an evaluation of the distance between the attractors of two mIIFSs.

12:40
A hierarchy of nonexpansivity and a natural transfer of fixed point properties

ABSTRACT. One proper setting for performing fixed point analysis of conventional nonexpansive mappings is provided by the Browder-Göhde theorem, which assumes the uniform convexity of the Banach space, as well as the closedness, convexity, and boundedness of the domain. In this specific framework, the aforementioned theorem guarantees the existence of fixed points, as well as the closedness of the fixed point set. Moreover, several constructive results were later added to this existence theorem, analyzing various iterative procedures and proving them weakly convergent (or, under additional assumptions such as compactness of the domain or property $(I)$, strongly convergent) to a fixed point.

This study provides an analysis of nonexpansive-related operators resulting from affine displacements of conventional nonexpansive mappings. In particular, a unified and simultaneous approach to both averaged mappings (a subclass of nonexpansive mappings) and enriched nonexpansive mappings (a superclass) is enabled, after unveiling an unexpected structural analogy and a certain duality between these two classes: (1) both result from affine displacements applied to nonexpansive mappings; (2) the reversed affine displacements applied to them generate nonexpansive mappings.

Moreover, we emphasize that the displacement technique creates a natural transfer-return dynamics between operators, which preserves important operator properties (such as the structure of the fixed point set, or condition $(I)$) and converts conventional iterative procedures related to nonexpansivity into adequate inertial procedures for displaced nonexpansive operators. The resulting equivalences allow us to take the fundamental results on fixed points of conventional nonexpansive mappings and transfer them to results on fixed points of averaged or enriched nonexpansive operators.

14:00-15:30 Session 6: Tutorial 1
Location: A102
14:00
Synthetic Data Generation with LLMs

ABSTRACT. This tutorial provides a technical overview of synthetic data generation using Large Language Models (LLMs), focusing on core methodologies and their integration. Synthetic data has become an essential tool for addressing key limitations in the availability, cost, and distributional coverage of manually annotated datasets. It enables scalable experimentation, facilitates data augmentation in low-resource settings, and supports iterative model refinement. The session begins with a discussion of generation methods and filtering strategies designed to enforce quality constraints. Next, the tutorial examines practical use cases. These include alignment tuning, where synthetic datasets are used to steer model behavior; inference-time augmentation, where generated exemplars support few-shot generalization or contextual adaptation; and self-improvement workflows, where models contribute to their iterative training through synthetic supervision.
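
The following is a compact sketch of the generate-then-filter loop the tutorial covers; the generate function is a hypothetical stand-in for any LLM completion call, and the filters are simple length and deduplication heuristics, far weaker than the quality constraints discussed in the session.

```python
import random

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; returns a toy synthetic example."""
    return f"Q: {prompt} A: example answer {random.randint(0, 99)}"

def passes_filters(example: str, seen: set[str]) -> bool:
    """Enforce minimal quality constraints: a length floor plus deduplication."""
    return len(example.split()) >= 5 and example not in seen

dataset: list[str] = []
seen: set[str] = set()
while len(dataset) < 10:
    candidate = generate("What is synthetic data?")
    if passes_filters(candidate, seen):
        dataset.append(candidate)
        seen.add(candidate)
print(len(dataset), "synthetic examples kept")
```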

15:50-17:50 Session 7A: Artificial Intelligence track (2)
Location: A102
15:50
PyStash: Retrieval-Augmented Generation Pipeline Context Aware Fine Tuning

ABSTRACT. Retrieval-Augmented Generation (RAG) pipelines improve the factual consistency of large language model (LLM) outputs by grounding responses in external documents. However, most existing RAG implementations rely on fixed retrieval settings, which cannot adapt dynamically to query complexity, user intent, or document structure. This work presents PyStash, an extensible, modular RAG platform that integrates a novel context-aware optimisation algorithm for adaptive retrieval parameter tuning. The system automatically generates synthetic question-answer pairs from the user-provided corpus to optimise retrieval depth (top-k) and semantic similarity thresholds, enhancing the relevance and efficiency of context selection. PyStash supports document management, multi-model evaluation, directory-level isolation of configurations, and traceable chunk-level citations, all accessible through a graphical user interface. Experiments across multiple open-weight LLMs show that the proposed optimisation mechanism reduces inference latency and irrelevant references while maintaining or improving answer quality.
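
A hedged sketch of the kind of tuning described above, with assumed data shapes and scoring rather than PyStash's actual code: grid-search the retrieval depth (top-k) and a similarity threshold against synthetic query/gold-chunk pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
chunks = rng.normal(size=(50, 8))  # toy chunk embeddings
# synthetic QA stand-in: each query embedding is a noisy copy of its gold chunk
qa = [(chunks[i] + rng.normal(scale=0.1, size=8), i) for i in range(10)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(q, top_k, threshold):
    """Indices of the top_k most similar chunks whose similarity clears the threshold."""
    sims = sorted(((cosine(q, c), i) for i, c in enumerate(chunks)), reverse=True)
    return [i for s, i in sims[:top_k] if s >= threshold]

# pick the (top_k, threshold) pair that most often keeps the gold chunk
best = max(
    ((k, t) for k in (1, 3, 5, 10) for t in (0.0, 0.3, 0.5)),
    key=lambda p: sum(gold in retrieve(q, *p) for q, gold in qa),
)
print("best (top_k, threshold):", best)
```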

16:10
ContaGPT: A Domain-Adapted LLM for Romanian Financial and Accounting Applications
PRESENTER: Cezar Tudor

ABSTRACT. Large language models (LLMs) perform well across domains but often struggle in low-resource languages and specialized fields. We introduce ContaGPT, a Romanian domain-adapted LLM fine-tuned for financial and accounting-related tasks. Based on Microsoft’s Phi-4 model (14B parameters), ContaGPT was trained with Quantized Low-Rank Adaptation (QLoRA) on curated Romanian fiscal documents, Enterprise Resource Planning (ERP) manuals, and accounting guides.

To improve factual grounding, we integrated ContaGPT into a hybrid Retrieval-Augmented Generation (RAG) pipeline combining BM25 and embedding-based retrieval with reranking. Evaluation on 100 real-world Romanian financial queries—via both preference-based and metric-based human review—shows ContaGPT significantly outperforms the base Phi-4 in correctness, relevance, and user preference.

Despite its improved performance, ContaGPT remains efficient: it was trained on consumer GPUs and runs at 4-bit precision, requiring only 8GB of memory for inference. ContaGPT illustrates how practical, low-cost techniques can adapt open-source LLMs to low-resource languages and professional domains.
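
A minimal sketch of the hybrid retrieval idea under assumed inputs (the lexical and semantic scores below are stand-ins; a real pipeline would use a BM25 index and a sentence-embedding model):

```python
def hybrid_rank(lexical: dict[int, float], semantic: dict[int, float], alpha: float = 0.5) -> list[int]:
    """Rank document ids by a convex combination of max-normalized scores."""
    def norm(scores):
        top = max(scores.values()) or 1.0
        return {doc: s / top for doc, s in scores.items()}
    lex, sem = norm(lexical), norm(semantic)
    fused = {d: alpha * lex.get(d, 0.0) + (1 - alpha) * sem.get(d, 0.0)
             for d in set(lex) | set(sem)}
    return sorted(fused, key=fused.get, reverse=True)

# doc 2 scores well on both signals, so it ranks first
print(hybrid_rank({1: 3.2, 2: 2.9}, {2: 0.9, 3: 0.8}))
```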

16:30
Finetuned Llama-3 based Solution for Specific Information Retrieval with Enhanced Reliability

ABSTRACT. Our study investigates solutions for developing a cost-effective chatbot to be used by an organization in need of an automated question answering support system. Many organizations (in particular, universities) seek AI-based solutions capable of providing students with university life information in an automated manner. Often, however, these are based on complex architectures and may be prohibitive for institutions with limited resources. In our study, we examine how different models, context configurations, and personas (instructions meant to guide the system in terms of the answer's purpose or tone) affect customer satisfaction with the answers. We evaluated three Llama-3 variants fine-tuned across different combinations of training parameters, extra context provision, and personas. Our solution focused on university applicability, namely for the Technical University of Cluj-Napoca. Its performance was measured through customer satisfaction scores obtained from university students who interacted with the chatbot and rated their experience. Results demonstrate that fine-tuned models with 1000 iterations achieved up to 85% customer satisfaction when combined with both extra context and persona features, compared to about 25% for the base model alone. These findings suggest that effective specialized chatbots can be implemented without complex Retrieval Augmented Generation architectures, providing practical guidance for companies considering deployment. Strategic model selection and prompt engineering techniques can achieve high customer satisfaction while maintaining implementation simplicity, cost-effectiveness, and replicability.

16:50
Source Code Metrics and LLMs Summaries: Do Correlations Exist?

ABSTRACT. Source code metrics help developers assess complexity and maintainability. Large language models (LLMs) can generate code summaries, but it remains unclear whether summary length reflects structural properties. This study explores correlations between source code metrics and the number of words produced by a large language model when summarizing code. We examine data from a suite of systems to search for patterns of positive and negative correlations. We present evidence that there are system-specific relationships, but no single pattern holds across all systems.
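
The study's basic measurement can be sketched as follows, with made-up numbers standing in for real metric values and summary lengths (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Pearson's r

complexity = [1, 4, 7, 2, 9, 5]           # per-method metric values (toy data)
summary_words = [12, 30, 55, 18, 60, 41]  # words in each method's LLM summary

r = correlation(complexity, summary_words)
print(f"Pearson r = {r:.2f}")  # system-specific; the paper finds no universal pattern
```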

17:10
A visual comparison between Neutral Networks and Schemata dynamics in Genetic Algorithms

ABSTRACT. In this paper, we study how the theoretical perspectives of the Schema Theorem [Holland, 1975, 1992] and Neutral Networks [Kimura, 1968; Aguirre et al., 2009] applied to Genetic Algorithms (GA) agree or differ. In both cases, we use deterministic 2D visualisations to display the distribution of neutral networks and schemata within a generation, and view their changes across successive generations. For easier result comparison, we use four well-known numerical benchmark functions (De Jong's first, Rastrigin's, Schwefel's, and Michalewicz's).

Because the number of schemata increases exponentially with chromosome length, we study only small problem instances; however, combining this limitation with deterministic visualisations affords us visual interpretability.

We observe how algorithm convergence concentrates both schemata and neutral networks into certain patterns within the population (and within genotype space), how they explore and switch between optima, and note some covariance in the counts of population-instanced schemata and neutral networks, suggesting a partial overlap between what the two theories measure.
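
One quantity the visualisations track, the number of individuals instantiating a schema, can be sketched as follows on a toy binary population ('*' is the usual wildcard symbol):

```python
def matches(schema: str, chromosome: str) -> bool:
    """A chromosome instantiates a schema if it agrees on every fixed position."""
    return all(s in ("*", c) for s, c in zip(schema, chromosome))

population = ["10110", "10010", "11111", "10111"]
schema = "1*1*0"
count = sum(matches(schema, ind) for ind in population)
print(f"schema {schema} instanced by {count} of {len(population)} individuals")
```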

17:30
Do Language Models Help or Harm? The Role of LLM-Augmented Explanations in Human-AI Image Classification Tasks

ABSTRACT. As large language models (LLMs) are increasingly integrated into explainable AI (XAI) pipelines, there is growing interest in whether their fluent, human-like explanations improve or hinder decision-making in AI-assisted tasks. In this study, we examine how LLM-generated narrative explanations affect user understanding, confidence, and accuracy in a human–AI computer vision setting. Participants completed a fine-grained image classification task involving dog breeds, supported by either visual-only explanations (Grad-CAM) or visual + narrative explanations generated using GPT-4o. Using a 2×2 within-subjects design, we evaluated the effects of explanation type and model correctness on participant agreement with the AI, confidence ratings, decision accuracy, and confidence–accuracy calibration. Our results reveal a double-edged effect: narrative explanations increased confidence—especially when the model was correct—but did not improve overall accuracy. Critically, participants were more likely to accept incorrect predictions when a narrative explanation was present, suggesting a risk of overtrust. These findings highlight the persuasive power—but also potential pitfalls—of LLM-augmented explanations in vision tasks.

15:50-17:50 Session 7B: IAFP workshop (2)
Location: B022
15:50
A common fixed point theorem in strictly convex Fuzzy metric spaces

ABSTRACT. In this paper we prove the existence of a common fixed point for two self-mappings defined on a convex, compact subset of a strictly convex fuzzy metric space, satisfying a nonlinear-type condition. This result holds for a wide class of mappings, including non-expansive mappings.

16:10
Explainable Machine Learning Models with SHAP for Microwave-Assisted Improved Extraction of Anticancer Nimbolide from Azadirachta indica Leaves

ABSTRACT. This study leverages explainable machine learning (ML) models integrated with SHapley Additive exPlanations (SHAP) to optimize and interpret the microwave-assisted extraction (MAE) of nimbolide, a potent anticancer compound, from Azadirachta indica (neem) leaves. Key extraction parameters—solid/liquid ratio, microwave power, and extraction time—were analyzed using ML algorithms to predict nimbolide yield. SHAP values provided transparent insights into feature importance and interactions, revealing that microwave power and extraction time were the most influential factors. The optimized model achieved high predictive accuracy, aligning with experimental validation under ideal conditions (1:16 g/mL solid/liquid ratio, 280 W power, 22 min time). Subsequent purification via preparative thin-layer chromatography (PTLC) yielded nimbolide with >98% purity. This approach not only enhances extraction efficiency but also offers interpretability, bridging the gap between data-driven optimization and mechanistic understanding for bioactive compound isolation.
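
A hedged sketch of this SHAP workflow on fabricated data (the random-forest model choice and the synthetic yield function are assumptions for illustration, not the study's):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# columns: solid/liquid ratio (g/mL denominator), microwave power (W), time (min)
X = rng.uniform([8, 100, 5], [24, 400, 40], size=(200, 3))
y = 0.02 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=200)  # toy yield

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# mean absolute SHAP value per feature serves as a global importance score
print(dict(zip(["solid/liquid", "power", "time"], np.abs(shap_values).mean(axis=0))))
```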

16:30
Fixed Points of Enriched Mappings with General Real Constants

ABSTRACT. In 2019, Berinde introduced the concept of enriched contractions as a new type of mapping that builds on classical fixed point theory.

Definition 1. Let $(X, \|\cdot\|)$ be a linear normed space. A mapping $T: X \to X$ is said to be an enriched contraction if there exist $b \in [0, \infty)$ and $\theta \in [0, b+1)$ such that $\|b(x - y) + Tx - Ty\| \leq \theta \|x - y\|$ for all $x, y \in X$.

To make the constants in the above condition explicit, we also call $T$ a $(b, \theta)$-enriched contraction.

This class of mappings includes both the well-known Picard-Banach contractions and some nonexpansive mappings. An important result is that every enriched contraction has a unique fixed point, which can be approximated using the Krasnoselskij iteration.

In this paper, we extend the definition by allowing $b \in \mathbb{R}$ instead of $b \in [0, \infty)$. This allows the condition to cover both contractions and nonexpansive mappings. An example is provided to demonstrate a mapping that satisfies the extended condition but not the original one.
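
For illustration (not taken from the paper), the sketch below runs the Krasnoselskij iteration $x_{n+1} = (1 - \lambda)x_n + \lambda T x_n$ with $\lambda = 1/(b+1)$ on $Tx = -2x$, which is a $(2, \theta)$-enriched contraction on the real line even though $T$ itself is not a Banach contraction.

```python
def krasnoselskij(T, x0: float, lam: float, tol: float = 1e-12, max_iter: int = 10_000) -> float:
    """Iterate the averaged map (1 - lam) * x + lam * T(x) until it stabilizes."""
    x = x0
    for _ in range(max_iter):
        x_next = (1 - lam) * x + lam * T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

T = lambda x: -2.0 * x  # fixed point at 0; |Tx - Ty| = 2|x - y|, so not a contraction
b = 2.0                 # with b = 2: |b(x - y) + Tx - Ty| = 0 <= theta |x - y|
print(krasnoselskij(T, x0=1.0, lam=1.0 / (b + 1)))  # converges to 0.0
```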

16:50
Explainable Machine Learning Models with SHAP for Distillation and Extraction from Siam Cardamom

ABSTRACT. This study presents an enhanced approach for modeling and interpreting the distillation and extraction of essential oils from Siam cardamom (Amomum krervanh) using a third-degree polynomial regression model and SHapley Additive exPlanations (SHAP). The experimental data, originally fitted to quadratic equations, were reanalyzed using a third-degree polynomial regression model to capture more complex relationships among key processing parameters. The updated model achieved a higher coefficient of determination (R²), indicating improved predictive accuracy. To elucidate the influence of each parameter on the oil yield, SHAP values were computed, providing a transparent understanding of the contributions of variables such as extraction time, material-to-water ratio, and microwave power. The findings offer valuable insights for optimizing the extraction process, ensuring both efficiency and transparency in essential oil production from Siam cardamom.
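
The modeling step can be sketched as follows on fabricated data: fit quadratic and cubic polynomials and compare their coefficients of determination (the data-generating function below is an assumption for illustration only).

```python
import numpy as np

x = np.linspace(0, 10, 40)  # e.g. extraction time
y = 0.5 * x**3 - 4 * x**2 + 6 * x + np.random.default_rng(2).normal(scale=5, size=40)

def r_squared(deg: int) -> float:
    """R^2 of a degree-`deg` polynomial least-squares fit."""
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    return 1 - resid.var() / y.var()

print(f"quadratic R^2 = {r_squared(2):.3f}, cubic R^2 = {r_squared(3):.3f}")
```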

17:10
SHAP-Enhanced Explainable Machine Learning Models for Antioxidants from Thai Pigmented Rice Bran

ABSTRACT. This study aims to develop an explainable machine learning model to predict the yields of antioxidant compounds from Thai pigmented rice bran under various extraction conditions by applying SHAP. We investigate and visualize the contribution of individual features such as water content, extraction time, and solid-to-solvent ratio to the model's output. The results demonstrate not only accurate predictive performance but also reveal clear insights into the influence of process variables, promoting data-driven decisions for green extraction optimization.

17:30
Two new extragradient methods for solving the pseudomonotone equilibrium problem in Hilbert spaces

ABSTRACT. This paper introduces two new extragradient methods designed to solve pseudomonotone equilibrium problems subject to a Lipschitz-type condition. These methods incorporate a variable stepsize criterion that dynamically adjusts with each iteration based on prior iterations. A distinguishing feature of these methods is their independence from prior knowledge of Lipschitz-type constants or any line-search method. The convergence theorems for the proposed methods are established under mild conditions, without requiring the knowledge of Lipschitz-type constants. Additionally, the paper includes several investigations demonstrating the numerical efficacy of the methods and facilitating comparisons with other approaches. This paper contributes to the advancement of computational methods for addressing pseudomonotone equilibrium problems across various applications.