SYNASC 2024: 26TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING
PROGRAM FOR MONDAY, SEPTEMBER 16TH

08:00-09:00 Session 1: Registration

Registration will start on September 16 at 8:00 at the desk near Room 048 (ground floor, to the left of the main hall).

Location: 048
09:20-10:10 Session 3A: SYNASC Invited talk
Location: A11
09:20
Can Quantum Computing Solve my Favorite Problem?

ABSTRACT. It seems that everyone is talking about the promises of quantum computing. In addition to Shor's famous polynomial-time factoring algorithm, quantum information processing is believed to revolutionize classical optimization algorithms, classical search algorithms, and classical learning theories. Although there is indeed much evidence for such potential, there are, to date, no definitive theoretical or practical demonstrations of any quantum advantage.

Part of the reason for this uncertain state of affairs is that the boundary between classical and quantum computing is not well understood. In this talk, we will explore quantum computing and its connection to classical computing from a number of semantically motivated perspectives. Each perspective will provide some insight into a possible semantic source of quantum advantage and into new classes of applications in which quantum computing could provide an advantage over classical computing.

09:20-10:10 Session 3B: FROM Tutorial
Location: A01
09:20
Optimization Modulo Theory: A Tutorial Using Z3 and Practical Case Studies (part 1)

ABSTRACT. Optimization Modulo Theories (OMT) is a computational approach that extends Satisfiability Modulo Theories (SMT) to address optimization problems. While SMT focuses on determining the satisfiability of logical formulas within specific theories (such as linear arithmetic, arrays, or bit-vectors), OMT aims to find an optimal solution for these formulas under a given objective function. Popular OMT solvers include Z3 [1] and OptiMathSAT [2]. We will use Z3 in this tutorial.

The tutorial is divided into two parts. In the first part, we will introduce toy examples demonstrating OMT, which can be manually written using SMT-LIB syntax [3]. For more complex problems, the APIs provided by OMT tools are essential. The second part will illustrate this with a case study on the automated deployment of component-based applications in the Cloud. This problem involves allocating virtual machine (VM) offers from various Cloud Providers, ensuring that constraints from component interactions and hardware/software requirements are met, while optimizing performance objectives (e.g., minimizing costs). This can be formulated as a constraint optimization problem, enabling automatic optimization in principle. However, when dealing with a large set of VM offers (several hundred), the computational demand becomes significant, making automatic optimization impractical with current general OMT tools.

We addressed this challenge by methodically analyzing the problem to identify search space reduction methods. These methods leverage:

1) Symmetries in the general Cloud deployment problem, the graph representation associated with structural constraints specific to each application, and their combinations (based on paper [4]).

2) Historical data and the graph-based representation of a particular application to extract soft constraints from edge classification predictions (based on paper [5]).

References:

[1] https://github.com/Z3Prover/z3

[2] https://optimathsat.disi.unitn.it/

[3] https://smt-lib.org/

[4] M. Erascu, F. Micota, and D. Zaharie, “Scalable optimal deployment in the cloud of component-based applications using optimization modulo theory, mathematical programming and symmetry breaking”, Journal of Logical and Algebraic Methods in Programming 121, 100664.

[5] E. Laitin and M. Erascu, “Fast and Exact Synthesis of Application Deployment Plans Using Graph Neural Networks and Satisfiability Modulo Theory”, accepted at the 2024 International Joint Conference on Neural Networks (IJCNN).
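
For a flavor of the toy examples covered in the first part, the following is a minimal optimization query written with Z3's Python bindings (a sketch assuming the z3-solver package; the tutorial itself also works with plain SMT-LIB syntax):

```python
# Minimal OMT sketch: satisfy linear constraints while maximizing an objective.
from z3 import Ints, Optimize, sat

x, y = Ints("x y")
opt = Optimize()
opt.add(x >= 0, y >= 0, x + y <= 10)   # feasibility constraints (the SMT part)
h = opt.maximize(3 * x + 2 * y)        # objective (the "O" in OMT)

if opt.check() == sat:
    print("optimum:", opt.upper(h))    # best objective value found
    print("model:", opt.model())       # an optimal assignment, e.g. x=10, y=0
```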

10:30-11:20 Session 4A: SYNASC Invited talk
Location: A11
10:30
General elections over the Internet: what could possibly go wrong?

ABSTRACT. Online voting is seen as a way to include a wider demographic in the democratic process and increase voter turnout. But can we control the security risks? Election protocols and their security and privacy requirements have been studied for more than 40 years. The first democratic election with an option to cast a vote over the Internet took place in the year 2000. Since then, several countries have trialed or repeatedly used online voting systems.

Due to the high stakes and high risks for fraud and foreign interference, the design, implementation, and operation of a country’s online voting system requires an extraordinarily careful examination. Switzerland’s approach to secure online voting has been to require formal verification of the voting system’s protocols, public intrusion tests, and published source code open to public scrutiny. This transparent approach offers a valuable case study on the deployment of a complex, security-critical system.

In this talk, I will review the security challenges for Internet voting systems with a focus on Switzerland’s approach. I will then discuss some of these challenges from the academic, formal verification perspective with a view towards the verification of security-critical systems in general.

10:30-11:20 Session 4B: FROM Symposium
Location: A01
10:30
Tracer: A Tool for Race Detection in Software Defined Network Models

ABSTRACT. Software Defined Networking (SDN) has become a new paradigm in computer networking, introducing a decoupled architecture that separates the network into the data plane and the control plane. The control plane acts as the centralized brain, managing configuration updates and network management tasks, while the data plane handles traffic based on the configurations provided by the control plane. Given its asynchronous distributed nature, SDN can experience data races due to message passing between the control and data planes. This paper presents Tracer, a tool designed to automatically detect and explain the occurrence of data races in DyNetKAT SDN models. DyNetKAT is a formal framework for modeling and analyzing SDN behaviors, with robust operational semantics and a complete axiomatization implemented in Maude. Built on NetKAT, a language leveraging Kleene Algebra with Tests to express data plane forwarding behavior, DyNetKAT extends these capabilities by adding primitives for communication between the control and data planes. Tracer exploits the DyNetKAT axiomatization and enables race detection in SDNs based on Lamport vector clocks. Tracer is a publicly available tool.
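
To illustrate the vector-clock idea behind the race detection (a generic Python sketch, not Tracer's Maude-based implementation): two events are flagged as potentially racing when neither vector clock happens-before the other.

```python
# Sketch of Lamport-style vector clocks for race detection.
def increment(clock, pid):
    clock = list(clock)
    clock[pid] += 1
    return clock

def merge(local, received, pid):
    # On message receipt: component-wise max, then tick our own component.
    merged = [max(a, b) for a, b in zip(local, received)]
    return increment(merged, pid)

def happens_before(c1, c2):
    return all(a <= b for a, b in zip(c1, c2)) and c1 != c2

def concurrent(c1, c2):
    return not happens_before(c1, c2) and not happens_before(c2, c1)

# Two planes (0 and 1) with no message exchanged: their events are
# concurrent, so conflicting updates would be reported as a potential race.
e0 = increment([0, 0], 0)   # [1, 0]
e1 = increment([0, 0], 1)   # [0, 1]
print(concurrent(e0, e1))   # True
```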

10:50
Converting BPMN Diagrams to Privacy Calculus

ABSTRACT. The ecosystem of Privacy Calculus is a formal framework for privacy comprising (a) the Privacy Calculus, a Turing-complete language of message-exchanging processes based on the π-calculus, (b) a privacy policy language, and (c) a type checker that checks adherence of Privacy Calculus terms to privacy policies. BPMN is a standard for the graphical description of business processes which aims to be understandable by all business users, from those with no technical background to those implementing software. This paper presents how (a subset of) BPMN diagrams can be converted to Privacy Calculus terms, in the hope that it will serve as a small piece of larger workflows for building privacy-preserving software. The conversion is described mathematically in the paper, but has also been implemented as a software tool.

11:40-13:00 Session 5A: Tracks: Theory of Computing + Distributed Computing
Location: A11
11:40
Complexity of Monomial Prediction in Cryptography and Machine Learning
PRESENTER: Mahesh Rajasree

ABSTRACT. In this paper, we focus on the monomial prediction problem in two settings: (1) Decide whether a particular monomial m is present in a composite function f := f_r \circ f_{r-1} \circ \cdots \circ f_0, where the f_i are quadratic Boolean functions, (2) Decide whether a particular monomial m is present in a composite function f := f_r \circ f_{r-1} \circ \cdots \circ f_0, where the polynomials f_i are efficiently computable by probabilistic generating circuits over the rationals. Probabilistic generating circuits (PGCs) are economical representations of multivariate probability generating polynomials (PGPs), which capture many tractable probabilistic models in machine learning.

The first problem has a strong connection with the security of symmetric-key primitives. Dinur and Shamir proposed the cube attack for distinguishing a cryptographic primitive from a random function, which can be thought of as efficient monomial prediction. In a general setting, over any large finite field or the integers, monomial prediction is known to be NP-hard. Here, we show that in the quadratic setting the problem is \oplus P-complete. \oplus P is an interesting complexity class that is not known to contain NP; however, it is believed to contain computationally hard problems. On the other hand, we also present several new zero-sum distinguishers for 5-round Ascon, which is one of the ten finalists of the NIST lightweight cryptography standardization competition.

We show that the second problem is #P-complete. It is known that PGCs admit efficient inference, i.e., given a monomial, one can efficiently output its coefficient (which signifies a probability) in the polynomial computed by the circuit. However, a composition of such functions makes inference hard. Composition of probabilistic models and their efficient inference play a crucial role in semantic contextualization and in the framework of uncertainty theories in graphical modelling.
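
A brute-force baseline for setting (1) (a hedged sketch, not the paper's method): over GF(2), the ANF coefficient of the monomial \prod_{i \in S} x_i equals the XOR of f over all inputs supported on S, so a black-box composition can be tested directly, at a cost exponential in |S|, consistent with the hardness results above. The example function below is hypothetical.

```python
# Brute-force monomial prediction for a black-box Boolean function via the
# GF(2) Moebius relation: coeff of prod_{i in S} x_i = XOR of f(x) over all
# inputs x supported on S.
from itertools import product

def monomial_coeff(f, n, support):
    acc = 0
    for bits in product((0, 1), repeat=len(support)):
        x = [0] * n
        for i, b in zip(support, bits):
            x[i] = b
        acc ^= f(x)
    return acc

# Hypothetical quadratic Boolean function: f = x0*x1 XOR x2.
f = lambda x: (x[0] & x[1]) ^ x[2]
print(monomial_coeff(f, 3, [0, 1]))  # 1: monomial x0*x1 is present
print(monomial_coeff(f, 3, [0]))     # 0: monomial x0 alone is absent
```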

12:00
On the Maximum Distance Sublattice Problem and Closest Vector Problem
PRESENTER: Mahesh Rajasree

ABSTRACT. In this paper, we introduce the Maximum Distance Sublattice Problem (MDSP). We observe that solving an instance of the Closest Vector Problem (CVP) in a lattice L is the same as solving an instance of MDSP in the dual lattice of L. We give an alternate reduction between CVP and MDSP that does not use the concept of the dual lattice.

12:20
Abstract Continuation Semantics for a Biologically-Inspired Formalism

ABSTRACT. We investigate the abstractness of a continuation semantics for a calculus inspired by DNA computing. This semantic investigation is given in the framework of complete metric spaces, and uses the weak abstractness criterion introduced by us in recent work. We prove that the denotational semantics designed with continuations is weakly abstract with respect to the operational semantics of our calculus which involves multiparty synchronization. We show that the expected concurrency laws are satisfied in this semantics for the calculus under investigation.

12:40
Optimizing Cold Start Performance in Serverless Computing Environments

ABSTRACT. Serverless computing, also known as Function as a Service (FaaS), simplifies cloud application development by abstracting server management. However, cold start latency, which occurs when initializing the execution environment, poses significant challenges. This work proposes a dynamic caching and request routing system integrated with Apache OpenWhisk to mitigate cold start latency. By leveraging intelligent caching, the system aims to reduce response times for serverless functions. Comprehensive performance evaluations demonstrate that the proposed system reduced average response times by up to a factor of seven, while also ensuring efficient resource utilization. These findings provide practical insights and scalable solutions to enhance the efficiency and reliability of serverless architectures, particularly for latency-sensitive applications.
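
The caching idea can be pictured with a toy warm-container pool (illustrative Python only; the names and structure below are assumptions, not the paper's OpenWhisk integration):

```python
# Toy sketch: cache initialized runtimes with a TTL so repeat invocations
# of the same function skip the cold initialization step.
import time

class WarmPool:
    def __init__(self, ttl=600.0):
        self.ttl = ttl
        self.pool = {}                  # function name -> (runtime, last_used)

    def invoke(self, name, init, handler, event):
        now = time.monotonic()
        entry = self.pool.get(name)
        if entry and now - entry[1] <= self.ttl:
            runtime = entry[0]          # warm start: reuse cached environment
        else:
            runtime = init()            # cold start: pay initialization cost
        self.pool[name] = (runtime, time.monotonic())
        return handler(runtime, event)

# The second call finds a warm runtime and skips init().
pool = WarmPool()
pool.invoke("hello", init=lambda: {"ready": True},
            handler=lambda rt, ev: f"hi {ev}", event="alice")
print(pool.invoke("hello", init=lambda: {"ready": True},
                  handler=lambda rt, ev: f"hi {ev}", event="bob"))
```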

11:40-12:40 Session 5B: Workshop: Natural Computing and Applications
Location: 048
11:40
Counter-Factual Hidden Event Detection: A Proof of Concept using Machine Learning in Time Series

ABSTRACT. We propose a new framework for detecting hidden events based on historical data from previous events and related time series data that are known (or presumed) to be affected by the event. In particular, a forecasting model of the time-series data is trained using known occurrences of the hidden events. Then, for a given time interval where a hidden event may have occurred, the time series is forecasted under various hypotheses, including the hidden event happening (possibly at various time intervals) and not happening. The resulting forecasts are compared with the actual time series for the interval where the event would have had its maximum effect, and the hypothesis that leads to a forecast closest to the actual data is selected. The methodology is evaluated under two scenarios. The performance of the methodology is satisfactory in both cases, showing its potential for hidden event detection.
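
The selection step can be sketched in a few lines (hedged illustration; `forecast` below is a stand-in for the paper's trained model, and the demo data are synthetic):

```python
# Forecast the series under each hypothesis; keep the hypothesis whose
# forecast is closest (here: lowest MSE) to what actually happened.
import numpy as np

def select_hypothesis(forecast, history, actual, hypotheses):
    errors = {h: float(np.mean((forecast(history, h) - actual) ** 2))
              for h in hypotheses}
    return min(errors, key=errors.get), errors

# Dummy demo: an "event" adds a level shift of 2.0 to the forecast.
history = np.zeros(10)
actual = np.full(5, 2.0)                       # what was actually observed
forecast = lambda hist, h: np.full(5, 2.0 if h == "event" else 0.0)
print(select_hypothesis(forecast, history, actual, ["event", "no_event"]))
```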

12:00
The NLP for Employee Review Sentiment Analysis: an Explainable Perspective

ABSTRACT. The main goal of the research proposed in this manuscript is to analyze employee satisfaction by employing hybrid methods that combine machine learning and metaheuristics. The proposed work utilizes term frequency-inverse document frequency (TF-IDF), a natural language processing (NLP) technique, to identify the important segments of employees' statements about their workplaces, while the extreme gradient boosting (XGBoost) model is used for predicting the outcomes. Since each machine learning model should be adjusted for the specific problem, XGBoost has been optimized by a modified swarm metaheuristic based on the red fox optimizer (RFO) algorithm, and its performance is evaluated against other high-performing swarm intelligence approaches. The results indicate the superiority of the proposed solution, which has been further evaluated with a Shapley additive explanations (SHAP) analysis.
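
A minimal TF-IDF-to-XGBoost pipeline looks as follows (a sketch assuming scikit-learn and xgboost; fixed illustrative hyperparameters replace the paper's RFO-based search, and the two reviews are invented):

```python
# Encode review texts with TF-IDF, then classify satisfaction with XGBoost.
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

reviews = ["great team and flexible hours", "poor management, long hours"]
labels = [1, 0]                        # 1 = satisfied, 0 = dissatisfied

X = TfidfVectorizer().fit_transform(reviews)
model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X, labels)                   # in the paper, a metaheuristic tunes these
print(model.predict(X))
```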

12:20
Modified Metaheuristics Optimization for Cyberbullying Detection On Online Data Science Platform

ABSTRACT. Online harassment detection faces significant challenges due to its expansive reach and anonymity. Addressing this issue demands effective detection mechanisms capable of processing vast data streams and adapting to evolving online language. Leveraging advancements in artificial intelligence, we propose a novel approach grounded in natural language processing (NLP) and metaheuristic algorithms. Our methodology integrates term frequency-inverse document frequency (TF-IDF) encoding and the AdaBoost algorithm for classification. To tackle the NP-hard problem of hyperparameter selection, we introduce a modified crayfish optimization algorithm (COA), termed GI-COA. This paper represents a pioneering effort in utilizing metaheuristic algorithms for hyperparameter selection in harassment detection models. Through experimentation, we demonstrate the efficacy of our approach in fostering a safer online environment. The best-performing optimized models demonstrate an accuracy exceeding 77%.

11:40-13:10 Session 5C: FROM Tutorial
Location: A01
11:40
Optimization Modulo Theory: A Tutorial Using Z3 and Practical Case Studies (part 2)

ABSTRACT. Optimization Modulo Theories (OMT) is a computational approach that extends Satisfiability Modulo Theories (SMT) to address optimization problems. While SMT focuses on determining the satisfiability of logical formulas within specific theories (such as linear arithmetic, arrays, or bit-vectors), OMT aims to find an optimal solution for these formulas under a given objective function. Popular OMT solvers include Z3 [1] and OptiMathSAT [2]. We will use Z3 in this tutorial.

The tutorial is divided into two parts. In the first part, we will introduce toy examples demonstrating OMT, which can be manually written using SMT-LIB syntax [3]. For more complex problems, the APIs provided by OMT tools are essential. The second part will illustrate this with a case study on the automated deployment of component-based applications in the Cloud. This problem involves allocating virtual machine (VM) offers from various Cloud Providers, ensuring that constraints from component interactions and hardware/software requirements are met, while optimizing performance objectives (e.g., minimizing costs). This can be formulated as a constraint optimization problem, enabling automatic optimization in principle. However, when dealing with a large set of VM offers (several hundred), the computational demand becomes significant, making automatic optimization impractical with current general OMT tools.

We addressed this challenge by methodically analyzing the problem to identify search space reduction methods. These methods leverage:

1) Symmetries in the general Cloud deployment problem, the graph representation associated with structural constraints specific to each application, and their combinations (based on paper [4]).

2) Historical data and the graph-based representation of a particular application to extract soft constraints from edge classification predictions (based on paper [5]).

References:

[1] https://github.com/Z3Prover/z3

[2] https://optimathsat.disi.unitn.it/

[3] https://smt-lib.org/

[4] M. Erascu, F. Micota, and D. Zaharie, “Scalable optimal deployment in the cloud of component-based applications using optimization modulo theory, mathematical programming and symmetry breaking”, Journal of Logical and Algebraic Methods in Programming 121, 100664.

[5] E. Laitin and M. Erascu, “Fast and Exact Synthesis of Application Deployment Plans Using Graph Neural Networks and Satisfiability Modulo Theory”, accepted at the 2024 International Joint Conference on Neural Networks (IJCNN).

14:00-15:20 Session 6A: Track: Artificial Intelligence (1)
Location: A11
14:00
Novel Data Adaptation Techniques for Enhanced Lung Cancer Detection in CT Scans

ABSTRACT. Lung cancer persists as a global leader in cancer-related deaths, highlighting the critical need for precise and efficient detection methods. This paper investigates the use of the Medical Segmentation Decathlon dataset to train neural networks for lung cancer segmentation in CT scans via semantic segmentation. We propose and evaluate four new data adaptation techniques specifically designed for this dataset, with each technique being assessed using U-Net-based architectures. Our approach incorporates a thorough exploratory data analysis to uncover the dataset's strengths and weaknesses, which in turn guided our data preprocessing and augmentation strategies.

14:20
Pre-Diagnosis of Autism in Children Using Distributed Learning

ABSTRACT. Autism Spectrum Disorder (ASD) is a multifaceted neurodevelopmental condition, exhibiting diverse symptoms related to social communication, behavior, and sensory processing. Diagnosing ASD in children necessitates a comprehensive assessment of their behavioral traits and the identification of crucial indicators, rendering the task intricate and nuanced. This study evaluates the performance obtained by combining machine learning algorithms such as logistic regression, decision trees, Naïve Bayes, support vector machines, and random forests to streamline the diagnostic process of ASD in pediatric populations, within the framework of a modified version of the Privacy-Preserving Aggregation of Teacher Ensembles (PATE) Framework. This approach facilitates the amalgamation of modest datasets from multiple practitioners and an ensemble of independent distributed Machine Learning (ML) approaches while ensuring the preservation of data integrity through encryption mechanisms. Our findings highlight the efficacy of the PATE approach in automating the diagnosis in children, demonstrating enhanced sensitivity, specificity, and accuracy while ensuring data privacy compared to individual techniques. This study contributes to the advancement of collaborative methods for predictive modeling in psychology, enabling practitioners with small datasets to combine their efforts effectively.
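
The core PATE mechanism is noisy vote aggregation across teachers; the sketch below shows the generic noisy-max step (not the paper's modified framework, and the example votes are invented):

```python
# PATE-style aggregation: count teacher votes per class, add Laplace noise
# to the counts, and release only the noisy argmax as the label.
import numpy as np

def pate_aggregate(teacher_votes, n_classes, scale=1.0, rng=None):
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=n_classes).astype(float)
    counts += rng.laplace(scale=scale, size=n_classes)
    return int(np.argmax(counts))

votes = np.array([1, 1, 0, 1, 1])     # five teachers, binary ASD screen
print(pate_aggregate(votes, n_classes=2))
```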

14:40
On applying GraphLime for trustworthy diabetes prediction

ABSTRACT. This paper aims to provide a comprehensive analysis of the benefits of employing GraphLIME (Local Interpretable Model Explanations for Graph Neural Networks) for reliable diabetes mellitus prediction. Our focus is on highlighting the advantages of integrating GraphLIME with a feature attention mechanism, compared to the standard pairing of deep learning neural networks with the original LIME explainability method. This system enabled us to develop an effective approach for identifying the most relevant features and applying the attention mechanism solely to those features. We conducted a detailed comparison of the performance metrics between the two approaches. By incorporating the attention mechanism, the model reached an accuracy of 92.6% on the problem. The model's performance is thoroughly illustrated, with results further assessed using the Receiver Operating Characteristic (ROC) curve. Applying this technique to a dataset of 768 patients with or without diabetes mellitus, we enhanced the model's performance by over 18%.

15:00
How to Tackle Fake Medical Content with Clustering and Transformers

ABSTRACT. Spreading fake medical information/news online can threaten public health, e.g., by decreasing vaccination rates or the level of trust in the healthcare process. The system presented in our paper aims to address this by determining automatically whether a text contains or implies claims known to be false. To this end, it relies on the Monant dataset, which comprises medical posts and claims whose veracity has been established by experts in the field. The dataset is used in two ways. First, the set of false claims to be detected was obtained from it. These false claims are clustered, allowing the next steps to be performed against one cluster only, thus reducing the response time. Second, to detect a post's falsehood, the post is summarized using transformers, and the summary is then compared for similarity with each element of the chosen cluster of false claims, using all-MiniLM-L6-v2 and a variation of BioBERT. A high similarity score is interpreted as the presence of the false claim in the summary. The results obtained from the experiments demonstrate, through manually checked examples, that the system manages to identify about 77% of the false claims within the text at hand (with an F1-score of 0.73).
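
The similarity step can be reproduced with the model named in the abstract (a sketch assuming the sentence-transformers package; the summarization stage is omitted here and the claims and summary are invented examples):

```python
# Embed a post summary and candidate false claims, then compare by cosine
# similarity; a high score flags the matching false claim.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
false_claims = ["vaccines cause autism", "5G spreads viruses"]
summary = "the post argues that childhood vaccines are behind autism"

emb_claims = model.encode(false_claims, convert_to_tensor=True)
emb_summary = model.encode(summary, convert_to_tensor=True)
print(util.cos_sim(emb_summary, emb_claims))   # 1 x len(false_claims) scores
```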

14:00-15:30 Session 6B: FROM Tutorial
Location: A01
14:00
A Hands-on Introduction to the Coq Proof Assistant (part 1)

ABSTRACT. Coq is a formal proof management system. It consists of a formal language for writing mathematical definitions, algorithms, and theorems, together with an environment for the interactive development of formal proofs. This tutorial will focus on introductory topics: proofs of propositional and first-order statements; writing functional algorithms as recursive functions and their expected properties as theorems; and proving the said theorems by induction. After a brief introduction and a few exercises that I will run interactively, you will take over and, with my help, solve the rest of the exercises. Prerequisites are minimal: basic notions of logic and functional programming. Please have the latest Coq version installed on your laptop, together with the GUI of your choice. Installation instructions are available at https://coq.inria.fr/download.

15:50-17:10 Session 7A: Track: Symbolic Computation (1)
Location: A11
15:50
On Projective Delineability
PRESENTER: Lucas Michel

ABSTRACT. We consider cylindrical algebraic decomposition (CAD) and the key concept of delineability, which underpins CAD theory. We introduce the novel concept of projective delineability, which is easier to guarantee computationally. We prove results about it which can allow reduced CAD computations.

16:10
Towards Verified Polynomial Factorisation

ABSTRACT. Computer algebra systems are really good at factoring polynomials, i.e. writing f as a product of irreducible factors. It is relatively easy to verify that we have a factorisation, but verifying that these factors are irreducible is a much harder problem. This paper reports work-in-progress to do such verification in Lean.

16:30
First steps towards Computational Polynomials in Lean

ABSTRACT. The proof assistant Lean has support for abstract polynomials, but this is not necessarily the same as support for computations with polynomials. Lean is also a functional programming language, so it should be possible to implement computational polynomials in Lean. It turns out not to be as easy as the naive author thought.
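
One naive way to make polynomials computable, independent of an abstract polynomial type, is dense coefficient lists with Horner evaluation; a minimal Lean sketch (an assumption-laden illustration, not the paper's development):

```lean
-- Computational polynomials as coefficient lists, evaluated by Horner's rule.
def evalPoly (coeffs : List Int) (x : Int) : Int :=
  coeffs.foldr (fun c acc => c + x * acc) 0

#eval evalPoly [1, 2, 3] 2   -- 1 + 2*2 + 3*2^2 = 17
```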

16:50
Newton's Polynomial Root-Finder Can Be Strengthened

ABSTRACT. Our recent near-optimal polynomial root-finder (ACM-SIAM SODA 2024) enhances the power of classical subdivision iterations, extends them to black-box polynomials, given by an oracle (a black-box subroutine) for their evaluation rather than by coefficients, and promises to compete for user's choice. Restricted to a fixed Region of Interest in the complex domain, the algorithm runs at a cost proportional to the number of roots in a small neighborhood of the Region. Polynomial root-finders based on functional iterations, in particular Newton's, have weaker formal support but compete empirically for the approximation of all complex roots under proper initialization that involves polynomial coefficients. We accelerate Newton's and two other popular functional iterations by incorporating the Fast Multipole Method (FMM), devise fast black-box algorithms for their initialization, and study combinations of subdivision iterations with Newton's to enhance their power for root-finding in a fixed complex region. Part of our study may be of independent technical interest for polynomial root-finding.
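
For orientation, plain Newton iteration on a black-box polynomial needs only the evaluation oracle (a sketch; the derivative is approximated by a central difference, and this is not the accelerated FMM-based scheme of the talk):

```python
# Newton's method using only an evaluation oracle p(z).
def newton_root(p, z0, tol=1e-12, max_iter=100, h=1e-7):
    z = z0
    for _ in range(max_iter):
        pz = p(z)
        if abs(pz) < tol:
            break
        dp = (p(z + h) - p(z - h)) / (2 * h)   # oracle-only derivative estimate
        z = z - pz / dp
    return z

# Example oracle: p(z) = z^3 - 1, initialized near the root z = 1.
print(newton_root(lambda z: z**3 - 1, 1.5 + 0.1j))
```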

15:50-17:30 Session 7B: Special session: NACET
Location: 048
15:50
Frequency generation for a quantum current sensor in smart grids

ABSTRACT. Integrating renewable energy sources, electric vehicles, and storage systems into power grids demands advanced control and monitoring systems. Precise current sensors are a critical component of these systems, essential for ensuring reliable and efficient electricity distribution. Quantum sensors, particularly those based on nitrogen vacancy (NV) centers in diamond, provide a promising solution for high-precision current measurement. This document introduces an innovative method for generating microwave signals using direct digital synthesis (DDS) and field-programmable gate arrays (FPGAs) to drive multiple signals.

16:10
Fluorescence Lifetime of NV Centers for Temperature-Insensitive All-Optical Magnetic Field Sensing

ABSTRACT. We investigate the combined temperature- and magnetic-field-dependent behavior of high-NV-density microdiamonds in an all-optical, frequency-domain setup, utilizing the change in fluorescence lifetime. We use these frequency-domain data to train neural networks for the prediction of temperatures and magnetic fields. At zero magnetic field, we show the ability to predict temperatures in the range of 0°C to 100°C with a standard deviation of 1.24°C. Furthermore, we show the ability to predict applied magnetic fields, while varying the ambient temperature, with higher accuracy compared to the approach of observing the fluorescence intensity at a single excitation frequency.

16:30
Data-Driven Modeling of Photovoltaic Panels Using Neural Networks

ABSTRACT. Photovoltaic (PV) systems are essential for the shift towards sustainable energy. Accurate performance modeling of these systems is vital for optimizing their efficiency and adapting to changing environmental conditions. Traditional analytical models often struggle with real-world complexities. This paper uses a neural network (NN) method to model PV system performance, offering superior accuracy and adaptability. By leveraging NNs, the model adopts a black-box methodology, bypassing the need for detailed knowledge of PV system complexities. The study utilizes historical performance data from two panels located in Sion, Switzerland, and Eugene, Oregon, in the United States. This approach captures the interdependencies between input parameters such as solar irradiance, temperature, and applied load and output variables such as voltage and current. Additionally, it accounts for key PV metrics including open-circuit voltage, short-circuit current, maximum power point current, and maximum power point voltage. The NN model is trained with a comprehensive dataset featuring a high degree of persistent excitation along with real measurements. Performance metrics such as root mean square error demonstrate that the data-driven NN model surpasses traditional methods, offering a more practical solution for PV system performance modeling.
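
The black-box idea reduces to fitting a regressor from environmental inputs to electrical outputs; a hedged sketch with synthetic stand-in data (the paper uses measured data and its own network architecture):

```python
# Toy black-box PV model: (irradiance, temperature, load) -> (voltage, current).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([0, -10, 1], [1000, 40, 100], size=(500, 3))  # G, T, R_load
# Invented ground truth, only to make the sketch runnable:
V = 0.03 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * X[:, 2]
I = 0.008 * X[:, 0] - 0.002 * X[:, 2]
y = np.column_stack([V, I])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print(model.predict([[800.0, 25.0, 50.0]]))   # predicted (V, I)
```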

16:50
Assessing Predictor Influence in LSTM Models for Enhanced Solar Energy Forecasting

ABSTRACT. Accurate forecasting of solar energy production is highly important for an adequate integration of renewable energy into the power grid. This study explores the importance of various predictors for enhancing the accuracy of solar energy forecasting using two distinct Long Short-Term Memory models: one that relies solely on historical power production data and past exogenous weather data, and another that additionally incorporates future exogenous weather data. Various settings are explored to improve the results, and the two models use relatively similar parameters, with the model that also uses future information yielding more accurate results. The results also demonstrate that predictor importance varies significantly between the two models. These findings suggest that different models benefit from distinct predictor combinations and that incorporating exogenous data can significantly enhance forecasting performance.

17:10
Optimal decision tree design for near real-time control of battery energy storage systems

ABSTRACT. The increasing number of distributed energy resources (DERs) poses new challenges for the energy supply due to the volatility and uncertainty of renewable energies. To utilize the benefits of DERs, a combination with storage systems and intelligent controls is necessary. Commonly used control methods are based on Model Predictive Control (MPC) with forecasting and optimization, which require computational capabilities at the DER at a level to which the distribution system operator (DSO) has no access. This paper proposes a data-based control algorithm that combines a model-based and an artificial intelligence (AI) based approach to utilize the benefits of both methods while compensating for their drawbacks. Mixed Integer Quadratic Programming (MIQP) is used to generate training and testing data for a decision tree (DT). The investigation of the optimal composition of the training data and the optimal architecture of the DT is the main focus of this paper.
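
The offline-optimizer-to-online-tree pipeline can be sketched as follows (hedged illustration: `optimal_setpoint` is a hypothetical rule standing in for the paper's MIQP solver, and the features are invented):

```python
# An expensive optimizer labels states with optimal set-points offline;
# a shallow decision tree then reproduces them in near real time.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def optimal_setpoint(load, pv, soc):
    # Stand-in for the MIQP solution (battery power in [-1, 1]).
    return np.clip(load - pv, -1.0, 1.0) * (0.2 + 0.8 * soc)

rng = np.random.default_rng(1)
X = rng.uniform([0, 0, 0], [1, 1, 1], size=(1000, 3))     # load, pv, soc
y = optimal_setpoint(X[:, 0], X[:, 1], X[:, 2])

dt = DecisionTreeRegressor(max_depth=6).fit(X, y)         # cheap to evaluate
print(dt.predict([[0.7, 0.2, 0.5]]))
```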

15:50-17:20 Session 7C: FROM Tutorial
Location: A01
15:50
A Hands-on Introduction to the Coq Proof Assistant (part 2)

ABSTRACT. Coq is a formal proof management system. It consists of a formal language for writing mathematical definitions, algorithms, and theorems, together with an environment for the interactive development of formal proofs. This tutorial will focus on introductory topics: proofs of propositional and first-order statements; writing functional algorithms as recursive functions and their expected properties as theorems; and proving the said theorems by induction. After a brief introduction and a few exercises that I will run interactively, you will take over and, with my help, solve the rest of the exercises. Prerequisites are minimal: basic notions of logic and functional programming. Please have the latest Coq version installed on your laptop, together with the GUI of your choice. Installation instructions are available at https://coq.inria.fr/download.

17:10-18:10 Session 8: Track: Numerical Computing
Location: A11
17:10
Improving the Nested Markov Chain hyper-heuristic (NMHH) framework efficiency through sequential probability ratio testing

ABSTRACT. This paper introduces sequential probability ratio testing to the hyper-heuristic design process. Hyper-heuristics often require many function evaluations to find an acceptable heuristic configuration for a given problem. A reduction in the number of function evaluations is proposed via sequential t-testing. The new approach is compared to two state-of-the-art hyper-heuristics and six metaheuristics on twelve benchmark problems in the continuous domain. The results point to the effectiveness of the new approach.
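
For context, Wald's sequential probability ratio test decides between two hypotheses as observations stream in, stopping as soon as the evidence is strong enough; a generic Bernoulli sketch (the paper's exact test and parameters may differ):

```python
# Wald's SPRT: accumulate the log-likelihood ratio; stop early when it
# crosses either threshold, saving further evaluations.
import math

def sprt(samples, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)    # accept H1 (success rate p1)
    lower = math.log(beta / (1 - alpha))    # accept H0 (success rate p0)
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

print(sprt([1] * 10))   # ('H1', 8): decided after 8 of 10 observations
```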

17:30
An application of the Shepard operator in image reconstruction

ABSTRACT. We propose an application of the Shepard operator combined with two radial basis functions in image processing. This method aims to reconstruct damaged black-and-white or color images, considering both a global and a local approach. In the construction of the Shepard operator, we use the inverse quadratic and the inverse multiquadric radial basis functions.
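
The Shepard operator is a weighted average of known values, with weights from a radial basis function; a toy reconstruction of one damaged pixel (hedged sketch using the inverse multiquadric; the paper also considers the inverse quadratic and a local variant):

```python
# Fill a missing pixel as a Shepard-type weighted average of known pixels,
# weighting by the inverse multiquadric 1/sqrt(1 + (eps*r)^2).
import numpy as np

def shepard_fill(points, values, query, eps=0.5):
    r = np.linalg.norm(points - query, axis=1)
    w = 1.0 / np.sqrt(1.0 + (eps * r) ** 2)
    return np.sum(w * values) / np.sum(w)

known_px = np.array([[0, 0], [0, 2], [2, 0], [2, 2]], dtype=float)
intensities = np.array([10.0, 20.0, 30.0, 40.0])
print(shepard_fill(known_px, intensities, np.array([1.0, 1.0])))  # 25.0
```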

17:50
Generic high-speed design with low-area implementations of statistical operations based on an FPGA device

ABSTRACT. Artificial Intelligence has emerged as a transformative technology, revolutionizing numerous industries by enabling advanced automation, predictive analytics, and decision-making capabilities. As a result, Artificial Intelligence pervades many domains, such as telecommunications, the smart manufacturing industry, autonomous machines, automated disease diagnosis in medical imaging, and defense. On the other hand, the hardware implementation of Artificial Intelligence comes with certain challenges and constraints, especially in critical areas that leverage machine learning algorithms and real-time data analysis to optimize production processes and improve overall efficiency. Statistical operations play a crucial role in various machine learning algorithms for understanding and processing data and for making predictions to optimize models. In this work, we developed a high-speed, low-area design and implementation of statistical operations for image or signal processing using an FPGA device. To enhance performance, we developed different hardware architectures based on different levels of parallelism to compute the mean, variance, and RMS (root mean square). These generic architectures work as parallel/pipelined architectures with and without memory. The proposed architectures target an FPGA (Intel/Altera Agilex 7: AGMH039R47A2E1V) using Altera Quartus Prime Pro Edition version 23.4 and achieve ultra-high throughput with low area consumption compared to state-of-the-art methods. For a 480×640 image, the mean-calculation architecture uses 1498 logic registers, 1912 slice LUTs, and just 29 kbits of memory, and it operates at a maximum frequency of 406.5 MHz. Additionally, for an 8×8 image, we need 33 clock cycles to complete the mean calculation and 33+1 clock cycles to complete the variance calculation, compared to other approaches that require more than 64 clock cycles.
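
As a software analogue of such a single-pass datapath, all three statistics can be computed in one streaming pass over the samples (a sketch using Welford's update, not the paper's hardware design):

```python
# One streaming pass yields mean, variance, and RMS, mirroring a pipelined
# datapath that sees each pixel exactly once.
def stream_stats(samples):
    n, mean, m2, sumsq = 0, 0.0, 0.0, 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n                 # running mean (Welford)
        m2 += delta * (x - mean)          # running sum of squared deviations
        sumsq += x * x
    return mean, m2 / n, (sumsq / n) ** 0.5   # mean, variance, RMS

print(stream_stats([1.0, 2.0, 3.0, 4.0]))     # (2.5, 1.25, ~2.7386)
```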

17:20-18:00 Session 9: FROM Symposium
Location: A01
17:20
Abstract Continuation Semantics for Multiparty Interactions in Process Calculi based on CCS

ABSTRACT. We develop denotational and operational semantics designed with continuations for process calculi based on Milner’s CCS extended with mechanisms offering support for multiparty interactions. We investigate the abstractness of continuation semantics. We show that our continuation-based denotational models are weakly abstract w.r.t. the corresponding operational models.

17:40
Towards Geometry-Preserving Reductions Between Constraint Satisfaction Problems (and other problems in NP)

ABSTRACT. We define two kinds of geometry-preserving reductions between constraint satisfaction problems and other NP-search problems. We give a couple of examples and counterexamples for these reductions.

17:30-18:50 Session 10: PhD session (1)
Location: 048
17:30
A Study on the topic of Crossing Minimisation

ABSTRACT. In this article we undertake a rigorous study of several problems related to reordering the leaves of a tree according to a specific function, and we present our own results on the topic. Our study has its roots in tanglegrams, which represent a pair of trees on the same set of leaves, with the matching leaves in the two trees joined by an edge. The primary focus of our article is on two distinct approaches for transforming the initial leaf order to match the input order: One Tree Crossing Minimization (OTCM) and One Tree Drawing by Deleting Edges (OTDE). To be more precise, we get a specific order as input, and the purpose is to match the initial order of the tree as closely as possible with the input order. Both problems are NP-hard, and we present approximation algorithms for each. Moreover, we explore efficient solutions for the special case of binary trees. Our own results include a 3-approximation algorithm for both OTDE and OTCM, and a proof that OTDE is hard to approximate within a factor of $\frac{4}{5} - \epsilon$. Last but not least, we provide a literature survey of related topics in the field of crossing minimisation and compare those solutions and their efficiency with OTDE and OTCM.

17:50
Detecting triangle-densest-k-subgraphs with simple nature-inspired algorithms

ABSTRACT. This paper proposes two nature-inspired algorithms to solve the triangle-densest-k-subgraph problem, and compares and contrasts their results. This is one of the less studied problems in computational graph theory, yet it can be reasonably solved with at least one of the proposed algorithms even without problem-specific improvements.

18:10
Novel lossless crypto-compression scheme for medical images

ABSTRACT. The transmission of patients' medical images over the network increases the instances of fraud involving the theft of patient information. Crypto-compression schemes are used to ensure high security for medical images and better compression, which is necessary for efficient network transmission. In this paper, I propose a new crypto-compression scheme and a novel lossless compression algorithm based on an improved version of the Lempel-Ziv 77 algorithm, which uses a sliding window 64 KB in size, combined with a variation of the RLC algorithm. Unlike other methods, the proposed crypto-compression scheme employs a double level of security, ensured by the ElGamal cryptosystem based on elliptic curves and the XChaCha20 stream cipher, thus providing more secure transmission of images. The crypto-compression scheme (ECX-LZ-RLC) proposed in this paper is compared with four modern crypto-compression methods: JPEG-AES, FMT-AES, JPEG-EC-LFSR, and EC-ElGamal. Experimental results show that ECX-LZ-RLC provides a higher compression rate and a better average crypto-compression time compared to other modern crypto-compression schemes, evaluated on grayscale and RGB medical images with different resolutions, taken from several datasets.
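
For background, the LZ77 core replaces repeated byte runs with back-references into a sliding window; a naive educational sketch (not the paper's optimized 64 KB-window variant with the RLC stage):

```python
# Naive LZ77: emit (offset, length) back-references for repeats, else literals.
def lz77(data: bytes, window: int = 65536, min_len: int = 3):
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1                     # overlapping matches are allowed
            if l > best_len:
                best_off, best_len = i - j, l
        if best_len >= min_len:
            out.append((best_off, best_len))   # back-reference token
            i += best_len
        else:
            out.append(data[i])                 # literal byte
            i += 1
    return out

print(lz77(b"abcabcabcab"))   # [97, 98, 99, (3, 8)]
```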

18:30
Control Flow Graphs against malware: methods of analysis and detection

ABSTRACT. One key component of a cyberattack is malware. If a malware program is detected and blocked the cyberattack may stagger or fail, thus bad actors design their malware programs with additional characteristics and functionalities that harden the analysis of the malware program and make the malware undetectable by antivirus solutions. With the appearance of more advanced malware new detection methods are needed. With the help of reverse engineering techniques one concept that analysts can work with is Control Flow Graphs. Used for software optimization, control flow graphs offer the advantage of graph properties for analysts to detect malicious particularities in malware samples. This paper explores some methods of detection and analysis based on control flow graphs, categorizes them in four categories and highlights different particularities in these approaches.