SYNASC 2023: 25TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING
PROGRAM FOR MONDAY, SEPTEMBER 11TH

09:20-10:10 Session 2: Invited talk: Subtropical Solving (Thomas Sturm)

Abstract. Subtropical real root finding is a heuristic method designed to find real zeros of large multivariate polynomials. This method considers the polynomial as a collection of exponent vectors associated with sign information on the coefficients. It employs linear programming to heuristically identify points where the values of the polynomial have opposite signs. In successful cases, the intermediate value theorem is utilized to compute a zero of the polynomial, represented as a tuple of real algebraic numbers. By implementing this approach using the Reduce computer algebra system and the Gurobi LP solver, we have achieved success in solving hundreds of problems derived from established mathematical models in chemistry and systems biology. These problems involved polynomials with up to 800,000 monomials in 10 variables and degrees up to 12, with an incompleteness failure rate of approximately 10 percent.

The method has been adapted to satisfiability modulo theories (SMT) solving, where the focus shifts from a single polynomial equation to several simultaneous strict polynomial inequalities. We found that such problems represent more than 40 percent of the QF_NRA section of the SMT-LIB library of benchmarks. Technically, input problems in the theory QF_NRA are reduced to problems in QF_LRA. SMT itself can then be used in place of LP solving, which admits linear problems that are not necessarily pure conjunctions of constraints. Systematic experiments on the SMT-LIB have shown that the method is not a sufficiently strong decision procedure by itself, but a valuable heuristic to use within a portfolio of techniques.

The talk gives an overview of the method and its applications, along with theoretical results regarding its incompleteness.
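
To give a feel for the core idea, here is a minimal Python sketch of the subtropical heuristic; random search over integer directions stands in for the LP step, and the example polynomial is made up.

    import math
    import random
    from fractions import Fraction

    # A polynomial as (coefficient, exponent-vector) pairs; this made-up
    # example encodes p(x, y) = x^2*y - 3*x*y^2 + 1.
    p = [(1, (2, 1)), (-3, (1, 2)), (1, (0, 0))]

    def evaluate(poly, point):
        return sum(c * math.prod(v ** e for v, e in zip(point, exps))
                   for c, exps in poly)

    def subtropical_sign_change(poly, nvars, trials=200):
        """Search for two points where poly has opposite signs; the
        intermediate value theorem then yields a zero in between."""
        one = [Fraction(1)] * nvars
        s0 = evaluate(poly, one)
        if s0 == 0:
            return one, one              # the all-ones point is a zero
        for _ in range(trials):
            n = [random.randint(-3, 3) for _ in range(nvars)]
            # Along x_i = t^(n_i) the monomial maximizing <n, e> dominates
            # as t grows (assuming the maximizer is unique).
            c_dom, _ = max(poly, key=lambda m: sum(ni * ei for ni, ei
                                                   in zip(n, m[1])))
            if c_dom * s0 < 0:
                for k in range(1, 64):
                    pt = [Fraction(2) ** (k * ni) for ni in n]
                    if evaluate(poly, pt) * s0 < 0:
                        return one, pt   # opposite signs found
        return None

    print(subtropical_sign_change(p, 2))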

10:30-11:20 Session 3: Invited talk: Generic Encodings and Static Analysis of Constructor Rewriting Systems (Horatiu Cirstea)

Abstract. Rewriting is a formalism widely used in computer science and mathematical logic. The classical formalism has been extended, in the context of functional languages, with an order over the rules and, in the context of rewrite-based languages, with negation over patterns. We have proposed a concise and clear algorithm computing the difference over patterns, which can be used to define generic encodings of constructor term rewriting systems with negation and order into classical term rewriting systems. As a direct consequence, established methods for term rewriting systems can be applied to analyze the properties of the extended systems. The approach can also be seen as a generic compiler which targets any language providing basic pattern-matching primitives. The formalism also provides a new method for deciding whether a set of patterns subsumes a given pattern, and thus for checking the completeness of a set of patterns, the presence of useless patterns, or the absence of some patterns. The latter is particularly useful when one wants to verify that some patterns are absent from the result of a particular transformation; several extensions have been proposed to statically verify the absence of specified patterns from the reachable terms of constructor-based term rewriting systems.
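
As a small taste of the pattern notions involved, the following Python sketch decides subsumption for single linear constructor patterns; deciding whether a set of patterns subsumes a pattern, as above, needs the full difference-over-patterns construction.

    # A pattern is '_' (a variable/wildcard) or (constructor, sub-patterns...).
    def subsumes(p, q):
        """True if every term matching q also matches p (linear patterns)."""
        if p == '_':
            return True
        if q == '_':
            return False   # q also matches terms whose head differs from p's
        return (p[0] == q[0] and len(p) == len(q)
                and all(subsumes(pi, qi) for pi, qi in zip(p[1:], q[1:])))

    # cons(_, _) subsumes cons(zero, _), but not vice versa:
    print(subsumes(('cons', '_', '_'), ('cons', ('zero',), '_')))  # True
    print(subsumes(('cons', ('zero',), '_'), ('cons', '_', '_')))  # False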

11:40-12:40 Session 4: Theory of Computing & Logic and Programming
11:40
Uniform SAmplINg with BOLTZmann
PRESENTER: Martin Pépin

ABSTRACT. Usain Boltz is a fast Python library for the uniform random generation of tree-like structures. It allows the user to specify both (1) the data structure they wish to sample, using simple combinators similar to those of context-free grammars, and (2) its memory representation. The underlying algorithms are optimised Boltzmann samplers that achieve approximate-size uniform random generation in linear time. Experimental results show that Usain Boltz matches the performance of the experimental Arbogen package for OCaml and outperforms the Boltzmann Brain Haskell library, while being easier to integrate into existing scientific tools such as SageMath.
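
For readers unfamiliar with the technique, here is a free Boltzmann sampler with rejection for plain binary trees, sketched in Python; it illustrates the principle only and is not the Usain Boltz API.

    import math
    import random
    import sys

    sys.setrecursionlimit(20000)  # large random trees recurse deeply

    def B(x):
        # Generating function of binary trees counted by internal nodes;
        # it satisfies B(x) = 1 + x * B(x)^2, with singularity at x = 1/4.
        return (1 - math.sqrt(1 - 4 * x)) / (2 * x)

    def gamma_tree(x, bx):
        # Free Boltzmann sampler: a leaf with probability 1/B(x),
        # otherwise an internal node with two recursive subtrees.
        if random.random() < 1 / bx:
            return 'leaf'
        return ('node', gamma_tree(x, bx), gamma_tree(x, bx))

    def size(t):
        return 0 if t == 'leaf' else 1 + size(t[1]) + size(t[2])

    def sample(target, tol=0.1, x=0.2499):
        # Rejection turns the free sampler into an approximate-size one;
        # taking x close to 1/4 biases the output toward large trees.
        bx = B(x)
        while True:
            t = gamma_tree(x, bx)
            if abs(size(t) - target) <= tol * target:
                return t

    print(size(sample(100)))  # a uniform random tree of about 100 nodes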

12:00
Towards Learning Infinite SMT Models
PRESENTER: Mikolas Janota

ABSTRACT. This short paper proposes to learn models of Satisfiability Modulo Theories (SMT) formulas during solving. Specifically, we focus on infinite models for problems in the logic of linear arithmetic with uninterpreted functions (UFLIA). The constructed models are piecewise linear. Such models are useful for satisfiable problems, but also provide an alternative driver for model-based quantifier instantiation (MBQI).
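
As an illustration of the model shape (the representation below is hypothetical, not the paper's data structure), a learned interpretation of an uninterpreted f : Int -> Int can be stored as ordered linear pieces and checked against ground instances seen during solving:

    # Pieces (lower_bound, slope, intercept), ordered by lower_bound:
    # f(x) = slope * x + intercept on [lower_bound, next lower_bound).
    pieces = [(float('-inf'), 0, 0), (0, 1, 5), (10, 2, -5)]

    def f_model(x):
        slope = intercept = 0
        for lo, a, b in pieces:
            if x >= lo:
                slope, intercept = a, b
        return slope * x + intercept

    # Validate the candidate model against instances collected during
    # solving, e.g. the ground facts f(3) = 8 and f(12) = 19:
    assert f_model(3) == 8 and f_model(12) == 19
    assert f_model(-7) == 0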

12:20
On the Variants of Subset Sum: Projected and Unbounded
PRESENTER: Mahesh Rajasree

ABSTRACT. In this work, we focus on variants of Subset Sum. We first define a new variant called the Unique Projection Subset Sum (u-PSSUM) problem: given $(a_1, \dots, a_n) \in \mathbb{Z}_{\ge 0}^n$ such that for every $t$ the equation $\sum_{i \in [n]} c_i a_i = t$ with $c_i \in \{0,1\}$ has a unique solution whenever one exists, u-PSSUM asks for a vector $\vec{x} = (x_1, \dots, x_n)$ such that there exists $S \subseteq [n]$ with $\sum_{i \in S} x_i = t$ and $\|\vec{a} - \vec{x}\|_p$ is minimized, where $\|\cdot\|_p$ denotes the $\ell_p$ norm. We present a deterministic $O(nt)$-time algorithm and a randomized $\widetilde{O}(n+t)$-time algorithm for u-PSSUM in the $\ell_1$ norm.

The second one is an already known variant called the Unbounded Subset Sum (UBSSUM) problem, which takes a tuple of non-negative integers $(a_1, \dots, a_n, t)$ as input and asks whether there exist non-negative integers $\beta_1, \dots, \beta_n$ such that $\sum_{i=1}^n \beta_i a_i = t$. We present the first polynomial-time reductions from UBSSUM to the Closest Vector Problem (CVP) in the $\ell_1$ and $\ell_{\infty}$ norms. We also give new algorithms for two variants of the UBSSUM problem.
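
For reference, the textbook O(nt) dynamic program deciding UBSSUM looks as follows in Python; this is the baseline, not the paper's new algorithms or its CVP reductions.

    def ubssum(a, t):
        """Is t a non-negative integer combination of the a_i?"""
        reachable = [False] * (t + 1)
        reachable[0] = True
        for s in range(1, t + 1):
            reachable[s] = any(ai <= s and reachable[s - ai] for ai in a)
        return reachable[t]

    # The classic 'Chicken McNugget' instance: 43 is the largest value
    # not representable from 6, 9 and 20.
    print(ubssum([6, 9, 20], 43), ubssum([6, 9, 20], 44))  # False True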

14:00-15:30 Session 5: Tutorial: The TLA+ Language and Tools for Specifying and Verifying Systems (Stephan Merz)

Abstract. TLA+ is a language for the formal description of (mainly) concurrent and distributed algorithms and systems, based on Zermelo-Fraenkel set theory and linear-time temporal logic. Verifying properties of TLA+ specifications is supported by model checking (TLC and Apalache) and by theorem proving (TLAPS). This tutorial will provide an introduction to the language and its accompanying tools, using the problem of distributed termination detection as a running example.


15:50-17:30 Session 6A: Symbolic Computation (I) & Numerical Computing
15:50
Hybrid Intervals and Symbolic Block Matrices

ABSTRACT. Structured matrices with symbolic sizes appear frequently in the literature, especially in the description of algorithms for linear algebra. Recent work has treated these symbolic structured matrices themselves as computational objects, showing how to add matrices with blocks of different symbolic sizes in a general way while avoiding a combinatorial explosion of cases. The present article introduces the concept of hybrid intervals, which may have negative multiplicity. Various operations on hybrid intervals have compact and elegant formulations that do not require case analysis on the order of the endpoints. This makes them useful for representing symbolic block matrix structures and for expressing arithmetic on symbolic block matrices compactly. The article shows how hybrid intervals can be used to formulate symbolic block matrix addition and multiplication in a compact and uniform way.
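
One concrete way to realize intervals with negative multiplicity (a sketch of the idea, not the article's exact formalism) is a signed characteristic function, for which the interval-splitting identity holds without any case analysis on the endpoint order:

    def chi(iv, x):
        """Multiplicity of x in the hybrid interval iv = (a, b):
        +1 on [a, b) when a <= b, and -1 on [b, a) when b < a."""
        a, b = iv
        return (1 if a <= x < b else 0) - (1 if b <= x < a else 0)

    # chi((a, c), x) == chi((a, b), x) + chi((b, c), x) for *every* ordering
    # of a, b, c -- the identity that lets block splits avoid case explosions.
    import itertools
    for a, b, c in itertools.product(range(4), repeat=3):
        assert all(chi((a, c), x) == chi((a, b), x) + chi((b, c), x)
                   for x in range(-1, 5))
    print('splitting identity verified on a small grid')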

16:10
Symbolic and numeric computation of symmetries for a class of Schrödinger equations

ABSTRACT. An important and challenging computational problem is to identify and include the missing compatibility (integrability) conditions for general systems of partial differential equations. The inclusion of such missing conditions is carried out by applying differential elimination algorithms. Differential equations arising during modeling generally contain both exactly known coefficients and coefficients known approximately from data. We focus on our recent work on approximate differential elimination methods, and in particular their application to the determination of approximate symmetries. We illustrate this with applications to a class of Schrödinger equations.
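
A toy illustration of a missing compatibility condition (not an example from the talk): for the system u_x = u, u_y = x*u, equating the mixed derivatives forces the hidden condition u = 0, which SymPy can exhibit mechanically.

    from sympy import Function, symbols, diff, simplify

    x, y = symbols('x y')
    u = Function('u')(x, y)

    # System: u_x = u and u_y = x*u. Compute u_xy both ways, substituting
    # the system back in after each cross-differentiation.
    uxy_from_first = diff(u, y).subs(diff(u, y), x * u)     # d/dy(u_x) -> x*u
    uxy_from_second = diff(x * u, x).subs(diff(u, x), u)    # d/dx(u_y) -> u + x*u

    # Their difference is the hidden integrability condition u = 0:
    print(simplify(uxy_from_first - uxy_from_second))  # -u(x, y)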

16:30
Experimental comparison of real root isolation implementations

ABSTRACT. Real solving of univariate polynomials is a fundamental algebraic question with important applications. This short paper focuses on the comparison of state-of-the-art free software implementations for isolating the real roots of univariate polynomials. The experimental comparison is done on an extensive dataset of polynomials with different types of roots, as well as varying coefficient bitsize and degree.
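
As an example of the kind of free implementation such comparisons cover, SymPy performs real root isolation via Poly.intervals(); a minimal usage sketch:

    from sympy import Poly, symbols

    x = symbols('x')
    p = Poly(x**5 - 3*x + 1, x)

    # Isolating intervals: disjoint intervals with rational endpoints,
    # each containing exactly one real root (multiplicity alongside).
    for (lo, hi), mult in p.intervals():
        print(lo, hi, mult)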

16:50
On systems of linear homogeneous algebraic equations over truncated formal series
PRESENTER: Sergei Abramov

ABSTRACT. Let $K$ be a field, $K[[x]]$ the ring of formal power series, and $K((x))$ the field of formal Laurent series over $K$. Let $A$ be a nonzero polynomial matrix of degree $l$ that is the $l$-truncation of a matrix $M \in K[[x]]^{n\times n}$ not known in full. We establish that it is impossible for the system $My=0$ to have a nonzero solution for every prolongation $M$ of a singular matrix $A$, and that for any truncated matrix $A$ there exists a prolongation giving a nonsingular matrix belonging to $K[[x]]^{n\times n}$. An algorithm is proposed which, given $A$, checks whether the system $My=0$ has no nonzero solution for any prolongation $M$, or whether, depending on $M$, it may or may not have a nonzero solution in $K((x))^n$.
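
A simple sufficient criterion, much weaker than the algorithm above: a prolongation perturbs the entries of $A$, and hence $\det A$, only by $O(x^{l+1})$, so if the valuation of $\det A$ is at most $l$, every prolongation $M$ is nonsingular and $My=0$ has only the trivial solution. A SymPy sketch:

    from sympy import Matrix, Poly, symbols

    x = symbols('x')
    l = 1  # entries of A are known modulo x**(l+1)
    A = Matrix([[1 + x, x],
                [x,     x]])

    d = Poly(A.det().expand(), x)        # here det(A) = x
    valuation = min(e for (e,) in d.monoms())
    print(valuation <= l)  # True: every prolongation M is nonsingular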

17:10
New Techniques in Numerical Analysis for Artificial Intelligence

ABSTRACT. Artificial intelligence (AI) has revolutionized numerous industries by providing intelligent solutions to complex problems. Numerical analysis plays a pivotal role in enhancing the performance and reliability of AI algorithms. This research paper explores new implementation techniques in numerical analysis for artificial intelligence. We investigate various areas where numerical analysis methods can be leveraged to improve AI algorithms, such as optimization, machine learning, and deep learning. Additionally, we discuss advancements in numerical techniques, including numerical optimization algorithms, numerical stability, and uncertainty quantification. The results demonstrate the effectiveness of these techniques in enhancing the performance and robustness of AI systems.
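
As one concrete instance of the numerical-stability theme (a standard textbook example, not specific to this paper), the softmax ubiquitous in deep learning overflows if evaluated naively but is exact after shifting by the maximum:

    import numpy as np

    def softmax(z):
        """Shifting by max(z) leaves softmax mathematically unchanged
        but keeps exp() from overflowing for large inputs."""
        e = np.exp(z - np.max(z))
        return e / e.sum()

    z = np.array([1000.0, 1001.0, 1002.0])
    print(softmax(z))        # well-defined: [0.09..., 0.24..., 0.66...]
    # np.exp(z) alone overflows to inf, so the naive formula returns nan.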

15:50-17:50 Session 6B: Workshop NCA
Location: Room 2 (A008)
15:50
Influence of Manual Inter-Observer Variability for the Performance of Deep Learning Models in Semantic Segmentation

ABSTRACT. Deep learning models for semantic segmentation have expanded their application span from general objects to structures of interest in real-world tasks. The current paper discusses one such practical challenge, i.e., the delineation of corrosion compounds on microscope images taken at the surface of unrestored archaeological artefacts. Apart from the complexity of learning irregular shapes, the influence of human manual annotation on model training becomes pivotal. The aim of the current paper is thus to measure the impact of inter-observer variability on the segmentation of four copper corrosion compounds. The results support the need for rigorous guidelines in the manual establishment of the ground truth.
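
A common way to quantify inter-observer variability (the paper's exact metric is not stated here) is the pairwise overlap of the annotators' masks, e.g. the Dice coefficient:

    import numpy as np

    def dice(mask_a, mask_b):
        """Dice coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
        inter = np.logical_and(mask_a, mask_b).sum()
        total = mask_a.sum() + mask_b.sum()
        return 2 * inter / total if total else 1.0

    # Two observers' annotations of the same region (toy 3x3 masks):
    obs1 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=bool)
    obs2 = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=bool)
    print(dice(obs1, obs2))  # 0.8 -- quantifies inter-observer agreement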

16:10
Neural Architecture Search with Zero-Cost Proxies

ABSTRACT. Neural Architecture Search (NAS) has demonstrated great potential in automating the design of deep neural networks. However, computational complexity and resource requirements are major drawbacks of neural architecture search algorithms. Zero-cost proxies offer a solution by assessing architecture performance using inexpensive surrogate metrics. In this paper, two novel methods are introduced to integrate zero-cost proxies into evolutionary neural architecture search: proxy-guided crossover and proxy regularization. In the proxy-guided crossover approach, the cut-point crossover operation is guided by the zero-cost proxy scores in order to choose the best possible combination of the genetic material from two parents. Proxy regularization integrates zero-cost proxies as regularization during neural architecture search, eliminating the model with the lowest proxy score from the population list. The proposed methods were evaluated using the CIFAR-100 and NAS-Bench-201 benchmark datasets. The experimental analysis demonstrates that the proposed techniques speed up the search process, helping to find better models faster. This research contributes to the neural architecture search field by making the discovery of promising neural network architectures during the search process more effective.
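
A minimal Python sketch of the proxy-regularization idea, with a made-up architecture encoding and total layer width standing in for a real zero-cost proxy such as synflow:

    import random

    def proxy(arch):
        # Stand-in zero-cost proxy: total layer width. Real proxies
        # (synflow, grad_norm, ...) score an untrained network from a
        # single forward/backward pass instead.
        return sum(arch)

    def random_arch():
        return [random.choice([16, 32, 64]) for _ in range(4)]

    def evolve(population, generations=10):
        for _ in range(generations):
            child = random_arch()   # crossover/mutation would go here
            population.append(child)
            # Proxy regularization: evict the member with the lowest
            # proxy score rather than the oldest one.
            population.remove(min(population, key=proxy))
        return population

    print(evolve([random_arch() for _ in range(5)]))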

16:30
Predicting Blood Glucose Levels Using LSTM

ABSTRACT. This study explores the application of Long Short-Term Memory (LSTM) algorithms to predict blood glucose levels, leveraging sequence-based data from a Dexcom G6 Continuous Glucose Monitoring (CGM) sensor. Our research methodically details a rigorous data cleansing process that transforms raw glucose readings into meaningful, sequenced input for LSTM models. By feeding the LSTM with structured data in a variety of volumes, we analyze and discuss how the quantity of information influences the accuracy of glucose level predictions. This investigation holds significant implications for the advancement of personalized glycemic control, potentially improving the day-to-day management and overall quality of life for individuals living with diabetes.
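
For illustration, a typical way to turn a CGM stream into LSTM training sequences; the window lengths below are arbitrary, not taken from the paper:

    import numpy as np

    def make_windows(readings, history=12, horizon=6):
        """Turn a CGM series (one reading every 5 minutes on a Dexcom G6)
        into supervised pairs: `history` past readings predict the reading
        `horizon` steps ahead (here 60 minutes of context, 30 minutes out)."""
        X, y = [], []
        for i in range(len(readings) - history - horizon + 1):
            X.append(readings[i:i + history])
            y.append(readings[i + history + horizon - 1])
        return np.array(X)[..., None], np.array(y)  # (samples, steps, 1)

    glucose = np.random.default_rng(0).normal(120, 25, size=500)
    X, y = make_windows(glucose)
    print(X.shape, y.shape)  # (483, 12, 1) (483,)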

16:50
Detecting Manipulated Citations through Disturbed Node2Vec Embedding

ABSTRACT. This study introduces a novel method to reveal potential citation manipulation in academic papers. The credibility and trustworthiness of scholarly publications are often compromised by manipulated citations, which are challenging to identify using traditional methods. The proposed method applies perturbations to a deep embedding model of the citation graph in order to reconstruct meaningful connections. The network undergoes a series of perturbations, each consisting of randomly removing a fixed fraction of edges. The method aims to recover reliable citations by considering alternative paths while estimating their trustworthiness. Edges demonstrating more stable behavior under such distortions are recognized as genuine; accordingly, the most sensitive edges are suspected to be manipulated. Numerical experiments demonstrate the method's effectiveness in identifying reliable citations as the core of the analyzed data.
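
A crude Python sketch of the perturbation loop using networkx, with plain connectivity through alternative paths in place of the paper's embedding-based reconstruction:

    import random
    import networkx as nx

    def edge_stability(G, trials=200, drop_frac=0.1):
        """For each edge, the fraction of deletions after which its two
        endpoints remain connected through an alternative path."""
        hits = {e: 0 for e in G.edges()}
        drops = {e: 0 for e in G.edges()}
        k = int(drop_frac * G.number_of_edges())
        for _ in range(trials):
            removed = random.sample(list(G.edges()), k)
            H = G.copy()
            H.remove_edges_from(removed)
            for u, v in removed:
                drops[(u, v)] += 1
                if nx.has_path(H, u, v):
                    hits[(u, v)] += 1
        return {e: hits[e] / drops[e] for e in hits if drops[e]}

    G = nx.karate_club_graph()
    scores = edge_stability(G)
    # The least stable edges are the candidates for manipulated citations.
    print(sorted(scores, key=scores.get)[:5])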

17:10
Solar flare classification using modified metaheuristic optimized XGBoost

ABSTRACT. The Sun has a significant effect on nearly everything in the solar system. While a relatively tame star, it is nevertheless prone to strong emissions of energy in the form of solar flares. Event frequency follows a cycle of approximately 11 years, and at the peak of this cycle events occur more often. Powerful events can even reach the Earth and cause magnetic storms that disrupt radio communications, satellite trajectories, and navigation. Particularly energetic events can even cause significant harm to power and telecommunications infrastructure. A need for a robust system capable of accurate classification is evident. This work proposes an approach based on extreme gradient boosting optimized via metaheuristic algorithms and applied to a real-world observation dataset. Outcomes have been compared with those of several modern optimization algorithms and are promising. The results have been statistically validated as significant, and the best models were analyzed to determine feature impacts on classifications.
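
Schematically, the tuning loop looks as follows; plain random search stands in for the modified metaheuristic, and a synthetic dataset replaces the solar-flare observations:

    import random
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    def sample_params():
        return dict(n_estimators=random.randint(50, 300),
                    max_depth=random.randint(2, 8),
                    learning_rate=10 ** random.uniform(-2, -0.5),
                    subsample=random.uniform(0.6, 1.0))

    best, best_score = None, -1.0
    for _ in range(20):  # a metaheuristic would guide this search instead
        params = sample_params()
        score = cross_val_score(XGBClassifier(**params), X, y, cv=3).mean()
        if score > best_score:
            best, best_score = params, score

    print(best_score, best)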

17:30
Comparative Study of Machine Learning Methods to Classify Bowel Polyps

ABSTRACT. Gastrointestinal polyps are common abnormalities detected during colonoscopy screenings, and accurate classification of these polyps is essential for effective diagnosis and treatment planning. This study aims to explore the application of machine learning techniques for the classification of gastrointestinal polyps in colonoscopy video clips. A comparative analysis is conducted to evaluate the performance of eight machine learning algorithms: three tree-based models, three boosting algorithms, and two other classifiers. The dataset consists of 152 instances, encompassing hyperplastic, serrated, and adenoma lesions, with a total of 76 polyps. The results of this study provide insights into the effectiveness of machine learning algorithms for improving the efficiency and accuracy of gastrointestinal polyp classification, ultimately contributing to enhanced patient care and diagnostic outcomes.
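
The comparison protocol can be sketched with scikit-learn; a public dataset stands in for the (non-public) polyp data, and only four of the compared model families are shown:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)  # stand-in public dataset
    models = {'decision tree': DecisionTreeClassifier(random_state=0),
              'random forest': RandomForestClassifier(random_state=0),
              'gradient boosting': GradientBoostingClassifier(random_state=0),
              'SVM': SVC()}
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f'{name}: {scores.mean():.3f} +/- {scores.std():.3f}')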