SYNASC2019: 21ST INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING
PROGRAM FOR FRIDAY, SEPTEMBER 6TH

08:40-09:30 Session 13: Invited talk

Adina-Magda Florea - Human-robot Interaction

Location: A11
08:40
Human-Robot Interaction

ABSTRACT. The presentation covers the basic trends in human-robot interaction research, with a special focus on user-robot interaction for people with special needs (assistive robotics). We then discuss the research results of the AI-MAS Laboratory on this topic, obtained in the framework of two national projects and a European one.

09:30-10:20 Session 14: Invited talk

Stephen Watt - Review of Algorithms on Symbolic Domains

Location: A11
09:30
A Review of Algorithms on Symbolic Domains

ABSTRACT. This talk is about computing with mathematical quantities where the sizes or shapes are not known in advance. We consider polynomials where the exponents can be given by symbolic expressions, matrices with blocks or other internal structure of symbolic size, and piece-wise functions where the shapes of the domains are given by symbolic expressions.

For polynomials with symbolic exponents, there are various straightforward operations, such as squaring $x^{2n}-1$ to get $x^{4n}-2x^{2n}+1$, or differentiating to get $2nx^{2n-1}$. We review algorithms to compute more sophisticated operations such as the GCD, factorization and functional decomposition of such polynomials.
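
These basic operations can be reproduced in a general-purpose CAS; a minimal sympy sketch (an illustration only, not the reviewed algorithms):

    import sympy as sp

    x, n = sp.symbols('x n', positive=True)
    p = x**(2*n) - 1
    print(sp.expand(p**2))       # x**(4*n) - 2*x**(2*n) + 1
    print(sp.diff(x**(2*n), x))  # 2*n*x**(2*n - 1)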

For symbolic matrices, we show how to do arithmetic on matrices with blocks, bands and other structures where the dimensions are given by symbolic expressions.
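
A taste of this is already available in sympy's matrix expressions, where blocks carry symbolic dimensions and products collapse block-wise (a sketch, not the talk's algorithms):

    from sympy import (MatrixSymbol, ZeroMatrix, BlockMatrix,
                       block_collapse, symbols)

    n, m = symbols('n m', positive=True, integer=True)
    A = MatrixSymbol('A', n, n)
    B = MatrixSymbol('B', n, m)
    C = MatrixSymbol('C', m, m)
    M = BlockMatrix([[A, B], [ZeroMatrix(m, n), C]])
    print(block_collapse(M * M))  # roughly: [[A**2, A*B + B*C], [0, C**2]]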

Finally, we consider the case of piece-wise functions, where the regions of definition are given symbolically. We show how hybrid sets, a generalization of multi-sets allowing negative multiplicities, can be used to reduce the computational complexity of working with these objects, and lead to a particularly elegant formulation of Stokes’ theorem.

10:10-12:10 Session 15: IAFP Workshop (I)
Location: 045C
10:10
Iterative approximations for non-self operators

ABSTRACT. The purpose of this work is to present some fixed point results, based on iterative methods, for several classes of non-self operators: contractions, Kannan operators, compact operators and condensing operators.

10:30
Some conjectures on global asymptotic stability for discrete dynamical systems

ABSTRACT. In this paper we present some results concerning the LaSalle conjecture, the Belitskii-Lyubich conjecture and two conjectures of the author.

10:50
A global implicit operator theorem in terms of monotone operators

ABSTRACT. Let $X$ be a nonempty set, $H$ a real Hilbert space and $T:X\times H\rightarrow H$ be such that $T\left( x,\cdot \right) :H\rightarrow H$ is a strongly monotone operator for each $x\in X$. In this paper we give conditions under which there exists a unique operator $\Phi :X\rightarrow H$ such that $T\left( x,\Phi \left( x\right) \right) =0$ for all $x\in X$. In the cases where $X$ is a metric space, and where $X=\mathbb{R}^{p}$ and $H=\mathbb{R}^{q}$, we present some properties of the implicit operator $\Phi$.
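
When $T(x,\cdot)$ is also Lipschitz, $\Phi(x)$ can be located numerically by a damped iteration; a one-dimensional illustration (not from the paper), with the hypothetical choice $T(x,y)=y+0.5\sin y-x$:

    import numpy as np

    def phi(x, step=0.5, tol=1e-12):
        # T(x, y) = y + 0.5*sin(y) - x is strongly monotone in y
        y = 0.0
        while True:
            t = y + 0.5 * np.sin(y) - x
            if abs(t) < tol:
                return y
            y -= step * t  # contraction, since 0.5 <= dT/dy <= 1.5

    print(phi(1.0))  # the unique y with T(1, y) = 0, i.e. Phi(1)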

11:10
Existence of asymptotically stable solutions to a nonlinear integral equation of mixed type

ABSTRACT. An existence result of asymptotically stable solutions to a nonlinear integral equation of mixed (Volterra-Hammerstein) type is presented. The proof is based on the application of a fixed point theorem of Schaefer’s type on Fréchet spaces.

11:30
Fixed Point Theorems Applied to Invariant Subspaces Problem on Hilbert Spaces

ABSTRACT. The problem of invariant subspaces for bounded linear operators on Hilbert spaces has remained open after almost a century of considerable effort, since 1935, when John von Neumann proved in an unpublished manuscript that every compact operator has invariant subspaces. Our focus is on applying fixed point theorems to a suitable map that will produce invariant subspaces. Many mathematicians have tried alternative proofs of Lomonosov's theorem using other fixed point theorems, for instance the Banach contraction principle. Hilden (1977) proved a special case - that every compact operator has nontrivial hyperinvariant subspaces - without explicitly using any fixed point theorem, relying instead on a contraction-like phenomenon involving the norm of the operator.

11:50
Almost local contractions in b-pseudometric spaces

ABSTRACT. We introduce a new class of contractive mappings, the almost local contractions, starting from the almost contractions presented by V. Berinde (2004) and from the concept of local contraction presented by Martins-da-Rocha and Vailakis (2010). The concept of b-metric space was introduced by S. Czerwik in 1998. Since then, several publications have studied fixed points of single-valued and multivalued operators in b-metric spaces. In the first part of this work, we present a set of concepts which help us establish fixed point theorems for various types of almost local contractions in b-pseudometric spaces. We also include some results involving data dependence of the fixed point for quasi $\varphi$-contractions on $b$-pseudometric spaces.

10:40-12:20 Session 16: Symbolic Computation (II)
Location: A11
10:40
An Attempt to Enhance Buchberger's Algorithm by Using Remainder Sequences and GCD Operation

ABSTRACT. This paper proposes a new method of enhancing Buchberger's algorithm for lexicographic order (LEX-order) Groebner bases. The idea is to append polynomials to the input system, where we generate the polynomials to be appended by computing PRSs (polynomial remainder sequences) of the input polynomials and making them close to basis elements by the GCD operation. In order to do so, we first restrict the input system to satisfy three reasonable conditions (we call such systems "healthy"), and we compute redundant PRSs so that the variable-eliminated remainders are not in a triangular form but in a rectangular form; we call the corresponding PRSs "rectangular PRSs (rectPRSs)"; see Section 2.2 for details. The lowest-order element of the LEX-order Groebner basis can be computed from the last set of remainders of rectPRSs and the GCD operation; see [SI18]. We generate further polynomials to be appended by computing rectPRSs of leading coefficients of mutually similar remainders and converting the order-reduced leading coefficient to a polynomial in the given ideal. By these means we are able to compute the second-lowest element of the Groebner basis, too. The research is ongoing; our method seems promising but still contains many problems, theoretical as well as computational, so we present it as open research.
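
On a toy system, the role of the lowest-order basis element can be seen with ordinary elimination; a hedged sympy sketch (plain resultants stand in for the paper's rectangular PRSs):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 + y**2 - 5
    g = x*y - 2
    # Eliminating y yields (a multiple of) the lowest LEX-order basis element:
    print(sp.resultant(f, g, y))                   # x**4 - 5*x**2 + 4
    print(sp.groebner([f, g], y, x, order='lex'))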

11:00
Feature Extraction using Legendre-Sobolev Representation for Handwritten Mathematical Characters

ABSTRACT. Despite advances in handwriting recognition, expression-level recognition rates for mathematics are still well below the threshold acceptable for a mathematics-oriented system. The two-dimensional nature of math formulas and the large set of math characters, with their variations in style and size, form the main challenges that mathematical handwriting recognition faces. To address these difficulties, the two critical questions to answer are how handwritten data is represented and how certain features are computed from the chosen representation.

To this end, we treat handwritten characters as approximated parametrized coordinate curves in Legendre-Sobolev bases. This representation allows us to study the geometrical features of handwritten characters as a whole. These geometrical features correspond to baselines, bounding boxes, loops, and cusps appearing in handwritten characters. In this paper, we propose methods for computing the derivative, roots, and GCD of polynomials in Legendre-Sobolev bases to find such features without needing to convert the approximations to the monomial basis. Our methods rely on linear algebra operations such as matrix multiplication and solving Diophantine equations.
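
The flavor of such coefficient-space computations, using ordinary Legendre series from numpy as a stand-in for the Legendre-Sobolev basis (the paper's basis and algorithms differ):

    import numpy as np
    from numpy.polynomial import legendre as L

    t = np.linspace(-1, 1, 200)       # normalized parameter of the stroke
    x = np.sin(2 * t) + 0.3 * t       # toy coordinate curve x(t)
    cx = L.legfit(t, x, deg=12)       # Legendre coefficients of x(t)
    dcx = L.legder(cx)                # derivative, still in coefficient space
    crit = L.legroots(dcx)            # candidate extrema for box/cusp tests
    print(crit[np.isreal(crit) & (np.abs(crit) <= 1)].real)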

11:20
Source code vulnerabilities detection using loosely coupled data and control flows
PRESENTER: Sergiu Zaharia

ABSTRACT. Source code has become one of the backbones of business and personal processes, and its volume is growing significantly. As applications are among the most used attack surfaces against individuals and organizations from all sectors, their intrinsic vulnerability, arising from the supporting source code, must be reduced by design. Currently, technology providers and open communities offer Static Analysis Security Testing (SAST) solutions able to detect vulnerabilities in code written in the most used programming languages and development frameworks. The proposed method is a security scanning solution based on an Intermediate Representation of source code which is loosely coupled to the programming language structure and to the data flow, while preserving the security vulnerability patterns. The ability to identify vulnerable source code snippets in the Intermediate Representation of the original source code is the core idea of this research project. Using loosely coupled control flow and data flow representations of the original source code enables the development of new security scanners, which can in the future evaluate applications written in new and exotic languages.
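
One way to picture such a loosely coupled representation: normalize an AST into a coarse token stream that keeps control- and data-flow shape while dropping language surface. A hypothetical sketch using Python's own ast module (the paper's IR is its own design):

    import ast

    def to_ir(src):
        # Map Python source to a coarse, language-agnostic token stream.
        kinds = {ast.Call: 'CALL', ast.Assign: 'ASSIGN', ast.If: 'BRANCH',
                 ast.For: 'LOOP', ast.While: 'LOOP', ast.Return: 'RET'}
        return [kinds[type(n)] for n in ast.walk(ast.parse(src))
                if type(n) in kinds]

    # An injection-shaped snippet keeps its ASSIGN/CALL pattern in the IR:
    print(to_ir("q = 'SELECT * WHERE id=' + uid\ncursor.execute(q)"))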

11:40
Applications of Equivalence Algorithms in Software Design

ABSTRACT. The paper overviews a set of applications we have proposed for efficiently solving, by means of equivalence algorithms, certain problems that may appear in software design. We have proposed and used canonical elements and equivalence algorithms for curricula equivalence problems, pattern matching in expert systems, and data interchange frameworks between information systems using cloud environments, with a special case study on interchanging learning components (a work in progress), as well as for best rule associations in business intelligence problems. We review all these applications within the present study, revealing common characteristics and advantages, as well as new potential uses of canonical elements and equivalence algorithms in software design.
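
The classical machinery behind such applications is a union-find structure whose find operation returns the canonical element of a class; a minimal sketch (illustrative, not the paper's code):

    class UnionFind:
        def __init__(self):
            self.parent = {}

        def find(self, x):               # canonical element of x's class
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):           # declare a equivalent to b
            self.parent[self.find(a)] = self.find(b)

    uf = UnionFind()
    uf.union('Course A (univ. 1)', 'Course B (univ. 2)')  # curricula equivalence
    print(uf.find('Course A (univ. 1)') == uf.find('Course B (univ. 2)'))  # True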

12:00
The Fermat-Torricelli Problem on Sphere with Euclidean Metric

ABSTRACT. We study the Fermat-Torricelli problem on the sphere with the Euclidean metric. Given a spherical triangle $\triangle ABC$ whose side lengths are $a,b,c$ respectively, we discuss how to construct the implicit function $f(L,a,b,c)=0$, where $L$ is the minimal sum of distances between a point $P$ on the sphere and the vertices of $\triangle ABC$. We transform this problem into the elimination of polynomial equations and successfully construct $f(L,a,b,c)$ by a combination of the Sylvester resultant, the Dixon resultant and implicit function interpolation based on symbolic-numeric computation. We then show that for any given relation $g(L,a,b,c)=0$ which $L,a,b,c$ may satisfy, $g(L,a,b,c)$ can be expressed using the four irreducible factors of $f(L,a,b,c)$.
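
A numerical cross-check of the minimal distance sum for one concrete triangle (an illustration; the paper's contribution is the exact implicit relation obtained by elimination):

    import numpy as np
    from scipy.optimize import minimize

    A, B, C = np.eye(3)            # a spherical triangle on the unit sphere

    def total_dist(p):
        p = p / np.linalg.norm(p)  # project back onto the sphere
        return sum(np.linalg.norm(p - Q) for Q in (A, B, C))

    res = minimize(total_dist, np.array([1.0, 1.0, 1.0]))
    print(res.fun)                 # minimal L for this a, b, c (Euclidean metric)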

13:00-14:30 Session 17: Tutorial

Arie Gurfinkel and Nikolaj Bjorner - The Science, Art and Magic of Constrained Horn Clauses

Location: A11
13:00
The Science, Art and Magic of Constrained Horn Clauses
PRESENTER: Arie Gurfinkel

ABSTRACT. Constrained Horn Clauses (CHC) is a fragment of First Order Logic modulo constraints that captures many program verification problems as constraint solving. Safety verification of sequential programs, modular verification of concurrent programs, parametric verification, and modular verification of synchronous transition systems are all naturally captured as a satisfiability problem for CHC modulo theories of arithmetic and arrays.

Of course, satisfiability of CHC is undecidable. Thus, solving them is a mix of science, art, and a dash of magic. In this tutorial, we explore several aspects of this problem. We illustrate how different problems are translated to CHC. We present a framework, called Spacer, that reduces symbolically solving Horn clauses to multiple simpler Satisfiability Modulo Theories (SMT) queries. Finally, we describe advances in SMT that are necessary to make the framework a reality.
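
A minimal example of the encoding, via Z3's Horn engine (which implements Spacer) from Python; a sketch assuming z3py is installed:

    from z3 import *

    x, xp = Ints('x xp')
    inv = Function('inv', IntSort(), BoolSort())   # unknown inductive invariant

    s = SolverFor('HORN')                          # routed to the Spacer engine
    s.add(ForAll([x], Implies(x == 0, inv(x))))                          # init
    s.add(ForAll([x, xp], Implies(And(inv(x), xp == x + 1), inv(xp))))   # step
    s.add(ForAll([x], Implies(And(inv(x), x < 0), BoolVal(False))))      # error

    print(s.check())   # sat: safe; s.model() then gives inv, e.g. x >= 0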

14:50-16:30 Session 18A: Logic and Programming
Location: A11
14:50
Portfolio SAT and SMT Solving of Cardinality Constraints in Sensor Network Optimization

ABSTRACT. Wireless Sensor Networks (WSNs) serve as the basis for Internet of Things applications. A WSN consists of a number of spatially distributed sensor nodes, which cooperatively monitor physical or environmental conditions. In order to ensure the dependability of WSN functionalities, several reliability and security requirements have to be fulfilled. In previous work, we applied OMT (Optimization Modulo Theories) solvers to maximize a WSN's lifetime, i.e., to generate an optimal sleep/wake-up scheduling for the sensor nodes. We discovered that the bottleneck for the underlying SMT (Satisfiability Modulo Theories) solvers was typically to solve satisfiable instances. In this paper, we encode the WSN verification problem as a set of Boolean cardinality constraints, so that SAT solvers can also be applied as underlying solvers. We have experimented with different SAT solvers and with different SAT encodings of Boolean cardinality constraints. Our experiments show that the SAT-based approach is very powerful on satisfiable instances but quite poor on unsatisfiable ones. We therefore apply both SAT and SMT solvers in a portfolio setting. In our experiments, the MiniCARD+Z3 setting proved the most powerful, outperforming OMT solvers by 1-2 orders of magnitude.
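
The flavor of the encoding in z3py (a sketch; the paper's benchmarks and encodings are richer):

    from z3 import *

    nodes = Bools('n0 n1 n2 n3 n4 n5 n6 n7')   # True = node awake in a time slot
    s = Solver()
    s.add(AtLeast(*nodes, 2))   # coverage requirement: at least 2 nodes awake
    s.add(AtMost(*nodes, 3))    # lifetime requirement: at most 3 nodes awake
    if s.check() == sat:
        print(s.model())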

15:10
Superposition Reasoning about Quantified Bitvector Formulas
PRESENTER: David Damestani

ABSTRACT. We describe recent extensions to the first-order theorem prover Vampire for proving theorems in the theory of fixed-sized bitvectors, possibly with quantifiers. Details are given on extending both the parser of Vampire and its theory reasoning framework. We present experimental results, evaluating and comparing our approach to SMT solvers. Our experiments also report on a few examples that can be solved only by our work.
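
A toy quantified bitvector validity of the targeted kind, checked here with an SMT solver for comparison (z3py sketch; Vampire consumes SMT-LIB/TPTP input):

    from z3 import *

    x, y = BitVecs('x y', 32)
    # Valid for all fixed-size bitvectors: x | y is unsigned->= y.
    formula = ForAll([x, y], UGE(x | y, y))
    s = Solver()
    s.add(Not(formula))
    print(s.check() == unsat)   # True: the quantified formula is valid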

15:30
Improved Processor Bounding for Non-preemptive Task Scheduling

ABSTRACT. The scheduling problem lies at the heart of all research efforts regarding real-time systems design and analysis. Although numerous results have been achieved during the last decades for particular cases, the general problem is still a difficult challenge for the research community. This is particularly true for non-preemptive scheduling, where even algorithms that have been proved optimal for preemptive scheduling fail in many situations. This paper, building on previous results of the authors, proposes an improved method of determining a lower bound for the number of processors necessary in a multiprocessor system, under non-preemptive scheduling, to allow finding a feasible schedule for a given single-instance task set. Two new contributions are presented, through which a more realistic lower bound can be computed.
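
The baseline that such methods refine is the total-workload bound; a minimal sketch (the paper's improved bound is sharper than this):

    import math

    def workload_lower_bound(exec_times, window):
        # Any feasible schedule needs at least ceil(sum C_i / D) processors.
        return math.ceil(sum(exec_times) / window)

    print(workload_lower_bound([4, 4, 3, 3, 2], window=8))   # -> 2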

15:50
Inference System to Achieve Saturation of (F,R,N) Knowledge Bases

ABSTRACT. This article presents a system for making inferences (derivations) on a logical knowledge base with the goal of achieving its saturation. The main objective is to implement a derivation algorithm (inference engine) that applies the set of rules to facts to produce new, implicit knowledge from the explicitly stated facts. This process runs recursively until it reaches a state where no new facts can be deduced in an iteration (i.e., saturation is achieved). Typical knowledge bases in the literature comprise facts and deduction rules. In our work, an extra layer of this ontology is considered: constraints. Negative constraints are the logical counterpart of integrity constraints in database systems, which ensure that data respects the intended semantics. So, in our case, the KB is a triple (F,R,N). The concrete implementation of the derivation system was made in the object-oriented language Java. The KB is represented in the OO environment as a collection of objects of classes corresponding to each type of knowledge: fact, rule and constraint. The system takes the knowledge base (in OO representation) as input, applies the derivation algorithm and outputs the saturated set of facts. The process relies on techniques such as forward chaining and backtracking in order to find all possible combinations of facts that match the body of a rule in a progressive and ordered fashion. Testing was done on KBs of three sizes: small, medium and large. Results and comparisons are presented in tabular form.
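
The implementation described is in Java; a propositional Python sketch of the saturation loop, with rules as (body, head) pairs and negative constraints as fact sets:

    def saturate(facts, rules, constraints):
        facts = set(facts)
        changed = True
        while changed:                        # forward chaining until fixpoint
            changed = False
            for body, head in rules:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        violated = [c for c in constraints if c <= facts]
        return facts, violated

    F = {'bird(t)', 'penguin(t)'}
    R = [({'bird(t)'}, 'flies(t)')]
    N = [{'flies(t)', 'penguin(t)'}]          # negative constraint
    print(saturate(F, R, N))                  # saturated facts + violated constraint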

14:50-16:30 Session 18B: IAFP Workshop (II)
Location: 045C
14:50
Approximating fixed points of a general class of Kannan type contractive mappings

ABSTRACT. The aim of this paper is to introduce a large class of Kannan type mappings that includes Kannan mappings as a particular case but also includes some nonexpansive mappings. We study their sets of fixed points and prove a convergence theorem for the Krasnoselskij iteration used to approximate fixed points of this kind of Kannan contractions. We then extend this class further to the class of Bianchini contractions. Examples are given to illustrate the effectiveness of our approach.
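
The Krasnoselskij iteration has the form $x_{n+1}=(1-\lambda)x_n+\lambda Tx_n$; a one-dimensional numerical illustration (the mapping below is a toy example, not the paper's):

    def krasnoselskij(T, x0, lam=0.5, tol=1e-12, max_iter=10000):
        x = x0
        for _ in range(max_iter):
            x_next = (1 - lam) * x + lam * T(x)   # averaged iteration
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # A Kannan-type self-map of [0, 1] with fixed point 0:
    print(krasnoselskij(lambda x: x / 4, x0=1.0))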

15:10
On strong fuzzy metric spaces

ABSTRACT. In this talk we provide a new perspective on strong fuzzy metric spaces, we discuss some topological properties and present some fixed point results in this setting.

15:30
Computing the Pompeiu-Hausdorff distance between some geometrical sets

ABSTRACT. In this paper, our aim is to illustrate how one can compute the distance between some geometrical sets. The starting point of our research is the paper [1]; see also [2], [3]. References: [1] T. Marošević, The Hausdorff distance between some sets of points, Math. Commun. 23 (2018), 247-257. [2] T. Marošević, Data clustering for circle detection, Croat. Oper. Res. Rev. 5 (2014), 15-24. [3] T. Marošević, R. Scitovski, Multiple ellipse fitting by center-based clustering, Croat. Oper. Res. Rev. 6 (2015), 43-53.
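
For finite samples of such sets the Pompeiu-Hausdorff distance is directly computable; a scipy-based sketch (the circles below are placeholders for the paper's geometrical sets):

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    A = np.c_[np.cos(t), np.sin(t)]               # unit circle
    B = np.c_[2 * np.cos(t) + 1, 2 * np.sin(t)]   # radius 2, center (1, 0)

    d = max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
    print(d)   # symmetric Pompeiu-Hausdorff distance of the two samples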

15:50
A common fixed point theorem for non-self mappings in strictly convex Fuzzy metric spaces
PRESENTER: Natasa Ćirović

ABSTRACT. In this paper we prove the existence and uniqueness of a common fixed point for mappings defined on Fuzzy metric spaces with a $t$-norm $*$ satisfying $a*a\geq a$ for all $a\in [0,1]$. We consider a pair of coincidentally commuting non-self mappings defined on strictly convex Fuzzy metric spaces. The contractive condition is of nonlinear generalized type. In the proof of the main result, topological methods for the characterization of spaces with nondeterministic distances are used.

16:10
Fixed point theorems in partially ordered H-distance spaces

ABSTRACT. We establish new fixed point theorems for Kannan and Bianchini type mappings in the very general setting of a space with a distance. We work on a partially ordered set on which we assume an H-distance and a non-decreasing mapping. The fixed point is unique under some additional conditions. We also present some examples to illustrate the theoretical results. By working in the general setting of a complete H-distance space endowed with a partial order, we obtain significant generalizations of the Kannan and Bianchini fixed point theorems in partially ordered metric spaces.

16:50-18:50 Session 19: Artificial Intelligence (II)
Location: A11
16:50
Investigating differences between ancient and modern bacterial DNA in dental calculus via autoencoders

ABSTRACT. Deoxyribonucleic acid (DNA) encodes the totality of hereditary information of all living organisms. A plethora of studies in recent years have been dedicated to investigating DNA of various organisms. One of the many research directions focuses on studying ancient DNA molecules sampled from archaeological sites, which may reveal essential insight about lost populations, extinct species, past diseases or bacteria that may have caused epidemics. However, DNA collected from such locations is often subject to degradation and contamination, thus making its analysis even more cumbersome. This paper presents our experiments towards identifying differences between bacterial DNA from ancient and modern samples, which is important information needed by biologists who study ancient DNA. We investigate autoencoders' capability of capturing the internal characteristics of the DNA molecules and of encoding pertinent differences between ancient and modern bacterial DNA. We perform our experiments on more than 89000 ancient bacterial DNA sequences collected from an archaeological site and our results show the potential of this method.
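
The general shape of such an experiment, as a hedged Keras sketch (the preprocessing and architecture here are illustrative assumptions, not the paper's setup):

    import numpy as np
    import tensorflow as tf

    def one_hot(seq):                     # 'ACGT...' -> flat 0/1 vector
        idx = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        v = np.zeros((len(seq), 4))
        for i, ch in enumerate(seq):
            v[i, idx[ch]] = 1.0
        return v.ravel()

    dim = 4 * 100                         # fixed-length reads of 100 bp
    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(dim,)),
        tf.keras.layers.Dense(32, activation='relu'),     # bottleneck code
        tf.keras.layers.Dense(dim, activation='sigmoid'),
    ])
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    # autoencoder.fit(X_ancient, X_ancient, epochs=20) on one-hot reads; the
    # reconstruction error and learned codes can then be compared between
    # ancient and modern sequences (e.g. damage-induced C->T substitution bias).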

17:10
Identification of elastoplastic material properties using PIES and PSO

ABSTRACT. The paper presents the identification of material properties in elastoplastic boundary value problems. The direct problem is solved using the parametric integral equation system (PIES), while optimization is performed by the particle swarm optimization (PSO) method. PIES is characterized by an effective way of modeling the plastic domain, using only parametric surfaces, without discretization. This makes the proposed approach more favorable (in terms of complexity) than identification using so-called element methods (FEM, BEM). The paper contains examples of identification of the yield stress and the plastic modulus in selected elastoplastic problems.
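
A bare-bones PSO loop of the kind used for the inverse step; the forward PIES solve is replaced here by a placeholder misfit function (the target values are invented):

    import numpy as np

    def pso(objective, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n, len(lo)))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([objective(p) for p in x])
        g = pbest[pval.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random((2, n, len(lo)))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([objective(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()]
        return g

    # Placeholder: recover (yield stress, plastic modulus) = (250, 10)
    target = np.array([250.0, 10.0])
    print(pso(lambda p: np.sum((p - target) ** 2), lo=[100, 1], hi=[500, 50]))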

17:30
Interval arithmetics in modelling and solving uncertainly defined boundary value problems of elasticity

ABSTRACT. In this paper, the idea of modeling and solving boundary value problems with an uncertainly defined boundary shape is presented. For this purpose, a modification of the PIES method is proposed. In tests on exactly defined problems, the advantages of PIES over alternative methods have been repeatedly demonstrated, which is why we decided to adapt it to uncertainly defined problems. Directed interval arithmetic is used to model the shape of the boundary, and a modification of directed interval arithmetic is proposed for the numerical calculations. The obtained numerical interval solutions are compared with interval analytical solutions, which can be obtained using classical and directed interval arithmetic, both well known in the literature, as well as using the proposed modification of directed interval arithmetic.
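
The classical interval operations underlying the approach; directed (Kaucher-style) intervals, which also admit improper bounds, extend these rules (a sketch only):

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)

        def __sub__(self, o):
            return Interval(self.lo - o.hi, self.hi - o.lo)

        def __mul__(self, o):
            p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
            return Interval(min(p), max(p))

    # An uncertainly defined boundary coordinate, e.g. 1.0 +/- 0.05:
    a = Interval(0.95, 1.05)
    print(a * a + Interval(-0.1, 0.1))   # enclosure of a^2 plus a perturbation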

17:50
A Restaurants Recommendation System: Improving Rating Predictions using Sentiment Analysis

ABSTRACT. Users often find themselves facing the same question: which product, movie, vacation offer, restaurant or book is the best to choose? To answer this question, Recommendation Systems have been developed to generate the best suggestions according to the user's interests. Recommendation Systems play an important role in the user's decision-making process by enriching their experience and satisfaction, considering their peers' actions and preferences. The goal of this paper is to enhance the recommendation process by applying Sentiment Analysis techniques to the input data. Sentiment Analysis is a domain that focuses on classifying information into positive, negative and neutral opinions. The results of a Sentiment Analysis task can be used to define social tendencies and item popularity, and to adapt services to users' needs. The proposed approach combines Sentiment Analysis and Recommendation Systems to define the best suggestions for a user. Sentiment Analysis is applied to classify restaurants' text-based reviews into positive and negative. The output of the Sentiment Analysis task is passed to a recommendation system that, using a collaborative filtering algorithm, predicts the rating of a restaurant the user has not visited and generates a list of top-n restaurants for the user. This approach outperformed the results obtained when the Sentiment Analysis step was not included in the recommendation process. The proposed system therefore increases the accuracy of the recommended items by analyzing, from a sentiment point of view, the text-based reviews offered by users.
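
A compact sketch of the two-stage pipeline, with placeholder data and models (not the paper's corpus or classifier):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Stage 1: classify review text as positive (1) or negative (0).
    reviews = ["great food and friendly service", "cold meal, rude staff",
               "lovely place, will return", "never again, awful"]
    labels = [1, 0, 1, 0]
    vec = TfidfVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(reviews), labels)
    print(clf.predict(vec.transform(["warm food, great staff"])))

    # Stage 2: user-based collaborative filtering on a rating matrix that
    # already folds in the sentiment scores (0 = unrated).
    R = np.array([[5.0, 4.0, 0.0],
                  [4.0, 5.0, 2.0],
                  [1.0, 2.0, 5.0]])
    norms = np.linalg.norm(R, axis=1)
    sims = (R @ R.T) / np.outer(norms, norms)      # cosine user similarity
    u, item = 0, 2
    others = [v for v in range(len(R)) if v != u and R[v, item] > 0]
    pred = (sum(sims[u, v] * R[v, item] for v in others)
            / sum(sims[u, v] for v in others))
    print(round(pred, 2))    # predicted rating of restaurant 2 for user 0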

18:10
Preventing File-less Attacks with Machine Learning Techniques

ABSTRACT. The cyber-threat detection problem is a complex one, due to the large diversity of attacks, the increasing number of prevalent samples, and the arms race between attackers and security researchers. A new class of attacks, which appeared in the past years, modifies its spreading and action methods in order to become non-persistent. Because such attacks are non-persistent, the usual file-oriented detection and analysis methods no longer work. Therefore, several solutions became available, such as memory introspection, process activity monitoring or application enforcement. However, these solutions are time-consuming and impose significant resource requirements. In this paper we discuss an entry-level anomaly detection method for the command-line arguments passed to the best-known system tools generally available in Windows and beyond. Some of these tools have been used for years in companies to automate tasks, but only recently have they become a powerful tool for attackers. The method is based on a derived version of the Perceptron and consists of feature extraction and building a machine learning model.
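
The shape of the approach, with scikit-learn's stock Perceptron standing in for the derived version described above (command lines and labels are invented for illustration):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Perceptron
    from sklearn.pipeline import make_pipeline

    cmdlines = [
        "powershell -ExecutionPolicy Bypass -enc SQBFAFgA",   # suspicious
        "powershell Get-ChildItem C:\\Users",                 # benign admin task
        "wmic process call create calc.exe",                  # suspicious
        "wmic logicaldisk get size,freespace",                # benign
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(
        TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4)),  # char n-grams
        Perceptron(max_iter=1000),
    )
    model.fit(cmdlines, labels)
    print(model.predict(["powershell -nop -w hidden -enc ZQBjAGgA"]))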

18:30
Improving Detection of Malicious Office Documents using One-Side Classifiers

ABSTRACT. The current threat landscape is diverse and has lately been shifting from binary executable applications to a more light-coded and data-oriented approach. Consequently, the use of Microsoft Office documents in attacks has increased. The number of malicious samples is high, and the complexity of evasion techniques is also challenging. VBA macros are widely used in enterprise environments for benign purposes, so, in terms of detection, the number of false alarms should be close to zero. In this paper we discuss and propose a solution which focuses on keeping the rate of false positives as low as possible while, at the same time, maximizing the detection rate.
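
Reading "one-side" as one-class learning, the core idea can be sketched with an outlier detector trained only on benign macro feature vectors (features and model are assumptions for illustration):

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Toy features per document: [macro count, obfuscation score, autoexec flags]
    benign = rng.normal([2.0, 0.1, 0.0], 0.5, size=(500, 3))

    # Learn the benign side only; a small nu keeps false alarms low.
    clf = OneClassSVM(nu=0.01, gamma='scale').fit(benign)

    suspicious = np.array([[9.0, 3.5, 2.0]])
    print(clf.predict(suspicious))   # -1 = outlier -> flag for deeper analysis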