Computer Algebra Methods in Probing the Physics at Shortest Distances of the Microcosm
ABSTRACT. The dynamics of the physics at the shortest probed distances in the microcosm are described by renormalizable Quantum Field Theories. The corresponding observables are given by large sets of Feynman parameter integrals. For a large series of important quantities, their calculation can be performed analytically using computer-algebraic technologies, analytic methods from difference ring theory, the theory of differential equations, and special functions and numbers. In the course of these calculations, numbers of terms in the multi-giga to tera range are processed. A series of new related function spaces has been revealed in this way during the last years.
We describe these technologies as they stand, discuss further avenues, and present characteristic examples down to the associated numeric representations.
ABSTRACT. The main objective of this paper is to create a proof of concept regarding the improvement of the human-generated database WordNet using computer-generated information from Word2Vec. Thus, we change the WordNet content using information from existing corpora. The main method used to achieve this goal is to compare the results of path algorithms for computing the semantic similarities between WordNet concepts (such as Path, Wu and Palmer, or Leacock and Chodorow similarities) with the cosine similarity between the Word2Vec vectors of the same concepts. One way to improve WordNet is by adding new concepts from the Word2Vec corpus which have strong connections with existing words from WordNet. Another way to improve it is by updating its existing connections to underline semantic change. Our experimental results show that the method we propose may be used to increase the number of concepts and the quality of links between synsets in WordNet, creating a more meaningful semantic resource.
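To fix ideas, the comparison at the heart of this approach can be sketched as follows with NLTK's WordNet interface and a pretrained Word2Vec model from gensim; the word pair, the pretrained model name and the choice of first synset are illustrative assumptions, not the authors' exact setup.

    # a minimal sketch, assuming nltk (with the wordnet data downloaded) and gensim
    from nltk.corpus import wordnet as wn
    import gensim.downloader as api

    w2v = api.load("word2vec-google-news-300")   # assumed pretrained Word2Vec model

    def wordnet_scores(word1, word2):
        s1, s2 = wn.synsets(word1)[0], wn.synsets(word2)[0]
        return {"path": s1.path_similarity(s2),
                "wup": s1.wup_similarity(s2),    # Wu and Palmer
                "lch": s1.lch_similarity(s2)}    # Leacock and Chodorow (same POS)

    def word2vec_score(word1, word2):
        return float(w2v.similarity(word1, word2))   # cosine similarity of vectors

    # compare the two views of semantic similarity for one candidate pair
    print(wordnet_scores("car", "truck"), word2vec_score("car", "truck"))

In the spirit of the abstract, pairs with high cosine similarity but weak or missing WordNet links would then be candidates for new concepts or updated connections.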
What are the Latest Cybersecurity Trends? A Case Study Grounded in Language Models
ABSTRACT. The year 2020 marked an important moment when the COVID-19 pandemic promoted the Internet as a necessity even more than before, especially for school activities and businesses. This increased usage emphasized the importance of cybersecurity, a subject frequently overlooked by common users, which in turn plays a crucial role in safe Internet browsing. This paper introduces an approach grounded in Natural Language Processing techniques to identify the main trends in security news and to empower the analysis of vulnerable products, active attacks, as well as existing methods of defence against new attacks. Our dataset consists of 2264 news articles published on cybersecurity-dedicated websites between January 2017 and May 2021. The RoBERTa language model was used to compute the text embeddings, followed by dimensionality reduction techniques and topic clustering methods. Articles were grouped into approximately 20 clusters that were thoroughly evaluated in terms of importance and evolution.
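A minimal sketch of such a pipeline, assuming the Hugging Face transformers roberta-base checkpoint with mean pooling, and PCA plus k-means standing in for whatever dimensionality reduction and clustering methods were actually used; the placeholder headlines and cluster count are illustrative only.

    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    tok = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModel.from_pretrained("roberta-base")

    def embed(texts):
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state          # (n, seq_len, 768)
        mask = batch["attention_mask"].unsqueeze(-1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # mean-pooled embeddings

    articles = ["Ransomware campaign targets hospitals",
                "New phishing kit observed in the wild",
                "Zero-day vulnerability patched in popular browser"]
    emb = embed(articles)
    reduced = PCA(n_components=2).fit_transform(emb)              # dimensionality reduction
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced) # topic clusters
    print(labels)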
Severity Prediction of Software Vulnerabilities based on their Text Description
ABSTRACT. Software vulnerabilities represent a real challenge nowadays, often resulting in disruption of vital systems and data loss. Due to the multitude of software applications used within a company, system administrators often end up in the situation of facing multiple vulnerabilities at the same time, having no choice but to prioritize the most critical ones. Administrators commonly use vulnerability databases and metric systems to rank vulnerabilities; however, it usually takes from days to weeks for the metrics to be published since these metrics are established by human security analysts and the number of daily discovered exploits is constantly increasing. Therefore, newly discovered vulnerabilities, especially those without an available patch, represent the largest problem. In this paper, we propose a deep learning approach to predict the severity score and other metrics of a vulnerability using only its text description, which is available on discovery. We use a Multi-Task Learning architecture with a pre-trained BERT model for computing vector-space representations of words. Our best configuration achieves a mean absolute error of 0.82 for the severity score and an accuracy of 73.87% for the severity level.
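The multi-task architecture can be sketched roughly as below in PyTorch with Hugging Face transformers: a shared pre-trained BERT encoder feeds one regression head for the severity score and one classification head for the severity level. The checkpoint name, head sizes and loss combination are assumptions for illustration, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class SeverityModel(nn.Module):
        def __init__(self, n_levels=4, checkpoint="bert-base-uncased"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(checkpoint)    # shared BERT encoder
            hidden = self.encoder.config.hidden_size
            self.score_head = nn.Linear(hidden, 1)         # regression: severity score
            self.level_head = nn.Linear(hidden, n_levels)  # classification: severity level

        def forward(self, **batch):
            cls = self.encoder(**batch).last_hidden_state[:, 0]     # [CLS] representation
            return self.score_head(cls).squeeze(-1), self.level_head(cls)

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = SeverityModel()
    batch = tok(["Buffer overflow allows remote code execution."], return_tensors="pt")
    score, level_logits = model(**batch)
    # training would combine both tasks, e.g.
    # loss = nn.MSELoss()(score, target_score) + nn.CrossEntropyLoss()(level_logits, target_level)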
ABSTRACT. This paper presents a study on one of the important methods for the relation extraction problem. Roughly, the problem of relation extraction is that of finding all (or some) facts in a given natural language sentence. The method is based on kernel functions for the support vector machines (SVM) algorithm. A kernel function is a function that represents the similarity of semantics between sentences. The kernel functions from the literature considered in this paper to be the most important are: the bag-of-words kernel (BOWK), the global context kernel (GCK), and the sub-tree kernel (STK). The SVM algorithm is used to learn a candidate sentence classification model for a target relation r. The classification model can then be used to determine whether a new candidate sentence (with two labeled object descriptions) is a positive or a negative candidate sentence with respect to the given relation r. The paper includes a systematic comparison of the various kernel functions proposed in the literature. Based on the strengths and weaknesses of the various kernel functions, new kernel functions are proposed, namely the directed global context kernel (DGCK), the global context synonyms kernel (GCSK), the directed global context synonyms kernel (DGCSK), and the sub-tree synonyms kernel (STSK). Lastly, the paper compares the evaluation results of the new kernel functions with those of the kernel functions proposed in the literature.
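As a minimal illustration of the kernel-based setup, the sketch below builds a bag-of-words kernel (BOWK) as the inner product of count vectors and trains an SVM on the precomputed kernel matrix with scikit-learn; the toy sentences and labels are placeholders, and the structural kernels (GCK, STK and the proposed variants) would replace this inner product with their own similarity functions.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import SVC

    train = ["<e1>Smith</e1> works for <e2>Acme</e2>",
             "<e1>Jones</e1> visited <e2>Paris</e2>"]
    y = [1, 0]                       # positive / negative candidates for relation r
    test = ["<e1>Lee</e1> is employed by <e2>Globex</e2>"]

    vec = CountVectorizer().fit(train + test)
    Xtr = vec.transform(train).toarray()
    Xte = vec.transform(test).toarray()

    K_train = Xtr @ Xtr.T            # bag-of-words kernel: inner product of count vectors
    K_test = Xte @ Xtr.T             # similarities of test sentences to training sentences

    clf = SVC(kernel="precomputed").fit(K_train, y)
    print(clf.predict(K_test))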
New Conceptual Cohesion Metrics: Assessment for Software Defect Prediction
ABSTRACT. High cohesion and low coupling is a desirable software design principle as it significantly and positively impacts software quality. While the majority of existing cohesion metrics are structural, this paper proposes a set of new object-oriented metrics capturing the conceptual cohesion of classes. They are derived from the semantic information contained in the source code and involve using Doc2Vec, an unsupervised model based on artificial neural networks. Two case studies that compare the proposed metrics with an extensive set of relevant existing ones are presented. The empirical validation has been performed on three open source software systems. It includes the assessment of the proposed metrics versus preexisting ones from the perspective of their utility for software defect prediction. The experimental results confirm that the metrics we propose capture additional aspects of cohesion when compared to existing cohesion metrics, while also providing better software defect prediction performance than preexisting related conceptual cohesion metrics.
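A sketch of how such a conceptual cohesion value could be obtained with gensim's Doc2Vec, treating each method of a class as a document and averaging the pairwise cosine similarities of the inferred vectors; the toy corpus, hyperparameters and aggregation are illustrative assumptions rather than the metrics actually proposed in the paper.

    import numpy as np
    from itertools import combinations
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # identifiers/comments of the methods of one class, already tokenised (placeholder data)
    methods = [["open", "file", "read", "buffer"],
               ["close", "file", "release", "buffer"],
               ["parse", "header", "field"]]

    corpus = [TaggedDocument(words, [i]) for i, words in enumerate(methods)]
    model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
    model.build_vocab(corpus)
    model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    vecs = [model.infer_vector(words) for words in methods]
    cohesion = np.mean([cos(vecs[i], vecs[j]) for i, j in combinations(range(len(vecs)), 2)])
    print("conceptual cohesion of the class:", round(cohesion, 3))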
Modified iterative approaches for solving fixed-point and minimization problems in positive curvature metric spaces
ABSTRACT. In this paper, we propose a new modified proximal point algorithm in the setting of CAT(1) spaces which can be used for solving minimization problems and common fixed point problems. We also prove a few convergence results for the proposed algorithm under some mild conditions. In the process, several relevant results of the existing literature are generalized and improved.
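For orientation, the classical proximal point step that such algorithms modify reads, for a convex function $f$ on a geodesic metric space $(X,d)$ and step sizes $\lambda_n>0$,
\[
x_{n+1} = \operatorname{argmin}_{y \in X} \Big[ f(y) + \tfrac{1}{2\lambda_n}\, d(y, x_n)^2 \Big];
\]
the modified scheme of the paper combines steps of this type with a fixed point iteration for the given mappings in the CAT(1) setting.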
ABSTRACT. An iterative method with an inertial extrapolation term for approximating the solution of the multiple-sets split feasibility problem in infinite-dimensional Hilbert spaces is presented. In a recent paper (Advances in Pure and Applied Mathematics, vol. 10, no. 4, pp. 339-353, 2019), Ogbuisi and Mewomo introduced an iterative algorithm involving an inertial term and a step size independent of the operator norm for approximating a solution to the split variational inequality problem in a real Hilbert space. We extend the algorithm introduced by Ogbuisi and Mewomo to the multiple-sets split feasibility problem, and we propose a self-adaptive technique for choosing the stepsizes such that the implementation of our algorithm does not need prior information about the operator norm. We prove a weak convergence theorem for the proposed algorithm under some suitable conditions. Finally, we give some numerical examples to illustrate the efficiency and implementation of our method compared to some existing results.
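To fix notation, a typical inertial step with an operator-norm-independent stepsize, written here for the single-set split feasibility problem $x\in C$, $Ax\in Q$ (the multiple-sets case aggregates over the sets), has the schematic form
\[
w_n = x_n + \theta_n (x_n - x_{n-1}), \qquad
x_{n+1} = P_C\big( w_n - \tau_n A^{*}(I - P_Q) A w_n \big),
\]
with a self-adaptive stepsize such as $\tau_n = \rho_n \,\|(I-P_Q)Aw_n\|^2 / \big(2\,\|A^{*}(I-P_Q)Aw_n\|^2\big)$, $0<\rho_n<4$. This is only a template of the kind of iteration involved, not the authors' exact scheme.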
A Viscosity-type Scheme for Optimization Problems in Hadamard Spaces With Applications
ABSTRACT. This article proposes a viscosity-type scheme for approximating a common solution of a convex minimization problem, a monotone vector field inclusion problem and a fixed point problem involving a multivalued nonexpansive mapping in the framework of Hadamard spaces. We establish a strong convergence theorem for the sequence generated therefrom to a solution of the problem. Furthermore, we apply our results to compute the Fr{\'e}chet mean, find the mean of probabilities, minimize the energy of measurable mappings and solve a problem of two-arm robotic motion control. Finally, we give a numerical example to demonstrate the applicability of the method and provide comparisons with some existing methods. Our results extend and complement some recent results in the literature.
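As a concrete instance of the applications mentioned above, recall that the Fr{\'e}chet mean of points $a_1,\dots,a_N$ in a Hadamard space $(X,d)$ with weights $w_i>0$ is the minimizer
\[
\operatorname{argmin}_{x \in X} \sum_{i=1}^{N} w_i\, d^2(x, a_i),
\]
so approximating it is a convex minimization problem of exactly the type addressed by the proposed scheme.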
An Inertial Iterative Method for Solving the Split Variational Inclusion Problems in Hilbert Space
ABSTRACT. In this paper, we introduce a new inertial iterative method by combining the inertial technique with the proximal point algorithm for solving split variational inclusion problems in real Hilbert spaces. Under some suitable conditions, we prove a strong convergence theorem for the sequence generated by the proposed iterative method. Furthermore, we apply our proposed algorithm to derive iterative methods for solving the split feasibility problem and the split minimization problem. Additionally, we present some numerical experiments to illustrate the convergence behaviour of the proposed method in comparison with some existing methods in the literature, as well as a possible application of the proposed method to recovering an original signal from a sparse and noisy signal.
ABSTRACT. This paper deals with a modified Halpern type iterative algorithm for finding a common solution from the sets of fixed points and of singularities of an inclusion problem defined by the sum of a strongly monotone vector field and a multivalued maximal monotone vector field on Hadamard manifolds. Under appropriate assumptions, we prove convergence theorems for approximating the common solutions of the proposed problem on Hadamard manifolds.
ABSTRACT. In this tutorial, we cover six topics: (1) origami geometry;
(2) introduction to the Orikoto language of Eos for origami programming;
(3) geometric construction by Orikoto;
(4) automated geometric theorem proving;
(5) traditional origami practice;
(6) tools for visualization and geometric analyses.
We extensively use origami programming examples in Orikoto during the tutorial session and construct simple origami works with the participants. Furthermore, we verify the geometric properties of the created objects in a streamlined fashion. In Eos, we automatically save geometric data symbolically and share them with verifiers.
This tutorial aims to help participants gain experience in computational origami and enjoy origami geometrically through symbolic computation, whenever the necessary computing resources are at hand. Origami is fun for everyone!
We will prepare learning material for the tutorial on the home page of the Eos project.
ABSTRACT. B-spline and NURBS functions are popular choices for describing trajectories associated with nonlinear dynamics and for imposing continuous-time constraint validation. Exploiting their properties (local support, convexity and positivity) usually leads to sufficient conditions which are conservative and difficult to handle as part of an optimization problem.
In this talk I will present several methods which reduce the conservatism and complexity of the approach. First, I will make use of knot refinement strategies to improve arbitrarily well on the sufficient conditions. Second, I will provide approximate solutions which significantly reduce the computational effort at the price of a moderate loss in performance. The approaches are validated for a motion planning problem where off-line trajectories with obstacle-avoidance guarantees are generated.
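A small numerical illustration of the underlying idea, using SciPy: the convex hull property gives a conservative, sufficient bound on a spline trajectory in terms of its control points, and knot insertion refines the control polygon so that the bound becomes arbitrarily tight. The trajectory, the bound and the refinement knots below are placeholders, not the talk's examples.

    import numpy as np
    from scipy import interpolate

    # fit a cubic B-spline "trajectory" to sample data (stands in for a planned path)
    x = np.linspace(0.0, 1.0, 20)
    y = 0.8 * np.sin(2 * np.pi * x)
    tck = interpolate.splrep(x, y, k=3, s=1e-4)

    def control_points(tck):
        t, c, k = tck
        return np.asarray(c)[: len(t) - k - 1]   # the meaningful B-spline coefficients

    ymax = 1.0
    # convex-hull property: if every control point is <= ymax, so is the whole curve
    print("sufficient (conservative) bound holds:", control_points(tck).max() <= ymax)

    # knot refinement: inserting knots pulls the control polygon toward the curve,
    # so the sufficient condition above can be made arbitrarily tight
    for xk in np.linspace(0.05, 0.95, 15):
        tck = interpolate.insert(xk, tck)
    print("bound after refinement:", control_points(tck).max() <= ymax)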
On Common Fixed Point Theorems Satisfying Z_Σ-⊥-Contraction Mappings in Orthogonal Modular b-Metric Spaces
ABSTRACT. In this paper, by considering the notion of orthogonal sets and the modular b-metric function, we introduce orthogonal modular b-metric spaces. Moreover, we establish some common fixed point theorems for Σ-type contractions involving a generalized simulative Proinov function in the setting of orthogonal modular b-metric spaces.
ABSTRACT. In this paper, by using the concept of a cyclic $(\alpha,\beta)$-admissible pair, Geraghty contractions and simulation functions, we establish the existence and uniqueness of a common fixed point of a generalized $(\alpha,\beta)$-Geraghty simulative contraction on non-Archimedean modular metric spaces. Our results generalize and extend various comparable results in the existing literature. As an application, we obtain common fixed point results in non-Archimedean modular metric spaces endowed with a directed graph.
Types-to-Sets for Theorem Proving in Higher-Order Logic
ABSTRACT. Users of interactive theorem provers based on Higher-Order Logic (such as HOL4 and Isabelle/HOL) find it very convenient to "ride" the HOL type system when defining concepts and reasoning about them -- within a paradigm that can be called *type-based reasoning*. However, to achieve higher flexibility, users often need to resort to *set-based reasoning*, which is more bureaucratic and less supported by automation. In previous work with Ondrej Kuncar, I developed a mechanism called Types-to-Sets that allows one to make the best of both worlds by offering a bridge between the simpler type-based theorems and the heavier set-based ones. In recent work, I show that, contrary to the previous belief, the rules for moving from types to sets are admissible in the standard Higher-Order Logic used in theorem provers, so no extension of the axiomatization base is required. In this talk, I will motivate and introduce the Types-to-Sets framework, and will present the recent admissibility result and its connection to other concepts such as Reynolds-style parametricity and compositional (co)datatypes.
ABSTRACT. Motivated by the study of shallow rectifier neural networks, we propose an algorithm that approximates a polygon in the plane by an outer polygon with fewer vertices. The algorithm minimizes the area locally in an iterative manner, and hence is guaranteed to find a local minimum and not necessarily the global optimum. Nevertheless, experiments indicate that in practice it yields a reasonably good approximation.
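A rough sketch of one greedy variant of this idea, assuming a convex input polygon given counter-clockwise: at each step the edge whose removal (by extending its two neighbouring edges to their intersection) adds the least area is eliminated. This is only an illustration of the local area-minimizing step under that convexity assumption, not the authors' algorithm.

    import numpy as np

    def line_intersection(p1, p2, p3, p4):
        # intersection of the infinite lines through (p1, p2) and (p3, p4); None if parallel
        d1, d2 = p2 - p1, p4 - p3
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-12:
            return None
        t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
        return p1 + t * d1

    def remove_one_edge(poly):
        # drop the edge (v_i, v_{i+1}) whose replacement by the intersection w of the
        # neighbouring edges' extensions adds the least area (triangle v_i, w, v_{i+1})
        n, best = len(poly), None
        for i in range(n):
            a, b = poly[(i - 1) % n], poly[i]
            c, d = poly[(i + 1) % n], poly[(i + 2) % n]
            w = line_intersection(a, b, c, d)
            if w is None:
                continue
            added = 0.5 * abs((w[0] - b[0]) * (c[1] - b[1]) - (w[1] - b[1]) * (c[0] - b[0]))
            if best is None or added < best[0]:
                best = (added, i, w)
        _, i, w = best
        return np.array([w if j == i else poly[j] for j in range(n) if j != (i + 1) % n])

    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    poly = np.c_[np.cos(theta), np.sin(theta)]   # a convex octagon
    while len(poly) > 5:                          # outer approximation with 5 vertices
        poly = remove_one_edge(poly)
    print(np.round(poly, 3))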
ABSTRACT. Outliers in clustered data are objects that do not fit well into existing clustering structures. Starting from properties of a special product of matrices with non-negative elements, we present an algorithm that uses variations of silhouettes of objects to identify outliers relative to a given clustering.
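A minimal sketch of the silhouette-based flagging idea, using scikit-learn; the clustering, the injected outliers and the quantile threshold are illustrative, and the paper's actual criterion is based on variations of silhouettes derived from the special matrix product mentioned above.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_samples

    X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)
    X = np.vstack([X, [[8, -8], [-9, 9]]])          # two artificial outliers
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    s = silhouette_samples(X, labels)               # per-object silhouette in [-1, 1]
    threshold = np.quantile(s, 0.02)                # flag the weakest-fitting objects
    outliers = np.where(s <= threshold)[0]
    print("candidate outliers:", outliers, "silhouettes:", np.round(s[outliers], 3))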
ABSTRACT. A classical idea in supervised learning is to compute a strong classifier by combining weak classifiers. One of its famous implementations is the ADABOOST algorithm which is a minimization procedure.
In this paper, an original representation of the cost is formulated as the cornerstone of this study. We prove that, under a certain hypothesis (H), this cost function has, almost surely, a unique minimum when the training set is large, and we study the properties of this minimum.
Moreover, we compare ADABOOST with a classical optimization algorithm: the relaxation algorithm.
This study is carried out by considering a problem with two classes and three binary classifiers. The point of view adopted here is rather that of the analyst and thus gives a perspective complementary to usual statistical methods.
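A toy instance of the setting studied here (two classes, three fixed binary classifiers): the exponential cost of a weighted combination is written down explicitly and minimized numerically. The data, the classifiers and the optimizer are placeholders; the paper's contribution concerns the theoretical properties of this minimum under hypothesis (H) and the comparison with the relaxation algorithm.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)        # two classes in {-1, +1}

    # three fixed binary (weak) classifiers h_j : X -> {-1, +1}
    H = np.stack([np.sign(X[:, 0]), np.sign(X[:, 1]), np.sign(X[:, 0] - X[:, 1])], axis=1)
    H[H == 0] = 1

    def cost(alpha):
        # exponential cost C(alpha) = mean exp(-y * sum_j alpha_j h_j(x))
        return np.mean(np.exp(-y * (H @ alpha)))

    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    print("minimiser:", np.round(res.x, 3), "cost:", round(res.fun, 4))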
Bounded Solutions of Iterative Differential Equations
ABSTRACT. In this paper, we use the Schauder and Banach fixed point theorems to study the existence, uniqueness and stability of bounded solutions of nonhomogeneous iterative functional differential equations of the form:
ABSTRACT. The fixed point problem raises several issues: the existence of a fixed point, the uniqueness of the fixed point, and exact or approximate methods of finding the fixed point. Browder demonstrated that nonexpansive nonlinear operators in a Banach space have a fixed point, but uniqueness is not ensured and, moreover, the Picard iteration does not necessarily converge to a fixed point. Browder and Petryshyn provided an iterative method (the Krasnoselskii method) which, for demicompact mappings, converges strongly to a fixed point. New approximative iterative methods for finding fixed points were later developed: the Mann iterative method, the Ishikawa iterative method and others. J. T. Markin, extending Browder's theorem, proved a theorem for contractive set-valued mappings using the Pompeiu-Hausdorff metric. In 1969, Nadler provided a definition for multi-valued mappings. In 1974, Teck-Cheong Lim provided an existence fixed point theorem in a uniformly convex Banach space for nonexpansive mappings. A natural question that arises now is whether single-valued fixed point theory for nonexpansive mappings can be carried over to multivalued mappings.
Main developments around the concept of almost contractions in generalized metric spaces
ABSTRACT. The aim of this paper is to survey the most relevant developments done in the last decade around the concept of almost contractions in generalized metric spaces.
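For the reader's orientation, recall the standard notion on an ordinary metric space $(X,d)$: $T:X\to X$ is an almost contraction if there exist $\delta\in[0,1)$ and $L\ge 0$ such that
\[
d(Tx, Ty) \le \delta\, d(x, y) + L\, d(y, Tx) \qquad \text{for all } x, y \in X;
\]
the survey follows how this condition and its fixed point theory have been transplanted to various generalized metric spaces.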
ABSTRACT. Software testing is crucial for the safety of many systems, and it is an expensive stage of the software development cycle. For a long time, formal methods and software testing were seen as rivals, but with the expansion of technology, researchers have explored ways of using the two approaches in a complementary fashion. This article presents several ways in which software testing is assisted by formal methods.
ABSTRACT. Out of the desire to observe the interdisciplinary usefulness of Computer Science, we decided to begin a study about the surrounding world. Starting from the visual world, we outlined a basic question: Do the events we experience in our lives influence our artistic capabilities? We began by collecting all the necessary data about several painters and then extracting the predominant colors of their paintings. Using Machine Learning techniques, we compared the efficiency of different algorithms, measuring the efficiency score in the CIELAB color space. Then, based on Natural Language Processing elements and the painters' biographical information, we built a timeline of the events from the painters' lives, enriched with the predominant colors used by them. Wanting to observe the human perspective on the feelings derived from the paintings, we built a small form-like application to extract this information from volunteers in the field. Correlating the algorithms' results with those extracted from human subjects, we showed that periods of happiness and unhappiness are linked to the colors used by painters. These results lead us to the conclusion that this topic deserves further study in order to obtain more accurate information and to move on to its use in practical applications.
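The color-extraction step can be sketched as follows, assuming k-means over pixels converted to CIELAB with scikit-image and scikit-learn; the file name and the number of clusters are placeholders, and the paper compares several such algorithms rather than committing to this one.

    import numpy as np
    from skimage import io, color
    from sklearn.cluster import KMeans

    img = io.imread("painting.jpg")[:, :, :3] / 255.0   # hypothetical path to one painting
    lab = color.rgb2lab(img).reshape(-1, 3)             # work in the CIELAB space

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(lab)
    dominant_lab = km.cluster_centers_                  # 5 predominant colors (L*a*b*)
    dominant_rgb = color.lab2rgb(dominant_lab[None, :, :])[0]
    print(np.round(dominant_rgb * 255).astype(int))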
ABSTRACT. We are often curious to find out what fruit is found in a tree when we are walking through parks or a botanical garden. Fruit classification is a complex activity that depends on the datasets used and the deep learning methods applied to them. This paper aims to present the steps by which we created a dataset with 262 fruits, and then how we created several models based on it. Evaluation and testing have shown us that the resource and models created can be useful to those who want to create applications that recognize fruits.
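One plausible way to build such a model on the 262-class dataset is transfer learning, sketched below with Keras and a MobileNetV2 backbone; the backbone, image size and training schedule are assumptions and not necessarily the models created in the paper.

    import tensorflow as tf

    NUM_CLASSES = 262  # number of fruit classes in the dataset described above

    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3))
    base.trainable = False                   # start by training only the classification head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # train_ds / val_ds would be built with tf.keras.utils.image_dataset_from_directory
    # model.fit(train_ds, validation_data=val_ds, epochs=10)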
ABSTRACT. Over the last decade, climate change has impacted Earth's atmosphere and environment more than ever before. Glaciers are the most sensitive indicators of its impact. In this work, we model a glacier's evolution by applying computer vision algorithms to high-resolution satellite imagery. We detect changes in the ice coverage movement by applying a dense optical flow algorithm over an image time series covering a particular scene (region), processed to extract the NDSI (Normalized Difference Snow Index). We perform tests on the Jungfrau-Aletsch-Bietschhorn (JAB) glacier in the Swiss Alps. Our results show that we are able to obtain relevant information by computing motion vectors across time. Furthermore, we observe small differences between our predicted NDSI and the observed values, demonstrating the efficiency of the approach.
In addition, by applying various machine learning techniques we provide an optimistic date for when the JAB glacier will completely disappear: 2800.
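A schematic version of the processing chain, with random arrays standing in for the co-registered satellite bands: compute the NDSI of two acquisitions and estimate dense motion between them with OpenCV's Farneback optical flow. Band handling and normalization details are assumptions for illustration, not the exact pipeline of the paper.

    import numpy as np
    import cv2

    def ndsi(green, swir):
        # Normalized Difference Snow Index from green and SWIR bands
        return (green - swir) / (green + swir + 1e-6)

    # two acquisitions of the same scene (random placeholders; in practice the bands
    # would be loaded from satellite products, e.g. with rasterio)
    g1, s1 = np.random.rand(256, 256).astype(np.float32), np.random.rand(256, 256).astype(np.float32)
    g2, s2 = np.random.rand(256, 256).astype(np.float32), np.random.rand(256, 256).astype(np.float32)

    a = cv2.normalize(ndsi(g1, s1), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    b = cv2.normalize(ndsi(g2, s2), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # dense (Farneback) optical flow between consecutive NDSI images
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    print("mean motion magnitude (pixels):", magnitude.mean())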
A fixed point result for a convex type contraction
ABSTRACT. Let $(X, \left\langle \cdot,\cdot \right\rangle)$ be a Hilbert space and $T:X\to X$ be a decreasing operator.
Under a metric condition involving the convex combination of $x$ and $T(x)$, we will prove
some fixed point theorems which generalize and complement several results in the theory of nonlinear operators.
Our results are closely related to the admissible perturbations approach in fixed point theory. For other results and details
see Petru\c sel, A., Petru\c sel, G.: \emph{Fixed point results for decreasing convex orbital operators in Hilbert spaces.}
J. Fixed Point Theory Appl. 23, 35 (2021).
https://doi.org/10.1007/s11784-021-00873-1
Common fixed point theorems for enriched contractions under the assumption of R-weakly commuting conditions
ABSTRACT. We introduce common fixed point theorems using the technique of enriched Banach contractions for single-valued mappings under the assumption of the R-weakly commuting condition, R-weakly commuting of type (Ag) and R-weakly commuting of type (Af). We give examples for each theorem in turn to support our results. We obtain results related to those of Pant, of H. K. Pathak, Y. J. Cho and S. M. Kang, and of V. Berinde and M. Păcurar.
Kannan and Bianchini fixed point theorems in partially ordered H-distance space
ABSTRACT. We establish new fixed point theorems for Kannan and Bianchini type mappings in the very general setting of a space with a distance. We will work on a partially ordered set where we suppose there is an H-distance and a non-decreasing mapping. The fixed point is unique under some additional conditions. We also present some examples to illustrate the theoretical results.