ICCS 2021: INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE
PROGRAM FOR THURSDAY, JUNE 17TH

09:00-09:50 Session 7: Keynote Lecture 3
09:00
Enabling High-Performance Large-Scale Irregular Computations

ABSTRACT. Large graphs are behind many problems in today’s computing landscape. The growing sizes of such graphs, reaching 70 trillion edges recently, require unprecedented amounts of compute power, storage, and energy. In this talk, we illustrate how to effectively process such extreme-scale graphs. Our solutions are related to various forms of graph compression, paradigms and abstractions, effective design and utilization of massively parallel hardware, vectorizability of graph representations, communication avoidance, and others.

09:50-10:20 Coffee Break
10:20-12:00 Session 8A: MT 7
10:20
Acceleration of the Robust Newton Method by the use of the S-iteration

ABSTRACT. In this paper, we propose an improvement of the Robust Newton's Method (RNM). The RNM is a generalisation of the well-known Newton root-finding method restricted to polynomials. In contrast to the classical Newton's algorithm, the RNM is globally convergent, defined at critical points, and free of instability and chaos. Unfortunately, the RNM is slow: it needs a large number of iterations to find the polynomial's roots or critical points. Thus, in this paper, we propose to accelerate the method by replacing the standard Picard iteration in the RNM algorithm with the S-iteration. This leads to an essential acceleration of the modified method without visibly destroying the sharp boundaries among the basins of attraction of the polynomial's roots. Based on numerical experiments, we present the advantages of the proposed modified algorithm over the base RNM with the help of polynomiographs and numerical measures such as the average number of iterations, the convergence area index and the generation time. Moreover, we present a possible application of the proposed method to the generation of artistic patterns.
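The Picard-to-S-iteration switch can be illustrated generically. The sketch below applies an Agarwal-type S-iteration to the classical Newton operator rather than the paper's RNM operator (which is defined differently); the polynomial, starting point, and mixing weights are illustrative assumptions only.

```python
import numpy as np

def newton_map(f, df, x):
    """One classical Newton step viewed as an operator T(x) = x - f(x)/f'(x)."""
    return x - f(x) / df(x)

def picard_root(f, df, x0, iters=50):
    """Standard Picard iteration: x_{n+1} = T(x_n)."""
    x = x0
    for _ in range(iters):
        x = newton_map(f, df, x)
    return x

def s_iteration_root(f, df, x0, alpha=0.7, beta=0.7, iters=50):
    """Generic S-iteration: y_n = (1-beta) x_n + beta T(x_n),
    x_{n+1} = (1-alpha) T(x_n) + alpha T(y_n)."""
    x = x0
    for _ in range(iters):
        Tx = newton_map(f, df, x)
        y = (1 - beta) * x + beta * Tx
        x = (1 - alpha) * Tx + alpha * newton_map(f, df, y)
    return x

# Illustrative polynomial: p(z) = z^3 - 1, with a complex starting point.
p  = lambda z: z**3 - 1
dp = lambda z: 3 * z**2
root = s_iteration_root(p, dp, 0.5 + 0.5j)
```

Both iterations converge to a root of the polynomial from this starting point; the paper's contribution is that the same substitution inside the RNM reduces the iteration count substantially.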

10:40
A New Approach to Eliminate Rank Reversal in the MCDA problems

ABSTRACT. In the multi-criteria decision analysis (MCDA) domain, one of the most important challenges today is Rank Reversal. In short, it is a paradox in which the order of alternatives belonging to a certain set changes when a new alternative is added to that set or one of the current ones is removed. It may undermine the credibility of ratings and rankings returned by methods exposed to the Rank Reversal phenomenon.

In this paper, we propose to use the Characteristic Objects method (COMET), which is resistant to the Rank Reversal phenomenon, and to combine it with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Preference Ranking Organization Method for Enrichment Evaluations II (PROMETHEE II). The COMET method requires a very large number of pairwise comparisons, which depends exponentially on the number of criteria used. Therefore, the task of pairwise comparison is performed using the PROMETHEE II and TOPSIS methods. In order to compare the quality of both proposed approaches, simple comparative experiments are presented. Both proposed methods have high accuracy and are resistant to the Rank Reversal phenomenon.
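As background on one of the pair-comparison engines mentioned above, plain TOPSIS can be sketched in a few lines; the decision matrix, weights, and criterion directions below are made-up illustrative values, not data from the paper.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Plain TOPSIS: score alternatives (rows) against criteria (columns).
    benefit[j] is True when larger values of criterion j are better."""
    X = np.asarray(decision_matrix, dtype=float)
    # 1. Vector normalization, then weighting.
    V = weights * X / np.linalg.norm(X, axis=0)
    # 2. Ideal and anti-ideal points, respecting each criterion's direction.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3. Distances to both points and relative closeness in [0, 1].
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # larger score = better alternative

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=np.array([0.4, 0.3, 0.3]),
                benefit=np.array([True, True, True]))
ranking = np.argsort(-scores)   # indices of alternatives, best first
```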

11:00
Validating Optimal COVID-19 Vaccine Distribution Models

ABSTRACT. With the approval of vaccines for the coronavirus disease by many countries worldwide, most developed nations have begun, and developing nations are gearing up for, the vaccination process. This has created an urgent need for a solution to optimally distribute the available vaccines once they are received by the authorities. In this paper, we propose a clustering-based solution to select optimal distribution centers and a Constraint Satisfaction Problem framework to optimally distribute the vaccines, taking into consideration two factors, namely priority and distance. We demonstrate the efficiency of the proposed models using real-world data obtained from the district of Chennai, India. The model provides the decision-making authorities with optimal distribution centers across the district and the optimal allocation of individuals across these distribution centers, with the flexibility to accommodate a wide range of demographics.

11:20
RNACache: Fast Mapping of RNA-Seq Reads to Transcriptomes using MinHashing

ABSTRACT. The alignment of reads to a transcriptome is an important initial step in a variety of bioinformatics RNA-seq pipelines. As traditional alignment methods suffer from high runtimes, alternative, alignment-free methods have recently gained increasing importance. We present a novel approach to the detection of local similarities between transcriptomes and RNA-seq reads based on context-aware minhashing. We introduce RNACache, a three-step processing pipeline consisting of minhashing of $k$-mers, match-based (online) filtering, and coverage-based filtering in order to identify truly expressed transcript isoforms. Our performance evaluation shows that RNACache produces transcriptomic mappings of higher accuracy and includes significantly fewer erroneous matches than the state-of-the-art tools RapMap, Salmon, and Kallisto. Furthermore, it offers scalable and highly competitive runtime performance at low memory consumption on common multi-core workstations. We plan to make RNACache publicly available upon acceptance of the manuscript.
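The core minhashing idea behind such alignment-free mappers can be sketched generically. This is not RNACache's actual pipeline; it is a bottom-s MinHash sketch of k-mer sets with an off-the-shelf hash, and the sequences, k, and sketch size are arbitrary illustrative choices.

```python
import hashlib

def kmers(seq, k=8):
    """The set of all k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_sketch(seq, k=8, s=16):
    """Bottom-s sketch: keep the s smallest hash values of the k-mer set."""
    hashes = sorted(int(hashlib.md5(km.encode()).hexdigest(), 16)
                    for km in kmers(seq, k))
    return set(hashes[:s])

def sketch_similarity(sk_a, sk_b, s=16):
    """Estimate the Jaccard similarity of two k-mer sets from their sketches:
    among the s smallest hashes of the union, count those present in both."""
    union_low = sorted(sk_a | sk_b)[:s]
    return sum(1 for h in union_low if h in sk_a and h in sk_b) / len(union_low)

transcript = "ATGGCGTACGCTAGCTAGGCTAACGTTGCA"
read = transcript[5:25]   # a short read drawn from the transcript
sim = sketch_similarity(minhash_sketch(transcript), minhash_sketch(read))
```

A read originating from a transcript shares many bottom hashes with it, so comparing small sketches instead of full k-mer sets gives a cheap similarity filter before any exact matching.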

11:40
Digital image reduction for analysis of topological changes in pore space during chemical dissolution

ABSTRACT. The paper presents an original algorithm for reducing three-dimensional digital images to improve the performance of computing persistence diagrams. These diagrams represent topology changes in the pore space of digital rocks. The algorithm has linear complexity because the decision to remove a voxel is based on the structure of its neighborhood. We illustrate that the algorithm's efficiency depends heavily on the complexity of the pore space and the size of the filtration steps.

10:20-12:00 Session 8B: MT 8
10:20
Oil and Gas Reservoirs Parameters Analysis Using Mixed Learning of Bayesian Networks

ABSTRACT. In this paper, a multipurpose Bayesian-based method for data analysis, causal inference and prediction in the sphere of oil and gas reservoir development is considered. It allows analysing the parameters of a reservoir, discovering dependencies among parameters (including cause-and-effect relations), checking for anomalies, predicting expected values of missing parameters, looking for the closest analogues, and much more. The method is based on the extended algorithm MixLearn@BN for structural learning of Bayesian networks. The key ideas of MixLearn@BN are the following: (1) learning the network structure on homogeneous data subsets, (2) assigning a part of the structure by an expert, and (3) learning the distribution parameters on mixed data (discrete and continuous). Homogeneous data subsets are identified as various groups of reservoirs with similar features (analogues), where the similarity measure may be based on several types of distances. The aim of the described technique of Bayesian network learning is to improve the quality of predictions and causal inference on such networks. Experimental studies prove that the suggested method gives a significant advantage in missing value prediction and anomaly detection accuracy. Moreover, the method was applied to a database of more than a thousand petroleum reservoirs across the globe and allowed the discovery of novel insights into relationships between geological parameters.

10:40
Analytic and Numerical Solutions of Space-Time Fractional Diffusion Wave Equations with different Fractional order

ABSTRACT. The aim of this manuscript is to investigate analytic and numerical solutions of space-time fractional diffusion wave equations with different fractional orders ($\alpha$ and $\beta$). After deriving the analytic solution, an implicit unconditionally stable finite difference method for solving space-time fractional diffusion wave equations is proposed. The Gerschgorin theorem is used to study the stability and convergence of the method. Furthermore, the behavior of the error is examined to verify the order of convergence by a numerical example.

11:00
Chebyshev-type rational approximations of the one-way Helmholtz equation for solving a class of wave propagation problems

ABSTRACT. This study is devoted to improving the efficiency of numerical methods for solving the pseudo-differential parabolic equation of diffraction theory. A rational approximation on an interval is used instead of the Padé approximation in the vicinity of a point. The relationship between the pseudo-differential propagation operator, variations of the refractive index, and the maximum propagation angle is established. It is shown that using the approximation on an interval is more natural for this problem and allows using a sparser computational grid than the local Padé approximation. The proposed method differs from the existing ones only in the coefficients of the numerical scheme and does not require significant changes in the implementations of the existing numerical schemes. The application of the proposed approach to tropospheric radio-wave propagation and underwater acoustics is provided. Numerical examples quantitatively demonstrate the advantages of the proposed approach.

11:20
Investigating In Situ Reduction via Lagrangian Representations for Cosmology and Seismology Applications

ABSTRACT. Although many types of computational simulations produce time-varying vector fields, subsequent analysis is often limited to single time slices due to excessive costs. Fortunately, a new approach using a Lagrangian representation can enable time-varying vector field analysis while mitigating these costs. With this approach, a Lagrangian representation is calculated while the simulation code is running, and the result is explored after the simulation. Importantly, the effectiveness of this approach varies based on the nature of the vector field, requiring in-depth investigation for each application area. With this study, we evaluate the effectiveness for previously unexplored cosmology and seismology applications. We do this by considering encumbrance (on the simulation) and efficacy (of the reconstructed result). To inform encumbrance, we integrated in situ infrastructure with two simulation codes, and evaluated on representative HPC environments, performing Lagrangian in situ reduction using GPUs as well as CPUs. To inform efficacy, our study conducted a statistical analysis across a range of spatiotemporal configurations as well as a qualitative evaluation. In all, we demonstrate effectiveness for both cosmology and seismology: time-varying vector fields from these domains can be reduced to less than 1% of the total data via Lagrangian representations, while maintaining accurate reconstruction and requiring under 10% of total execution time in over 80% of our experiments.

11:40
PIES for viscoelastic analysis

ABSTRACT. The paper presents an approach for solving 2D viscoelastic problems using the parametric integral equation system (PIES). On the basis of the Kelvin model, the PIES formula in time differential form is obtained. As the solving procedure, time marching is adopted, by introducing a linear approximation of displacements. The proposed approach, unlike other numerical methods, does not require discretization, even of the boundary. It uses curves as a tool for global modeling of boundary segments: curves of the first degree for linear segments and of the third degree for curvilinear segments. The accuracy is steered by the approximation series with Lagrange basis functions. Some tests are performed and presented in order to validate the proposed approach.

10:20-12:00 Session 8C: AIHPC4AS 4
10:20
Agent-based Modeling of Social Phenomena for High Performance Distributed Simulations

ABSTRACT. Detailed models of numerous groups of social beings, which find use in a broad range of applications, require efficient methods of parallel simulation. The detailed features of particular models strongly influence the complexity of the parallelization problem. In this paper we identify and analyze existing classes of models and possible approaches to parallelizing their simulation. We propose a new method for efficient scalability of the most challenging class of models: stochastic, with mobility of beings and mutual exclusion of actions. The method is based on the concept of a two-stage application of plans, which ensures equivalence of parallel and sequential execution. The method is analyzed in terms of distribution transparency and scalability on HPC-grade hardware. Both weak and strong scalability tests show speedup close to linear with more than 3000 parallel workers.

10:40
Automated Method for Evaluating Neural Network's Attention Focus

ABSTRACT. Rapid progress in machine learning and artificial intelligence (AI) has brought increased attention to the potential security and reliability of AI technologies. This paper identifies the threat of a network incorrectly relying on counterfactual features that can stay undetectable during validation but cause serious issues in real-life applications. Furthermore, we propose a method to counter this hazard. It combines well-known techniques, an object detection tool and a saliency map formula, to compute a metric indicating potentially faulty learning. We demonstrate the effectiveness of the method, as well as note its weak sides.

11:00
AI-based optimization of earthquake fiber bundle models using reinforcement learning

ABSTRACT. Earthquakes are the result of the sudden stress release during the rupture of the Earth's crust due to tectonic forces accumulated over hundreds or thousands of years. Rupture of any heterogeneous material is a complex physical process that is difficult to model deterministically due to the number of unmeasurable parameters involved and the poorly constrained physical conditions. Moreover, Earth rupture governs different episodes over a wide range of time and space scales. While the mainshock takes seconds or a few minutes to occur, and its nucleation involves meters to initiate, its aftershock sequence could take weeks or months to be produced, spatially covering kilometers around the main rupture fault. The lack of long seismic series, due to our short instrumental recording time, makes it difficult to observe whole seismic cycles. Thus, the predictive potential of these phenomena usually becomes insufficient. One of the main goals is to explore new approaches able to generate accurate synthetic time series (physically and statistically) aiming to produce a better understanding of the earthquake phenomenon. In this sense, an earthquake simulator based on the Fiber Bundle Model (FBM) that produces synthetic series fulfilling seismic statistical patterns has been recently developed, in particular for series related to the mainshock and the aftershock sequences (Monterrubio-Velasco et al. 2019a, 2019b, 2020). This new model has been coined TREMOL (sTochastic Rupture Earthquake MOdeL). The FBM is a model whose algorithm is based upon the interaction of individual elements (or fibers), with particular charge-transfer rules and a probability distribution function to describe the intrinsic properties of its constituent elements. This model offers many advantages and great adaptability for describing various rupture phenomena, from the modeling of rupture in microscopic composite materials to large-scale rupture phenomena such as earthquakes.
One of the most remarkable properties of the FBM is the self-organized criticality (SOC) inherent in its stochastic nature. One of the most important features of TREMOL is that it requires deep parameter tuning that can significantly improve the approximation of the synthetic results with respect to the real ones. The correct parameterization of TREMOL generates seismic synthetic catalogs consistent with those observed in nature, thus adjusting the most important empirical relationships of seismology. Unfortunately, the strongly stochastic and discrete nature of the FBM hinders the application of classical optimization techniques based on, for example, continuous gradient descent methods. As a promising alternative to these approaches, supervised machine learning (ML) classification algorithms have recently been used to predict the best parameter values associated with some preselected classes (Monterrubio et al., 2018, Llácer et al., 2020). Those algorithms demonstrate high performance in solving this problem for the specific aftershock application, producing a synthetic behavior of earthquakes close enough to the observed one. However, note that this optimization strategy is inherently discrete due to the ML classification, in general requiring costly training and producing less accurate results as the number of classes increases. More precisely, the explored supervised techniques were applied to analyze three parameters, requiring a large amount of pre-executed simulations to train the ML models. In cases where the model complexity increases (i.e. increasing the dimensionality by adding more features, classes, and spatial dimensions) the previous approach may be computationally unaffordable for optimizing the TREMOL model. Trying to overcome some of the aforementioned drawbacks, in this work we explore an alternative strategy to optimize the TREMOL model following an artificial intelligence (AI) based approach.
Instead of performing a supervised method to learn the best parameter class, the key idea is building an artificial agent that learns from its own experience (with no supervision) which is the optimal parameter value that maximizes a given goal function. This AI paradigm is known as reinforcement learning (RL), where the agent interacts with its environment by taking actions and evaluating a reward signal. The final goal is to learn a policy that transforms a current environment state into an action that potentially returns the maximum accumulation of rewards, taking into account all the possibilities. Here, we reformulate the RL paradigm as an optimization problem for the TREMOL environment and build an artificial agent that deals with continuous actions as the values of the FBM parameters. The RL algorithms work naturally in high dimensional spaces and benefit from multiple ways of addressing high-performance implementations, for instance, the possibility of distributing numerous environment instances for those cases where TREMOL requires higher computational costs.

11:20
Machine Learning Control Design for Elastic Composite Materials

ABSTRACT. A novel numerical method, based on a machine learning approach, is used to solve an inverse problem involving the Dirichlet eigenfrequencies for the elasticity operator in a bounded domain filled with a composite material. The inhomogeneity of the material under study is characterized by a vector design parameter used to control the constituent mixture of homogeneous elastic materials that compose it. Using the finite element method, we create a training set for a forward artificial neural network, solving the forward problem. A forward nonlinear map of the Dirichlet eigenfrequencies as a function of the vector design parameter is then obtained. This forward relationship is inverted and used to obtain a training set for an inverse radial basis neural network, solving the aforementioned inverse problem. A numerical example is presented to show the applicability of this methodology as a control design tool and to prove its effectiveness.

10:20-12:00 Session 8D: CompHealth 1
10:20
Hybrid Predictive Modelling for Finding Optimal Multipurpose Multicomponent Therapy

ABSTRACT. This study presents a new hybrid approach to predictive modelling of disease dynamics for finding optimal therapy. We use existing methods, such as expert-based modelling methods, system dynamics models and ML methods, in composition with our proposed dynamic modelling methods for simulating the treatment process and predicting treatment outcomes depending on the different therapy variants. Treatment outcomes include a set of treatment-goal values; therapy variants include a combination of drugs and treatment procedures. A personal therapy recommendation by this approach is optimal in terms of achieving the best multipurpose treatment outcomes. We use this approach in the task of creating a practical tool for finding optimal therapy for T2DM disease. The proposed tool was validated using surveys of experts, clinical recommendations [1], and classic metrics for the predictive task. All these validations have shown that the proposed tool is high-quality, interpretable and usable, and therefore it can be used as part of a Decision Support System for medical specialists who work with T2DM patients.

10:40
Towards cost-effective treatment of periprosthetic joint infection: from statistical analysis to Markov models

ABSTRACT. The aim of the research is to perform statistical analysis and to build probabilistic models for the treatment of periprosthetic joint infection (PJI) based on available data. We assessed and compared the effectiveness of different treatment procedures in terms of the objective result (successful PJI treatment without relapse) and the subjective assessment of their condition by the patients themselves (Harris score). Ways to create prognostic models and analyze the cost-effectiveness of treatment strategies are discussed based on the results obtained.

11:00
Refining the causal loop diagram: a tutorial for maximizing the contribution of domain expertise in computational system dynamics modeling

ABSTRACT. Complexity science is increasingly recognized as a relevant paradigm for studying systems where biology, psychology, and socio-environmental factors interact. The application of complexity science, however, often only encompasses developing a conceptual model that visualizes the mapping of causal links within a system, e.g., a causal loop diagram (CLD). While this is an important contribution in itself, it is imperative to formulate a computational version of a CLD in order to interpret the dynamics of the modeled system and simulate ‘what if’ scenarios. We propose to realize this by deriving knowledge from experts’ mental models in the biopsychosocial domains. This tutorial paper first describes the steps required for capturing expert knowledge in a CLD such that it may result in a computational system dynamics model (SDM). For this purpose, we introduce several annotations to the CLD that facilitate this intended conversion. This annotated CLD (aCLD) includes sources of evidence, intermediary variables, functional forms of causal links, and the distinction between uncertain and known-to-be-absent causal links. We propose an algorithm for developing an aCLD that includes these annotations. We then describe how to formulate an SDM based on the aCLD. The described steps for this conversion help identify, quantify, and potentially reduce sources of uncertainty and obtain confidence in the results of the SDM’s simulations. We utilize a running example that illustrates this conversion process. The approach described in this paper facilitates and advances the application of computational science methods to biopsychosocial systems.

11:20
Discovering the diversity of patient treatment decisions through clinical pathways modeling and physician profiling

ABSTRACT. A clinical pathway (CP) is a way to model and represent the health care process based on data. In this work, we propose to add doctor profiles to clinical pathway modelling for more accurate identification. This work helps answer two questions: (1) do some specialists influence the process of providing care, and (2) is it possible to appoint a specialist in advance for a more favorable outcome for a patient? For modelling, we modify the CP identification and clustering algorithms that were previously developed based on machine learning and genetic algorithm methods. Embedding a doctor's profile is implemented as an additional parameter of the CP stage and an additional objective function for multivariate optimization in the genetic algorithm.

11:40
Optimization of Selection of Tests in Diagnosing the Patient by General Practitioner

ABSTRACT. In a General Practitioner’s work, the fundamental problem is the accuracy of the diagnosis under time constraints and health care cost limitations. The General Practitioner (GP), after an interview and a physical examination, makes a preliminary diagnosis. The goal of the paper is to find the set of tests whose total diagnostic potential in verifying this diagnosis is not smaller than a threshold value and whose total cost is minimal. In the proposed solution method, the set of preliminary diagnoses after the interview and the physical examination is given. For each preliminary diagnosis and each test, the diagnostic potential of the test in verifying the diagnosis is determined using an Analytic Hierarchy Process based method with medical expert participation. Then a binary linear programming problem, with a constraint imposed on the total diagnostic potential of the tests and a criterion function of minimal total test cost, is solved for each diagnosis. For the case study, in which a patient with lumbar pain comes to the GP, the diagnostic potentials of the tests have been estimated for each of six preliminary diagnoses and each test. Then for each diagnosis, the solution of the binary linear programming problem has been found. A limitation of the case study is the estimation of the diagnostic potential of the tests by one expert only. This approach can also be applied in the diagnostics of technical objects and systems.
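The underlying optimization can be illustrated on a toy instance. The paper solves a binary linear program; the sketch below simply enumerates subsets, which is equivalent for small test menus, and the costs, potentials (here scaled to integers), and threshold are hypothetical values, not the paper's expert-elicited data.

```python
from itertools import combinations

def cheapest_test_set(costs, potentials, threshold):
    """Exhaustive search for the minimum-cost subset of tests whose total
    diagnostic potential reaches the threshold (fine for small test menus)."""
    n = len(costs)
    best, best_cost = None, float("inf")
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(potentials[i] for i in subset) >= threshold:
                cost = sum(costs[i] for i in subset)
                if cost < best_cost:
                    best, best_cost = subset, cost
    return best, best_cost

# Hypothetical costs and integer-scaled diagnostic potentials for five tests.
subset, cost = cheapest_test_set(
    costs=[30, 10, 25, 40, 15],
    potentials=[35, 15, 30, 45, 20],
    threshold=60)
```

Here tests 1 and 3 together reach the potential threshold at the lowest total cost of 50; a binary linear programming solver finds the same optimum without enumerating all subsets.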

10:20-12:00 Session 8E: COMS 4
10:20
Iterative global sensitivity analysis algorithm with neural network surrogate modeling

ABSTRACT. Global sensitivity analysis (GSA) is a method to quantify the effect of the input parameters on the outputs of physics-based systems. Performing GSA can be challenging due to the combined effect of the high computational cost of each individual physics-based model, a large number of input parameters, and the need to perform repetitive model evaluations. To reduce this cost, neural networks (NNs) are used to replace the expensive physics-based model in this work. This introduces the additional challenge of finding the minimum number of training data samples required to train the NNs accurately. In this work, a new method is introduced to accurately quantify the GSA values by iterating over both the number of samples required to train the NNs, terminated using an outer-loop sensitivity convergence criterion, and the number of model responses required to calculate the GSA, terminated with an inner-loop sensitivity convergence criterion. The iterative surrogate-based GSA guarantees converged values for the Sobol’ indices and, at the same time, alleviates the specification of arbitrary accuracy metrics for the surrogate model. The proposed method is demonstrated on two cases, namely, an eight-variable borehole function and a three-variable nondestructive testing (NDT) case. For the borehole function, both the first and total-order Sobol’ indices required 200 and 100,000 data points to terminate on the outer- and inner-loop sensitivity convergence criteria, respectively. For the NDT case, these values were 100 for both first and total-order indices for the outer-loop sensitivity convergence, and 1,000,000 and 1000, respectively, for the first and total-order indices on the inner-loop sensitivity convergence. The differences between the proposed method and GSA on the true functions are less than 3% in the analytical case and less than 10% in the physics-based case (where the large error comes from small Sobol’ indices).
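The inner-loop quantity being converged, the first-order Sobol' index, can be estimated by standard pick-freeze Monte Carlo. The sketch below uses the Saltelli-style estimator on a cheap additive toy model in place of the paper's NN surrogate; the model and sample size are illustrative assumptions.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=10000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices (pick-freeze).
    model maps an (n, d) array of inputs in [0, 1]^d to an (n,) output."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # vary only the i-th input
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Additive toy model with known variance contributions 1 : 4 : 9.
toy = lambda X: X[:, 0] + 2 * X[:, 1] + 3 * X[:, 2]
S = first_order_sobol(toy, 3)   # approx. [1/14, 4/14, 9/14]
```

In the paper's scheme, `model` would be the trained NN surrogate, and both `n_samples` and the surrogate's training set grow until the estimated indices stop changing.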

10:40
Forecasting Electricity Prices: Autoregressive Hybrid Nearest Neighbors (ARHNN) method

ABSTRACT. The ongoing reshaping of electricity markets has significantly stimulated electricity trading. Limitations in storing electricity, as well as on-the-fly changes in demand and supply dynamics, have made price forecasts a fundamental aspect of traders’ economic stability and growth. In this perspective, there is a broad literature that focuses on developing methods and techniques to forecast electricity prices. In this paper, we develop a new hybrid method, called ARHNN, for electricity price forecasting (EPF) in day-ahead markets. A well-performing autoregressive model with exogenous variables is the main forecasting instrument in our method. Contrary to the traditional statistical approaches, in which the calibration sample consists of the most recent and successive observations, we employ the k-nearest neighbors (k-NN) instance-based learning algorithm and select the calibration sample based on a similarity (distance) measure over a subset of the autoregressive model’s variables. The optimal level of the k-NN parameter is identified during the validation period in such a way that the forecasting error is minimized. We apply our method to the EPEX SPOT market in Germany. Comparison with commonly used models shows significantly improved accuracy in the results.
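The similarity-based calibration idea can be shown in miniature: instead of calibrating on the most recent days, pick the historical days closest to the target day in feature space. The feature vectors and distance choice below are illustrative assumptions, not the paper's actual variable subset.

```python
import numpy as np

def knn_calibration_window(history, query, k):
    """Select the k historical days most similar to the query day
    (Euclidean distance over feature vectors), instead of simply
    taking the k most recent days."""
    distances = np.linalg.norm(history - query, axis=1)
    return np.argsort(distances)[:k]

# Hypothetical per-day features, e.g. [load forecast, renewables forecast].
history = np.array([[50.0, 10.0],
                    [80.0, 30.0],
                    [52.0, 11.0],
                    [75.0, 28.0]])
query = np.array([51.0, 10.5])
idx = knn_calibration_window(history, query, k=2)   # days 0 and 2 are closest
```

The autoregressive model would then be fitted only on the returned days; in ARHNN, `k` itself is tuned on a validation period to minimize forecast error.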

11:00
Data-Driven Methods for Weather Forecast

ABSTRACT. In this paper, we propose efficient and practical data-driven methods for weather forecasting. We exploit the information brought by historical weather datasets to build machine-learning-based models. These models are employed to produce numerical forecasts, which can be improved by injecting additional data via data assimilation. The general idea of our approaches is as follows: given a set of time snapshots of some dynamical system, we group the data by time across multiple days. These groups are employed to build first-order Markovian models that reproduce the dynamics from time to time. The precision of our numerical models can be improved via sequential data assimilation. Experimental tests are performed using the National-Centers-for-Environmental-Prediction Department-of-Energy Reanalysis II dataset. The results reveal that numerical forecasts can be obtained within reasonable error magnitudes in the $L_2$ norm sense, and moreover, observations can improve forecasts by orders of magnitude in some cases.
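In the spirit of the first-order Markovian models described above, one minimal realization is a linear propagator fitted by least squares from consecutive snapshots. The synthetic two-dimensional system below is an illustrative stand-in for grouped reanalysis snapshots, not the paper's actual model.

```python
import numpy as np

def fit_markov_operator(snapshots):
    """Least-squares first-order Markovian propagator A with x_{t+1} = A x_t,
    fitted from consecutive time snapshots (columns of the data matrix)."""
    X = snapshots[:, :-1]   # states at time t
    Y = snapshots[:, 1:]    # states at time t+1
    return Y @ np.linalg.pinv(X)

# Synthetic system with a known propagator, to check that fitting recovers it.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.9]])
x = rng.standard_normal(2)
snaps = [x]
for _ in range(50):
    x = A_true @ x
    snaps.append(x)
A_hat = fit_markov_operator(np.array(snaps).T)   # recovers A_true
```

Once fitted, `A_hat` advances any state one step; sequential data assimilation would then blend these model forecasts with incoming observations.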

11:20
Generic Case of Leap-Frog Algorithm for Optimal Knots Selection in Fitting Reduced Data

ABSTRACT. The problem of fitting multidimensional reduced data ${\cal M}_n$ is discussed here. The unknown interpolation knots ${\cal T}$ are replaced by optimal knots which minimize a highly non-linear multivariable function ${\cal J}_0$. The numerical scheme called {\em Leap-Frog Algorithm} is used to compute such optimal knots for ${\cal J}_0$ via an iterative procedure based in each step on single-variable optimization of ${\cal J}_0^{(k,i)}$. The discussion of conditions enforcing the unimodality of each ${\cal J}_0^{(k,i)}$ is supplemented by illustrative examples referring to the generic case of {\em Leap-Frog}. The latter forms a new insight into the topic of interpolating reduced data ${\cal M}_n$.

11:40
Intelligent Planning of Logistic Networks to Counteract Uncertainty Propagation

ABSTRACT. A major obstacle to stable and cost-efficient management of goods distribution systems is the bullwhip effect – reinforced demand uncertainty propagating among system nodes. In this work, by solving a formally established optimization problem, it is shown how one can mitigate the bullwhip effect, at the same time minimizing transportation costs, in modern logistic networks with complex topologies. The flow of resources in the analyzed network is governed by the popular order-up-to inventory policy, which strives to maintain sufficient stock at the nodes to answer a priori unknown, uncertain demand. The optimization objective is to decide how intensively a given transport channel should be used so that unnecessary goods relocation and the bullwhip effect are avoided while being able to fulfill demand requests. The computationally challenging optimization task is solved using a population-based evolutionary technique – Biogeography-Based Optimization. The results are verified in extensive simulations of a real-world transportation network.
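The order-up-to policy that governs the network's nodes can be sketched for a single node. The target level, lead time, and demand stream below are illustrative assumptions; the paper's setting extends this to many interconnected nodes.

```python
import random

def simulate_order_up_to(level, demands, lead_time=2):
    """Order-up-to policy at one node: each period, order enough to bring the
    inventory position (on-hand stock + in-transit orders) back to `level`."""
    stock = level                        # start fully stocked
    pipeline = [0] * lead_time           # orders in transit, one slot per period
    history = []
    for d in demands:
        stock += pipeline.pop(0)         # receive the oldest in-transit order
        stock -= d                       # serve demand (negative = backlog)
        position = stock + sum(pipeline)
        order = max(level - position, 0)
        pipeline.append(order)
        history.append((stock, order))
    return history

random.seed(1)
demands = [random.randint(5, 15) for _ in range(20)]
trace = simulate_order_up_to(level=40, demands=demands)
```

With full backlogging, this single-node policy simply orders each period's demand; the bullwhip effect the paper targets arises when such nodes are chained and each one reacts to the orders, not the demand, of its downstream neighbors.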

10:20-12:00 Session 8F: QCW 1
10:20
Implementing Quantum Finite Automata Algorithms on Noisy Devices

ABSTRACT. The quantum finite automata (QFAs) literature offers an alternative mathematical model for studying quantum systems with finite memory. Demonstrating an advantage of quantum computing, QFAs have been shown to be exponentially more succinct than their classical counterparts on certain problems such as $\MODp = \setBuilder{a^{j}}{j \equiv 0 \mod p}$, where $p$ is a prime number. In this paper we present improved circuit-based implementations of QFA algorithms recognizing the $\MODp$ problem using the Qiskit framework. We focus on the case $p=11$ and provide a 3-qubit implementation for the $\MOD{11}$ problem, reducing the total number of required gates using alternative approaches. We run the circuits on real IBM quantum devices, but due to the limitations of real quantum devices in the NISQ era, the results are heavily affected by noise. This limitation reveals once again the need for algorithms using fewer resources. Consequently, we consider an alternative 3-qubit implementation which works better in practice and obtain promising results even for the $\MOD{31}$ problem.
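The succinctness result rests on a one-qubit automaton that rotates by an angle proportional to 2π/p per input symbol (the classical Ambainis–Freivalds construction). The sketch below is an illustrative classical simulation of that automaton, not the paper's Qiskit circuits:

```python
import math

def accept_probability(j, p, k=1):
    """Acceptance probability of a one-qubit MOD_p automaton on input a^j:
    each symbol rotates the qubit by 2*pi*k/p, and the automaton accepts
    iff the qubit is measured back in its initial state."""
    return math.cos(2 * math.pi * k * j / p) ** 2

p = 11
# strings whose length is divisible by p are always accepted
assert abs(accept_probability(0, p) - 1.0) < 1e-12
assert abs(accept_probability(3 * p, p) - 1.0) < 1e-12
# other strings are accepted with probability strictly below 1
worst = max(accept_probability(j, p) for j in range( 1, p))
```

A single rotation leaves the worst-case error close to 1; the full construction therefore combines several rotations with different values of k to push the error below 1/2.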

10:40
OnCall Operator Scheduling for Satellites with Grover's Algorithm

ABSTRACT. The application of quantum algorithms to some problems in NP promises a significant reduction in time complexity. This work uses Grover's Algorithm, designed to search an unstructured database with quadratic speedup, to find a valid solution for an instance of the on-call operator scheduling problem at the German Space Operations Center. We explore new approaches to encoding the problem and construct the Grover oracle automatically from the given constraints, independent of the problem size. Our solution is not designed for currently available quantum chips but aims to scale with their growth in the coming years.

11:00
Multimodal Container Planning: a QUBO Formulation and Implementation on a Quantum Annealer

ABSTRACT. Quantum computing is developing fast. Real-world applications are within reach in the coming years. One of the most promising areas is combinatorial optimisation, where the Quadratic Unconstrained Binary Optimisation (QUBO) problem formulation is used to obtain good approximate solutions. Both universal quantum computers and quantum annealers can handle this kind of problem well. In this paper, we present an application to multimodal container planning. We show how to map this problem to a QUBO problem formulation and how a practical implementation can be realised on the quantum annealer produced by D-Wave Systems.
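To make the mapping concrete: a hard constraint such as "assign the container to exactly one mode" enters a QUBO as a quadratic penalty term. The toy instance below (invented costs, not the paper's model) is solved by exhaustive enumeration, which is feasible only at this tiny scale:

```python
from itertools import product

cost = [3.0, 5.0]  # hypothetical costs of two transport modes
P = 10.0           # penalty weight, chosen larger than any cost difference

def qubo_energy(x):
    """Transport cost plus a penalty for violating 'exactly one mode';
    (x0 + x1 - 1)^2 expands into quadratic and linear terms in the
    binary variables, so the whole objective stays a QUBO."""
    return sum(c * xi for c, xi in zip(cost, x)) + P * (sum(x) - 1) ** 2

best = min(product((0, 1), repeat=len(cost)), key=qubo_energy)
```

The minimiser is the feasible assignment with the lower cost, (1, 0); an annealer searches the same energy landscape physically instead of by enumeration.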

11:20
Portfolio Optimisation Using the D-Wave Quantum Annealer

ABSTRACT. The first quantum computers are expected to perform well on quadratic optimisation problems. In this paper a quadratic problem in finance is considered: the Portfolio Optimisation problem. Here, a set of assets is chosen for investment such that the total risk is minimised, a minimum return is realised and a budget constraint is met. This problem is solved for several instances in two main indices, the Nikkei225 and the S\&P500 index, using the state-of-the-art implementation of D-Wave's quantum annealer and its hybrid solvers. The results are benchmarked against conventional, state-of-the-art, commercially available tooling. The results show that for problems of the size of the instances used, the D-Wave solution, in its current, still limited size, already comes close to the performance of commercial solvers.

11:40
Cross Entropy Optimization of Constrained Problem Hamiltonians for Quantum Annealing

ABSTRACT. This paper proposes a Cross Entropy approach to shaping constrained Hamiltonians by optimizing their energy penalty values. The results show a significantly improved solution quality when run on D-Wave's quantum annealing hardware, and numerical computation of the eigenspectrum reveals that the solution quality is correlated with a larger minimum spectral gap. The experiments were conducted on the Knapsack, Minimum Exact Cover and Set Packing problems. For all three constrained optimization problems we could show a remarkably better solution quality compared to the conventional approach, in which the energy penalty values have to be guessed.

10:20-12:00 Session 8G: MMS 1
10:20
VVUQ of large-scale applications with QCG-PilotJob

ABSTRACT. This submission describes QCG-PilotJob and its role in the VVUQ scenarios implemented in the VECMA project. It also introduces the EasyVVUQ-QCGPJ API (EQI), which allows for straightforward usage of QCG-PilotJob directly from the EasyVVUQ library.

10:40
Validation, verification and sensitivity analysis in the multiscale fusion plasma simulations

ABSTRACT. In order to ensure plasma confinement in nuclear fusion reactors, it is crucial to understand the effects of microturbulence on the overall fusion plasma transport. A component-based, multiscale fusion workflow [1] couples single-scale models for equilibrium, turbulence, and transport with the MUSCLE3 library [2] and is utilized for this study. To make sure the simulation produces reliable results, the workflow needs to undergo a process of validation, verification, and uncertainty quantification. In the validation process, various validation metrics can be applied to quantitatively compare distributions from simulation and experimental data. The figure below shows the comparison between temperature distributions obtained from simulation and experimental data using the compatibility measure [3] (with various weighting factors) and the Z-test. Refining model parameters until the quantities of interest reach convergence is one approach towards verification of the computational model. In addition, variance-based sensitivity analysis on a single-scale model can be used to study the global response of the quantities of interest throughout the entire input parameter space. For the actual computation, the turbulence code GEM [4] was run for more than 6 seconds to reach a self-consistent state for the temperature and density profiles as well as for the associated plasma equilibrium and transport properties. The derived uncertainties, the sensitivity analysis, and the progress on validation and verification of the multiscale fusion workflow will be presented.

11:00
Towards a coupled migration and weather simulation: South Sudan conflict

ABSTRACT. Multiscale simulations present a new approach to increase the level of accuracy in terms of forced displacement forecasting, which can help humanitarian aid organizations to better plan resource allocations for refugee camps. People's decisions to move may depend on perceived levels of safety, accessibility or weather conditions; simulating this combination realistically requires a coupled approach. In this paper, we implement a multiscale simulation for the South Sudan conflict in 2016-2017 by defining a macroscale model covering most of South Sudan and a microscale model covering the region around the White Nile, which is in turn coupled to weather data from the Copernicus project. We couple these models cyclically in two different ways: using file I/O and using the MUSCLE3 coupling environment. For the microscale model, we incorporated weather factors including precipitation and river discharge datasets. To investigate the effects of the multiscale simulation and its coupling with weather data on refugees’ decisions to move and their speed, we compare the results with single-scale approaches in terms of the total validation error, total execution time and coupling overhead.

11:20
Evaluating WRF-BEP/BEM performance: on the way to analyze urban air quality at high resolution using WRF-Chem+BEP/BEM

ABSTRACT. Air pollution exposure is a major environmental health risk in highly populated areas and is responsible for an estimated 7 million deaths every year worldwide. Countries can reduce the burden of serious disease by lowering air pollution levels in their main cities. Monitoring, analyzing and predicting air pollution in our cities can help the population stay informed and urban planners make better decisions. WRF-Chem, the Weather Research and Forecasting (WRF) model coupled with chemistry, provides not only the expected meteorological conditions but also the concentrations of polluting species in the studied domain; however, WRF-Chem does not take the urban morphology into account. WRF-BEP/BEM includes the urban canopy effects in the atmosphere without considering the evolution of the pollutants. Therefore, in order to evaluate air quality in urban zones, the combination of all the mentioned models, WRF-Chem+BEP/BEM, must be considered. However, these coupled models are computationally very expensive, especially at very high urban resolutions, which makes it indispensable to properly analyze their performance in terms of time and quality for both operational and reanalysis purposes. This work represents the first step towards this global objective. To that end, the scalability of WRF-BEP/BEM and the quality of the results it provides have been analyzed for a case study of March 2015 in the urban area of Barcelona (Spain).

10:20-12:00 Session 8H: CLDD 4
10:20
On validity of Extreme Value Theory-based parametric models for out-of-distribution detection

ABSTRACT. Open-set classifiers need to be able to recognize inputs that are unlike the training or known data. As this problem, known as out-of-distribution (OoD) detection, is non-trivial, a number of methods have been proposed for it. These methods are mostly heuristic, with no clear consensus in the literature as to which should be used in specific OoD detection tasks. In this work, we focus on the recently proposed, yet popular, Extreme Value Machine (EVM) algorithm. The method is unique in that it uses parametric models of class inclusion, justified by Extreme Value Theory, and as such is deemed superior to heuristic methods. However, we demonstrate a number of open-set text and image recognition tasks in which the EVM is outperformed by simple heuristics. We explain this by showing that the parametric (Weibull) model in the EVM is not appropriate for many real datasets, owing to unsatisfied assumptions of the Extreme Value Theorem. Hence we argue that the EVM should be considered just another heuristic method.

10:40
Clustering-based Ensemble Pruning in the Imbalanced Data Classification

ABSTRACT. Ensemble methods in combination with data preprocessing techniques are among the most used approaches to the problem of imbalanced data classification. At the same time, the literature indicates the potential of classifier selection/ensemble pruning methods to deal with imbalance without preprocessing, owing to their ability to use the expert knowledge of the base models in specific regions of the feature space. The aim of this work is to check whether ensemble pruning algorithms can increase an ensemble's ability to detect minority class instances to a level comparable with methods employing oversampling techniques. Two approaches based on clustering the base models in the diversity space, proposed by the author in previous articles, were evaluated in computer experiments conducted on 41 benchmark datasets with a high imbalance ratio. The obtained results and the performed statistical analysis confirm the potential of employing classifier selection methods for the classification of data with a skewed class distribution.

11:00
Improvement of random undersampling to avoid excessive removal of points from a given area of the majority class

ABSTRACT. In this paper we focus on the class imbalance issue, which often leads to sub-optimal classifier performance. Despite many attempts to solve this problem, there is still a need for better methods that can overcome the limitations of the known ones. For this reason we developed a new algorithm that, in contrast to traditional random undersampling, removes at most k nearest neighbors of the samples belonging to the majority class. In this way, not only is the size of the majority set reduced, but the excessive removal of too many points from a given area is also prevented. The experiments, conducted on eighteen imbalanced datasets, confirm the usefulness of the proposed method for improving classification results compared to other undersampling methods. Non-parametric statistical tests show that these differences are usually statistically significant.

11:20
Predictability Classes for Forecasting Bank Clients Behavior by Transactional Data

ABSTRACT. Nowadays, forecasting a client's behavior using his or her digital footprints is in high demand. There are many approaches to predicting a client's next purchase or next location visited, focused on achieving the best forecasting quality in terms of different metrics. Within such approaches, however, the quality is usually measured with respect to the entire dataset, without distinguishing possible predictability classes. In contrast, here we pay attention to estimating the event predictability rate. More precisely, we propose an approach for the identification of a client's predictability class using only his or her historical transactional data. This approach allows us to estimate the predictability rate of a client's foreign trip in the next month before the actual forecasting. Our experiments show that the approach is rather efficient and that the predictability classes obtained agree well with those found after the actual forecasting.

11:40
A Non-Intrusive Machine Learning Solution for Malware Detection and Data Theft Classification in Smartphones

ABSTRACT. Smartphones contain information that is more sensitive and personal than that found on computers and laptops. With an increase in the versatility of smartphone functionality, more data has become vulnerable and exposed to attackers. Successful mobile malware attacks can steal a user's location, photos, or even banking information. Due to a lack of post-attack strategies, firms also risk going out of business due to data theft. Thus, there is a need not only to detect malware intrusion in smartphones but also to identify the data that has been stolen, in order to assess the damage, aid recovery and prevent future attacks. In this paper, we propose an accessible, non-intrusive machine learning solution to not only detect malware intrusion but also identify the type of data stolen for any app under supervision. We do this with Android usage data obtained using the publicly available data collection framework SherLock. We test the performance of our architecture for multiple users on real-world data collected using the same framework. Our architecture exhibits less than 9% inaccuracy in detecting malware and can classify the type of data being stolen with 83% certainty.

12:00-13:00Lunch
13:00-14:40 Session 9A: MT 9
13:00
Revolve-Based Adjoint Checkpointing for Multistage Time Integration

ABSTRACT. We consider adjoint checkpointing strategies that minimize the number of recomputations needed when using multistage timestepping. We demonstrate that we can improve on the seminal work based on the Revolve algorithm. The new approach provides better performance for a small number of time steps or checkpointing storage. Numerical results illustrate that the proposed algorithm can deliver up to two times speedup compared with that of Revolve and avoid recomputation completely when there is sufficient memory for checkpointing. Moreover, we discuss a tailored implementation that is arguably better suited for mature scientific computing libraries by avoiding central control assumed in the original checkpointing strategy. The proposed algorithm has been included in the PETSc library.
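The no-recomputation regime mentioned at the end of the abstract corresponds to the simple strategy of storing every intermediate state. A minimal sketch of that case for a scalar time stepper (a generic stand-in, not the PETSc implementation):

```python
def forward_step(u):
    # stand-in for one (possibly multistage) time step
    return 0.5 * u * (1.0 - u)

def adjoint_step(u, lam):
    # adjoint of forward_step: multiply by d(forward_step)/du = 0.5 - u
    return (0.5 - u) * lam

def adjoint_full_checkpointing(u0, n_steps):
    """With enough memory to checkpoint every state, the reverse sweep
    needs no recomputation at all."""
    states = [u0]
    for _ in range(n_steps):
        states.append(forward_step(states[-1]))
    lam = 1.0  # seed: derivative of the final state w.r.t. itself
    for u in reversed(states[:-1]):
        lam = adjoint_step(u, lam)
    return lam  # derivative of the final state w.r.t. u0
```

Revolve-style algorithms interpolate between this regime and the opposite extreme of storing only a few checkpoints and recomputing the remaining states during the reverse sweep.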

13:20
High Resolution TVD Scheme based on Fuzzy Modifiers for Shallow-Water equations

ABSTRACT. This work proposes a new fuzzy-logic-based high resolution (HR) total variation diminishing (TVD) scheme in finite volume frameworks to compute an approximate solution of the shallow water equations. Fuzzy logic enhances the performance of classical numerical algorithms. To test the effectiveness and accuracy of the proposed scheme, the dam-break problem is considered. A comparison with numerical results obtained by implementing some classical flux limiting methods is provided. The proposed scheme is able to capture both smooth and discontinuous profiles, leading to better oscillation-free results.

13:40
Large-scale stabilized multi-physics earthquake simulation for digital twin

ABSTRACT. With the development of computing environments and computational techniques, together with data observation technology, big data and extreme-scale computing (BDEC) has gained immense attention. An example of BDEC is the digital twin concept of a city: a high-fidelity model of the city developed on a computing system for BDEC. Virtual experiments using numerical simulations are performed there, and their results are used in decision making. The earthquake simulation, which entails the highest computational cost among the numerical simulations in the digital twin, was targeted in this study. In multi-physics earthquake simulations considering soil liquefaction, the computation can become unstable when a high resolution is used for spatial discretization. In the digital twin, high-resolution large-scale simulations are performed repeatedly, and thus it is important to avoid such instability due to the discretization settings. In this study, an earthquake simulation method was developed to stably perform high-resolution large-scale simulations by spatially averaging the constitutive law based on a non-local approach. The developed method enables us to stably perform simulations with a high resolution of the order of 0.1 m and obtain a converged solution.

14:00
On the design of Monte-Carlo particle coagulation solver interface: a CPU/GPU Super-Droplet Method case study with PySDM

ABSTRACT. The Super-Droplet Method (SDM) is a probabilistic Monte-Carlo-type model of the particle coagulation process, an alternative to the mean-field formulation of Smoluchowski. As an algorithm, SDM has linear computational complexity with respect to the state vector length, the state vector length is constant throughout the simulation, and most of the algorithm steps are readily parallelisable. This paper discusses the design and implementation of two number-crunching backends for SDM implemented in PySDM, a new free and open-source Python package for simulating the dynamics of atmospheric aerosol, cloud and rain particles (https://github.com/atmos-cloud-sim-uj/PySDM). The two backends share their application programming interface (API) but leverage distinct parallelism paradigms, target different hardware, and are built on top of different lower-level routine sets. The first offers multi-threaded CPU computations and is based on Numba (using Numpy arrays). The second offers GPU computations and is built on top of ThrustRTC and CURandRTC (and does not use Numpy arrays). The paper puts forward a proposal for a Super-Droplet Kernels API featuring data structures and backend-level routines (computational kernels) suitable for performing the computational steps SDM consists of. The discussion covers data dependencies across steps, parallelisation opportunities, CPU and GPU implementation nuances, and the algorithm workflow. Example simulations suitable for validating implementations of the API are presented.
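The constant state-vector length comes from representing many real particles by one super-droplet with a multiplicity. A single coalescence event (sketched here for the simplest case of the update rule of Shima et al.; the attribute name and values are illustrative) updates a candidate pair in place:

```python
# multiplicities and an extensive attribute (e.g. volume) of 4 super-droplets
xi = [1000, 600, 400, 250]
volume = [1.0, 2.0, 4.0, 8.0]

def coalesce(i, j):
    """One coalescence event for a candidate pair: xi[j] real droplets
    from the more numerous super-droplet merge pairwise into the other.
    The number of super-droplets (state-vector length) never changes."""
    if xi[i] < xi[j]:
        i, j = j, i                # ensure xi[i] >= xi[j]
    xi[i] -= xi[j]                 # donor droplets that did not coalesce remain
    volume[j] += volume[i]         # each of the xi[j] droplets grew by volume[i]

mass_before = sum(x * v for x, v in zip(xi, volume))
coalesce(0, 1)
mass_after = sum(x * v for x, v in zip(xi, volume))
```

Total mass is conserved and the state-vector length stays constant, which is what keeps the per-step cost linear in the number of super-droplets.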

14:20
Comprehensive regularization of PIES for problems modeled by 2D Laplace’s equation

ABSTRACT. The paper proposes a concept for eliminating the explicit computation of singular integrals appearing in the parametric integral equation system (PIES) used to simulate the steady-state temperature field distribution. These singularities can be eliminated by regularizing the PIES formula with an auxiliary regularization function. Contrary to existing regularization methods that only eliminate strong singularities, the proposed approach is considerably more comprehensive, as it eliminates all strong and weak singularities. As a result, all singularities associated with PIES's integral functions can be removed. A practical aspect of the proposed regularization is that all integrals appearing in the resulting formula can be evaluated numerically with a standard Gauss-Legendre quadrature rule. Simulation results indicate the high accuracy of the proposed algorithm.

13:00-14:40 Session 9B: MT 10
13:00
Fast and Accurate Determination of Graph Node Connectivity Leveraging Approximate Methods

ABSTRACT. For a graph G, the node connectivity K is defined as the minimum number of nodes that must be removed to make the graph disconnected. The determination of K is a computationally demanding task for large graphs since even the most efficient algorithms require many evaluations of an expensive max flow function. Approximation methods for determining K replace the max flow function with a much faster algorithm that gives a lower bound on the number of node independent paths, but this frequently leads to an underestimate of K. We show here that with minor changes, the approximate method can be adapted to retain most of the performance benefits while still guaranteeing an accurate result.
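One cheap lower bound of the kind the abstract alludes to counts node-independent paths greedily: repeatedly take a shortest path and block its interior nodes. This sketch is a generic illustration of that idea, not the authors' algorithm, and assumes the two endpoints are non-adjacent:

```python
from collections import deque

def bfs_path(adj, s, t, blocked):
    """Shortest s-t path avoiding `blocked` interior nodes, or None."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (v == t or v not in blocked):
                prev[v] = u
                q.append(v)
    return None

def approx_node_independent_paths(adj, s, t):
    """Greedy lower bound on the number of node-independent s-t paths:
    take a shortest path, block its interior nodes, repeat. Fast, but it
    may undercount -- an exact method must verify the bound via max flow.
    Assumes s and t are non-adjacent."""
    blocked, count = set(), 0
    while True:
        path = bfs_path(adj, s, t, blocked)
        if path is None:
            return count
        blocked.update(path[1:-1])
        count += 1

# 4-cycle: two node-independent paths between opposite corners
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
paths = approx_node_independent_paths(adj, 0, 2)
```

For this 4-cycle the greedy bound equals the true local connectivity, 2; on less symmetric graphs it can fall short, which is exactly the underestimate the paper's hybrid approach guards against.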

13:20
An Exact Algorithm for Finite Metric Space Embedding into a Euclidean Space when the Dimension of the Space is not Known.

ABSTRACT. We present an O(n^3) algorithm for solving the Distance Geometry Problem for a complete graph (a simple undirected graph in which every pair of distinct vertices is connected by a unique edge) consisting of n + 1 vertices and non-negatively weighted edges. It is known that when a solution to the problem exists, the dimension of the Euclidean embedding is at most n. The algorithm provides the smallest possible dimension of the Euclidean space for which an exact embedding of the graph exists. Alternatively, when the distance matrix under consideration is non-Euclidean, the algorithm determines a subset of graph vertices whose mutual distances form a Euclidean matrix. The proposed algorithm is exact. If the distance matrix is a Euclidean matrix, the algorithm provides a geometrically unambiguous solution for the location of the graph vertices. The presented embedding method is illustrated using examples of the metric traveling salesman problem, which in some cases allowed high-dimensional partial immersions to be obtained.

13:40
Resolving Policy Conflicts for Cross-Domain Access Control: A Double Auction Approach

ABSTRACT. Policy-mapping mechanisms can efficiently help to realize the exchange and sharing of cross-domain information at low cost. However, due to concerns over policy conflicts, and in the absence of sufficient incentives, most selfish domains are disinterested in helping others implement policy mapping cooperatively. Thus an appropriate incentive mechanism is required. In this paper, we propose an incentive mechanism to encourage selfish domains to take part in policy mapping and resolve policy conflicts. Formulating conflict resolution as a double auction and solving for the Bayesian Nash equilibrium, we design an optimal asking/bidding price scheme that maximizes the benefits of the domains involved. Simulations demonstrate that our approach can efficiently incentivize selfish domains to take part in cooperation.

14:00
An Adaptive Network Model for Procrastination Behaviour Including Self-Regulation and Emotion Regulation

ABSTRACT. Procrastination is an ever-growing problem in our current society: it has been shown that 80-95% of college students are subject to it. The importance of this natural human behaviour is what led to this study. In this paper, the goal was to model both the self-control and the emotion regulation dynamics involved in the process of procrastination. This is done by means of a temporal-causal network incorporating learning and control of that learning. We set out to unveil the dynamics of the system. Additionally, the effect of stress-regulation therapy on the process of procrastination was investigated. The model's base-level implementation was verified by making sure the aggregated impact matches the node values at certain stationary points, and the model's Hebbian learning behaviour was also mathematically shown to be correctly implemented. The results demonstrate the model's ability to represent different types of individuals, each with different stress sensitivities. Therapy was also shown to be greatly beneficial. This temporal-causal network can, however, be improved, for example by including self-compassion in the model as a link between procrastination and stress.

13:00-14:40 Session 9C: AIHPC4AS 5
13:00
Exploiting the Kronecker product structure of \varphi-functions with applications to exponential time integrators

ABSTRACT. Exponential time integrators are a class of methods for solving systems of ordinary differential equations, including those obtained from discretizing Partial Differential Equations (PDEs) in space. They are mostly employed to solve semilinear systems of the form u'(t)=Au(t)+N(u(t),t), where A is a linear operator (e.g., the stiffness matrix of a PDE) and N is non-linear. For second-order equations, exponential integrators are used after rewriting the system as a set of first-order equations in time. These types of methods are usually expressed in terms of the exponential of the matrix A and the so-called \varphi-functions. Exponential integrators have recently gained popularity due to advances in efficient algorithms to compute the action of \varphi-functions on vectors. In multiple applications, the matrix A has the form of a Kronecker sum, i.e., A = Ax\otimes Iy + Ix\otimes Ay, where Ax and Ay are one-dimensional matrices coming from discretizing the space variable with a finite element method or finite differences. This is possible when the spatial domain is a rectangle and the material properties are constant. Applications of interest that fulfill this requirement are the Schrödinger equation, Burgers' equation and the Allen-Cahn equation, among many others. It is well known that the exponential function preserves the Kronecker product structure. However, a similar decomposition does not exist for higher-order \varphi-functions with p > 0; indeed, such a construction exploiting the Kronecker product structure of A is elusive. To overcome this problem, here we introduce an auxiliary variable \Phi_p(A) := A^p\varphi_p(A) that allows us to find such a decomposition using the Kronecker structure. More precisely, we develop recurrence formulas to express \Phi_p(A), with A a 2D matrix with Kronecker sum structure, in terms of \Phi_q(Ax) and \Phi_q(Ay) for q<p. Then, we recover \varphi_p(A) from \Phi_p(A) by solving Sylvester-type equations.
This procedure dramatically decreases computational times and increases memory savings due to the low dimensionality of Ax and Ay compared to the dimension of A.
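The p = 0 case the abstract builds on is the classical identity exp(Ax ⊕ Ay) = exp(Ax) ⊗ exp(Ay), which holds because Ax ⊗ I and I ⊗ Ay commute. A dependency-free numerical check on small illustrative matrices (using a truncated Taylor series for the exponential, adequate at this scale):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker product of two square matrices."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n)] for i in range(n)]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def add(A, B, s=1.0):
    return [[a + s * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def expm(A, terms=25):
    """Matrix exponential via truncated Taylor series; fine for the
    small-norm matrices used in this sketch."""
    result, term, fact = eye(len(A)), eye(len(A)), 1.0
    for k in range(1, terms):
        term = matmul(term, A)
        fact *= k
        result = add(result, term, 1.0 / fact)
    return result

Ax = [[0.0, 0.2], [0.1, -0.3]]
Ay = [[-0.1, 0.4], [0.0, 0.2]]

# Kronecker sum: A = Ax (x) I + I (x) Ay
A = add(kron(Ax, eye(2)), kron(eye(2), Ay))
lhs = expm(A)
rhs = kron(expm(Ax), expm(Ay))
err = max(abs(a - b) for ra, rb in zip(lhs, rhs) for a, b in zip(ra, rb))
```

The paper's contribution is the analogous (and much less obvious) decomposition for the higher-order \varphi-functions via the auxiliary \Phi_p variables.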

13:20
Optimize Memory Usage in Vector Particle-In-Cell (VPIC) to Break the 10 Trillion Particle Barrier in Plasma Simulations

ABSTRACT. Vector Particle-In-Cell (VPIC) is one of the fastest plasma simulation codes in the world, with particle counts ranging from one trillion on the first petascale system, Roadrunner, to a more recent 10 trillion particles on the Blue Waters supercomputer. Supercomputers continue to grow rapidly in size, as does the gap between compute and memory capability. Memory has historically lagged behind compute, and the growing importance of accelerators such as GPUs in supercomputers only exacerbates this gap. Current memory systems greatly limit VPIC simulations, as the maximum number of particles that can be simulated depends directly on the available memory. In this study, we present a suite of VPIC optimizations (i.e., particle weight, half-precision, and fixed-point optimizations) that enable significant increases in the number of particles. We assess the optimizations' impact on a GPU-accelerated Power9 system. We show how our optimizations enable a 31.25% reduction in memory usage for particles.

13:40
Deep Learning for solving partial differential equations using Ritz method

ABSTRACT. The use of Deep Learning (DL) techniques to solve Partial Differential Equations (PDEs) has grown exponentially during the last five years. One of the methods that can be implemented with DL techniques to solve symmetric and positive definite PDEs is the Ritz method.

In this work, we want to analyze how the Ritz method performs with different numerical methods to estimate gradients and integrals. To compute the gradients of our model, we use finite differences or automatic differentiation; to approximate integrals, we use the trapezoidal rule or a Gaussian quadrature rule.

To illustrate our findings, we consider a one-dimensional problem whose solution is u(x)=x^2. We compare the numerical solutions and errors when using a three-point Gaussian quadrature rule combined with automatic differentiation versus a trapezoidal rule with finite differences to approximate the derivatives. We observe superior convergence of the higher-order methods, namely, Gaussian quadrature combined with automatic differentiation.
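The quadrature half of that comparison is easy to reproduce in isolation. A sketch of the two rules on a fixed interval (the Deep Ritz setup itself is not reproduced here):

```python
import math

def trapezoid(f, a, b):
    """Single-interval trapezoidal rule; exact only for polynomials of
    degree <= 1."""
    return 0.5 * (b - a) * (f(a) + f(b))

def gauss3(f, a, b):
    """Three-point Gauss-Legendre rule; exact for polynomials of
    degree <= 5."""
    nodes = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
    weights = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(nodes, weights))

f = lambda x: x * x        # integrand matching the model solution u(x) = x^2
exact = 1.0 / 3.0          # integral of x^2 over [0, 1]
gauss_error = abs(gauss3(f, 0.0, 1.0) - exact)
trap_error = abs(trapezoid(f, 0.0, 1.0) - exact)
```

For this integrand the Gaussian rule is exact while a single trapezoid overestimates by 1/6; the abstract's other ingredient, automatic differentiation versus finite differences for the gradients, is orthogonal to this sketch.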

14:00
Deep learning for prediction of complex geology ahead of drilling

ABSTRACT. During a geosteering operation the well path is intentionally adjusted in response to new data acquired while drilling. To achieve consistently high-quality decisions, especially when drilling in complex environments, decision support systems can help cope with high volumes of data and interpretation complexities. They can assimilate real-time measurements into a probabilistic geomodel and use the updated model for decision recommendations.

Recently, machine learning (ML) techniques have enabled a wide range of methods that redistribute computational cost from on-line to off-line calculations. In this paper, we introduce two ML techniques into the geosteering decision support framework. Firstly, a complex earth model representation is generated using a Generative Adversarial Network (GAN), and secondly, a commercial extra-deep electromagnetic simulator is represented using a Forward Deep Neural Network (FDNN).

The numerical experiments demonstrate that the combination of the GAN and the FDNN in ensemble randomized maximum likelihood data assimilation provides real-time estimation of complex geological uncertainty. This yields a reduction in the geological uncertainty ahead of the drill bit using the measurements gathered behind and around the well bore.

13:00-14:40 Session 9D: CompHealth 2
13:00
Simulation of Burnout Processes by a Multi-Order Adaptive Network Model

ABSTRACT. In this paper, an adaptive network model for the development of, and recovery from, burnout was designed and analysed. The current literature lacks adequate adaptive models describing the processes involved in burnout. In this research, the informal conceptual models of Golembiewski and Leiter-Maslach were combined with additional first- and second-order adaptive components and used to design a computational network model. Four different scenarios were simulated and compared, emphasising the importance of therapy and the ability to learn from it. The results show that when no therapy was present, emotional regulation was too poor to have an effect. However, once therapy was applied, emotional regulation would start to help against burnout. Another finding was that one long therapy session has a greater effect than several shorter sessions. Lastly, therapy only had a significant long-lasting effect when adequate neuro-plasticity occurred.

13:20
Reversed Correlation-Based Pairwised EEG Channel Selection in Emotional State Recognition

ABSTRACT. Emotions play an important role in everyday life and contribute to physical and emotional well-being. They can be identified by verbal or non-verbal signs, and emotional states can also be detected from EEG signals. However, efficient information retrieval from EEG sensors is a difficult and complex task due to noise from internal and external artifacts and overlapping signals from different electrodes. Therefore, appropriate electrode selection, and discovering the brain regions and electrode locations that are most or least correlated with different emotional states, is of great importance. We propose a reversed correlation-based algorithm for intra-user electrode selection, and an inter-subject subset analysis to establish the electrodes least correlated with emotions across all users. Moreover, we identified subsets of electrodes most correlated with emotional states. The proposed method was verified by experiments on the DEAP dataset. The obtained results were evaluated with respect to the recognition of two emotional dimensions: valence and arousal. The experiments showed that an appropriate reduction of electrodes has no negative influence on emotion recognition: the differences between recognition errors based on all electrodes and on the selected subsets were not statistically significant. Therefore, where appropriate, reducing the number of electrodes may be beneficial in terms of collecting less data, simplifying the EEG analysis, and mitigating interaction problems without loss of recognition accuracy.
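The intra-user selection idea, ranking channels by how strongly their features correlate with the emotion labels and keeping or dropping the extremes, can be sketched as follows. The channel names, data, and the use of plain Pearson correlation are illustrative assumptions, not the authors' implementation:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def rank_channels(features, labels):
    """features: {channel_name: [feature value per trial]};
    labels: [emotion label per trial]. Most correlated channel first."""
    scores = {ch: abs(pearson(vals, labels)) for ch, vals in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

# toy per-trial features for three hypothetical channels and valence labels
features = {"Fp1": [1, 2, 3, 4], "Oz": [4, 1, 3, 2], "F3": [2, 4, 6, 8]}
labels = [1, 2, 3, 4]
print(rank_channels(features, labels))  # least correlated channel comes last
```

Channels at the bottom of the ranking across all subjects would be candidates for removal.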

13:40
Regaining Cognitive Control: An Adaptive Computational Model Involving Neural Correlates of Stress, Control and Intervention

ABSTRACT. Apart from various other neural and hormonal changes caused by stress, frequent and long-term activation of the hypothalamus-pituitary-adrenal (HPA) axis in response to stress leads, in an adaptive manner, to inadequacy of the stress response system. This results in a cognitive dysfunction where the subject is no longer able to downregulate his or her stress, due to atrophy in the hippocampus and hypertrophy in the amygdala. These changes can be addressed by antidepressant treatment or by psychological treatments such as cognitive and behavioural therapies. In this paper, an adaptive neuroscience-based computational network model is introduced which demonstrates such a cognitive dysfunction due to a long-term stressor and the regaining of cognitive abilities through a cognitive-behavioural therapy: Mindfulness-Based Cognitive Therapy (MBCT). Simulation results are reported which demonstrate the adaptivity of the model as well as the dynamic interaction of the brain areas involved in the phenomenon.

14:00
MAM: A Metaphor-based Approach for Mental Illness Detection

ABSTRACT. Mental illness is among the most disabling disorders, affecting millions of people across the world. Although a great deal of research has been done on preventing mental disorders, detecting mental illness in potential patients still remains a considerable challenge. This paper proposes a novel metaphor-based approach (MAM) to determine whether a social media user has a mental disorder by classifying social media texts. We observe that texts posted by people with mental illness often contain many implicit emotions that metaphors can express. Therefore, we extract these texts' metaphor features as the primary indicator for the text classification task. Our approach first employs a CNN-RNN (Convolutional Neural Network - Recurrent Neural Network) framework to enable the representation of long texts. The metaphor features are then applied in the attention mechanism to achieve mental illness detection based on metaphorical emotions. Compared with other works, our approach achieves promising results on the detection of mental illness.

14:20
Theory of Mind Helps to Predict Neurodegenerative Processes in Parkinson's Disease

ABSTRACT. Normally, it takes many years of theoretical and clinical training for a physician to become a movement disorder specialist, and multiple additional years of clinical practice to handle various “non-typical” cases. The purpose of our study was to predict neurodegenerative disease development using abstract rules learned from experienced neurologists. Theory of mind (ToM) is a human's ability to represent mental states such as emotions, intentions or knowledge of others. ToM is crucial not only in human social interactions but is also used by neurologists to find an optimal treatment for patients with neurodegenerative pathologies such as Parkinson's disease (PD). On the basis of doctors' expertise, we used supervised learning to build an AI system that consists of abstract granules representing the ToM of several movement disorder neurologists (their knowledge and intuitions). We looked for similarities between the granules of patients in different disease stages and the granules of more advanced PD patients. We compared a group of 23 PD patients, with attributes measured three times every six months (G1V1, G1V2, G1V3), with another group of 24 more advanced PD patients (G2V1). By means of supervised learning and rough set theory, we found rules describing the symptoms of G2V1 and applied them to G1V1, G1V2, and G1V3. We obtained the following accuracies for all/speed/emotion/cognition attributes: G1V1: 68/59/53/72%; G1V2: 72/70/79/79%; G1V3: 82/92/71/74%. These results support our hypothesis that divergent sets of granules are characteristic of different brain regions that may degenerate in non-uniform ways with Parkinson's disease progression.

13:00-14:40 Session 9E: COMS 5
13:00
Modeling traffic forecasts with probability in DWDM optical networks

ABSTRACT. Dense wavelength division multiplexed networks enable operators to use the bandwidth offered by a single fiber pair more efficiently and thus make significant savings in both operational and capital expenditures. In this study, probabilistic forecasts of traffic demand patterns in subsequent years are calculated using statistical methods. Based on the results of the statistical analysis, numerical methods are used to calculate the traffic intensity on the edges of a dense wavelength division multiplexed network, both in terms of the number of channels allocated and the total throughput expressed in gigabits per second. For the calculation of traffic intensity, a model based on mixed integer programming is proposed which includes a detailed description of optical network resources. The study is performed for a practically relevant network within selected scenarios determined by realistic traffic demand sets.

13:20
Endogenous factors affecting the cost of large-scale geo-stationary satellite systems

ABSTRACT. This work proposes the use of model-based sensitivity analysis to determine the important internal factors that affect the cost of large-scale complex engineered systems (LSCES), such as geo-stationary communication satellites. A physics-based satellite simulation model and a parametric cost model are combined to model a real-world satellite program whose data is extracted from selected acquisition reports. A variance-based global sensitivity analysis using Sobol' indices helps establish the internal factors computationally. The internal factors in this work are associated with the requirements of the program, operations and support, launch, ground equipment, and the personnel required to support and maintain the program. The results show that internal factors such as the system-based requirements affect the cost of the program significantly. These important internal factors will be used to create a simulation-based framework to aid in the design and development of future LSCES.

13:40
Description of electricity consumption by using leading hours intra-day model

ABSTRACT. This paper focuses on the parametrization of one-day time series of electricity consumption. To parametrize such time series, a data mining technique was developed. The technique is based on multivariate linear regression and is self-configuring; in other words, the user does not need to set any model parameters upfront. The model finds the most essential data points, whose values allow the electricity consumption for the remaining hours of the same day to be modelled. The number of data points required to describe the whole time series depends on the demanded precision, which is up to the user. The constructed model is characterized by high precision and allows non-typical days, from the electricity demand point of view, to be identified.
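The underlying idea, predicting the remaining hours of a day from the values at a few leading data points via linear models, can be sketched with toy data and a single leading hour. The paper's actual model is multivariate and selects the key points automatically; this is only an illustration of the reconstruction step:

```python
def fit_simple(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# historical daily consumption profiles, one value per hour (toy: 4 hours)
days = [
    [10, 12, 15, 11],
    [11, 13, 16, 12],
    [ 9, 11, 14, 10],
]
leading_hour = 0  # hour 0 acts as the essential "leading" data point
models = {}
for h in range(1, 4):
    xs = [d[leading_hour] for d in days]
    ys = [d[h] for d in days]
    models[h] = fit_simple(xs, ys)  # one linear model per remaining hour

# reconstruct a new day's whole profile from its leading-hour value alone
x_new = 10.5
profile = [x_new] + [a + b * x_new for a, b in (models[h] for h in range(1, 4))]
print(profile)
```

With more leading hours the per-hour models become multivariate, and the number of leading points can be grown until the reconstruction error meets the demanded precision.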

14:00
The problem of tasks scheduling with due dates in a flexible multi-machine production cell

ABSTRACT. In this paper we consider an NP-hard problem of task scheduling with due dates and penalties for delay in a flexible production cell. Each task should be assigned to one of the cell's machines and the order of execution on the machines should be determined, so as to minimize the sum of penalties for tardiness of task execution. We propose a tabu search algorithm to solve the problem. Neighborhoods are generated by moves based on changing the order of tasks on a machine and changing the machine on which a task will be performed. We prove properties of the moves that significantly accelerate the search of the neighborhoods and shorten the execution time of the algorithm, which, as a result, significantly improves its efficiency compared to the version that does not use these properties.
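The overall scheme, tabu search over reorder-and-reassign moves minimizing total weighted tardiness, can be sketched as follows. This naive version evaluates every move from scratch; the accelerating move properties proved in the paper are exactly what this sketch lacks:

```python
def total_tardiness(schedule, jobs):
    """schedule: one job-index list per machine; jobs[j] = (proc_time, due, weight)."""
    cost = 0
    for machine in schedule:
        t = 0
        for j in machine:
            p, d, w = jobs[j]
            t += p
            cost += w * max(0, t - d)
    return cost

def tabu_search(jobs, n_machines, iters=100, tenure=5):
    # initial solution: round-robin assignment in input order
    sched = [[] for _ in range(n_machines)]
    for j in range(len(jobs)):
        sched[j % n_machines].append(j)
    best, best_cost = [m[:] for m in sched], total_tardiness(sched, jobs)
    tabu = {}  # job -> iteration until which moving it again is forbidden
    for it in range(iters):
        candidates = []
        for mi, machine in enumerate(sched):
            for pi in range(len(machine)):
                j = machine[pi]
                base = [m[:] for m in sched]
                base[mi].pop(pi)
                # reinsert job j at every position of every machine
                for mj in range(n_machines):
                    for pj in range(len(base[mj]) + 1):
                        new = [m[:] for m in base]
                        new[mj].insert(pj, j)
                        c = total_tardiness(new, jobs)
                        # aspiration: allow a tabu move if it beats the best
                        if tabu.get(j, -1) < it or c < best_cost:
                            candidates.append((c, j, new))
        if not candidates:      # every move tabu: reset the tabu list
            tabu.clear()
            continue
        c, j, sched = min(candidates, key=lambda x: x[0])
        tabu[j] = it + tenure
        if c < best_cost:
            best, best_cost = [m[:] for m in sched], c
    return best, best_cost
```

For example, three unit-weight jobs with processing time 2 and due date 2 on two machines have an unavoidable tardiness of 2, which the search finds from any start.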

14:20
Discovering the influence of interruptions in cycling training: A data science study

ABSTRACT. The usage of wearables in different sports has resulted in the potential to record vast amounts of data that allow us to dive even deeper into sports training. This paper provides a novel approach to classifying stoppage events in cycling, and shows an analysis of interruptions in training that are caused when a cyclist encounters a road intersection where he/she must stop while cycling on the track. From 2,629 recorded cycling training sessions, 3,731 viable intersection events were identified, on which an analysis of heart-rate and speed data was performed. It was discovered that individual intersections took an average of 4.08 seconds, affecting the speed and heart-rate of the cyclist before and after the event. We also discovered that, after the intersection disruptions, the speed of the cyclist decreased and his heart-rate increased in comparison to the pre-intersection values.

14:40
Analysis of complex partial seizure using non-linear Duffing Van der Pol oscillator model

ABSTRACT. Complex partial seizures are among the most common types of epileptic seizures. The main purpose of this case study is the application of the Van der Pol oscillator model to study brain activity during temporal left lobe seizures. The oscillator is characterized by three pairs of parameters: one linear and two nonlinear (cubic and Van der Pol damping). Optimization based on the normalized power spectra of the model output and the real EEG signal is performed using a genetic algorithm. The results suggest that the estimated parameter values change during the course of the seizure, according to changes in brain wave generation. In the article, the non-stationarity of the considered seizure phases is analyzed based on the values of the parameter sensitivity factor, sample entropy and spectrograms. The onset of the seizure and the tangled stage are strongly non-stationary processes.
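A Duffing-Van der Pol oscillator of this general kind can be integrated with a standard RK4 scheme. The parameter values and the exact sign conventions below are illustrative assumptions, not the values estimated in the paper:

```python
def duffing_vdp_step(state, dt, mu, alpha, beta):
    """One RK4 step of  x'' = mu*(1 - x**2)*x' - alpha*x - beta*x**3,
    i.e. Van der Pol damping (mu) plus a cubic Duffing stiffness (beta)."""
    def f(s):
        x, v = s
        return (v, mu * (1 - x ** 2) * v - alpha * x - beta * x ** 3)
    def add(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# simulate from a small perturbation: the nonlinear damping pumps energy
# in until the trajectory settles onto a self-sustained limit cycle
state = (0.1, 0.0)
trajectory = [state]
for _ in range(20000):
    state = duffing_vdp_step(state, 1e-3, mu=1.0, alpha=1.0, beta=0.5)
    trajectory.append(state)
```

In the paper's setting, a genetic algorithm would tune `mu`, `alpha`, `beta` so that the power spectrum of such a trajectory matches the EEG segment's spectrum.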

13:00-14:40 Session 9F: QCW 2
13:00
Classification using a two-qubit quantum chip

ABSTRACT. Quantum computing has great potential for advancing machine learning algorithms beyond classical reach. Even though full-fledged universal quantum computers do not exist yet, their expected benefits for machine learning can already be shown using simulators and already available quantum hardware. In this work, we consider a distance-based classification algorithm and modify it to run on actual early-stage quantum hardware. We extend earlier work and present a non-trivial reduction using only two qubits. The algorithm is subsequently run on a two-qubit silicon-spin quantum computer. We show that the results obtained using the two-qubit silicon-spin quantum computer are similar to the theoretically expected results.

13:20
Performance Analysis of Support Vector Machine Implementations on the D-Wave Quantum Annealer

ABSTRACT. In this paper a classical classification model, the kernel support vector machine, is implemented as a Quadratic Unconstrained Binary Optimisation (QUBO) problem, in which data points are binary-classified by a separating hyperplane while maximizing the functional margin. The problem is solved for a public Banknote Authentication dataset and the well-known Iris dataset using a classical approach, simulated annealing, a quantum annealer directly, and a hybrid solver on the quantum annealer. The hybrid solver and the simulated annealing algorithm outperform the classical implementation on various occasions but show high sensitivity to small variations in the training data.

13:40
Adiabatic Quantum Feature Selection for Sparse Linear Regression

ABSTRACT. Linear regression is a popular machine learning approach to learn and predict a real-valued output, or dependent variable, from independent variables, or features. In many real-world problems it is beneficial to perform sparse linear regression to identify the important features that help predict the dependent variable. This not only yields interpretable results but also avoids overfitting when the number of features is large and the amount of data is small. The best and most natural way to achieve this is 'best subset selection', which penalizes non-zero model parameters by adding an l0 norm over the parameters to the least squares loss. However, this makes the objective function non-convex and intractable even for a small number of features. This paper aims to address the intractability of sparse linear regression with the l0 norm using adiabatic quantum computing, a quantum computing paradigm that is particularly useful for solving optimization problems faster. We formulate the l0 optimization problem as a quadratic unconstrained binary optimization (QUBO) problem and solve it using a D-Wave adiabatic quantum computer. We study and compare the quality of the QUBO solution on synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed adiabatic quantum computing approach in finding the optimal solution.
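The QUBO construction can be illustrated in the simplest case where each model parameter is restricted to {0, 1}, so a single binary variable directly indicates whether a feature is selected. Real-valued parameters would need a multi-bit encoding, and the paper sends the QUBO to a D-Wave sampler; here it is solved by brute force for clarity:

```python
from itertools import product

def build_qubo(X, y, lam):
    """QUBO for  min_s ||y - X s||^2 + lam * sum(s),  s in {0,1}^n.
    X is given column-wise: X[i] is the i-th feature column.
    Expanding the square gives Q[i][i] = x_i.x_i - 2*x_i.y + lam
    and Q[i][j] = 2*x_i.x_j for i < j (constant ||y||^2 dropped)."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    n = len(X)
    Q = {}
    for i in range(n):
        Q[(i, i)] = dot(X[i], X[i]) - 2 * dot(X[i], y) + lam
        for j in range(i + 1, n):
            Q[(i, j)] = 2 * dot(X[i], X[j])
    return Q

def solve_brute_force(Q, n):
    """Enumerate all binary vectors; a quantum annealer replaces this step."""
    def energy(s):
        return sum(c * s[i] * s[j] for (i, j), c in Q.items())
    return min(product((0, 1), repeat=n), key=energy)

# toy data: y is exactly feature 0 plus feature 2; feature 1 is irrelevant
X = [[1, 0, 1, 0], [0, 1, 1, 1], [1, 1, 0, 1]]
y = [2, 1, 1, 1]
Q = build_qubo(X, y, lam=0.5)
print(solve_brute_force(Q, n=3))  # → (1, 0, 1)
```

The l0 penalty `lam` appears only on the diagonal, which is what makes the otherwise intractable subset-selection term QUBO-compatible.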

14:00
EntDetector: entanglement detecting toolbox for bipartite quantum states

ABSTRACT. Quantum entanglement is an extremely important phenomenon in the field of quantum computing. It is the basis of many communication protocols, cryptography schemes and other quantum algorithms. At the same time, however, entanglement detection is still an unresolved problem. In this article, we present a computational toolbox which offers a set of currently known methods for detecting entanglement, as well as proposals for new tools operating on bipartite quantum systems. We propose to use the concept of combined Schmidt and spectral decomposition, as well as the concept of Gramian operators, to examine the structure of the analysed quantum states. The presented toolbox is implemented in the Python language. Due to the popularity and ease of use of Python, the proposed set of methods can be directly utilised with other packages devoted to quantum computing simulations. Our toolbox can also be easily extended.

14:20
On Decision Support for Quantum Application Developers: Categorization, Comparison, and Analysis of Existing Technologies

ABSTRACT. Quantum computers have advanced significantly in recent years. Offered as cloud services, they have become accessible to a broad range of users. Along with the physical advances, the landscape of technologies supporting quantum application development has also grown rapidly. However, there is a variety of tools, services, and techniques available for the development of quantum applications, and which ones are best suited for a particular use case depends, among other things, on (i) the quantum algorithm and (ii) the quantum hardware. Thus, their selection is a manual and cumbersome process. To tackle this challenge, we introduce (i) a categorization and (ii) a taxonomy of available tools, services, and techniques for quantum application development to enable their analysis and comparison. Based on these, we further present (iii) a comparison framework to support quantum application developers in their decision for certain technologies.

13:00-14:40 Session 9G: MMS 2
13:00
Pathology dynamics in healthy-toxic protein interaction and the multiscale analysis of neurodegenerative diseases

ABSTRACT. Neurodegenerative diseases are frequently associated with the aggregation and propagation of toxic proteins. In particular, it is well known that, along with amyloid-beta, the tau protein also drives Alzheimer’s disease. Multiscale reaction-diffusion models can assist in better understanding the evolution of the disease. We have modified the heterodimer model in such a way that it can now capture some of the critical characteristics of this evolution, such as the conversion time from healthy to toxic proteins. We have analyzed the modified model theoretically and validated the theoretical findings with numerical simulations.
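The space-free kinetics of the classical heterodimer model (before the authors' modifications, and with illustrative rate constants) can be sketched with a simple Euler integration: a healthy protein p is produced and cleared, and a toxic protein q converts healthy protein on contact.

```python
def heterodimer(k0=1.0, k1=0.5, k2=0.4, k12=0.6, q0=0.01, dt=0.01, steps=5000):
    """Euler integration of the (space-free) heterodimer kinetics:
        p' = k0 - k1*p - k12*p*q    (healthy protein: production/clearance)
        q' = -k2*q + k12*p*q        (toxic protein: clearance/conversion)
    Starts at the healthy steady state p = k0/k1 with a small toxic seed q0."""
    p, q = k0 / k1, q0
    history = []
    for _ in range(steps):
        dp = k0 - k1 * p - k12 * p * q
        dq = -k2 * q + k12 * p * q
        p, q = p + dt * dp, q + dt * dq
        history.append((p, q))
    return history

hist = heterodimer()
p_final, q_final = hist[-1]
# diseased steady state: p* = k2/k12 = 2/3,  q* = (k0 - k1*p*)/(k12*p*) = 5/3
```

The conversion time highlighted in the abstract corresponds, in this sketch, to how long q takes to grow from the small seed to its diseased steady-state value; the full model adds diffusion over the brain domain.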

13:20
A Semi-implicit Backward Differentiation ADI Method for Solving Monodomain Model

ABSTRACT. In this paper, we present an efficient numerical method for simulating the electrical activity of the heart. We propose an alternating direction implicit (ADI) finite difference method of second order in both space and time. The derivation of the proposed ADI scheme is based on the semi-implicit backward differentiation formula (SBDF). Numerical simulations showing the computational advantages of the proposed algorithm in terms of computational time and memory consumption are presented.
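The general ADI idea, splitting one 2D implicit solve into two sequences of cheap tridiagonal solves, can be sketched for the plain diffusion part of such a problem. This is a Peaceman-Rachford step with zero Dirichlet boundaries, not the paper's SBDF-based scheme for the monodomain model, which additionally couples the ionic ODEs:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = c[:], d[:]
    cp[0] /= b[0]
    dp[0] /= b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):
        x[i] -= cp[i] * x[i + 1]
    return x

def adi_heat_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = D*(u_xx + u_yy);
    r = D*dt / (2*h^2); zero Dirichlet boundary ring assumed."""
    ny, nx = len(u), len(u[0])
    half = [row[:] for row in u]
    # sweep 1: implicit in x, explicit in y (one tridiagonal solve per row)
    for j in range(1, ny - 1):
        n = nx - 2
        d = [u[j][i] + r * (u[j - 1][i] - 2 * u[j][i] + u[j + 1][i])
             for i in range(1, nx - 1)]
        sol = thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, d)
        for i in range(1, nx - 1):
            half[j][i] = sol[i - 1]
    out = [row[:] for row in half]
    # sweep 2: implicit in y, explicit in x (one solve per column)
    for i in range(1, nx - 1):
        n = ny - 2
        d = [half[j][i] + r * (half[j][i - 1] - 2 * half[j][i] + half[j][i + 1])
             for j in range(1, ny - 1)]
        sol = thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, d)
        for j in range(1, ny - 1):
            out[j][i] = sol[j - 1]
    return out

# demo: diffuse a hot spot on a 9x9 grid
grid = [[0.0] * 9 for _ in range(9)]
grid[4][4] = 1.0
for _ in range(5):
    grid = adi_heat_step(grid, r=0.25)
```

Each sweep costs O(N) per line instead of one O(N^2)-coupled 2D solve, which is the source of the time and memory savings ADI schemes offer.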

13:40
A Deep Learning Approach for Polycrystalline Microstructure-Statistical Property Prediction

ABSTRACT. Upscaling of the mechanical properties of polycrystalline aggregates might require complex and time-consuming procedures, if adopted to help in the design and reliability analysis of micro-devices. In inertial micro electro-mechanical systems (MEMS), the movable parts are often made of polycrystalline silicon films and, due to the current trend towards further miniaturization, their mechanical properties must be characterized not only in terms of average values but also in terms of their scattering. In this work, we propose two convolutional network models based on the ResNet and DenseNet architectures, to learn the features of the microstructural morphology and allow automatic upscaling of the statistical properties of the said film properties. Results are shown for film samples featuring different values of a length scale ratio, so as to assess the accuracy and computational efficiency of the proposed approach.

14:00
MsFEM upscaling for coupled thermo-mechanical problem

ABSTRACT. In this paper, we present a framework for the multiscale thermoelastic analysis of composites. Asphalt concrete (AC) was selected to demonstrate the applicability of the proposed approach, due to the observed high dependence of this material's performance on thermal effects. The insight into the microscale behavior is upscaled to the macroresolution by the multiscale finite element method (MsFEM), which has not been used so far for coupled problems. In the paper, we present a brief description of this approach together with its new application to coupled thermoelastic numerical modeling. The upscaled results are compared with the reference ones and an error analysis is presented. A very good agreement between the two solutions was obtained. Simultaneously, a large reduction in the number of degrees of freedom can be observed for the MsFEM solution: the number of degrees of freedom was reduced by three orders of magnitude while introducing only about 6% additional approximation error. We also present the convergence of the method with increasing approximation order at the macroresolution. Finally, we demonstrate the impact of the thermal effects on the displacements in the analyzed asphalt concrete sample.

14:20
MaMiCo: Non-Local Means Filtering with Flexible Data-Flow for Coupling MD and CFD

ABSTRACT. When a molecular dynamics (MD) simulation and a computational fluid dynamics (CFD) solver are coupled together to create a multiscale, molecular-continuum flow simulation, thermal noise fluctuations from the particle system can be a critical issue. Noise filters are one option to significantly reduce these fluctuations.

We present a modified variant of the Non-Local Means (NLM) algorithm for MD data. NLM was originally developed for image processing; we extend it to a space-time formulation and discuss its implementation details.

The space-time NLM algorithm is incorporated into the Macro-Micro-Coupling tool (MaMiCo), a C++ molecular-continuum coupling framework, together with a novel flexible filtering subsystem. The latter can be used to configure and efficiently execute arbitrary data-flow chains of simulation data analytics modules or noise filters at runtime on an HPC system, even including Python functions. We employ a coupling to a GPU-based Lattice Boltzmann solver running a vortex street scenario to show the benefits of our approach. Our results demonstrate that NLM has an excellent signal-to-noise ratio gain and is a superior method for extraction of macroscopic flow information from noisy fluctuating particle ensemble data.
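The core of NLM, replacing each sample by a weighted average of samples whose surrounding patches look similar (rather than averaging only spatial neighbours), can be illustrated in one dimension. This is a sketch of the principle, not MaMiCo's space-time implementation; patch size, search radius and the filtering parameter `h` are illustrative:

```python
import math

def nlm_1d(signal, patch=2, search=10, h=0.5):
    """Non-Local Means for a 1D signal: each sample is replaced by a
    weighted mean of samples with similar surrounding patches."""
    n = len(signal)
    def get_patch(i):
        # patch of radius `patch` around i, clamped at the boundaries
        return [signal[max(0, min(n - 1, i + k))] for k in range(-patch, patch + 1)]
    out = []
    for i in range(n):
        pi = get_patch(i)
        wsum, acc = 0.0, 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            pj = get_patch(j)
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj)) / len(pi)
            w = math.exp(-d2 / (h * h))  # similar patches get weight near 1
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out

# noisy step signal: NLM averages within each plateau but barely across
# the step, so the edge is preserved while the noise is suppressed
noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.95, 1.1, 1.05]
print(nlm_1d(noisy))
```

The space-time extension described in the paper additionally lets patches span previous coupling time steps, so similar flow states from earlier times contribute to the average as well.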

13:00-14:40 Session 9H: IoTSS 3
13:00
Object-Oriented Internet - Cloud Interoperability

ABSTRACT. Optimization of industrial processes requires further research on the integration of machine-centric systems with human-centric cloud-based services in the context of the new emerging disciplines of Industry 4.0 and the Industrial Internet of Things. This research aims at working out a new generic architecture and deployment scenario applicable to that integration. A reactive interoperability relationship between the interconnected nodes is proposed to deal with network traffic propagation asymmetry and assets' mobility. The described solution, based on the OPC Unified Architecture international standard, addresses issues related to the real-time multi-vendor environment. The discussion of the generic architecture concludes that an embedded gateway software part best suits all requirements. To promote separation of concerns and reusability, the proposed architecture of the embedded gateway has been implemented as a composable part of the selected OPC UA PubSub framework.

The proposals are backed by proof-of-concept reference implementations confirming the possibility of integrating selected cloud services with a cyber-physical system interconnected as one whole atop OPC UA by applying the proposed architecture and deployment scenario. This is in contrast to interconnecting cloud services with a selected OPC UA server, which limits the PubSub role to data export only.

13:20
Static and Dynamic Comparison of Pozyx and DecaWave UWB Indoor Localization Systems with Possible Improvements

ABSTRACT. This paper investigates the static and dynamic localization accuracy of two indoor localization systems using Ultra-wideband (UWB) technology: Pozyx and DecaWave DW1000. We present the results of laboratory research, which demonstrates how those two UWB systems behave in practice. Our research involves static and dynamic tests. The static test was performed in the laboratory using different relative positions of the anchors and the tag. For the dynamic test, we used a robot that followed an EvAAL-based track located between the anchors. Our research revealed that both systems perform below our expectations, and the accuracy of both systems is worse than declared by the manufacturers. Therefore, we propose a set of filters that allow the localization accuracy to be improved.

13:40
Challenges associated with sensors and data fusion for AGV-driven smart manufacturing

ABSTRACT. Data fusion methods enable the precision of measurements to be increased by combining information from individual systems as well as from many different subsystems. In addition, the data obtained in this way enable further conclusions to be drawn, e.g., detecting degradation in the operation of subsystems. The article focuses on the possibilities of using data fusion in Autonomous Guided Vehicle solutions for improving precise positioning, navigation, and cooperation with the production environment, including docking. For this purpose, it is proposed that information from other manufacturing subsystems be used. This paper aims to review the current implementation possibilities and to identify the relationships between various research sub-areas.

14:00
Dynamic pricing and discounts by means of interactive presentation systems in stationary point of sales

ABSTRACT. The main purpose of this article is to model and simulate the profitability conditions of an interactive presentation system (IPS) with a recommender system (RS) used in a kiosk. 90 million simulations have been run in Python with SymPy, resulting in the following findings: 1) the outcome does not depend on the share of customers using the system; 2) the important parameters are the customers' initial purchase intention and the discount level; 3) the system can increase the number of customers by a factor of three.
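A stripped-down version of such a profitability simulation might look as follows. The two-parameter purchase model (an IPS uplift to purchase probability, paid for by a discount on IPS-driven sales) is a hypothetical illustration, not the authors' SymPy formulation:

```python
import random

def simulate(n_customers, base_intent, uplift, discount, margin=1.0, seed=1):
    """Toy IPS profitability model: the IPS raises a customer's purchase
    probability by `uplift`, but each IPS-driven sale carries `discount`.
    Returns (profit_without_ips, profit_with_ips); common random numbers
    are used so both scenarios see the same customers."""
    rng = random.Random(seed)
    without = with_ips = 0.0
    for _ in range(n_customers):
        u = rng.random()
        if u < base_intent:
            without += margin
        if u < min(1.0, base_intent + uplift):
            with_ips += margin * (1 - discount)
    return without, with_ips

w, wi = simulate(100_000, base_intent=0.2, uplift=0.2, discount=0.1)
```

With these illustrative numbers the IPS is profitable: it roughly doubles the number of buyers while giving up only 10% of the margin per sale, mirroring the trade-off between initial purchase intention and discount level highlighted in the abstract.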

14:20
Dataset for anomalies detection in 3D printing

ABSTRACT. Nowadays, the Internet of Things plays a significant role in many domains. In particular, Industry 4.0 makes significant use of concepts like smart sensors and big data analysis. IoT devices are commonly used to monitor industrial machines and detect anomalies in their operation. This paper presents and describes a set of data streams coming from a working 3D printer. Among others, it contains accelerometer data from the printer head, intrusion power and the temperatures of the printer elements. To collect the data, we introduced several printing malfunctions applied to the 3D model. The resulting dataset can therefore be used for anomaly detection research.

13:00-14:40 Session 9I: ACMAIML 3
13:00
Trojan Traffic Detection Based on Meta-learning

ABSTRACT. At present, Trojan traffic detection based on machine learning generally needs a large number of traffic samples as the training set. In a real network environment, in the face of zero-day attacks and Trojan variant techniques, we may only obtain a small number of traffic samples in a short time, which cannot meet the training requirements of such models. To solve this problem, this paper proposes, for the first time, a Trojan traffic detection method using meta-learning, which mainly comprises an embedding part and a relation part. In the embedding part, we design a neural network combining ResNet and BiLSTM to transform the original traffic into feature vectors, and allocate the meta-tasks of each round of training in the C-way K-shot form. In the relation part, we design a relation network, improved by a dynamic routing algorithm, to calculate the relation score between samples and categories in the meta-task. The model can learn the ability to calculate the differences between different types of samples on multiple meta-tasks, and can complete training with a small number of samples and classify quickly according to prior knowledge. With small samples, our method achieves better results in Trojan traffic classification than traditional deep learning methods.

13:20
Grasp the Key: Towards Fast and Accurate Host-based Intrusion Detection in Data Centers

ABSTRACT. With the rapid development of data center facilities and technology, detection speed, in addition to detection accuracy, has become a concern for host-based intrusion detection. In this paper, we propose a DNN model to detect intrusions on hosts with high accuracy. Along with that, a data reduction method based on SHapley Additive exPlanations (SHAP) is incorporated to reduce the execution time of the DNN model. Extensive evaluation on two well-known public datasets in this field shows that our proposed method can achieve high-efficiency intrusion detection while ensuring high precision.

13:40
MGEL: A Robust Malware Encrypted Traffic Detection Method Based on Ensemble Learning with Multi-Grained Features

ABSTRACT. As the use of encryption protocols increases, so does the challenge of identifying malware encrypted traffic. One of the most significant challenges is the robustness of the model in different scenarios. In this paper, we propose an ensemble learning approach based on multi-grained features to address this problem, called MGEL. The MGEL builds diverse base learners using multi-grained features and then identifies malware encrypted traffic in a stacking way. Moreover, we introduce the self-attention mechanism to process sequence features and solve the problem of long-term dependence. We verify the effectiveness of the MGEL on two public datasets and the experimental results show that the MGEL approach outperforms other state-of-the-art methods on four evaluation metrics.

14:00
TS-Bert: Time-series Anomaly Detection via Pre-training Model Bert

ABSTRACT. Anomaly detection in time series is of great importance in data mining research. Current state-of-the-art methods suffer from limited scalability, over-reliance on labels and high false positive rates. To this end, a novel framework, named TS-Bert, is proposed in this paper. TS-Bert is based on the pre-training model Bert and therefore consists of two phases. It learns the behavior features of the time series from massive unlabeled data during the pre-training phase and is fine-tuned on the target dataset during the fine-tuning phase. This pre-training mode makes the model more general. Since the Bert model was not designed for the time series anomaly detection task, we have made some modifications to enable Bert to deal with this problem and improve the detection accuracy of the model. Furthermore, we have removed the model's dependency on labeled data, so that although the original Bert is supervised, TS-Bert is not. Experiments on the public KPI and Yahoo datasets demonstrate that TS-Bert significantly improves the F1 value compared to current state-of-the-art unsupervised learning models.

14:20
Relation order histograms as a network embedding tool

ABSTRACT. In this work, we introduce a novel graph embedding technique called NERO (Network Embedding based on Relation Order histograms). Its performance is assessed using a number of well-known classification problems and a newly introduced benchmark dealing with detailed laminae venation networks. The proposed algorithm achieves results surpassing those attained by other kernel-type methods and comparable with many state-of-the-art GNNs while requiring no GPU support and being able to handle relatively large input data. It is also demonstrated that the produced representation can be easily paired with existing model interpretation techniques to provide an overview of the individual edge and vertex influence on the investigated process.

13:00-14:40 Session 9J: CLDD 5
13:00
Analysis of Semestral Progress in Higher Technical Education with HMM Models

ABSTRACT. Supporting educational processes with Hidden Markov Models (HMMs) has great potential. In this paper, we explore the possibility of identifying students' learning progress with HMMs. Students' grades are used to train the HMMs, in order to find out whether the analysis of the obtained models lets us detect patterns emerging from students' results. We also try to predict students' final results on the basis of their partial grades. A new classification approach for this problem, using properties of HMMs, is proposed.

13:20
Vicinity-based Abstraction: VA-DGCNN Architecture for Noisy 3D Indoor Object Classification

ABSTRACT. One of the outstanding benchmark architectures for point cloud processing with graph-based structures is the Dynamic Graph Convolutional Neural Network (DGCNN). Though it works well for the classification of nearly perfectly described digital models, it leaves much to be desired for real-life cases burdened with noise and 3D scanning shadows. Therefore, we propose a novel, feature-preserving vicinity abstraction (VA) layer for the EdgeConv module. It enriches the global feature vector with the local context provided by the k-NN graph. Unlike in the original DGCNN, rather than processing a point together with its neighbours at once, local information is aggregated before further processing. This approach enables the model to learn accumulated information instead of max-pooling features from the local context at the end of each EdgeConv module. Thanks to this strategy, mean and overall classification accuracy increased by 9.4 pp and 4.4 pp, respectively. Furthermore, because aggregated information is processed rather than the entire vicinity, the new VA-DGCNN model converges significantly faster than the original DGCNN.

13:40
Grid-Based Concise Hash for Solar Images

ABSTRACT. Continuous full-disk observations of the solar chromosphere and corona are provided nowadays by the Solar Dynamics Observatory. Such data are crucial for analysing the Sun-Earth system and life on our planet. Part of the data is an enormous number of high-resolution images. We create a compact grid-based solar image hash to classify or retrieve similar solar images. To compute the hash, we design intermediate hand-crafted features. Then, we use a convolutional autoencoder to encode the descriptors to the form of a concise hash.

14:00
Machine learning algorithms for conversion of CVSS base score from 2.0 to 3.x

ABSTRACT. The Common Vulnerability Scoring System (CVSS) is the industry standard for describing the characteristics of software vulnerabilities and measuring their severity. However, not all publicly known vulnerabilities have a criticality rating in CVSS 3.x, the latest and most advanced version of the standard. This is due to the large time gap between the publication of the CVSS 2.0 and CVSS 3.x standards, the large number of vulnerabilities detected and published in that time, and significant differences in the method of determining vulnerability criticality and assigning vector properties to evaluation components. Consequently, organizations that use CVSS to prioritize vulnerabilities rely on both CVSS versions and have abandoned a full transition to the CVSS 3.x standard. In this paper, the authors introduce machine learning algorithms for performing conversions from CVSS 2.0 to CVSS 3.x scores, which should significantly facilitate the upgrade to the CVSS 3.x standard for all stakeholders. The machine learning algorithms are applied to difficult data derived from the CVSS database. The considered case corresponds to a real-world application with a large potential impact of the research.

14:20
Applicability of Machine Learning to Short-Term Prediction of Changes in the Low Voltage Electricity Distribution Network

ABSTRACT. The low voltage electricity distribution network actively maintains the stability of its key parameters, primarily against the predictable regularity of seasonal changes. This makes long-term coarse prediction practical, but it hampers the accuracy of short-term fine-grained prediction, even though such predictability could further improve the stability of the network. This paper presents the outcome of research to determine whether Machine Learning (ML) methods can improve the accuracy of the prediction of next-second values of three network parameters: voltage, frequency and harmonic distortions. Four ML methods were tested against static prediction methods: XGBoost Regressor, dense neural networks (both one- and two-layer) and LSTM networks. Real data collected from the actual network were used for both training and testing. The challenging nature of this data is due to the network executing corrective measures, thus making parameter values return to their means. This results in a non-normal distribution with a strong long-term memory impact, but with no viable correlation to use for short-term prediction. Still, results indicate improvements of up to 20%, even for non-optimized ML methods, with some scope for further improvements.

14:50-15:40 Session 10: Keynote Lecture 4
14:50
Empirical Bayesian Inference using Joint Sparsity

ABSTRACT. We develop a new empirical Bayesian inference algorithm for solving a linear inverse problem given multiple measurement vectors (MMV) of under-sampled and noisy observable data. Specifically, by exploiting the joint sparsity across the multiple measurements in the sparse domain of the underlying signal or image, we construct a new support informed sparsity promoting prior. While a variety of applications can be modeled using this framework, in this talk we discuss classification and target recognition from synthetic aperture radar (SAR) data which are acquired from neighboring aperture windows. Our numerical experiments demonstrate that using this new prior not only improves accuracy of the recovery, but also reduces the uncertainty in the posterior when compared to standard sparsity producing priors. We also discuss how our method can be used to combine and register different types of data acquisition.

This is joint work with Theresa Scarnati, formerly of the Air Force Research Lab at Wright-Patterson and now at Qualis Corporation in Huntsville, AL, and Jack Zhang, a recent bachelor's degree recipient from Dartmouth College, now enrolled in the University of Minnesota's PhD program in mathematics.

15:40-16:10Coffee Break
16:10-17:50 Session 11A: MT 11
16:10
Improved Lower Bounds for the Cyclic Bandwidth Problem

ABSTRACT. We study the classical Cyclic Bandwidth problem, an optimization problem which takes as input an undirected graph G=(V,E) with |V|=n, and asks for a labeling φ of V in which every vertex v takes a unique value φ(v)∈[1;n], in such a way that Bc(G,φ) = max_{uv∈E(G)} min{|φ(u)−φ(v)|, n−|φ(u)−φ(v)|}, called the cyclic bandwidth of G, is minimized. In this paper, we provide three new and improved lower bounds for the Cyclic Bandwidth problem, applicable to any graph G: two are based on the neighborhood vertex density of G, the other one on the length of a longest cycle in a cycle basis of G. We also show that our results improve the best known lower bounds for a large proportion of a set of instances taken from a frequently used benchmark for the problem, namely the Harwell-Boeing sparse matrix collection. Our third proof provides additional elements: first, an improved sufficient condition yielding Bc(G)=B(G) (where B(G) = min_φ max_{uv∈E(G)} |φ(u)−φ(v)| denotes the bandwidth of G); second, an algorithm that, under some conditions (including B(G)=Bc(G)), computes a labeling reaching B(G) from a labeling reaching Bc(G).
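
The cyclic bandwidth of a given labeling can be computed directly from its definition; the following minimal Python sketch does so for a 5-cycle labelled in order around the cycle (the graph and labeling are illustrative, not benchmark instances):

```python
def cyclic_bandwidth(n, edges, phi):
    """Bc(G, phi) = max over edges uv of min(|phi(u)-phi(v)|, n - |phi(u)-phi(v)|)."""
    def cyc_dist(u, v):
        d = abs(phi[u] - phi[v])
        return min(d, n - d)
    return max(cyc_dist(u, v) for u, v in edges)

# Cycle C5 labelled consecutively: every edge has cyclic distance 1,
# including the "wrap-around" edge (4, 0) with labels 5 and 1.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
phi = {v: v + 1 for v in range(n)}      # labels 1..n
print(cyclic_bandwidth(n, edges, phi))  # -> 1
```

Note how the n − |φ(u)−φ(v)| term makes the labels 1 and n adjacent, which is exactly what distinguishes the cyclic bandwidth from the ordinary bandwidth B(G).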

16:30
Co-evolution of Knowledge Diffusion and Absorption: A Simulation-Based Analysis

ABSTRACT. The paper utilizes agent-based simulations to study the diffusion and absorption of knowledge, establishing a causal relation leading from diffusion to absorption. The process of diffusion and absorption of knowledge is governed by the network structure and the dynamics of the recurring influence, conceptualized and modeled as legitimacy, credibility, and strategic complementarity, again with a causal relation between the three, in that order. If not stationary, the agents can also move, following either a random-walk or a profile-based mobility mode. Therefore, the co-evolution of the network structure due to the mobility of the agents and the dynamics of the recurring influence of an ever-changing neighborhood is also modeled. The simulation results reveal that (i) higher thresholds for legitimacy and credibility lead to slower absorption of knowledge, (ii) a higher number of early adopters results in faster absorption, and (iii) scheduled and repeated mobility (the profile-based mode) also results in faster absorption.

16:50
Estimation of Road Lighting Power Efficiency Using Graph-Controlled Spatial Data Interpretation

ABSTRACT. Estimation of the energy requirements of street lighting is a task crucial for both investment planning and efficiency evaluation of retrofit projects. However, this task is time-consuming and infeasible when performed by hand. This paper proposes an approach based on the analysis of publicly available map data. To assure the integrity of this process and automate it, a new type of graph transformations (Spatially Triggered Graph Transformations) is defined. They result in a semantic description of each lighting situation. These descriptions, in turn, are used to estimate the power necessary to fulfil the European lighting standard requirements, using pre-computed configurations stored in a ‘big data’ structure.

17:10
Embedding alignment methods in dynamic networks

ABSTRACT. In recent years, dynamic graph embedding has attracted a lot of attention due to its usefulness in real-world scenarios. In this paper, we consider discrete-time dynamic graph representation learning, where embeddings are computed for each time window and then aggregated to represent the dynamics of the graph. However, embeddings computed independently in consecutive windows suffer from the stochastic nature of representation learning algorithms and are algebraically incomparable (they differ by affine transformations). We underline the need for an embedding alignment process and provide nine alignment techniques, evaluated on real-world datasets in link prediction and graph reconstruction tasks. Our experiments show that embedding alignment improves the performance of downstream tasks by up to 11 pp compared to the unaligned scenario.
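
One standard alignment technique of this kind is orthogonal Procrustes, which removes the rotational ambiguity between two windows' embeddings. The sketch below is an illustration of the general idea (not necessarily one of the paper's nine techniques) and assumes the rows of both embedding matrices correspond to the same nodes:

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal Q minimizing ||X Q - Y||_F (orthogonal Procrustes)
    and return X aligned to the reference embedding Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return X @ (U @ Vt)

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 8))                    # "reference" window embedding
Rot, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal transform
X = Y @ Rot                                     # same embedding, rotated
X_aligned = procrustes_align(X, Y)
print(np.allclose(X_aligned, Y))                # -> True
```

Here X is an exact rotation of Y, so alignment recovers it perfectly; with independently trained embeddings the residual would be nonzero but greatly reduced.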

16:10-17:50 Session 11B: MT 12
16:10
The OpenPME Problem Solving Environment for Numerical Simulations

ABSTRACT. We introduce OpenPME, the Open Particle-Mesh Environment, a problem solving environment that provides a Domain Specific Language (DSL) for numerical simulations in scientific computing. It is built atop a domain metamodel that is general enough to cover the main types of numerical simulations: simulations using particles, meshes, and hybrid combinations of particles and meshes. Using model-to-model transformations, OpenPME generates code against the state-of-the-art C++ parallel computing library OpenFPM. This effectively lowers the programming barrier and enables users to implement scalable simulation codes for high-performance computing (HPC) systems using high-level abstractions. Plenty of recent research has shown that higher-level abstractions and problem solving environments are well suited to alleviate low-level implementation overhead. We demonstrate this for OpenPME and its compiler on three different test cases---particle-based, mesh-based, and hybrid particle-mesh---showing up to a 7-fold reduction on average in the number of lines of code required to implement simulations in the OpenPME DSL versus a direct OpenFPM implementation in C++.

16:30
Building a Prototype for Easy to Use Collaborative Immersive Analytics

ABSTRACT. The increase in the size and complexity of today's datasets creates a need to develop and experiment with novel data visualization methods. One of these innovations is immersive analytics, in which extended reality technologies such as CAVE systems and VR headsets are used to present and study data in virtual worlds. But while the use of immersive analytics dates back to the end of the 20th century, it wasn't until recently that collaboration in these data visualization environments was taken into consideration. One of the problems currently surrounding this field is the lack of easy-to-use cooperative dataviz tools that take advantage of modern, easily attainable HMD VR solutions. This work proposes an accessible collaborative immersive analytics framework that users with little virtual reality background can master and share, regardless of platform. With this in mind, a prototype of a visualization platform was developed in Unity3D that allows users to create their own visualizations and collaborate with other users from around the world. Additional features such as avatars, resizable visualizations and data highlighters were implemented to increase immersion and collaborative thinking. The end result shows promising qualities, as it is platform-versatile, simple to set up and use, and capable of rapidly enabling groups to meet and analyse data in an immersive environment, even from across the world.

16:50
Implementation of Auditable Blockchain Voting System with Hyperledger Fabric

ABSTRACT. An efficient democratic process requires a quick, fair and fraud-free election process. Many electronic voting systems have been developed to fulfil these requirements, but there are still unsolved issues with transparency, privacy and data integrity. The development of the distributed ledger technology called blockchain creates the potential to solve these issues. This technology's rapid advancement has resulted in numerous implementations, one of which is Hyperledger Fabric, a secure enterprise permissioned blockchain platform. In this paper, the implementation of an Auditable Blockchain Voting System in Hyperledger Fabric is presented to showcase how various platform components can be used to facilitate electronic voting and improve the election process.

17:10
Quantum Data Hub: A Collaborative Data and Analysis Platform for Quantum Material Science

ABSTRACT. Quantum materials research is a rapidly growing domain of materials research, seeking novel compounds whose electronic properties are born from the uniquely quantum aspects of their constituent electrons. The data from this rapidly evolving area of quantum materials requires a new community-driven approach for collaboration and sharing the data from the end-to-end quantum material process. This paper describes the quantum material science process in the NSF Quantum Foundry with an overarching example, and introduces the Quantum Data Hub, a platform to amplify the value of the Foundry data through data science and facilitation of: (i) storing and parsing the metadata that exposes programmatic access to the quantum material research lifecycle; (ii) FAIR data search and access interfaces; (iii) collaborative analysis using Jupyter Hub on top of scalable cyberinfrastructure resources; and (iv) web-based workflow management to log the metadata for the material synthesis and experimentation process.

16:10-17:50 Session 11C: CompHealth 3
16:10
Feature Engineering with Process Mining Technique for Patient State Predictions

ABSTRACT. Process mining is an emerging study area combining a data-driven approach with classical model-based process analysis. Process mining techniques are applicable in different domains and may represent standalone tools or solutions integrated within other fields. In this paper, we propose an approach, based on a meta-states concept, for extracting additional features from discovered process models for predictive modelling. We show how a simple assumption about cyclic process behaviours can not only help to structure and interpret the process model but also be used in machine learning tasks. The proposed approach is demonstrated for hypertension control status prognosis within a remote monitoring program.

16:30
Comparative Evaluation of Lung Cancer CT Image Synthesis with Generative Adversarial Networks

ABSTRACT. Generative adversarial networks (GANs) have already found widespread use for the formation of artificial but realistic images of a wide variety of content, including medical imaging. Mostly they are used for expanding and augmenting datasets in order to improve the classification accuracy of neural networks. In this paper, we discuss the problem of evaluating the quality of synthesized CT images of lung cancer, which is characterized by the small size of nodules, using two different GAN architectures, for 2D and 3D dimensions. We select a set of metrics for estimating the quality of the generated images, including a Visual Turing Test and the FID and MRR metrics; we then carry out a problem-oriented modification of the Turing test in order to adapt it both to the actually obtained images and to resource constraints. We compare the constructed GANs using the selected metrics, and we show that such a parameter as the size of the generated image is very important in the development of the GAN architecture. We consider that with this work we have shown for the first time that for small neoplasms, direct scaling of the corresponding solutions used to generate large neoplasms (for example, gliomas) is ineffective. The developed assessment methods have shown that additional techniques, such as MIP and special combinations of metrics, are required to generate small neoplasms. In addition, an important conclusion is that it is very important to use GAN networks not only, as is usually the case, for augmentation and expansion of datasets, but also for direct use in clinical practice by radiologists.

16:50
Deep convolutional neural networks in application to kidney segmentation in the DCE-MR images

ABSTRACT. This paper evaluates three convolutional neural network architectures -- U-Net, SegNet, and Fully Convolutional (FC) DenseNets -- in application to kidney segmentation in dynamic contrast-enhanced magnetic resonance images (DCE-MRI). We found U-Net to outperform the alternative solutions, with a Jaccard coefficient equal to 94% against 93% and 91% for SegNet and FC-DenseNets, respectively. As a next step, we propose to classify renal mask voxels into cortex, medulla, and pelvis based on the temporal characteristics of signal intensity time courses. We evaluate our computational framework on a set of 20 DCE-MRI series by calculating image-derived glomerular filtration rates (GFR) -- an indicator of renal tissue state. We then compare our calculated GFR with the available ground-truth values measured in iohexol clearance tests. The mean bias between the two measurements amounts to -7.4 ml/min/1.73m², which proves the reliability of the designed segmentation pipeline.

17:10
Comparison of Efficiency, Stability and Interpretability of Feature Selection Methods for Multiclassification Task on Medical Tabular Data

ABSTRACT. Feature selection is an important step of a machine learning pipeline. Certain models, usually neural networks, may select features intrinsically, without human interaction or additional algorithms. Others require the help of a researcher or of feature selection algorithms. However, it is quite hard to know beforehand which variables contain the most relevant information and which may make it difficult for a model to learn the correct relations. In that respect, researchers have been developing feature selection algorithms. To understand which methods perform better on tabular medical data, we have conducted a set of experiments measuring accuracy and stability and comparing the interpretation capacities of different feature selection approaches. Moreover, we propose an application of Bayesian inference to the task of feature selection that can provide a more interpretable and robust solution. We believe that high stability and interpretability are as important as classification accuracy, especially in predictive tasks in medicine.

17:30
Side effect alerts generation from EHR in Polish

ABSTRACT. The paper addresses the problem of extending an existing and widely used program for Polish public healthcare with a function for detecting possible occurrences of drug side effects. The task is performed in two steps. First, we extract information that binds names of drugs with side effects and their frequency. In the next step, we look for similar phrases in the list of side-effect phrases. For all words in the phrases, we use the Polish Wordnet to find similar words and check whether the phrases with replaced words exist in the list. For long side-effect phrases, which never occur in patient records, we look for simpler internal side-effect phrases to generate alarms. Finally, we evaluate to what extent this approach increases the efficiency of side-effect alarms.
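
The word-replacement step can be sketched as follows; the synonym map and side-effect list here are hypothetical English stand-ins for the Polish Wordnet lookups and the extracted drug side-effect phrases:

```python
# Toy synonym map standing in for Wordnet similarity lookups (hypothetical data).
synonyms = {
    "headache": {"cephalalgia"},
    "severe": {"strong", "intense"},
}

# Hypothetical list of side-effect phrases extracted from drug descriptions.
known_side_effects = {"strong headache", "nausea", "dizziness"}

def matches_side_effect(phrase):
    """Check a phrase against the side-effect list, trying word-by-word
    synonym replacements when the exact phrase is not listed."""
    words = phrase.split()
    if phrase in known_side_effects:
        return True
    for i, w in enumerate(words):
        for s in synonyms.get(w, ()):
            candidate = " ".join(words[:i] + [s] + words[i + 1:])
            if candidate in known_side_effects:
                return True
    return False

print(matches_side_effect("severe headache"))  # "severe" -> "strong" gives a match
```

A real implementation would of course iterate over Wordnet synsets per word and handle multi-word replacements, but the membership-after-substitution check is the core idea.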

16:10-17:50 Session 11D: QCW 3
16:10
Quantum Asymmetric Encryption Based on Quantum Point Obfuscation

ABSTRACT. Quantum obfuscation means encrypting the functionality of circuits or functions by means of quantum mechanics. Although quantum symmetric encryption schemes for some functions based on quantum obfuscation have been discussed, a quantum public-key scheme based on quantum obfuscation has not yet been proposed. To construct an asymmetric encryption scheme based on the quantum point function, we start with preliminaries on quantum obfuscation, the quantum point function and basic quantum operations. We then implement a single-qubit rotation operation to achieve asymmetric encryption, and process the output quantum state with quantum obfuscation to encrypt the functionality of the quantum point function. Finally, we prove the correctness and security of the scheme. As a first study of asymmetric encryption based on quantum obfuscation, our work will be helpful for the future development of quantum obfuscation theory.
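
As a minimal classical simulation of the single-qubit rotation ingredient (only the algebraic identity R_y(−θ)R_y(θ) = I, not the paper's actual encryption scheme), one can check that a rotation by a secret angle is undone exactly by the inverse rotation:

```python
import math

def ry(theta):
    """Single-qubit rotation R_y(theta) as a real 2x2 matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

secret = 1.234                      # hypothetical secret rotation angle
plain = [1.0, 0.0]                  # the |0> state
cipher = apply(ry(secret), plain)   # rotated ("encrypted") state
recovered = apply(ry(-secret), cipher)
print(recovered)                    # approximately [1.0, 0.0]
```

The point of this sketch is only that rotations are unitary and hence exactly invertible by the holder of the angle; the security of the actual scheme rests on the obfuscation construction, not on this identity.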

16:30
Index calculus method for solving elliptic curve discrete logarithm problem using quantum annealing

ABSTRACT. This paper presents an index calculus method for elliptic curves over prime fields using quantum annealing. The relation-searching step is transformed into a QUBO (Quadratic Unconstrained Binary Optimization) problem, which may be efficiently solved using quantum annealing, for example on a D-Wave computer. Unfortunately, it is hard to estimate the complexity of solving a given QUBO problem using quantum annealing. Using the Leap hybrid sampler on the D-Wave Leap cloud, we could break the ECDLP for an $8$-bit prime field. It is worth noting that the most powerful general-purpose quantum computers nowadays would break the ECDLP for at most a $6$-bit prime using Shor's algorithm. In the presented approach, the Semaev method of constructing the decomposition base is used, where the decomposition base has the form $\mathcal B=\left \{ x: 0 \leq x \leq p^{\frac{1}{m}} \right \}$, with $m$ being a fixed integer.
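
A QUBO instance asks for a binary vector minimizing a quadratic form x^T Q x. For intuition, the toy sketch below solves one exhaustively, which is what the annealer replaces for instances too large to enumerate (the instance is a simple illustrative constraint, not a relation-search encoding):

```python
from itertools import product

def solve_qubo_bruteforce(Q):
    """Minimize x^T Q x over binary vectors x by exhaustive search;
    quantum annealing replaces this step for large instances."""
    n = len(Q)
    best = None
    for x in product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if best is None or e < best[1]:
            best = (x, e)
    return best

# Toy QUBO encoding "exactly one of x0, x1": expanding (x0 + x1 - 1)^2
# over binary variables (x^2 = x) and dropping the constant gives
# diagonal entries -1 and an off-diagonal +2.
Q = [[-1, 2],
     [0, -1]]
print(solve_qubo_bruteforce(Q))  # -> ((0, 1), -1)
```

Penalty terms of exactly this shape are how equality constraints are folded into the unconstrained quadratic objective before handing it to a sampler.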

16:10
Deep Learning for Solar Irradiance Nowcasting: A Comparison of a Recurrent Neural Network and Two Traditional Methods

ABSTRACT. This paper aims to improve the short-term forecasting of clouds to accelerate the usability of solar energy. It compares the Convolutional Gated Recurrent Unit (ConvGRU) model to an optical flow baseline and the Numerical Weather Prediction (NWP) Weather Research and Forecasting (WRF) model. The models are evaluated over 75 days in the summer of 2019 for an area covering the Netherlands, and it is studied under what circumstances the models perform best. The ConvGRU model has previously proved to outperform both extrapolation-based methods and an operational NWP system in the precipitation domain. For our study, the model trains on sequences containing irradiance data from the Meteosat Second Generation Cloud Physical Properties (MSG-CPP) dataset. Additionally, we design an extension to the model, enabling it also to exploit geographical data. The experimental results show that the ConvGRU outperforms the other methods in all weather conditions and improves on the optical flow benchmark by 9% in terms of Mean Absolute Error (MAE). However, the ConvGRU prediction samples demonstrate that the model suffers from a blurry-image problem, which causes cloud structures to smooth out over time. The optical flow model is better at representing cloud fields throughout the forecast. The WRF model performs best on clear days in terms of the Structural Similarity Index Metric (SSIM) but suffers from the short range of the simulation.

16:30
Automatic-differentiated Physics-Informed Echo State Network (API-ESN)

ABSTRACT. We propose the Automatic-differentiated Physics-Informed Echo State Network (API-ESN). The network is constrained by the physical equations through the reservoir's exact time-derivative, which is computed by automatic differentiation. Compared to the original Physics-Informed Echo State Network, the accuracy of the time-derivative is increased by up to seven orders of magnitude. This increased accuracy is key in chaotic dynamical systems, where errors grow exponentially in time. The network is showcased in the reconstruction of unmeasured (hidden) states of a chaotic system. The API-ESN eliminates a source of error, present in existing physics-informed echo state networks, in the computation of the time-derivative. This opens up new possibilities for an accurate reconstruction of chaotic dynamical states.
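
The accuracy gap between an exact time-derivative and a finite-difference one can be illustrated on a one-neuron toy "reservoir" (a hand-derived chain rule stands in for automatic differentiation here; the weights and input signal are illustrative):

```python
import math

# Toy "reservoir" with one neuron: r(t) = tanh(w * u(t)), input u(t) = sin(t).
w = 0.7
u, du = math.sin, math.cos

def r(t):
    return math.tanh(w * u(t))

def dr_exact(t):
    """Exact derivative by the chain rule -- the role automatic
    differentiation plays in the API-ESN."""
    return (1 - math.tanh(w * u(t)) ** 2) * w * du(t)

def dr_fd(t, h=1e-3):
    """First-order finite difference, a less accurate alternative (error O(h))."""
    return (r(t + h) - r(t)) / h

t = 0.3
print(abs(dr_exact(t) - dr_fd(t)))  # finite-difference error, shrinks with h
```

Shrinking h reduces the truncation error until floating-point cancellation takes over, which is precisely the trade-off that exact differentiation avoids.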

16:50
A machine learning method for parameter estimation and sensitivity analysis

ABSTRACT. We discuss the application of supervised machine learning methods, namely decision tree algorithms, to perform parameter space exploration and sensitivity analysis on ordinary differential equation models. Decision trees can provide complex decision boundaries and can help visualize decision rules in an easily digested format that can aid in understanding the predictive structure of a dynamic model and the relationship between input parameters and model output. We study a simplified process for model parameter tuning and sensitivity analysis that can be used in the early stages of model development.

17:10
Auto-Encoded Reservoir Computing for Turbulence Learning

ABSTRACT. We present an Auto-Encoded Reservoir-Computing (AE-RC) approach to learn the dynamics of a 2D turbulent flow. The AE-RC consists of a Convolutional Autoencoder, which discovers an efficient manifold representation of the flow state, and an Echo State Network, which learns the time evolution of the flow in the manifold. The AE-RC is able to both learn the time-accurate dynamics of the turbulent flow and predict its first-order statistical moments. The AE-RC approach opens up new possibilities for the spatio-temporal prediction of turbulent flows with machine learning.

17:30
Real-time probabilistic inversion of DNN-based DeepEM model while accounting for model error

ABSTRACT. Deep Neural Networks (DNNs) are becoming go-to methods for the fast approximation of complex systems that have traditionally been modelled by PDE solvers. For instance, DNNs have shown relatively good approximation of Maxwell's equations, required for modelling deep electromagnetic (DeepEM) logging-while-drilling measurements [Alyaev et al., 2020].

Fast approximations are especially important in fields where real-time inversion is required. In drilling operations, the real-time interpretation of subsurface measurements, bundled with estimation of the relevant subsurface uncertainties, could add significant value by intentionally correcting the well path in real time (known as geosteering). While one can also approximate the inverse operator directly by Deep Learning [Shahriari et al., 2020], recovering the relevant uncertainties is non-trivial, and in many cases these approximations come up short because the inverse problem is ill-posed. Bayesian algorithms could be useful for real-time inversion because of their flexibility to account for non-uniqueness and uncertainties. Among these Bayesian algorithms, iterative ensemble smoothers could be the best choice for real-time inversion due to their relatively low computational cost and the parallel nature of the algorithm [Chen and Oliver, 2012].

While significant efforts are usually made to ensure the accuracy of Deep Learning models, it is widely known that DNNs contain some type of model error in the regions not covered by the training data, which is unknown and training-specific. When Deep Learning models are inverted, the effects of the model errors can be smeared by adjusting the input parameters to match the observations. This results in biased estimates of the input parameters and, as a consequence, might result in a poor-quality geosteering operation.

In this communication, we evaluate the performance of the probabilistic real-time inversion of a Deep Learning model using the example of inverting the DeepEM geosteering measurements with iterative ensemble smoothers. During the inversion we estimate the boundary positions and resistivities of a layer-cake geological model as well as their associated uncertainties. Such joint inversion of geometry and properties is known to produce an ill-posed inverse problem with local minima. In particular, we focus on model error as one of the main challenges associated with the inversion of Deep Learning models.

For this purpose we evaluate two different types of iterative ensemble smoother: the Classical and the Flexible ES-MDA [Rammay et al., 2020]. ES-MDA can take into account the highly non-linear nature of the Deep Learning model and measurement errors; however, it does not account for model errors. Our implementation of the Flexible ES-MDA, on the other hand, takes model error into account during the probabilistic inversion of the DNN model by analysing the change in the residuals during the iterative inversion. We observe that the Flexible ES-MDA has the capability to reduce the effect of model bias by capturing the unknown model errors, thus improving the quality of the estimated input parameters for geosteering. Moreover, we describe a framework for identifying the multi-modality of the real-time inversion of Deep Learning models using vanilla inversion, and possible solutions to alleviate it in real time.

The proposed methodology provides a real-time probabilistic inversion framework for Deep Learning models which accounts for model errors and non-uniqueness. We observe that both iterative ensemble smoothers provide exact estimates when the problem is well posed and has no model errors. When model errors are present, however, the Flexible ES-MDA avoids erroneous convergence to a wrong solution and preserves a wider posterior which covers the true solution and the true data. Furthermore, in extreme cases we can detect problems dominated by local minima by comparing the inversion results between the Classical and the Flexible ES-MDA. These issues can be avoided by using an informed prior or restarting the inversion with different priors.
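
For intuition, here is a scalar sketch of an ES-MDA update loop. It has no model-error handling, so it corresponds to the Classical rather than the Flexible variant; the linear forward model, ensemble size, and all numbers are illustrative stand-ins for the DNN and the DeepEM setup:

```python
import random

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def es_mda(prior, g, d_obs, sigma, n_assim=4, seed=0):
    """Classical ES-MDA sketch for one scalar parameter: n_assim Kalman-like
    updates with observation-error variance inflated by alpha = n_assim."""
    rng = random.Random(seed)
    m = list(prior)
    alpha = n_assim                     # equal inflation factors, sum(1/alpha) = 1
    for _ in range(n_assim):
        d = [g(mj) for mj in m]         # forward-model predictions per member
        k = cov(m, d) / (cov(d, d) + alpha * sigma ** 2)
        m = [mj + k * (d_obs + alpha ** 0.5 * rng.gauss(0, sigma) - dj)
             for mj, dj in zip(m, d)]
    return m

g = lambda m: 2.0 * m                   # stand-in for the DNN forward model
rng = random.Random(42)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]
posterior = es_mda(prior, g, d_obs=3.0, sigma=0.1)
print(sum(posterior) / len(posterior))  # close to 1.5, the solution of 2m = 3
```

With a linear forward model and Gaussian prior this converges to the analytic posterior; the Flexible variant discussed above would additionally adapt the error term using the iteration residuals.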

16:10-17:50 Session 11F: IoTSS 4
16:10
Profile-driven synthetic trajectories generation to enhance smart systems solutions

ABSTRACT. Knowledge of the individual trajectories of citizens' mobility in urban space is critical for the smart city. It makes it possible to understand citizens' behaviours and, in consequence, may provide support for their activeness, which improves quality of life. At the same time, the administrators of the urban ecosystem are able to calibrate the global models of the smart city effectively. Trajectory data from telephone service providers are still difficult to obtain in practice, and one of the considerable obstacles here is legal. We have designed and implemented a trajectory generator for objects located in a selected but arbitrary urban area. The generation process is based on the random selection of pre-defined profiles of tourist activeness, including variable mobility patterns. The profiles are intended to realistically represent varied tourist behaviour. It is possible to generate a practically unlimited number of trajectories; if needed, they may also be directed at certain specific types of behaviour. The large datasets obtained this way may be used for understanding urban behaviour, calibrating urban models, developing recommender systems, as well as anticipating the needs for testing future software for the smart city.

16:30
Augmenting automatic clustering with expert knowledge and explanations

ABSTRACT. Cluster discovery from highly dimensional data is a challenging task that has been studied for years in the fields of data mining and machine learning. Most approaches focus on amortization of the process, resulting in clusters that, once discovered, have to be carefully analyzed to uncover the semantics of their numerical labels. However, it is often the case that explicit, symbolic knowledge about possible clusters is available prior to clustering and can be used to enhance the learning process. More importantly, we demonstrate how a machine learning model can later be used to refine the expert knowledge and extend it with the aid of explainable AI algorithms. We present our framework on a real-life use-case scenario from an industrial installation in an underground mine.

16:50
Renewable energy-aware heuristic algorithms for edge server selection for stream data processing

ABSTRACT. IT systems, including the Internet of Things, are developing rapidly. Data processing takes place not only in the cloud but also at the edge of the network. At the same time, this increases electricity demand and the carbon footprint. One way to limit the environmental impact is to use renewable energy sources, such as photovoltaic panels, to power both cloud and edge datacenters. Unfortunately, because such panels act as distributed energy sources, they disturb the quality of energy in the power grid. One way to overcome these problems is to increase the self-consumption of the electricity produced by the solar panels powering datacenters, and thus decrease the energy transferred to and from the power grid. In the paper, we present heuristic algorithms for selecting edge servers for data stream processing so as to manage renewable energy utilisation.

16:10-17:50 Session 11G: ACMAIML 4
16:10
Desensitization Due to Overstimulation: A Second-Order Adaptive Network Model

ABSTRACT. In this paper, a second-order adaptive network model is presented for the effects of supernormal stimuli. The model describes, via the underlying mechanisms in the brain, how the response to a normal stimulus that occurs in the absence of a supernormal stimulus is lost after a supernormal stimulus occurs and adaptive desensitization to it takes place. Simulated example scenarios were used to evaluate that the model describes the expected dynamics. The correctness of the implemented model with respect to its design was verified by stationary point and equilibrium analysis.

16:30
Modified deep Q-network algorithm applied to the evacuation problem

ABSTRACT. This paper presents a modification of the deep Q-network algorithm that can perform more than one action in each step. We apply the algorithm to the evacuation problem. To this end, we have created a simple grid-based environment that enables the modelling of evacuation. We present the results of preliminary tests.
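As a rough illustration only (the paper's network and environment details are not given in this abstract), the core idea of taking more than one action per step can be sketched as replacing the standard DQN argmax with a top-k selection over the Q-values; all names below are hypothetical:

```python
import numpy as np

def select_actions(q_values, k=2, epsilon=0.1, rng=None):
    """Epsilon-greedy selection of k actions per step (illustrative sketch).

    Instead of the single argmax used in the standard DQN, return the
    indices of the k highest Q-values; with probability epsilon, return
    k distinct random actions instead.
    """
    rng = rng or np.random.default_rng()
    n = len(q_values)
    if rng.random() < epsilon:
        return rng.choice(n, size=k, replace=False)
    # indices of the k largest Q-values, highest first
    return np.argsort(q_values)[-k:][::-1]

q = np.array([0.1, 0.9, 0.4, 0.7])
print(select_actions(q, k=2, epsilon=0.0))  # → [1 3]
```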

16:50
Human-like Storyteller: A Hierarchical Network with Gated Memory for Visual Storytelling

ABSTRACT. Unlike visual captioning, which describes an image concretely, visual storytelling aims at generating an imaginative paragraph with a deep understanding of a given image stream. It is more challenging because it requires inferring contextual relationships among images. Intuitively, humans tend to tell a story around a central idea that is expressed continually as the storytelling proceeds. Therefore, we propose the Human-Like StoryTeller (HLST), a hierarchical neural network with a gated memory module, which imitates the storytelling process of human beings. First, we utilize a hierarchical decoder to integrate context information effectively. Second, we introduce the memory module as the story's central idea to enhance the coherence of the generated stories. A multi-head attention mechanism with a self-adjusting query is employed to initialize the memory module, distilling the salient information from the visual semantic features. Finally, we equip the memory module with a gating mechanism to guide story generation dynamically. During the generation process, information already expressed is erased from memory under the control of the read and write gates. The experimental results indicate that our approach significantly outperforms all state-of-the-art (SOTA) methods.

17:10
Discriminative Bayesian Filtering for the Semi-Supervised Augmentation of Sequential Observation Data

ABSTRACT. We aim to construct a probabilistic classifier to predict a latent, time-dependent boolean label given an observed vector of measurements. Our training data consist of sequences of observations paired with a label for precisely one of the observations in each sequence. As an initial approach, we learn a baseline supervised classifier by training on the labeled observations alone, ignoring the unlabeled observations in each sequence. We then leverage this first classifier and the sequential structure of our data to build a second training set as follows: (1) we apply the first classifier to each unlabeled observation and (2) we filter the resulting estimates to incorporate information from the labeled observations, creating a much larger training set. We describe a Bayesian filtering framework that can be used to perform step (2) and show how a second classifier built using the latter, filtered training set can outperform the initial classifier. At Adobe, our motivating application entails predicting customer segment membership from readily available proprietary features. We administer surveys to collect label data for our subscribers and then generate feature data for these customers at regular intervals around the survey time. While we can train a supervised classifier using paired feature and label data from the survey time alone, the availability of nearby feature data and the relative expense of polling drive this semi-supervised approach. We perform an ablation study comparing both a baseline classifier and a likelihood-based augmentation approach to our proposed method and show that our method best improves the predictive performance of an in-house classifier.
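To give a flavour of the augmentation step, the sketch below stands in for the paper's Bayesian filter with a much cruder device: pseudo-label probabilities from the baseline classifier are pulled toward the one known label, with influence decaying with distance from the labeled time step. All names and the decay scheme are illustrative assumptions, not the authors' method:

```python
import numpy as np

def augment_sequence(probs, labeled_idx, label, decay=0.8):
    """Semi-supervised augmentation sketch (NOT the paper's filter).

    probs       -- baseline classifier's P(label=1) for each observation
    labeled_idx -- index of the single labeled observation in the sequence
    label       -- the known boolean label (0.0 or 1.0)
    decay       -- how quickly trust in the label fades with distance
    """
    probs = np.asarray(probs, dtype=float)
    out = probs.copy()
    for t in range(len(probs)):
        w = decay ** abs(t - labeled_idx)   # trust the label more nearby
        out[t] = w * label + (1 - w) * probs[t]
    return out

# one sequence of four observations, labeled only at index 2
seq = augment_sequence([0.2, 0.4, 0.9, 0.5], labeled_idx=2, label=1.0)
```

At the labeled index the output equals the label exactly; farther away it reverts to the classifier's own estimate.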

17:30
Trend Capturing SAX

ABSTRACT. Time series mining is an important branch of data mining, as time series data is ubiquitous and has many applications in several domains. The main task in time series mining is classification. Time series representation methods play an important role in time series classification and other time series mining tasks. One of the most popular representation methods for time series data is the Symbolic Aggregate approXimation (SAX). The secret behind SAX's popularity is its simplicity and efficiency. SAX has, however, one major drawback: its inability to represent trend information. Several methods have been proposed to enable SAX to capture trend information, but this comes at the expense of complex processing, preprocessing, or post-processing procedures. In this paper we present a new modification of SAX that we call Trend SAX (TSAX), which adds only minimal complexity to SAX but substantially improves its performance in time series classification. This is validated experimentally on 50 datasets. The results show the superior performance of our method, as it gives a smaller classification error than SAX on 39 datasets.
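For readers unfamiliar with the baseline being extended, plain SAX (not the paper's TSAX) can be sketched in a few lines: z-normalize the series, reduce it with Piecewise Aggregate Approximation (PAA), then map each segment mean to a letter using equiprobable Gaussian breakpoints. The breakpoints below are the standard ones for an alphabet of size 4:

```python
import numpy as np

def sax(series, n_segments=4, alphabet="abcd"):
    """Plain SAX sketch; assumes len(series) is divisible by n_segments
    and the series is non-constant (std > 0)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                  # z-normalization
    paa = x.reshape(n_segments, -1).mean(axis=1)  # PAA: segment means
    # breakpoints splitting N(0, 1) into 4 equiprobable regions
    breakpoints = np.array([-0.67, 0.0, 0.67])
    symbols = np.digitize(paa, breakpoints)       # region index 0..3
    return "".join(alphabet[s] for s in symbols)

print(sax([1, 2, 3, 4, 5, 6, 7, 8]))  # → "abcd"
```

TSAX's contribution, per the abstract, is to augment such symbol strings with trend information at low additional cost.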

17:50
MultiEmo: Multilingual, Multilevel, Multidomain Sentiment Analysis Corpus of Consumer Reviews

ABSTRACT. This article presents MultiEmo, a new benchmark data set for the multilingual sentiment analysis task, covering 11 languages. The collection contains consumer reviews from four domains: medicine, hotels, products and university. The original reviews, in Polish, comprised 8,216 documents consisting of 57,466 sentences. The reviews were manually annotated with sentiment at the level of the whole document and at the level of individual sentences (3 annotators per element). We achieved a high Positive Specific Agreement value of 0.91 for texts and 0.88 for sentences. The collection was then translated automatically into English, Chinese, Italian, Japanese, Russian, German, Spanish, French, Dutch and Portuguese. MultiEmo is publicly available under a Creative Commons Attribution 4.0 International Licence. We present the results of an evaluation using the latest cross-lingual deep learning models, such as XLM-RoBERTa, MultiFiT and LASER+BiLSTM. We considered 3 aspects when comparing the quality of the models: multilingualism, multilevel knowledge transfer, and multidomain knowledge transfer ability.

16:10-17:50 Session 11H: SmartSys 1
16:10
Improving UWB Indoor Localization Accuracy Using Sparse Fingerprinting and Transfer Learning

ABSTRACT. Indoor localization systems are becoming more and more popular. Several technologies are being intensively studied for high-precision object localization in such environments. UWB is one of the most promising, as it combines relatively low cost and high localization accuracy, especially compared to Beacon or WiFi. Nevertheless, we noticed that the accuracy of leading UWB systems is far below the values declared in their documentation. To improve it, we propose a transfer learning approach, which combines high localization accuracy with low fingerprinting complexity. We perform very precise fingerprinting in a controlled environment to train the neural network. When the system is deployed at a new location, full fingerprinting is not necessary. We demonstrate that, thanks to transfer learning, high localization accuracy can be maintained when only 7% of the fingerprinting samples from a new location are used to update the neural network, which is very important in practical applications. It is also worth noticing that our approach can easily be extended to other localization technologies.

16:30
Effective Car Collision Detection with Mobile Phone Only

ABSTRACT. Despite fast progress in the automotive industry, the number of deaths in car accidents is constantly growing. One of the most important challenges in this area, besides crash prevention, is immediate and precise notification of rescue services. Automatic crash detection systems go a long way towards improving these notifications, and new cars currently sold in developed countries often come with such systems factory installed. However, the majority of life threatening accidents occur in low-income countries, where these novel and expensive solutions will not become common anytime soon. This paper presents a method for detecting car collisions, which requires a mobile phone only, and therefore can be used in any type of car. The method was developed and evaluated using data from real crash tests. It integrates data series from various sensors using an optimized decision tree. The evaluation results show that it can successfully detect even minor collisions while keeping the number of false positives at an acceptable level.

16:50
Corrosion detection on aircraft fuselage with multi-teacher knowledge distillation

ABSTRACT. Procedures of non-destructive inspection (NDI) are employed by the aerospace industry to reduce operational costs and the risk of catastrophe. The success of deep learning (DL) encourages us to apply autonomous DL models as an aid in non-destructive aircraft inspection. Herein we present tests of employing convolutional neural network (CNN) architectures to detect small spots of corrosion on the fuselage surface and rivets. We use a unique and difficult dataset consisting of $1.3\cdot 10^4$ 320x240 images of various fuselage parts from several aircraft types, brands, and service lives. The images come from the non-invasive DAIS inspection system, which can be treated as an analog image enhancement device. By using a CNN ensemble and the knowledge distillation paradigm we obtained 100\% detection of the images containing the "moderate corrosion" class on the test set. We also demonstrate that the proposed ensemble classifier, i.e., a multi-teacher/single-student knowledge distillation architecture, yields a significant improvement in classification power compared to a baseline single ResNet50 neural network.

17:10
Warm-Start Meta-Ensembles for Forecasting Energy Consumption in Service Buildings

ABSTRACT. Energy Management Systems are devices that normally perform the individual supervision of controllable power loads. With the objective of reducing energy costs, their management decisions result from algorithms that select how the different working periods of equipment should be combined, taking into account the usage of locally generated renewable energy, electricity tariffs, etc., while complying with the restrictions imposed by users and electric circuits. Forecasting energy usage, as described in this paper, is a major asset, as it allows the management to be optimized.

This paper proposes and compares three new meta-methods for forecasting real-valued time series, applied to the case of building energy consumption, namely: a meta-method that uses a single regressor (called Sliding Regressor -- SR), an ensemble of regressors with no memory of previous fittings (called Bagging Sliding Regressor -- BSR), and a warm-start bagging meta-method (called Warm-start Bagging Sliding Regressor -- WsBSR). The novelty of this framework is the combination of meta-methods, warm-start ensembles and time series in a forecasting framework for energy consumption in buildings. Experimental tests on data from a hotel show that the best accuracy is obtained with the second method, though the last one achieves comparable results with lower computational requirements.
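The two ingredients named above, sliding-window regression and warm-start bagging, can be illustrated generically (this is not the authors' SR/BSR/WsBSR code; the data and names are stand-ins) using scikit-learn's `warm_start` mechanism, which reuses already-fitted ensemble members when new data arrives instead of refitting from scratch:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sliding_windows(series, width):
    """Turn a univariate series into (lagged-window, next-value) pairs."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

# Stand-in for a building's consumption signal
series = np.sin(np.linspace(0, 20, 200))
X, y = sliding_windows(series, width=10)

# Warm-start ensemble: fit on the first batch, then grow the
# ensemble on the full data, keeping the previously fitted trees.
model = RandomForestRegressor(n_estimators=10, warm_start=True, random_state=0)
model.fit(X[:100], y[:100])
model.n_estimators += 10          # add trees for the enlarged data set
model.fit(X, y)
pred = model.predict(X[-1:])
```

The appeal of warm-starting, consistent with the abstract's conclusion, is that it trades a little accuracy for a substantially cheaper refit as new observations stream in.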

17:30
Supporting the process of sewer pipes inspection using machine learning on embedded devices

ABSTRACT. We are currently seeing increasing interest in using machine learning and image recognition methods to support routine manual processes in various application domains. In this paper, we present the results of research on supporting the sewer network inspection process with the use of machine learning on embedded devices. We analyze several image recognition algorithms on real-world data, and then we discuss the possibility of running these methods on embedded hardware accelerators.

17:50
Explanation-driven model stacking

ABSTRACT. With advances in artificial intelligence (AI), there is a growing need to provide transparency and accountability in AI systems. These properties can be achieved with eXplainable AI (XAI) methods, developed extensively over the last few years in relation to machine learning (ML) models. However, the practical usage of XAI is nowadays limited, in most cases, to the feature engineering phase of the data mining (DM) process. We argue that explainability, as a property of a system, should be used along with other quality metrics such as accuracy, precision and recall in order to deliver better AI models. In this paper we present a method that allows for weighted ML model stacking and demonstrate its practical use on an illustrative example.
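The mechanics of weighted stacking can be sketched generically (the weighting scheme below is an assumption for illustration; in the paper's setting the weights would additionally reflect explanation-derived quality, not just accuracy):

```python
import numpy as np

def weighted_stack(probas, weights):
    """Blend per-model class-probability predictions with per-model
    weights and return the final class decisions (generic sketch)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the weights
    stacked = np.tensordot(w, np.asarray(probas), axes=1)  # weighted avg
    return stacked.argmax(axis=1)              # class per sample

# two base models, three samples, binary classification
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
print(weighted_stack([p1, p2], weights=[0.7, 0.3]))  # → [0 1 1]
```

Note how the second sample's decision flips to class 1 only because the higher-weighted model's confidence carries more of the blend; replacing accuracy-based weights with explanation-aware ones changes only how `weights` is computed, not the stacking itself.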