ICCS 2021: INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE
PROGRAM FOR FRIDAY, JUNE 18TH

09:00-09:50 Session 12: Keynote Lecture 5
09:00
What do Climate Scientists Use Computer Resources for - The Role of Volcanic Activity in Climate and Global Change

ABSTRACT. Explosive volcanic eruptions are magnificent events that in many ways affect the Earth's natural processes and climate. They cause sporadic perturbations of the planet's energy balance, activating complex climate feedbacks and providing unique opportunities to better quantify those processes. We know that explosive eruptions cause cooling in the atmosphere for a few years, but we have just recently realized that they affect the major climate variability modes and that volcanic signals can be seen in the subsurface ocean for decades. The volcanic forcing of the previous two centuries offsets the ocean heat uptake and diminishes global warming by about 30%. In the future, explosive volcanism could slightly delay the pace of global warming and has to be accounted for in long-term climate predictions. The recent interest in the dynamic, microphysical, chemical, and climate impacts of volcanic eruptions is also excited by the fact that these impacts provide a natural analogue for climate geoengineering schemes involving the deliberate development of an artificial aerosol layer in the lower stratosphere to counteract global warming. In this talk I will discuss these recently discovered volcanic effects and specifically pay attention to how we can learn about the hidden Earth-system mechanisms activated by explosive volcanic eruptions.

09:50-10:20 Coffee Break
10:20-12:00 Session 13A: MT 13
10:20
Hierarchical Analysis of Halo Center in Cosmology

ABSTRACT. Ever-increasing data size raises many challenges for scientific data analysis. Particularly in cosmological N-body simulation, finding the center of dark matter halos suffers heavily from the curse of dimensionality, given that a large halo in a modern simulation may have up to 20 million particles. In this work, we exploit the latent structure embedded in a halo and propose a hierarchical approach to approximate the exact gravitational potential calculation for each particle in order to more efficiently find the halo center. Tests of our method on data from N-body simulations show that in many cases the hierarchical algorithm performs significantly faster than existing methods with desirable accuracy.
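
As a point of reference for the quantity being approximated, a brute-force sketch is given below (a minimal illustration, not the authors' hierarchical algorithm, which approximates exactly this potential at much lower cost):

    import numpy as np

    def potential_minimum(positions, masses, eps=1e-3):
        """Reference O(N^2) computation: softened gravitational potential (up to
        the constant G) of every particle; the halo center is taken as the particle
        with the most negative potential."""
        diff = positions[:, None, :] - positions[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1) + eps ** 2)
        np.fill_diagonal(dist, np.inf)          # exclude self-interaction
        phi = -(masses[None, :] / dist).sum(axis=1)
        return int(np.argmin(phi))

    rng = np.random.default_rng(0)
    pos = rng.normal(size=(500, 3))
    print(potential_minimum(pos, np.ones(500)))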

10:40
Fast Click-Through Rate Estimation using Data Aggregates

ABSTRACT. Click-Through Rate estimation is a crucial prediction task in Real-Time Bidding environments prevalent in display advertising. The estimation provides information on how to trade user visits in various systems. Logistic Regression is a popular choice as the model for this task. Due to the amount, dimensionality and sparsity of data, it is challenging to train and evaluate the model. One of the techniques to reduce the training and evaluation cost is dimensionality reduction. In this work, we present Aggregate Encoding, a technique for dimensionality reduction using data aggregates. Our approach is to build aggregate-based estimators and use them as an ensemble of models weighted by logistic regression. The novelty of our work is the separation of feature values according to the value frequency, to better utilise regularization. For our experiments, we use the iPinYou data set, but this approach is universal and can be applied to other problems requiring dimensionality reduction of sparse categorical data.
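
To illustrate the idea of an aggregate-based estimator replacing a high-cardinality one-hot encoding, a minimal sketch follows (the smoothing scheme and field names are illustrative assumptions, not the paper's Aggregate Encoding):

    from collections import defaultdict

    def aggregate_ctr(rows, feature, alpha=20.0, prior=0.001):
        """Smoothed click-through rate per value of a categorical feature.

        Each high-cardinality categorical value is mapped to one number, which can
        then be fed to logistic regression instead of a sparse one-hot vector.
        """
        clicks = defaultdict(float)
        views = defaultdict(float)
        for row in rows:
            value = row[feature]
            views[value] += 1.0
            clicks[value] += float(row["click"])
        return {v: (clicks[v] + alpha * prior) / (views[v] + alpha) for v in views}

    rows = [{"domain": "news.example", "click": 1},
            {"domain": "news.example", "click": 0},
            {"domain": "shop.example", "click": 0}]
    print(aggregate_ctr(rows, "domain"))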

11:00
A Model for Predicting n-gram Frequency Distribution in Large Corpora

ABSTRACT. The statistical extraction of multiwords (n-grams) from natural language text is challenged by Big Data, where searching and indexing are critical and computationally heavy for large corpora. A low-error prediction of the n-gram frequency distribution can be used to increase the efficiency of those operations. However, most n-gram frequency studies only consider single words and corpora below a few million words. We present results on the theoretical and empirical modeling of n-gram frequency distributions for different n-gram sizes (n>=1), with validation for two languages in a wide range of corpus sizes. Due to the importance of low-frequency n-grams in the extraction of relevant expressions, the model predicts the sizes of groups (W(k, C)) of equal-frequency (k) n-grams (n>=1), especially for the low frequencies, k=1,2,.... We assume a finite language n-gram vocabulary for each n-gram size, and keep the functional form of Zipf's rank-frequency expression. We capture the dependence of the n-gram frequency distribution on the corpus size (C) by an analytical model of the Zipf exponent, and a scaling model of the n-gram group size. The average relative errors of the W(k, C) predictions for low-frequency n-grams are around 4%, for English and French corpora from 62 million to 8.6 billion words. This is compared with a model based on the Poisson and Zipf distributions.
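
For orientation, the functional forms involved are roughly the following (a hedged, idealised illustration; the paper's actual parameterisation of the corpus-size dependence is more detailed):

    f(r; C) \propto r^{-\alpha(C)}, \qquad W(k, C) \sim k^{-\left(1 + 1/\alpha(C)\right)} \quad \text{for small } k,

where r is the frequency rank of an n-gram, \alpha(C) is a corpus-size-dependent Zipf exponent, and W(k, C) is the number of n-grams occurring exactly k times in a corpus of C words.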

11:20
Exploiting Extensive External Information for Event Detection through Semantic Networks Word Representation and Attention Map

ABSTRACT. Event detection is one of the key tasks in constructing knowledge graphs and reasoning graphs, and also a hot and difficult problem in information extraction. Automatic event detection from unstructured natural language text has far-reaching significance for human cognition and intelligent analysis. Event detection is widely used in public opinion assessment and financial analysis. It helps public opinion monitoring and emergency warning. Besides, it is applied to risk prevention and control and to intelligent investment. However, limited by their source and genre, corpora for event detection cannot provide enough information to solve the problems of polysemy, synonym association and lack of information. These problems prevent all triggers from being accurately extracted. To solve these problems, this paper proposes a brand new Event Detection model based on Extensive External Information (EDEEI). Specifically, the model employs an external corpus, a semantic network, part of speech and attention maps to extract complete and accurate triggers. The external corpus and semantic network are used to increase the information in word vectors. At the same time, an attention mechanism is introduced in the model. By generating attention maps based on feature maps, the extracted features are optimized, and the related triggers are extracted more completely and precisely. Experiments on the ACE 2005 dataset show that the model effectively uses external knowledge to extract events, and is significantly superior to state-of-the-art event detection methods.

11:40
A New Consistency Coefficient in the Multi-Criteria Decision Analysis Domain

ABSTRACT. The logical consistency of decision-making matrices is an important topic in the development of every multi-criteria decision analysis (MCDA) method. For instance, many published papers address the consistency of the decision matrix in the Analytic Hierarchy Process (AHP) method, which uses Saaty's seventeen-value scale.

This work proposes a new approach to measuring consistency for a simple three-value scale (binary with a tie). The paper's main contribution is a new consistency coefficient for a decision matrix containing judgments from an expert. We present this consistency coefficient based on an effective MCDA method called the Characteristic Objects METhod (COMET). The new coefficient is explained using the Matrix of Expert Judgment (MEJ), which is the critical step of the COMET method. The proposed coefficient is based on analysing the relationship between judgments from the MEJ matrix and transitivity principles (triad analysis). Four triad classes have been identified and discussed. The proposed coefficient makes it easy to determine logical consistency and, thus, the quality of the expert responses, which is essential in a reliable decision-making process. Results are presented in several short case studies.

10:20-12:00 Session 13B: MT 14
10:20
Scientific Paper Age Prediction

ABSTRACT. In this paper we show how the age of scientific papers can be predicted given a diachronic corpus of papers from a particular domain published over a period of a number of years. We train ordinal regression models for the task of predicting the age of individual sentences by fine-tuning a series of BERT models for binary classification. We aggregate the prediction results on individual sentences into a final result for entire papers. Using two corpora of publications from the International World Wide Web Conference and the Journal of Artificial Societies and Social Simulation, we compare various result aggregation methods and show that this sentence-based approach produces better results than the document-level method from our previous work.
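
A minimal sketch of the ordinal-from-binary reduction and one simple aggregation rule (the threshold set, the 0.5 cut-off and the median rule are illustrative assumptions; the paper compares several aggregation methods):

    import numpy as np

    def ordinal_from_binary(prob_after_threshold):
        """Turn a series of binary 'published after year y_k?' probabilities
        into an ordinal label by counting how many thresholds are passed."""
        return int(np.sum(np.asarray(prob_after_threshold) > 0.5))

    def paper_age_class(sentence_predictions):
        """Aggregate per-sentence ordinal predictions into a document-level one."""
        return int(np.median(sentence_predictions))

    sentences = [ordinal_from_binary(p) for p in ([0.9, 0.8, 0.2], [0.95, 0.6, 0.4])]
    print(paper_age_class(sentences))   # both sentences pass 2 thresholds -> class 2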

10:40
Data Augmentation for Copy-Mechanism in Dialogue State Tracking

ABSTRACT. Traditional dialogue state tracking (DST) approaches need a predefined ontology to provide candidate values for each slot. To handle unseen slot values, the copy-mechanism has been widely used in DST models recently, which copies slot values from user utterance directly. Even though the state-of-the-art approaches have shown a promising performance on several benchmarks, there is still a significant gap between seen slot values (values that occur in both training set and test set) and unseen ones (values that only occur in the test set). In this paper, we aim to find out the factors that influence the generalization capability of the copy-mechanism based DST model. Our key observations include two points: 1) performance on unseen values is positively related to the diversity of slot values in the training set; 2) randomly generated strings can enhance the diversity of slot values as well as real values. Based on these observations, an interactive data augmentation algorithm is proposed to train copy-mechanism models, which augments the input dataset by duplicating user utterances and replacing the real slot values with randomly generated strings. Experimental results on three widely used datasets: WoZ 2.0, DSTC2 and Multi-WoZ demonstrate the effectiveness of our approach.
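
As a rough illustration of the augmentation step described above, a minimal sketch follows (slot names, the replacement policy and string lengths are illustrative assumptions, not taken from the paper):

    import random
    import string

    def random_value(min_len=4, max_len=12):
        """Generate a random string to stand in for an unseen slot value."""
        length = random.randint(min_len, max_len)
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def augment(utterance, state):
        """Duplicate an utterance, replacing each real slot value with a random string.

        `utterance` is the user turn as plain text and `state` maps slot names
        (e.g. 'food', 'area') to the values mentioned in the turn.
        """
        new_state = {}
        new_utterance = utterance
        for slot, value in state.items():
            fake = random_value()
            new_utterance = new_utterance.replace(value, fake)
            new_state[slot] = fake
        return new_utterance, new_state

    # The copy-mechanism model now has to copy values it has never seen in training.
    print(augment("i want cheap thai food in the north",
                  {"food": "thai", "area": "north", "pricerange": "cheap"}))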

11:00
Ensemble Labeling Towards Scientific Information Extraction (ELSIE)

ABSTRACT. Extracting scientific facts from unstructured text is difficult due to challenges specific to the complexity of the scientific named entities and relations to be extracted. This problem is well illustrated through the extraction of polymer names and their properties. Even in the cases where the property is a temperature, identifying the polymer name associated with the temperature may require expertise due to the use of complicated naming conventions and the fact that new polymer names are being "introduced" to the vernacular as polymer science advances. While domain-specific machine learning toolkits exist that address these challenges, perhaps the greatest challenge is the lack of labeled data, which is time-consuming, error-prone and costly to produce, to train these machine learning models. This work repurposes Snorkel, a data programming tool, in a novel approach as a way to identify sentences that contain the relation of interest in order to generate training data, and as a first step towards extracting the entities themselves. By achieving 94% recall and an F1 score of 0.92, compared to human experts who achieve 77% recall and an F1 score of 0.87, we show that our system captures sentences missed by both a state-of-the-art domain-aware natural language processing toolkit and human expert labelers. We also demonstrate the importance of identifying the complex sentences prior to extraction by comparing our application to the natural language processing toolkit.

11:20
Error estimation and correction using the forward CENA method

ABSTRACT. The increasing use of heterogeneous and more energy-efficient computing systems has led to a renewed demand for reduced- or mixed-precision floating-point arithmetic. In light of this, we present the forward CENA method as an efficient roundoff error estimator and corrector. Unlike the previously published CENA method, our forward variant can be easily used in parallel high-performance computing applications. Just like the original variant, its error estimation capabilities can point out code regions where reduced or mixed precision still achieves sufficient accuracy, while the error correction capabilities can increase precision over what is natively supported on a given hardware platform, whenever higher accuracy is needed. CENA methods can also be used to increase the reproducibility of parallel sum reductions.
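
The abstract does not give implementation details; as background, error estimators in the CENA family build on error-free transformations such as TwoSum, sketched below (a minimal illustration, with double precision standing in for the working precision):

    def two_sum(a, b):
        """Error-free transformation: return s = fl(a + b) and the exact roundoff e,
        so that a + b = s + e holds exactly (Knuth's TwoSum)."""
        s = a + b
        t = s - a
        e = (a - (s - t)) + (b - t)
        return s, e

    def sum_with_error_estimate(values):
        """Sum a sequence while accumulating the local rounding errors.

        The accumulated error is both an estimate of the total roundoff and a
        correction term that can be added back to increase effective precision.
        """
        s, err = 0.0, 0.0
        for v in values:
            s, e = two_sum(s, v)
            err += e          # first-order accumulation of local errors
        return s, err

    vals = [1e16, 1.0, -1e16, 1.0]
    s, err = sum_with_error_estimate(vals)
    print(s, err, s + err)   # s equals the naive float sum (1.0); s + err recovers 2.0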

11:40
Monte-Carlo Approach to the Computational Capacities Analysis of the Computing Continuum

ABSTRACT. This article proposes an approach to the problem of computational capacities analysis of the computing continuum via the theoretical framework of equilibrium phase transitions and numerical simulations. We introduce the concept of phase transitions in the computing continuum and show how this phenomenon can be explored in the context of workflow makespan, which we treat as an order parameter. We simulate the behavior of the computational network in the equilibrium regime within the framework of the XY-model defined over a complex agent network with Barabasi-Albert topology. More specifically, we define a Hamiltonian over the complex network topology and sample the resulting spin-orientation distribution with the Metropolis-Hastings technique. The key aspect of the paper is the derivation of the bandwidth matrix as the emergent effect of the ''low-level'' collective spin interaction. This allows us to study the first-order approximation to the makespan of the ''high-level'' system-wide workflow model in the presence of data-flow anisotropy and phase transitions of the bandwidth matrix controlled by means of a ''noise regime'' parameter $\eta$. For this purpose, we have built a simulation engine in Python 3.6. Simulation results confirm the existence of the phase transition, revealing complex transformations in the computational abilities of the agents. A notable feature is that the bandwidth distribution undergoes a critical transition from the single- to the multi-mode case. Our simulations generally open new perspectives for reproducible comparative performance analysis of novel and classic scheduling algorithms.
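
A minimal sketch of the sampling step, assuming the standard XY Hamiltonian H = -J * sum over edges of cos(theta_i - theta_j) on a Barabasi-Albert graph (graph size, temperature and proposal width are illustrative, not the authors' settings):

    import math, random
    import networkx as nx

    def metropolis_xy(G, beta=1.0, J=1.0, sweeps=200, seed=0):
        """Sample spin orientations of an XY model on graph G with Metropolis-Hastings."""
        rng = random.Random(seed)
        theta = {i: rng.uniform(0.0, 2.0 * math.pi) for i in G}

        def local_energy(node, angle):
            return -J * sum(math.cos(angle - theta[j]) for j in G[node])

        for _ in range(sweeps):
            for i in G:
                proposal = theta[i] + rng.gauss(0.0, 0.5)
                dE = local_energy(i, proposal) - local_energy(i, theta[i])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    theta[i] = proposal            # accept the move
        return theta

    G = nx.barabasi_albert_graph(n=200, m=2, seed=1)
    spins = metropolis_xy(G, beta=2.0)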

10:20-12:00 Session 13C: CGIPAI 1
10:20
Factors affecting the sense of scale in immersive, realistic Virtual Reality space

ABSTRACT. In this study, we analyze and identify a proper scale value when presenting real-world space and everyday objects in immersive VR. We verify the impact of using reference points in the form of common objects known to the user, such as windows, doors and furniture, on the sense of scale in VR. We also analyze user behavior (position, rotation, movement, AOI, etc.) in the scale setting task. Finally, we propose optimal scale values for the presentation of single objects, of architectural space with many points of reference, and of large-scale space with few or no points of reference. The experiments were conducted on a group of experts (architects) and common users to verify the translation of real-world object size analysis skills into the same capacity in the virtual world. We also confirm the significance of pre-immersion in VR for the accuracy of the sense of scale.

10:40
Capsule Network versus Convolutional Neural Network in Image Classification Comparative Analysis

ABSTRACT. Many concepts behind Capsule Networks cannot be proved due to the limited research performed so far. In the paper, we compare the CapsNet architecture with the most common implementations of convolutional networks (CNNs) for image classification. We also introduce Convolutional CapsNet - a network that mimics the original CapsNet architecture but remains a pure CNN - and compare it against CapsNet. The networks are tested using popular benchmark image data sets and additional test sets, specifically generated for the task. We show that for a group of data sets, the usage of CapsNet-specific elements influences the network performance. Moreover, we indicate that the choice between Capsule Network and CNN may be highly dependent on the particular data set in image classification.

11:00
State-of-the-art in 3D face reconstruction from a single RGB image

ABSTRACT. Since diverse and complex emotions need to be expressed by different facial deformations and appearances, facial animation has become a serious and ongoing challenge for the computer animation industry. Face reconstruction techniques based on 3D morphable face models and deep learning provide one effective solution to reuse existing databases and create believable animation of new characters from images or videos in seconds, which greatly reduces heavy manual work and time. In this paper, we review the databases and state-of-the-art methods for 3D face reconstruction from a single RGB image. First, we classify 3D reconstruction methods into three categories and review each of them: Shape-from-Shading (SFS) based 3D face reconstruction, 3D Morphable Face Model (3DMM) based 3D face reconstruction, and Deep Learning (DL) based 3D face reconstruction. Next, we introduce existing 2D and 3D facial databases. After that, we review 11 methods of deep learning-based 3D face reconstruction and evaluate four representative ones among them. Finally, we draw conclusions and discuss future research directions.

11:20
Towards understanding time varying triangle meshes

ABSTRACT. Time varying meshes are more popular than ever as a representation of deforming shapes, in particular for their versatility and inherent ability to capture both true and spurious topology changes. In contrast with dynamic meshes, however, they do not capture the temporal correspondence, which (among other problems) leads to very high storage and processing costs. Unfortunately, establishing temporal correspondence of surfaces is difficult, because it is generally not bijective: even when the full visible surface is captured in each frame, some parts of the surface may be missing in some frames due to self-contact. We observe that, in contrast with the inherent absence of bijectivity in surface correspondence, volume correspondence is bijective in a wide class of possible input data. We demonstrate that, using a proper initialization and objective function, it is possible to track the volume, even when considering only a pair of subsequent frames at a time. Currently, the process is rather slow, but the results are promising and may lead to a new level of understanding and new algorithms for processing of time varying meshes, including compression, editing, texturing and others.

11:40
Semantic similarity metric learning for sketch-based 3D shape retrieval

ABSTRACT. Since the development of touch-screen technology has made sketches simple to draw and obtain, sketch-based 3D shape retrieval has received increasing attention in the computer vision and graphics community in recent years. The main challenge is the big domain discrepancy between 2D sketches and 3D shapes. Most existing works tried to simultaneously map sketches and 3D shapes into a joint feature embedding space, which has a low efficiency and high computational cost. In this paper, we propose a novel semantic similarity metric learning method based on a teacher-student strategy for sketch-based 3D shape retrieval. We first extract the pre-learned semantic features of 3D shapes from the teacher network and then use them to guide the feature learning of 2D sketches in the student network. The experimental results show that our method achieves a better retrieval performance.

10:20-12:00 Session 13D: SPU 1
10:20
The Necessity and Difficulty of Navigating Uncertainty to Develop an Individual-Level Computational Model

ABSTRACT. The design of an individual-level computational model requires modelers to deal with uncertainty by making assumptions on causal mechanisms (when they are insufficiently characterized in a problem domain) or feature values (when available data does not cover all features that need to be initialized in the model). The simplifications and judgments that modelers make to construct a model are not commonly reported or rely on evasive justifications such as `for the sake of simplicity', which adds another layer of uncertainty. In this paper, we present the first framework to transparently and systematically investigate which factors should be included in a model, where assumptions will be needed, and what level of uncertainty will be produced. We demonstrate that it is computationally prohibitive (i.e. NP-Hard) to create a model that supports a set of interventions while minimizing uncertainty. Since heuristics are necessary, we formally specify and evaluate two common strategies that emphasize different aspects of a model, such as building the `simplest' model in number of rules or actively avoiding uncertainty.

10:40
Predicting Soccer Results through Sentiment Analysis: A Graph Theory Approach

ABSTRACT. According to a study conducted by Bloomberg during the 2018 FIFA World Cup, more than four out of 10 people consider themselves soccer fans, making the game the world's most popular sport. Sports are season-based and constantly changing over time; moreover, statistics vary according to the sport and league. Understanding sports communities in social networks and identifying fans' expertise is a key indicator for soccer prediction. This research proposes a Machine Learning Model using polarity on a dataset of 3,000 tweets taken during the last game week of the English Premier League season 19/20. The end goal is to achieve a flexible mechanism, which automates the process of gathering the corpus of tweets before a match and classifies its sentiment to find the probability of a winning game by evaluating the network centrality.

11:00
Advantages of interval modification of NURBS curves in modeling uncertain boundary shape in boundary value problems

ABSTRACT. In this paper, the advantages of interval modification of NURBS curves for modeling an uncertainly defined boundary shape in boundary value problems are presented. Different interval techniques for modeling the uncertainty of linear as well as curvilinear shapes are considered. The uncertainty of the boundary shape is defined using interval coordinates of control points. The knots and weights in the proposed interval modification of NURBS curves are defined exactly. Such a definition improves modification of the uncertainly defined shape without any change of the interval points. The interval NURBS curves are compared with other interval techniques. The correctness of modeling the shape uncertainty is confirmed by problem solutions obtained using the interval parametric integral equations system method. Such solutions (obtained using a program implemented by the authors) confirm the advantages of using interval NURBS curves for modeling the boundary shape uncertainty. The shape approximation is improved using fewer interval input data, and the obtained solutions are correct and less overestimated.

11:20
New rank-reversal free approach to handle interval data in MCDA problems

ABSTRACT. In many real-life decision-making problems, decisions have to be based on partially incomplete or uncertain data. Since classical MCDA methods were created to be used with numerical data, they are often unable to process incomplete or uncertain data. There are several ways to handle uncertainty and incompleteness in the data, i.e. interval numbers, fuzzy numbers, and their generalizations. New methods are developed, and classical methods are modified to work with incomplete and uncertain data. In this paper, we propose an extension of the SPOTIS method, which is a new rank-reversal free MCDA method. Our extension allows this method to be applied to decision problems with missing or uncertain data. Moreover, the proposed approach is compared in two case studies with other MCDA methods: COMET and TOPSIS. The obtained rankings are analyzed using rank correlation coefficients.

11:40
Introducing Uncertainty Into Explainable AI Methods

ABSTRACT. Learning from uncertain or incomplete data is one of the major challenges in building artificial intelligence systems. However, the research in this area is more focused on the impact of uncertainty on algorithm performance or robustness, rather than on human understanding of the model and the explainability of the system. In this paper we present our work in the field of knowledge discovery from uncertain data and show its potential usage for the purpose of improving system interpretability by generating Local Uncertain Explanations (LUX) for machine learning models. We present a method that allows the uncertainty of data to be propagated into the explanation model, providing more insight into the certainty of the decision-making process and the certainty of explanations of these decisions. We demonstrate the method on a synthetic, reproducible dataset and compare it to the most popular explanation frameworks.

10:20-12:00 Session 13E: CompHealth 4
10:20
Modelling Hospital Strategies in City-Scale Ambulance Dispatching

ABSTRACT. In many cities all over the world, overcrowding in the emergency department (ED) is a critical issue for patients with urgent medical conditions. Particularly for acute coronary syndrome (ACS) patients in the metropolitan areas of large cities, the average mortality rate is often high due to long waiting times. Typical reasons are the irregular inflow of patients and the limited number of facilities for angiography (a medical imaging technique that requires specialist staff and equipment). An obvious solution is to expand the capacity of emergency departments; however, this has proved costly and not effective. Another solution is for overcrowded emergency departments to divert incoming patients to another, less busy emergency department. However, evidence shows that a social optimum will not be achieved in a decentralised ambulance-dispatching system. Thus, the demand for investigating the behaviours of stakeholders in an ambulance-dispatching system is emerging.

This work aims to identify the dynamic structure of the system and investigate the behaviours of the stakeholders involved (i.e. hospitals or emergency departments, patients, ambulance squads, the emergency medical service (EMS) and city authorities). Practically, we aim to build a generalised computational model that can simulate ambulance dispatching for patients in critical conditions and validate the model against data.

First of all, the research sets up a GT-DES model coupling game theory (GT) and discrete event simulation (DES). The GT-DES model can (1) simulate the behaviours of patients from being fetched by the ambulance to being discharged; (2) predict strategies of the emergency department to "Accept" or "Redirect" incoming patients. To validate the GT-DES model, a sensitivity analysis is conducted. Also, a case on modelling ambulance dispatching for ACS patients is studied based on real-world data and expertise provided by specialists at the Almazov Medical Research Centre.

As a result, even the limited GT-DES model provides better predictions of the strategies for 11 hospitals located in Saint Petersburg. The simulated average mortality for the 11 hospitals shows a higher correlation with the real-world data we collected, with a Pearson correlation of 0.8232 (compared to the "fixed-strategy" scenarios). In conclusion, the GT-DES model is adaptable for simulating patient and hospital behaviours at the city level under assumptions. Besides, the results of the sensitivity analysis show that the decision-making process of hospitals is mainly guided by the level of patient inflow and the capability of medical resources owned by the hospital itself and nearby hospitals. Lastly, the case studied shows the practical significance of simulating ambulance dispatching for specific patients with the GT-DES model, and the model can be extended and applied to other city environments. The complete version of the study was presented in the MSc thesis by the first author.

10:40
des-ist: a simulation framework to streamline event-based in silico trials

ABSTRACT. To popularise in silico trials for development of new medical devices, drugs, or treatment procedures, we present the modelling framework des-ist (Discrete Event Simulation framework for In Silico Trials). This framework supports discrete event-based simulations. Here, events are collected in an acyclic, directed graph, where each node corresponds to a component of the overall in silico trial. A simple API and data layout are proposed to easily couple numerous simulations by means of containerised environments, i.e. Docker and Singularity. An example in silico trial is highlighted studying treatment of acute ischemic stroke, as considered in the INSIST project. The proposed framework enables straightforward coupling of the discrete models, reproducible outcomes by containerisation, and easy parallel execution by GNU Parallel. Furthermore, des-ist supports the user in creating, running, and analysing large numbers of virtual cohorts. In future work, we aim to provide a tight integration with validation, verification and uncertainty quantification analyses, to enable sensitivity analysis of individual components of in silico trials and improve trust in the computational outcome to successfully augment classical medical trials and thereby enable faster development of treatment procedures.

11:00
Data Assimilation in Agent-Based Modeling of Virtual Hospital Indoor Activity for Scheduling and Optimization

ABSTRACT. Data assimilation (DA) in agent-based modeling is a complex task with wide application areas such as forecasting the spread of infection, resource optimization, decision support in real time, and so on. Building a virtual hospital with an agent-based model faces many issues specific to complex scenario modeling. One of the key tasks is the identification of elements of agent-based models with limited or no observations, taking into account the heterogeneity of agents (in real life) and the limitations of data describing the structure of activities. Within our study, we consider several problems related to the development and application of DA techniques to improve the agent-based model. These problems are characterized by a) limited observation data (locality of the observations in time and space, uncertainty in measurements, and imperfection of knowledge to process the data); b) diversity of agents' scenarios, intentions, motives, etc.; c) limited formalization of agent behavior. Our agent-based model for indoor activity in a virtual hospital represents 3 floors of the cardiological diagnostic center (Almazov National Medical Research Centre, Saint Petersburg, Russia) with several roles of agents: staff (physicians, nurses, receptionists) and patients. This model is aimed at query and schedule optimization based on data on agents' locations. We present early results for each problem definition. Our experimental study is based on applying extended Kalman and particle filters to datasets from iBeacon devices (indoor positioning system) and sequences of events for each agent (medical information system). Data from iBeacon devices are sets of signal-level values from each receiver (RSSI) collected in real time. We developed two simulation scenarios: (a) patient routing through rooms in the hospital (from the input checkpoint and dressing room to the endpoint) for query and length-of-stay optimization; (b) resource management (workload on staff, rooms, departments) for the reduction of administrative costs. The main idea of the application is to implement a method that allows the restoration of an individual's path using a limited number of sensors and limited observations. For the task of path identification, we implemented fingerprint-based algorithms for the indoor positioning of agents. These algorithms increase the accuracy of agents' indoor positioning. Using the implemented method, it is possible to recreate the dynamics of staff movement, according to their job descriptions, using data from the indoor positioning system (with restrictions on the location of sensors).
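
A minimal sketch of the fingerprint-based positioning idea mentioned above (k-nearest-neighbour matching of an observed RSSI vector against a reference database; the matching rule and array layout are illustrative assumptions, not the authors' implementation):

    import numpy as np

    def locate(observed_rssi, fingerprints, positions, k=3):
        """Estimate a 2D position from an observed vector of RSSI values.

        `fingerprints` is an (n_points, n_receivers) matrix of reference RSSI
        measurements taken at known `positions` (n_points, 2); the estimate is the
        average position of the k closest fingerprints in signal space.
        """
        distances = np.linalg.norm(fingerprints - observed_rssi, axis=1)
        nearest = np.argsort(distances)[:k]
        return positions[nearest].mean(axis=0)

    fingerprints = np.array([[-60., -70., -80.], [-65., -60., -75.], [-80., -65., -60.]])
    positions = np.array([[0., 0.], [5., 0.], [5., 5.]])
    print(locate(np.array([-62., -68., -79.]), fingerprints, positions, k=2))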

11:20
Identifying Synergistic Interventions to Address COVID-19 Using a Large Scale Agent-Based Model

ABSTRACT. There is a range of public health tools and interventions to address the global pandemic of COVID-19. Although it is essential for public health efforts to comprehensively identify which interventions have the largest impact on preventing new cases, most of the modeling studies that support such decision-making efforts have only considered a very small set of interventions. In addition, previous studies predominantly considered interventions as independent or examined a single scenario in which every possible intervention was applied. Reality has been more nuanced, as a subset of all possible interventions may be in effect for a given time period, in a given place. In this paper, we use cloud-based simulations and a previously published Agent-Based Model of COVID-19 (Covasim) to measure the individual and interacting contribution of interventions on reducing new infections in the US over 6 months. Simulated interventions include face masks, working remotely, stay-at-home orders, testing, contact tracing, and quarantining. Through a factorial design of experiments, we find that mask wearing together with transitioning to remote work/schooling has the largest impact. Having sufficient capacity to immediately and effectively perform contact tracing has a smaller contribution, primarily via interacting effects.
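
A sketch of how a full factorial design enumerates intervention combinations (the factor names and the two-level coding below are illustrative assumptions; the paper's actual factors and their wiring into Covasim are not reproduced here):

    from itertools import product

    # Hypothetical two-level factors: each intervention is either off or on.
    factors = {
        "masks": (0.0, 0.6),          # fraction of transmission reduced
        "remote_work": (0.0, 0.5),    # fraction working/schooling remotely
        "testing": (0.0, 0.1),        # daily test probability for symptomatics
        "tracing": (0.0, 0.8),        # probability a contact is traced
    }

    def full_factorial(factors):
        """Yield one scenario dict per cell of the full factorial design."""
        names = list(factors)
        for levels in product(*(factors[n] for n in names)):
            yield dict(zip(names, levels))

    scenarios = list(full_factorial(factors))
    print(len(scenarios))   # 2^4 = 16 scenarios to simulate and compare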

11:40
Modeling co-circulation of influenza strains in heterogeneous urban populations: the role of herd immunity and uncertainty factors

ABSTRACT. The aim of the current research was to assess the influence of herd immunity levels on the process of co-circulation of influenza strains in urban populations and to establish how the stochastic nature of epidemic processes might alter this influence. For this purpose, we developed a spatially explicit individual-based model of multistrain epidemic dynamics which uses detailed human agent databases as its input. The simulations were performed using a 2010 synthetic population of Saint Petersburg, Russia. According to the simulation results, the largest influenza outbreaks are associated with low immunity levels to the virus strains which caused these outbreaks. At the same time, high immunity levels do not prevent outbreaks, although they might affect the resulting disease prevalence. The results of the study will be used in research on the long-term dynamics of immunity formation to influenza strains in Russian cities.

10:20-12:00 Session 13F: CCI 1
10:20
A Method for Improving Word Representation Using Synonym Information

ABSTRACT. The emergence of word embeddings has created good conditions for natural language processing used in an increasing number of applications related to machine translation and language understanding. Several word-embedding models have been developed and applied, achieving considerably good performance. In addition, several methods for enriching word embeddings have been proposed that handle various kinds of information, such as polysemy, subwords, and temporal and spatial information. However, prior popular vector representations of words ignored the knowledge of synonyms. This is a limitation, particularly for languages with large vocabularies and numerous synonym words. In this paper, we propose an approach to enrich the vector representation of words by considering synonym information based on the extraction and representation of vectors from their context words. Our proposal includes three main steps: First, the context words of the synonym candidates are extracted using a context window to scan the entire corpus; second, these context words are grouped into small clusters using the latent Dirichlet allocation method; and finally, synonyms are extracted and converted into vectors from the synonym candidates based on their context words. In comparison to recent word representation methods, we demonstrate that our proposal achieves considerably good performance on a given task in terms of word similarity.
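
A compact sketch of the first two steps, context-window extraction and LDA clustering of context words (the window size, the toy corpus and the use of scikit-learn's LatentDirichletAllocation are illustrative assumptions, not the authors' implementation):

    from collections import defaultdict
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    def context_windows(tokens, targets, window=2):
        """Collect the context words around every occurrence of each target word."""
        contexts = defaultdict(list)
        for i, tok in enumerate(tokens):
            if tok in targets:
                left = tokens[max(0, i - window):i]
                right = tokens[i + 1:i + 1 + window]
                contexts[tok].extend(left + right)
        return contexts

    tokens = ("the quick fox jumps over the lazy dog while the sly fox "
              "watches the sleepy dog near the fence").split()
    contexts = context_windows(tokens, targets={"fox", "dog"})

    # Group the per-word context bags into a few latent clusters with LDA.
    docs = [" ".join(words) for words in contexts.values()]
    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    print(lda.transform(X))    # topic mixture of each candidate's context bag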

10:40
Fast Approximate String Search for Wikification

ABSTRACT. The paper presents a novel method for fast approximate string search based on neural distance-metric embeddings. Our research is focused primarily on applying the proposed method for entity retrieval in the Wikification process, which is similar to an edit distance-based similarity search over a typical dictionary. The proposed method has been compared with the symmetric delete spelling correction algorithm and proven to be more efficient for longer strings and higher distance values, which is a typical case in the Wikification task.
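
For reference, the exact edit distance that such approximate methods trade against can be computed with the classic dynamic programme below (a baseline illustration only; neither the learned embedding nor the symmetric-delete index is reproduced here):

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance (insert/delete/substitute, cost 1)."""
        if len(a) < len(b):
            a, b = b, a                   # keep the inner row short
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                current.append(min(previous[j] + 1,         # deletion
                                   current[j - 1] + 1,      # insertion
                                   previous[j - 1] + cost)) # substitution
            previous = current
        return previous[-1]

    print(levenshtein("Wikification", "Wikifikation"))   # 1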

11:00
ASH: A New Tool for Automated and Full-Text Search in Systematic Literature Reviews

ABSTRACT. Context: Although there are many tools for performing Systematic Literature Reviews (SLRs), none allows searching for articles using their full text across multiple digital libraries. Goal: This study aimed to show that searching the full text of articles is important for SLRs, and to provide a way to perform such searches in an automated and unified way. Method: The authors created a tool that allows users to download the full text of articles and perform a full-text search. Results: The tool, named ASH, provides a meta-search interface that allows users to obtain much higher search completeness, unifies the search process across all digital libraries, and can overcome the limitations of individual search engines. We use a practical example to identify the potential value of the tool and the limitations of some of the existing digital library search facilities. Conclusions: Our example confirms both that it is important to create such tools and how they can potentially improve the SLR search process. Although the tool does not support all stages of SLR, our example confirms its value for supporting the SLR search process.

11:20
A Voice-based Travel Recommendation System Using Linked Open Data

ABSTRACT. We introduce J.A.N.E. -- a proof-of-concept voice-based travel assistant. It is an attempt to show how to handle increasingly complex user queries against the web while balancing an intuitive user interface with a proper level of knowledge quality. As the use case, the search for travel directions based on user preferences regarding cuisine, art and activities was chosen. The system integrates knowledge from several sources, including Wikidata, LinkedGeoData and OpenWeatherMap. The voice interaction with the user is built on the Amazon Alexa platform. The system architecture description is supplemented by a discussion of the motivation and requirements for such complex assistants.

10:20-12:00 Session 13G: MLDADS 2
10:20
Using machine learning to correct model error in data assimilation and forecast applications

ABSTRACT. Recent developments in machine learning (ML) have demonstrated impressive skills in reproducing complex spatiotemporal processes. However, contrary to data assimilation (DA), the underlying assumption behind ML methods is that the system is fully observed and without noise, which is rarely the case in numerical weather prediction. In order to circumvent this issue, it is possible to embed the ML problem into a DA formalism characterised by a cost function similar to that of the weak-constraint 4D-Var (Bocquet et al., 2019; Bocquet et al., 2020). In practice ML and DA are combined to solve the problem: DA is used to estimate the state of the system while ML is used to estimate the full model.
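
For reference, the weak-constraint 4D-Var cost function alluded to above has the standard form below (notation is generic; the exact way the ML parameters enter the model term is described in the cited papers):

    J(x_{0:K}, p) = \frac{1}{2}(x_0 - x_b)^T B^{-1} (x_0 - x_b)
                  + \frac{1}{2}\sum_{k=0}^{K} (y_k - H_k(x_k))^T R_k^{-1} (y_k - H_k(x_k))
                  + \frac{1}{2}\sum_{k=1}^{K} (x_k - M_k(x_{k-1}, p))^T Q_k^{-1} (x_k - M_k(x_{k-1}, p)),

where x_{0:K} are the states over the assimilation window, p the ML (model or model-error) parameters, H_k the observation operators, M_k the surrogate or corrected model, and B, R_k, Q_k the background, observation and model-error covariances.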

In realistic systems, the model dynamics can be very complex and it may not be possible to reconstruct it from scratch. An alternative could be to learn the model error of an already existing model using the same approach combining DA and ML. In this presentation, we test the feasibility of this method using a quasi-geostrophic (QG) model. After a brief description of the QG model, we introduce a realistic model error to be learnt. We then assess the potential of ML methods to reconstruct this model error, first with perfect (full and noiseless) observations and then with sparse and noisy observations. We show in each case to what extent the trained ML models correct the mid-term forecasts. Finally, we show how the trained ML models can be used in a DA system and to what extent they correct the analysis.

Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models, Nonlin. Processes Geophys., 26, 143–162, 2019

Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization, Foundations of Data Science, 2 (1), 55-80, 2020

Farchi, A., Laloyaux, P., Bonavita, M., and Bocquet, M.: Using machine learning to correct model error in data assimilation and forecast applications, arXiv:2010.12605, submitted.

10:40
From macro to micro and back: Microstates initialization from chaotic aggregate time series

ABSTRACT. Often in real-world systems (e.g., social or economic systems), we possess a sensible understanding of the interaction rules governing their dynamics; however, observations of the system are often sparse, noisy and only available in aggregation. In this paper, we infer the latent microstates that best reproduce an observed time series, where we assume that these observations are sparse, noisy and aggregated from the microstates with a (possibly) nonlinear observation operator. To infer them, we minimise a least-squares cost functional involving the observed and the model-simulated time series by first exploring the attractor of the system and then refining our estimate of the microstates with several gradient-based methods, where we find that Adam descent gives the most accurate results in the least number of iterations. Our method is similar to the 4D-Var used in numerical weather prediction, but here we focus on short, univariate time series with no information about the underlying states of the system. We validate this method for the Lorenz and Mackey-Glass systems by making out-of-sample predictions that outperform their Lyapunov characteristic times. Finally, we analyze the predictive power of our method as a function of the number of observations available, where we find a critical transition for the Mackey-Glass system, after which we can initialise it with arbitrary precision.
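
Schematically, the cost functional being minimised can be written as follows (symbols are generic placeholders; the paper's concrete observation operator and aggregation are problem-dependent):

    \mathcal{J}(x_0) = \sum_{t=1}^{T} \left( y_t - h\big(F^{t}(x_0)\big) \right)^2,

where x_0 is the latent microstate vector, F^{t} denotes t applications of the known micro-level dynamics, h is the (possibly nonlinear) aggregation/observation operator, and y_t is the observed aggregate time series; the minimisation is initialised by exploring the attractor and then refined with gradient-based methods such as Adam.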

11:00
Low-dimensional Decompositions for Nonlinear Finite Impulse Response Modeling

ABSTRACT. This paper proposes a new decomposition technique for the general class of Non-linear Finite Impulse Response (NFIR) systems. Based on the estimates of projection operators, we construct a set of coefficients, sensitive to the separated internal system components with short-term memory, both linear and nonlinear. The proposed technique allows for the internal structure inference in the presence of unknown additive disturbance on the system output and for a class of arbitrary but bounded nonlinear characteristics. The results of numerical experiments, shown and discussed in the paper, indicate applicability of the method for different types of nonlinear characteristics in the system.

11:20
Latent GAN: using a latent space-based GAN for rapid forecasting of CFD models

ABSTRACT. The focus of this study was to simulate realistic fluid flow through machine learning techniques that could be utilised in real-time forecasting of urban air pollution. We propose a novel Latent GAN architecture which combines an AutoEncoder with a Generative Adversarial Network to predict the fluid flow at the timestep following a given input, whilst keeping computational costs low. This architecture was applied to tracer flows and velocity fields around an urban city. We present a pair of AutoEncoders capable of dimensionality reduction of order $O(3)$. Further, we present a pair of Generator models capable of performing real-time forecasting of tracer flows and velocity fields. We demonstrate that the models, as well as the latent spaces generated, learn and retain meaningful physical features of the domain. Despite the domain of this project being that of computational fluid dynamics, the Latent GAN architecture was designed to be generalisable such that it can be applied to other dynamical systems.
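
A minimal sketch of the two components described above, written with PyTorch (layer sizes and the fully connected form are illustrative assumptions; the paper's encoder design and adversarial training loop are not reproduced):

    import torch.nn as nn

    class AutoEncoder(nn.Module):
        """Compress a flattened flow snapshot into a small latent vector and back."""
        def __init__(self, n_features: int, latent_dim: int):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 512), nn.ReLU(),
                                         nn.Linear(512, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                         nn.Linear(512, n_features))

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    class LatentGenerator(nn.Module):
        """Generator of the adversarial pair: maps the latent state at time t to a
        predicted latent state at time t+1, so forecasting stays in the cheap
        low-dimensional latent space."""
        def __init__(self, latent_dim: int):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))

        def forward(self, z_t):
            return self.net(z_t)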

11:40
Intelligent Camera Cloud Operators for Convective Scale Numerical Weather Prediction

ABSTRACT. We present an innovative way of assimilating observations of clouds into the new regional-scale weather forecasting model ICON-D2 (ICOsahedral Nonhydrostatic), which is operated by the German Weather Service (Deutscher Wetterdienst, DWD). A convolutional neural network is trained to detect clouds in pictures. We use photographs taken by cameras pointed towards the sky and extract the cloud information by applying the aforementioned network. The result is a greyscale picture, in which each pixel has a value between 0 and 1 describing the probability of the pixel belonging to a cloud. By averaging over a certain section of the picture, one obtains a value for the cloud cover of that region. To build the forward operator, which maps an ICON model state into the observation space, we construct a three-dimensional grid in space from the camera point of view and interpolate the ICON model variables onto this grid. We model the pixels of the picture as rays originating at the camera location and take the maximum interpolated cloud cover (CLC) along each ray. CLC is a diagnostic variable of an ICON model state describing the probability of cloud coverage within the respective grid box. We perform monitoring experiments to compare the observations and model equivalents over time. The results look promising, with RMSE values below 0.32, and we continue by performing single assimilation steps at first. Further evaluation with assimilation cycles over several hours is the next step.
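
A schematic of the core step of such a forward operator (array shapes, the ray sampling and the pixel sectioning are assumptions made for illustration; the ICON interpolation itself is omitted):

    import numpy as np

    def camera_cloud_cover(clc_along_rays, pixel_section=None):
        """Map interpolated cloud cover onto the camera observation space.

        `clc_along_rays` has shape (n_pixels, n_samples): for every pixel/ray the
        cloud cover (0..1) interpolated at sample points along the ray.  The model
        equivalent of a pixel is the maximum cloud cover along its ray; averaging
        over a section of pixels gives the value compared with the camera-derived
        greyscale picture.
        """
        per_pixel = clc_along_rays.max(axis=1)          # max CLC along each ray
        if pixel_section is not None:
            per_pixel = per_pixel[pixel_section]
        return per_pixel.mean()

    rays = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 50))
    print(camera_cloud_cover(rays))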

12:00-13:00 Lunch
13:00-13:50 Session 14: Keynote Lecture 6
13:00
Implementing Serious Virtual Reality Applications

ABSTRACT. Virtual reality technology enables a new class of computer applications, in which a user is fully immersed in a surrounding synthetic 3D virtual world that can represent either an existing or an imaginary place. Virtual worlds can be interactive and multimodal, providing users with near-reality experiences.

The popularization of virtual reality (VR) and related technologies (XR) has recently been enabled by significant progress in hardware performance, the availability of versatile input-output devices, and the development of advanced software platforms. XR applications have become widespread in entertainment, but are used only to a minimal extent in other "serious" civil application domains, such as education, training, e-commerce, tourism, and cultural heritage.

Several problems restrain the use of XR in everyday applications. The most important is the inherent difficulty of designing and managing non-trivial interactive 3D multimedia content. Not only must the geometry and appearance of particular elements be properly represented, but the temporal, structural, logical, and behavioral composition of virtual scenes and associated scenarios must also be taken into account. Moreover, such virtual environments should be created and managed by domain experts or end-users without having to involve programmers or graphic designers each time. Other challenges include the diversity of XR platforms, the large amounts of data required by XR applications, and difficulties in the implementation of accurate and efficient interaction in a 3D space.

These problems and proposed solutions, together with examples of practical “serious” virtual reality applications, will be discussed in this presentation.

14:00-15:40 Session 15A: DDCS 1
14:00
Addressing Missing Values in a Healthcare Dataset Using an Improved kNN Algorithm

ABSTRACT. Missing values are ubiquitous in many real-world datasets. In scenarios where a dataset is not very large, addressing its missing values by utilizing appropriate data imputation methods is of significant benefit. In this paper, we leveraged and evaluated a new imputation approach called k-Nearest Neighbour with Most Significant Features and incomplete cases (KNNI_MSF) to impute missing values in a healthcare dataset. This algorithm uses k-Nearest Neighbour (kNN) and ReliefF feature selection techniques to address incomplete cases in the dataset. The merit of imputation is measured by comparing the classification performance of data models trained on the dataset with and without imputation. We used a real-world dataset, "very low birth weight infants", to predict the survival outcome of infants with low birth weights. Five different classifiers are used in the experiments. Classifiers built on the complete cases from the original dataset and classifiers built on the imputed dataset are compared based on multiple performance metrics. The comparison clearly shows that classifiers built on the imputed dataset produce much better outcomes. Our kNN-based imputation technique generally performed better than the k-Nearest Neighbour Imputation using Random Forest feature weights (KNNI_R_F) algorithm with respect to balanced accuracy and specificity.
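
The paper's KNNI_MSF couples kNN imputation with ReliefF-selected features; as a baseline point of reference only, a plain kNN imputation can be sketched with scikit-learn's KNNImputer (this is not the authors' algorithm):

    import numpy as np
    from sklearn.impute import KNNImputer

    # Toy feature matrix with missing entries (np.nan), e.g. clinical measurements.
    X = np.array([
        [1.0, 2.0, np.nan],
        [3.0, np.nan, 3.0],
        [np.nan, 6.0, 5.0],
        [8.0, 8.0, 7.0],
    ])

    # Each missing value is replaced by the mean of that feature over the k nearest
    # neighbours (distances are computed over the coordinates present in both rows).
    imputer = KNNImputer(n_neighbors=2, weights="uniform")
    X_imputed = imputer.fit_transform(X)
    print(X_imputed)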

14:20
Improving Wildfire Simulations by Estimation of Wildfire Wind Conditions from Fire Perimeter Measurements

ABSTRACT. This paper shows how a gradient-free optimization method is used to improve the prediction capabilities of wildfire progression by estimating the wind conditions driving a FARSITE wildfire model. To characterize the performance of the perimeter prediction as a function of the wind conditions, an uncertainty weighting is applied to each vertex of the measured fire perimeter and a weighted least squares error is computed between the predicted and measured fire perimeters. In addition, interpolation of the measured fire perimeter and its uncertainty is adopted to match the number of vertices on the predicted and measured fire perimeters. The gradient-free optimization, based on iteratively refined gridding, provides robustness to intermittent erroneous results produced by FARSITE and quickly finds optimal wind conditions by parallelizing the wildfire model calculations. Results on wind condition estimation are illustrated on two historical wildfire events: the 2019 Maria fire that burned south of the community of Santa Paula in the area of Somis, CA, and the 2019 Cave fire that started in the Santa Ynez Mountains of Santa Barbara County.
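
The fit criterion can be summarised as a weighted least-squares error of the form below (notation is ours; in the paper the measured perimeter and its uncertainty are interpolated so that the vertices match one-to-one):

    E(w) = \sum_{i=1}^{N} \frac{1}{\sigma_i^{2}} \left\| p_i(w) - \hat{p}_i \right\|^{2},

where w collects the wind speed and direction passed to FARSITE, p_i(w) and \hat{p}_i are matched vertices of the predicted and measured perimeters, and \sigma_i expresses the measurement uncertainty of vertex i; E is minimised by the gradient-free iterative grid refinement.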

14:40
Scalable Statistical Inference of Photometric Redshift via Data Subsampling

ABSTRACT. Handling big data has largely been a major bottleneck in traditional statistical models. Consequently, when accurate point prediction is the primary target, machine learning models are often preferred over their statistical counterparts for bigger problems. But full probabilistic statistical models often outperform other models in quantifying uncertainties associated with model predictions. We develop a data-driven statistical modeling framework that combines the uncertainties from an ensemble of statistical models learned on smaller subsets of data carefully chosen to account for imbalances in the input space. We demonstrate this method on a photometric redshift estimation problem in cosmology, which seeks to infer a distribution of the redshift—the stretching effect in observing far-away galaxies—given multivariate color information observed for an object in the sky. Our proposed method performs balanced partitioning, graph-based data subsampling across the partitions, and training of an ensemble of Gaussian process models.
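
A compact sketch of the ensemble idea (plain random subsets stand in here for the paper's balanced partitioning and graph-based subsampling; the kernel and the pooling rule are illustrative assumptions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def fit_gp_ensemble(X, y, n_models=5, subset_size=200, seed=0):
        """Train independent GPs on random subsets of the data."""
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.choice(len(X), size=min(subset_size, len(X)), replace=False)
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
            gp.fit(X[idx], y[idx])
            models.append(gp)
        return models

    def predict_ensemble(models, X_new):
        """Pool per-model means and variances (equal-weight mixture, law of total variance)."""
        means, stds = zip(*(m.predict(X_new, return_std=True) for m in models))
        means, stds = np.array(means), np.array(stds)
        mean = means.mean(axis=0)
        var = (stds ** 2 + means ** 2).mean(axis=0) - mean ** 2
        return mean, np.sqrt(var)

    X = np.linspace(0, 10, 500).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=500)
    mean, std = predict_ensemble(fit_gp_ensemble(X, y, subset_size=100), X[:5])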

15:00
Timeseries based deep hybrid transfer learning framework: A case of electrical vehicle energy consumption

ABSTRACT. The scarce availability of electrical vehicle data has limited research efforts on electrical vehicle load prediction. In this study, transfer learning is introduced for electrical vehicle charging demand prediction as a way of circumventing the problem of data inadequacy. Data collected from three distinct charging stations, namely a residential, a fast-commercial and a slow-commercial charging station, has been used for transfer learning, in which data from the fast-commercial station is used to enhance prediction efforts at the slow-commercial and residential charging stations. Two deep hybrid neural network transfer learning methodologies are introduced and compared to the commonly used conventional CNN transfer model. The experiments showed an improvement in prediction performance at the stations with little data as a result of the knowledge transferred from the data-rich station, with greater improvement realised in the hybrid models. Transfer learning is critical for ensuring a stable, economic and safe operation of the future power grid by assisting prediction efforts at charging stations with little data.

14:00-15:40 Session 15B: WTCS 1
14:00
Biophysical Modeling of Excitable Cells - a new Approach to Undergraduate Computational Biology Curriculum Development

ABSTRACT. As part of a broader effort of developing a comprehensive neuroscience curriculum, we implemented an interdisciplinary, one-semester, upper-level course called Biophysical Modeling of Excitable Cells (BMEC). The course exposes undergraduate students to broad areas of computational biology. It focuses on computational neuroscience (CNS), develops scientific literacy, and promotes teamwork between biology-, psychology-, physics-, and mathematics-oriented undergraduate students. This course also provides pedagogical experience for senior Ph.D. students from the Neuroscience Department at the Medical University of South Carolina (MUSC). BMEC is a three-credit/three-contact-hours-per-week lecture-based course that includes a set of computer-based activities designed to gradually increase the undergraduates' ability to apply mathematical and computational concepts to solving biologically relevant problems. The class brings together two groups of students with very dissimilar and complementary backgrounds, i.e., biology or psychology and physics or mathematics oriented. The teamwork allows students with a stronger biology or psychology background to explain the biological implications to physics or mathematics students and to instill realism into the computer modeling project they complete for this class. Simultaneously, the students with a strong physics/mathematics background can apply techniques learned in specialized mathematics/physics/computer science classes to generate mathematical hypotheses and implement them in computer code.

14:20
Non-Majors Biology with Laboratories Using Computer Simulations

ABSTRACT. We have developed a textbook, Biology for the Global Citizen, for a non-majors biology course that includes numerous applications and laboratory exercises using computer simulations. These exercises employ the scientific method, enhance quantitative literacy through interpretation of graphs and basic statistics with spreadsheets, employ data to support conclusions, emphasize critical thinking and analysis, and include questions that can be graded automatically as well as open-ended questions appropriate for classroom discussion. The simulations are written in the widely used agent-based simulation software, NetLogo, which is free, easy to use, and requires no programming experience. A video introduces each laboratory, demonstrating the model and emphasizing important biological concepts. Guided by laboratory exercises, students consider various scenarios, make hypotheses, adjust variables to test the effect of each scenario, run simulations multiple times, generate and interpret data and graphs, make observations, draw conclusions, apply their conclusions to decision making, and gain a deeper understanding of the science that the model simulates. Besides exposing students to the third paradigm of science, computation, use of computer simulations enables students to perform experiments, such as spread of disease, that are too difficult, time-consuming, costly, or dangerous to perform otherwise.

The authors adapted, and added new models to, the numerous models downloaded with NetLogo. Some of the laboratory exercises are as follows: Membrane Formation; Blood Sugar Regulation, including an exercise on type 2 diabetes; Food Insecurity; Spread of Disease; Tumor; DNA Protein Synthesis; Mendelian Genetics; X-Linked Inheritance; Lac Operon; Epigenetics; Antibiotic Resistance; Peppered Moths; Fur Patterns; Enzyme Kinetics; Population Growth; Competition; Community; Carbon Cycle; Climate Change; Smog; and Corrosion.

The authors will continue to strive to keep the materials current with online materials, such as “What’s News in Science.” Consequently, one of the authors wrote a simulation and laboratory on COVID-19 and SARS, which is available on the text’s website.

Starting in 2015, some of the text's materials were used and assessed favorably in a course at Wofford College. During the 2020-2021 academic year, seven classes at six institutions of higher education have class-tested the materials and provided valuable feedback. In the fall, the first edition will be available through Cognella Academic Publishing.

14:40
Increasing the impact of teacher presence in online lectures

ABSTRACT. We present a freely available, easy-to-use system for promoting teacher presence during slide-supported online lectures, meant to aid effective learning and reduce students' sense of isolation. The core idea is to overlay the teacher's body directly onto the slide and move and scale it dynamically according to the currently presented content. Our implementation runs entirely locally in the browser and uses machine learning and chroma keying techniques to segment and project only the instructor's body onto the presentation. Students not only see the face of the teacher but also perceive how the teacher, with his or her gaze and hand gestures, directs their attention to the areas of the slides being discussed.

We include an evaluation of the system, obtained by using it to teach programming courses online to 134 students from 10 different study programs. The feedback gathered on attention benefit, student satisfaction and perceived learning strongly endorses the usefulness and potential of enhanced teacher presence in general, and of our web application in particular.

15:00
Model-based approach to automated provisioning of collaborative educational services

ABSTRACT. The purpose of the presented work was to ease the creation of new educational environments to be used by consortia of educational institutions. The proposed approach allows teachers to take advantage of technological means and shorten the time it takes to create new remote collaboration environments for their students, even if the teachers are not adept at using cloud services. To achieve that, we decided to leverage Model Driven Architecture and provide the teachers with convenient, high-level abstractions by which they are able to easily express their needs. The abstract models are used as inputs to an orchestrator, which takes care of provisioning the described services. We claim that such an approach both reduces the time of virtual laboratory setup and provides for more widespread use of cloud-based technologies in day-to-day teaching. The article discusses both the model-driven approach and the results obtained from implementing a working prototype, customized for IT training, deployed in the Malopolska Educational Cloud testbed.

14:00-15:40 Session 15C: CGIPAI 2
14:00
ScatterPlotAnalyzer: Digitizing Images of Charts Using Tensor-based Computational Model

ABSTRACT. Charts or scientific plots are widely used visualizations for efficient knowledge dissemination from datasets. Nowadays, these charts are predominantly available in image format in print media, the internet, and research publications. There are various scenarios where these images are to be interpreted in the absence of the datasets that were originally used to generate the charts. This leads to a pertinent need for automating data extraction from an available chart image. We narrow our scope to scatter plots and propose a semi-automated algorithm, ScatterPlotAnalyzer, for data extraction from chart images. Our algorithm is designed around the use of second-order tensor fields to model the chart image. ScatterPlotAnalyzer integrates the following tasks in sequence: chart type classification, image annotation, object detection, text detection and recognition, data transformation, text summarization, and optionally, chart redesign. The novelty of our algorithm lies in analyzing both simple and multi-class scatter plots. Our results show that our algorithm can effectively extract data from images of different resolutions. We also discuss specific test cases where ScatterPlotAnalyzer fails.

14:20
EEG-Based Emotion Recognition Using Convolutional Neural Networks

ABSTRACT. In this day and age, Electroencephalography-based methods for Automated Affect Recognition are becoming more and more popular. Owing to the vast amount of information contained in EEG signals, such methods provide satisfying results in terms of Affective Computing. In this paper, we replicate and improve the CNN-based method proposed by Li et al. [10]. We tested our model using the Dataset for Emotion Analysis using EEG, Physiological and Video Signals (DEAP) [9]. Changes made to the data preprocessing and the model architecture led to an increase in accuracy: 74.37% for valence and 73.74% for arousal.

14:40
Improving Deep Object Detection Backbone with Feature Layers

ABSTRACT. Deep neural networks are the frontier in object detection, a key modern computing task. The dominant methods involve two-stage deep networks that heavily rely on features extracted by the backbone in the first stage. In this study, we propose an improved model, ResNeXt101S, to enhance feature quality for layers that might be too deep. It introduces splits in middle layers for feature extraction and a deep feature pyramid network (DFPN) for feature aggregation. This backbone is not much larger than the leading model ResNeXt and is applicable to a range of different image resolutions. The evaluation on customized benchmark datasets using various image resolutions shows that the improvement is effective and consistent. In addition, the study shows that input resolution does impact detection performance. In short, our proposed backbone can achieve better accuracy under different resolutions compared to state-of-the-art models.

15:00
Procedural Level Generation with Difficulty Level Estimation for Puzzle Games

ABSTRACT. This paper presents a complete solution for the procedural creation of new levels, implemented in an existing puzzle video game. It explains the development, going through the adaptation of the puzzle-generation approach to the game's genre and discussing in detail the various difficulty metrics used to calculate the resulting grade. The final part of the research presents the results of grading a set of hand-crafted levels to demonstrate the viability of this method, and then presents the range of scores obtained when grading generated puzzles using different settings. In conclusion, the paper delivers an effective system for assisting a designer with prototyping new puzzles for the game, while leaving room for future performance improvements.

15:20
ELSA: Euler-Lagrange Skeletal Animations - novel and fast motion model applicable to VR/AR devices

ABSTRACT. Euler-Lagrange Skeletal Animation (ELSA) is a novel and fast model for skeletal animation, based on the Euler-Lagrange equations of motion and the notions of configuration and phase space. A single joint's animation is an integral curve in the vector field generated by those equations. Considering the point in phase space belonging to the animation at the current time, by adding the vector pinned to this point, multiplied by the elapsed time, one can designate the new point in the phase space. It defines the state, in particular the position (or rotation) of the joint after this time elapses. Starting at time 0 and repeating this procedure N times yields an approximation, and as N → ∞, the integral curve itself. Applying the above to all joints in the skeletal model constitutes ELSA. The crucial properties of such a representation are, firstly, that a single point in the phase space is sufficient to find the integral curve; hence, given the generated phase space, one can define the whole skeletal animation by its initial pose and, possibly, the final pose. Secondly, the poses in consecutive frames are generated by the above procedure without the need for any further operations. These two properties generate savings in terms of computational complexity and storage demands. There are also other benefits, such as unique recreations of the same animation, and future research opportunities which, for example, could improve the performance of blending two animations.
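
A minimal sketch of the stepping procedure described above, for a single joint driven by a hypothetical torque function; the explicit-Euler choice, the scalar joint state and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def euler_step(theta, omega, torque, dt, inertia=1.0):
    """One explicit Euler step in the (theta, omega) phase space of a single joint.

    theta, omega : current joint angle and angular velocity (the phase-space point)
    torque       : callable giving the generalized force from the equation of motion
    dt           : elapsed time between frames
    """
    # Vector of the field pinned to the current point, scaled by the elapsed time
    dtheta = omega
    domega = torque(theta, omega) / inertia
    return theta + dt * dtheta, omega + dt * domega

def animate(theta0, omega0, torque, dt, n_frames):
    """Approximate the integral curve: start from the initial pose and repeat N times."""
    poses = [theta0]
    theta, omega = theta0, omega0
    for _ in range(n_frames):
        theta, omega = euler_step(theta, omega, torque, dt)
        poses.append(theta)
    return np.array(poses)
```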

14:00-15:40 Session 15D: SPU 2
14:00
Vector and triangular representations of project estimation uncertainty: effect of gender on usability

ABSTRACT. The paper proposes a new visualization, in the form of vectors, of not-fully-known quantitative features. The proposal is put in the context of project defining and planning and the importance of visualization for decision making. The new approach is empirically compared with the already known visualization utilizing membership functions of triangular fuzzy numbers. The designed and conducted experiment was aimed at evaluating the usability of the new approach according to ISO 9241-11. Overall, 76 subjects performed 72 experimental conditions designed to assess the effectiveness of uncertainty conveyance. Efficiency and satisfaction were examined through participants' subjective assessment of appropriate statements. The experiment results show that the proposed visualization may constitute a significant alternative to the known, triangle-based visualization. The paper emphasizes potential advantages of the proposed representation for project management and other areas.

14:20
The use of type-2 fuzzy sets to assess delays in the implementation of the daily operation plan for the operating theatre

ABSTRACT. In the paper we present a critical time analysis of a project in which there is a risk of delay in commencing project activities. We assume that activity times are type-2 fuzzy numbers. When experts estimate the shapes of membership functions of activity times, they take into account both situations when particular activities of the project start on time and situations when they start with a delay. We also suggest a method of analysing the sensitivity of meeting the project deadline to these delays. We present a case study in which the critical time analysis was used to analyse processes implemented in the operating ward of a selected hospital in the South of Poland. Data for the empirical study was collected in the operating theatre of this hospital. This made it possible to identify non-procedural activities at the operating ward that have a significant impact on the duration of the entire operating process. In the hospital selected for the study, implementation of the daily plan of surgeries was at risk every day. The research shows that the expected delay in performing the typical daily plan - two surgeries in one operating room - could be about 1 hour. That may result in significant costs of overtime. Additionally, the consequence may also include extension of the queue of patients waiting for their surgeries. We show that elimination of surgery activity delays allows for execution of the typical daily plan of surgeries within a working day in the studied hospital.

14:40
Linguistic Summaries using Interval-valued Fuzzy Representation of Imprecise Information - an Innovative Tool for Detecting Outliers

ABSTRACT. The practice of textual and numerical information processing often involves the need to analyze and test a database for the presence of items that differ substantially from other records. Such items, referred to as outliers, can be successfully detected using linguistic summaries. In this paper, we extend this approach by the use of non-monotonic quantifiers and interval-valued fuzzy sets. The results obtained by this innovative method confirm its usefulness for outlier detection, which is of major practical relevance for database analysis applications.

15:00
Combining heterogeneous indicators by adopting Adaptive MCDA: dealing with Uncertainty

ABSTRACT. Adaptive MCDA systematically supports the dynamic combination of heterogeneous indicators to assess overall performance. The method is completely generic and is currently adopted to undertake a number of studies in the area of sustainability. The intrinsic heterogeneity characterizing this kind of analysis leads to a number of biases, which need to be properly considered and understood to correctly interpret computational results in context. While on one side the method provides a comprehensive data-driven analysis framework, on the other side it introduces a number of uncertainties that are the object of discussion in this paper. Uncertainty is approached holistically, meaning we address all uncertainty aspects introduced by the computational method to deal with the different biases. As extensively discussed in the paper, by identifying the uncertainty associated with the different phases of the process and by providing metrics to measure it, the interpretation of results can be considered more consistent, transparent and, therefore, reliable.

15:20
Solutions and Challenges in Computing FBSDEs with Large Jumps for Dam and Reservoir System Operation

ABSTRACT. Optimal control of Lévy jump-driven stochastic differential equations plays a central role in the management of resources and the environment. Problems involving large Lévy jumps are challenging due to their mathematical and computational complexities. We focus on numerical control of a real-scale dam and reservoir system from the viewpoint of forward-backward stochastic differential equations (FBSDEs): a new mathematical tool in this research area. The problem itself is simple but unique, and involves key challenges common to stochastic systems driven by large Lévy jumps. We first present an exactly-solvable linear-quadratic problem and numerically analyze the convergence of different numerical schemes. Then, a more realistic problem with a hard constraint on state variables and a more complex objective function is analyzed, demonstrating that the relatively simple schemes perform well.

14:00-15:40 Session 15E: CompHealth 5
14:00
Two-Way Coupling Between 1-D Blood Flow and 3-D Tissue Perfusion Models

ABSTRACT. Accurately predicting brain tissue perfusion and infarct volume after an acute ischaemic stroke requires the two-way coupling of perfusion models on multiple scales. We present a method for such two-way coupling of a one-dimensional arterial blood flow model and a three-dimensional tissue perfusion model. The coupling occurs through the pial surface, where the pressure drop between the models is captured using a coupling resistance. The coupled model is used to simulate arterial blood flow and tissue perfusion during an acute ischaemic stroke. Infarct volume is estimated by setting a threshold on the perfusion change. By coupling these two models, we can capture retrograde flow and its effect on tissue perfusion and infarct volume.

14:20
Applying DCT combined cepstrum for arteriovenous fistula state estimation

ABSTRACT. This paper focuses on a comparison of the effectiveness of artificial intelligence techniques in the diagnosis of arteriovenous fistula condition. The use of the DCT combined cepstrum in the feature extraction process made it possible to increase the classification quality indicators by about 10% compared to the previous approach based on averaged energy values in third-octave bands. The methodology of extracting features from the acoustic signal emitted by the fistula is presented. The supervised machine learning techniques of k-NN, Multilayer Perceptron, RBF Network and C4.5 Decision Tree classifiers were applied to develop the classification model. For this, we used signals obtained from 38 patients on chronic hemodialysis. The results show that the cepstral analysis and the obtained features yield an accuracy above 90% in properly detecting vascular access stenosis.

14:40
Electrocardiogram Quality Assessment with Autoencoder

ABSTRACT. ECG recordings from wearable devices are affected by a relatively high amount of noise due to body motion and the long duration of the examination, which leads to many false alarms in ill-state detection and forces medical staff to spend more time describing each recording. ECG quality assessment is hard due to the impulse character of the signal and its high variability. In this paper we propose a novel approach to this issue: an anomaly detection algorithm based on an Autoencoder. The presented method achieves a normalized F1 score of 92.94% on the test set extracted from the public dataset of the 2011 PhysioNet/Computing in Cardiology Challenge, outperforming a solution based on the best competition participants.
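
As an illustration of the general idea (not the authors' network), an autoencoder for quality assessment can flag windows whose reconstruction error is unusually large; the layer sizes, window length and percentile threshold below are assumptions:

```python
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(window_len=256):
    # Small dense autoencoder over fixed-length ECG windows (illustrative sizes)
    inp = layers.Input(shape=(window_len,))
    z = layers.Dense(64, activation="relu")(inp)
    z = layers.Dense(16, activation="relu")(z)
    out = layers.Dense(64, activation="relu")(z)
    out = layers.Dense(window_len, activation="linear")(out)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def fit_and_flag(clean_windows, test_windows, q=95):
    """Train on good-quality windows, then flag windows with high reconstruction error."""
    ae = build_autoencoder(clean_windows.shape[1])
    ae.fit(clean_windows, clean_windows, epochs=20, batch_size=64, verbose=0)
    train_err = np.mean((ae.predict(clean_windows) - clean_windows) ** 2, axis=1)
    threshold = np.percentile(train_err, q)
    test_err = np.mean((ae.predict(test_windows) - test_windows) ** 2, axis=1)
    return test_err > threshold   # True = poor-quality (anomalous) window
```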

15:00
Stenosis assessment via volumetric flow rate calculation

ABSTRACT. Coronary artery stenosis is a condition that restricts blood flow to the myocardium, potentially leading to ischemia and acute coronary events. To decide whether an intervention is needed, different criteria can be used, e.g. calculation of the fractional flow reserve (FFR). FFR can also be computed from computer simulations of blood flow (virtual FFR, vFFR). Here we propose an alternative, more direct metric for assessing the hemodynamic value of a stenosis from computational models: the computed volumetric flow drop (VFD). VFD and vFFR are computed for several stenosis locations using a 1D model of the left coronary tree, and an analytical model is also presented to show why the FFR value may differ from the true flow reduction. The results show that FFR = 0.8, which is often used as a criterion for stenting, may correspond to a reduction in volumetric flow from less than 10% to almost 30% depending on the stenosis location. The implication is that FFR-based assessment may overestimate the hemodynamic value of a stenosis, and it is preferable to use a more direct metric for simulation-based estimation of stenosis value.
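
For reference, the standard definition of FFR and one plausible reading of the volumetric flow drop consistent with the abstract (the authors' exact formula may differ):

```latex
\[
\mathrm{FFR} = \frac{P_d}{P_a},
\qquad
\mathrm{VFD} = 1 - \frac{Q_{\text{stenosed}}}{Q_{\text{healthy}}},
\]
```

where P_d and P_a are the pressures distal to the stenosis and in the aorta, and Q denotes the computed volumetric flow through the affected branch in the stenosed and healthy configurations.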

14:00-15:40 Session 15F: CCI 2
14:00
Learning from Imbalanced Data Streams based on Over-Sampling and Instance Selection

ABSTRACT. Learning from imbalanced data streams is one of the challenges for classification algorithms and learning classifiers. The goal of the paper is to propose and validate a new approach for learning from data streams; in particular, the paper addresses the problem of class-imbalanced data. A hybrid approach for changing the class distribution towards a more balanced one, using over-sampling and instance selection techniques, is discussed. The proposed approach assumes that classifiers are induced from incoming blocks of instances, called data chunks. These data chunks consist of incoming instances from different classes, and a balance between them is obtained through the hybrid approach. These data chunks are next used to induce classifier ensembles. The proposed approach is validated experimentally using several selected benchmark datasets, and the computational experiment results are presented and discussed. The results show that the proposed approach for eliminating class imbalance in data streams can help increase the performance of online learning algorithms.

14:20
Computational Intelligence Techniques for Assessing Data Quality: Towards Knowledge-Driven Processing

ABSTRACT. Since the right decision is made from correct data, assessing data quality is an important process in computational science when working in a data-driven environment. Appropriate data quality ensures the validity of decisions made by any decision-maker. A very promising area for overcoming common data quality issues is computational intelligence. This paper examines past and current intelligence techniques used for assessing data quality, reflecting the trend of the last two decades. Results of a bibliometric analysis are derived and summarized based on the embedded clustered themes in the data quality field. In addition, a network visualization map and strategic diagrams based on keyword co-occurrence are presented. These reports demonstrate that computational intelligence, such as machine and deep learning, fuzzy set theory, and evolutionary computing, is essential for uncovering and solving data quality issues.

14:40
The Power of a Collective: Team of Agents Solving Instances of the Flow Shop and Job Shop Problems

ABSTRACT. The paper proposes an approach for solving difficult combinatorial optimization problems that integrates the mushroom-picking population-based metaheuristic, a collective of asynchronous agents, and a parallel processing environment, in the form of the MPF framework designed for the Apache Spark computing environment. To evaluate the MPF performance we solve instances of two well-known NP-hard problems: job shop scheduling and flow shop scheduling. In MPF a collective of simple agents works in parallel, communicating indirectly through access to a common memory. Each agent receives a solution from this memory and writes it back after a successful improvement. Computational experiment results confirm that the proposed MPF framework can offer competitive results as compared with other recently published approaches.

15:00
Application of the bagging method and the decision trees to independent data sources

ABSTRACT. The article is dedicated to the issue of classification based on independent data sources. In particular, we use classification trees for a set of decision tables that were collected independently. A new approach proposed in the paper is a classification method for independent local decision tables that is based on the bagging method. For each local decision table, sub-tables are generated with the bagging method, based on which the decision trees are built. Such decision trees classify the test object, and a probability vector is defined over the decision classes for each local table. The final, joint decision for all local tables is made by majority voting.

The experiments were performed for different numbers of sub-tables generated from one local table; from 10 to 50 bootstrap replicates were used. The results were compared with the baseline method of generating one decision tree based on one local table. It cannot be clearly stated that more bootstrap replicates always guarantee better classification quality. However, it was shown that bagging classification trees produces more unambiguous results, which are in many cases better than those for the baseline method.
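
A minimal sketch of the described scheme, assuming scikit-learn decision trees; the table format, the vote-aggregation details and the parameter values are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_table_probabilities(X_local, y_local, x_test, n_bootstrap=30, rng=None):
    """Probability vector over decision classes from one local table via bagged trees."""
    rng = rng if rng is not None else np.random.default_rng(0)
    classes = np.unique(y_local)
    votes = np.zeros(len(classes))
    n = len(X_local)
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)                  # bootstrap replicate (sub-table)
        tree = DecisionTreeClassifier().fit(X_local[idx], y_local[idx])
        pred = tree.predict(np.asarray(x_test).reshape(1, -1))[0]
        votes[np.searchsorted(classes, pred)] += 1
    return classes, votes / n_bootstrap

def classify(local_tables, x_test):
    """Final decision: majority voting over the winners of all local tables."""
    tally = {}
    for X_loc, y_loc in local_tables:
        classes, probs = local_table_probabilities(X_loc, y_loc, x_test)
        winner = classes[np.argmax(probs)]
        tally[winner] = tally.get(winner, 0) + 1
    return max(tally, key=tally.get)
```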

14:00-15:40 Session 15G: MLDADS 3
14:00
Data Assimilation in the Latent Space of a Convolutional Autoencoder

ABSTRACT. Data Assimilation (DA) is a Bayesian inference that combines the state of a dynamical system with real data collected by instruments at a given time. The goal of DA is to improve the accuracy of the dynamic system, making its results as close to reality as possible. One of the most popular techniques for DA is the Kalman Filter (KF). When the dynamic system refers to a real-world application, the representation of the state of a physical system usually leads to a big data problem. For these problems, KF becomes computationally too expensive and mandates the use of reduced-order modeling techniques. In this paper we propose a new methodology called Latent Assimilation (LA). It consists of performing the KF in the latent space obtained by an Autoencoder with non-linear encoder and decoder functions. In the latent space, the dynamic system is represented by a surrogate model built by a Recurrent Neural Network. In particular, an LSTM network is used to train a function which emulates the dynamic system in the latent space. The data from the dynamic model and the real data coming from the instruments are both processed through the Autoencoder. We apply the methodology to a real test case and show that LA performs well both in accuracy and in efficiency.
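
A compact sketch of one such assimilation cycle under simplifying assumptions (full observation of the latent state, a generic callable standing in for the trained LSTM surrogate); it is not the authors' implementation:

```python
import numpy as np

def kalman_update(z_forecast, P, z_obs, H, R):
    """Standard Kalman filter update, applied here in the latent space."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    z_analysis = z_forecast + K @ (z_obs - H @ z_forecast)
    P_analysis = (np.eye(len(P)) - K @ H) @ P
    return z_analysis, P_analysis

def latent_assimilation_step(encode, decode, surrogate, x_prev, y_obs, P, R):
    """One cycle: encode -> forecast with the latent surrogate -> KF update -> decode.

    encode/decode : encoder and decoder of a trained autoencoder
    surrogate     : latent-space dynamics emulator (e.g. an LSTM), here any callable
    """
    z_forecast = surrogate(encode(x_prev))           # forecast in latent space
    z_obs = encode(y_obs)                            # observations mapped to latent space
    H = np.eye(len(z_forecast))                      # observe the full latent state (assumption)
    z_analysis, P = kalman_update(z_forecast, P, z_obs, H, R)
    return decode(z_analysis), P
```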

14:20
Higher-order hierarchical spectral clustering for multidimensional data

ABSTRACT. Understanding the community structure of countries in the international food network is of great importance for policymakers. Indeed, clusters might be the key to understanding the geopolitical and economic interconnectedness between countries. Their detection and analysis might lead to a bona fide evaluation of the impact of spillover effects between countries in situations of distress. In this paper, we introduce a clustering methodology that we name Higher-order Hierarchical Spectral Clustering (HHSC), which combines a higher-order tensor factorization and a hierarchical clustering algorithm. We apply this methodology to a multidimensional system of countries and products involved in the import-export trade network (FAO dataset). We find a structural proxy of countries' interconnectedness that is not only valid for a specific product but for the whole trade system. We retrieve clusters that are linked to economic activity and geographical proximity.

14:40
Neural Networks for Conditioning Surface-Based Geological Models with Uncertainty Analysis

ABSTRACT. Neural networks have been applied with remarkable success in geoscience to solve inverse problems of high-dimensional, non-linear, large-scale systems and physical processes. Generating realistic subsurface geological models that honour observations is an ill-posed inverse problem that involves inferring the right set of input parameters that matches the observed well data. We solve this problem using artificial neural networks in two ways. In a forward modelling step, a neural network is trained to predict with high accuracy, the material type (facies type) at locations within the model domain. To perform the inverse modelling and generate surface-based geological models (SBGMs) that are calibrated to observed data, the pre-trained network is used to replace the forward model. The inputs to the network are then optimised using the back-propagation technique used in training the network based on gradient descent schemes. The use of SBGMs allows the computationally efficient generation of many examples for training the network with high accuracy and speed, as few parameters are required to describe even complex geometries. To validate the approach, an uncertainty analysis of the conditioned realisations was performed by evaluating the spatial variations among multiple plausible realisations.

15:00
Towards data-driven simulation models for building energy management

ABSTRACT. The computational simulation of physical phenomena is an extremely complex and expensive process. Traditional simulation models, based on equations that describe the behavior of the system, do not allow generating data in sufficient quantity and speed to predict its evolution and automatically make decisions accordingly. This is particularly relevant in building energy simulations. In this work, we introduce the idea of data-driven simulation models (DDS). A DDS is capable of emulating the behavior of a system in a similar way to simulators based on physical principles, but requires less effort in its construction (it is learned automatically from historical data) and less time to run (there is no need to solve complex equations).

15:20
Data Assimilation using Heteroscedastic Bayesian Neural Network Ensembles for Reduced-Order Flame Models

ABSTRACT. This paper proposes a cheap, accurate and reliable solution for inferring the parameters of a G equation flame model using an ensemble of heteroscedastic Bayesian neural networks. The neural networks are trained on a library of 1.7 million simulated flame edge observations with known parameters. The ensemble produces samples from the posterior probability distribution of the parameters conditioned on the observations, as well as estimates of the uncertainties in the parameters. The predicted parameters and uncertainties are compared to those estimated by the ensemble Kalman filter technique. The flame shapes are re-simulated from the predicted parameters and both the flame edges and the surface area variations are compared with the experiments. The proposed technique achieves results matching those of the ensemble Kalman filter technique, at a fraction of the time and computational costs. Using this technique, parameters of a G equation flame model can be inferred online.

14:00-15:40 Session 15H: UNEQUIvOCAL 1
14:00
Detection of conditional dependence between multiple variables using multiinformation

ABSTRACT. We consider the problem of detecting conditional dependence between multiple discrete variables. This is a generalization of the well-known and widely studied problem of testing the conditional independence between two variables given a third one. The issue is important in various applications. For example, in the context of supervised learning, such a test can be used to verify the model adequacy of the popular Naive Bayes classifier. In epidemiology, there is a need to verify whether the occurrences of multiple diseases are dependent. However, focusing solely on occurrences of diseases may be misleading, as one has to take into account confounding variables (such as gender or age) and preferably consider the conditional dependencies between diseases given the confounding variables. To address the aforementioned problem, we propose to use conditional multiinformation (CMI), a measure derived from information theory. We prove some new properties of CMI. To account for the uncertainty associated with a given data sample, we propose a formal statistical test of conditional independence based on the empirical version of CMI. The main contribution of the work is the determination of the asymptotic distribution of empirical CMI, which leads to the construction of an asymptotic test for conditional independence. The asymptotic test is compared with the permutation test and the scaled chi-squared test. Simulation experiments indicate that the asymptotic test achieves larger power than the competitive methods, thus leading to more frequent detection of conditional dependencies when they occur. We apply the method to detect dependencies in the MIMIC-III medical data set.
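
One standard way to write the conditional multiinformation of discrete variables X_1, ..., X_d given a conditioning variable Z (the authors' exact formulation may differ in detail):

```latex
\[
\mathrm{CMI}(X_1,\dots,X_d \mid Z)
  \;=\; \sum_{i=1}^{d} H(X_i \mid Z) \;-\; H(X_1,\dots,X_d \mid Z),
\]
```

where H denotes (conditional) Shannon entropy; the quantity is non-negative and equals zero exactly when X_1, ..., X_d are conditionally independent given Z, which is what the test examines.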

14:20
Uncertainty Quantification of Coupled 1D Arterial Blood Flow and 3D Tissue Perfusion Models Using the INSIST Framework

ABSTRACT. We perform uncertainty quantification on a one-dimensional arterial blood flow model and investigate the resulting uncertainty in a coupled tissue perfusion model of the brain. The application of interest for this study is acute ischemic stroke and consequently the outcome of interest is infarct volume, estimated using the change in perfusion between the healthy and occluded state, assuming no treatment. Secondary outcomes are the uncertainty in blood flow at the outlets of the network, which provide the boundary conditions to the pial surface of the brain in the tissue perfusion model. Uncertainties in stroke volume, blood density, and blood viscosity are considered. Results show that the uncertainty in blood flow at the network outlets is similar to the uncertainty included in the inputs; however, the resulting uncertainty in infarct volume is significantly smaller. These results provide evidence when assessing the credibility of the coupled models for use in in silico clinical trials.

14:40
Semi-intrusive Uncertainty Quantification of a 3D In-stent Restenosis Model with surrogate modeling

ABSTRACT. Restenosis can happen in coronary arteries after stent deployment due to excessive growth of new tissue in the vessel's lumen (neointimal proliferation). It leads to the recurrence of angina symptoms or to an acute coronary syndrome. In-Stent Restenosis 3D (ISR3D) is a complex multiscale computational model to simulate this process [1,2]. It consists of three submodels: an initial deployment model, an agent-based smooth muscle cell model and a blood flow model using the Lattice Boltzmann method. In this work, we present the uncertainty quantification and sensitivity analysis of the ISR3D model with four uncertain parameters: endothelium regeneration time, the threshold strain for smooth muscle cells bond breaking, the balloon extension area and the percentage of fenestration in the internal elastic lamina. We study how these parameters affect the process of the neointimal growth. The semi-intrusive uncertainty quantification method [3] is applied to reduce the computational cost of ISR3D in a quasi-Monte Carlo simulation [4,5], in which a surrogate model is developed to replace the most computationally expensive submodel, the 3D blood flow simulation. The input uncertainty is propagated via the surrogate model in the same way as for any other non-intrusive methods: an ensemble of model outputs is obtained by running the model with different values of the uncertain parameters sampled according to their distributions. It allows us to conduct a more efficient uncertainty estimation while keeping a sufficiently high accuracy of the predictions of the agent-based model. The uncertainty estimates show that the percentage of the fenestrations in the internal elastic lamina is the most critical parameter for the neointimal growth throughout the process, while the endothelium regeneration time has a minor effect at the beginning but gradually becomes influential over time, and the threshold strain as well as the balloon extension area have a limited effect on the neointimal growth.

[1] Zun, Pavel S., et al. "A comparison of fully-coupled 3D in-stent restenosis simulations to in-vivo data." Frontiers in physiology 8 (2017): 284.

[2] Zun, P. S., et al. "Location-specific comparison between a 3D in-stent restenosis model and micro-CT and histology data from porcine in vivo experiments." Cardiovascular engineering and technology 10.4 (2019): 568-582.

[3] Nikishova, Anna, and Alfons G. Hoekstra. "Semi-intrusive uncertainty propagation for multiscale models." Journal of Computational Science 35 (2019): 80-90.

[4] Saltelli, Andrea, et al. "Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index." Computer physics communications 181.2 (2010): 259-270.

[5] Bratley, Paul, and Bennett L. Fox. "Algorithm 659: Implementing Sobol's quasirandom sequence generator." ACM Transactions on Mathematical Software (TOMS) 14.1 (1988): 88-100.

15:00
Uncertainty quantification of COVID-19 exit strategies in an individual-based and geographically stratified transmission model

ABSTRACT. Many countries are currently dealing with the issue of controlling the spread of the COVID-19 epidemic. For that purpose computational models are used to predict the spread of the virus and to assess the efficacy of policy measures before actual implementation. Care has to be taken though, as computational models are subject to uncertainties. These might be due to, for instance, limited knowledge about the input parameters values or to the intrinsic stochastic nature of part of the computational models. The presence of these uncertainties leads to uncertainties in the model predictions. A central question is therefore what distribution of values is produced by the model for key indicators of the severity of the epidemic. In this talk we present the results of the uncertainty quantification of four exit strategies implemented in an agent-based transmission model with geographical stratification. The exit strategies – defined as a set of non-pharmaceutical interventions adopted to limit the spread of the virus – considered here are termed Flattening the Curve, Contact Tracing, Intermittent Lockdown and Phased Opening. We take two key indicators of the severity of the pandemic, i.e. the maximum number of prevalent cases in intensive care and the total number of intensive care patient-days in excess, as quantities of interest. Our results show that uncertainties not directly related to the strategies are secondary, although they should still be considered when setting a minimum required level of, e.g., intervention. Computation of the Sobol indices discloses the crucial role of the intervention uptake by the population. Finally we are able to detect a safe operational area for the key parameters of the considered strategies.

This research is funded by the European Union Horizon 2020 research and innovation program under grant agreement #800925 (VECMA project).

15:40-16:10 Coffee Break
16:10-17:50 Session 16A: DDCS 2
16:10
Hybrid machine learning for time-series energy data for enhancing energy efficiency in buildings

ABSTRACT. Buildings consume about 40 percent of the world's energy. Energy efficiency in buildings is an increasing concern for building owners. A reliable energy use prediction model is crucial for decision-makers. This study proposes a hybrid machine learning model for predicting one-day-ahead time-series electricity use in buildings. The proposed SAMFOR model combines support vector regression (SVR) and the firefly algorithm (FA) with the conventional seasonal autoregressive integrated moving average (SARIMA) time-series forecasting model. Large datasets of electricity use in office buildings in Vietnam were used to develop the forecasting model. Results show that the proposed SAMFOR model was more effective than the baseline machine learning models. The proposed model has the lowest errors, yielding 0.90 kWh in RMSE, 0.96 kWh in MAE, 9.04% in MAPE, and 0.904 in R in the test phase. The prediction results provide building managers with useful information to enhance energy-saving solutions.
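
The abstract does not spell out how the three components are combined; a common hybrid design, shown below purely as an assumption, lets SARIMA capture the seasonal/linear part while an SVR models the residuals, with the firefly algorithm (omitted here) tuning the SVR hyperparameters instead of the fixed values used in this sketch:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.svm import SVR

def hybrid_forecast(y_train, horizon=48, season=48, lags=4):
    """SARIMA for the seasonal/linear component, SVR on its residuals (illustrative orders)."""
    sarima = SARIMAX(y_train, order=(1, 0, 1),
                     seasonal_order=(1, 1, 1, season)).fit(disp=False)
    resid = y_train - sarima.fittedvalues

    # Lagged residuals as SVR features; in SAMFOR the SVR hyperparameters
    # would be tuned by the firefly algorithm rather than fixed as below.
    X = np.array([resid[i - lags:i] for i in range(lags, len(resid))])
    svr = SVR(C=10.0, gamma="scale").fit(X, resid[lags:])

    linear_fc = sarima.forecast(horizon)
    resid_fc, window = [], list(resid[-lags:])
    for _ in range(horizon):
        r = svr.predict(np.asarray(window[-lags:]).reshape(1, -1))[0]
        resid_fc.append(r)
        window.append(r)
    return linear_fc + np.array(resid_fc)
```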

16:30
I-80 Closures: An Autonomous Machine Learning Approach

ABSTRACT. Road closures due to adverse and severe weather continue to affect Wyoming, creating hazardous driving conditions and temporarily suspending interstate commerce. The mountain ranges and elevation in Wyoming make generating accurate predictions challenging, from both a meteorological and a machine learning standpoint. In a continuation of prior research, we investigate the 80 kilometer stretch of Interstate-80 between Laramie and Cheyenne using autonomous machine learning to create an improved model that yields a 10% increase in closure prediction accuracy. We explore both serial and parallel implementations run on a supercomputer. We apply auto-sklearn, a popular and well-documented autonomous machine learning toolkit, to generate a model utilizing ensemble learning. In the previous study, we applied a linear support vector machine with ensemble learning. We compare our new results to the previous ones.

16:50
Energy Consumption Prediction for Multi-functional Buildings Using Convolutional Bidirectional Recurrent Neural Networks

ABSTRACT. In this paper, a Conv-BiLSTM hybrid architecture is proposed to improve building energy consumption reconstruction for a new multi-functional building type. Experiments indicate that using the proposed hybrid architecture results in improved prediction accuracy for two multi-functional case-study buildings in ultra-short-term to short-term energy use modelling, with R2 scores ranging between 0.81 and 0.94. The proposed model architecture, comprising the CNN, dropout, bidirectional and dense layer modules, surpassed the performance of the commonly used baseline deep learning models tested in the investigation, demonstrating the effectiveness of the proposed architectural structure. The proposed model is satisfactorily applicable to modelling multi-functional building energy consumption.
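
An illustrative Keras sketch of a Conv-BiLSTM stack of the kind described (CNN, dropout, bidirectional LSTM and dense modules); all layer sizes and the input window length are assumptions, not the paper's hyperparameters:

```python
from tensorflow.keras import layers, models

def build_conv_bilstm(timesteps=24, n_features=1):
    """Conv1D feature extraction, dropout, a bidirectional LSTM, then a dense head."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.Conv1D(filters=64, kernel_size=3, padding="same", activation="relu"),
        layers.Dropout(0.2),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),                      # next-step energy consumption
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```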

16:10-17:50 Session 16B: WTCS 2
16:10
A collaborative peer review process for grading coding assignments in coursework

ABSTRACT. With software technology becoming one of the most important aspects of computational science, it is imperative that we train students in the use of software development tools and teach them to adhere to sustainable software development workflows. In this paper, we showcase how we employ a collaborative peer review workflow for the homework assignments of our course on Numerical Linear Algebra for High Performance Computing (HPC). In the workflow we employ, the students are required to work with the git version control system, perform code reviews, realize unit tests, and plug into a continuous integration system. From the students' performance and feedback, we are optimistic that this workflow encourages the acceptance and usage of software development tools in academic software development.

16:30
How Do Teams of Novice Modelers Choose An Approach? An Iterated, Repeated Experiment In A First-Year Modeling Course

ABSTRACT. There are a variety of factors that can influence the decision of which modeling technique to select for a problem being investigated, such as a modeler’s familiarity with a technique, or the characteristics of the problem. We present a study which controls for modeler familiarity by studying novice modelers choosing between the only modeling techniques they have been introduced to: in this case, cellular automata and agent-based models. Undergraduates in introductory modeling courses in 2018 and 2019 were asked to consider a set of modeling problems, first on their own, and then collaboratively with a partner. They completed a questionnaire in which they characterized their modeling method, rated the factors that influenced their decision, and characterized the problem according to contrasting adjectives. Applying a decision tree algorithm to the responses, we discovered that one question (Is the problem complex or simple?) explained 72.72% of their choices. When asked to resolve a conflicting choice with their partners, we observed the repeated themes of mobility and decision-making in their explanation of which problem characteristics influence their resolution. This study provides both qualitative and quantitative insight into factors driving modeling choice among novice modelers.

16:50
Undergraduate Capstone Project on One Dimensional Flow Problem

ABSTRACT. The normal human heart is a strong muscular two-stage pump which pumps continuously through the circulatory system. It is a four-chambered pump that moves blood in one direction through a series of valves. Blood flow through a blood vessel, such as a vein or artery, can be modeled by Poiseuille's law, which establishes a relationship between the velocity of blood and the radius of the vessel. Our goal here is to solve the differential equations numerically, replacing them with algebraic equations, to obtain approximate solutions for the velocity. In this talk, we will walk through the development of the simulation procedure to visualize how blood flows in steady state in one dimension. This is an ideal capstone project for undergraduate research.
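
For reference, Poiseuille's law for steady flow in a rigid cylindrical vessel, which underlies the model described above:

```latex
\[
v(r) = \frac{\Delta P}{4\,\mu\,L}\,\bigl(R^{2} - r^{2}\bigr),
\qquad
Q = \frac{\pi\,\Delta P\,R^{4}}{8\,\mu\,L},
\]
```

where ΔP is the pressure drop over a vessel segment of length L and radius R, μ is the blood viscosity, r is the radial distance from the centre line, and Q is the volumetric flow rate.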

16:10-17:50 Session 16C: CGIPAI 3
16:10
Composite generalized elliptic curve-based surface reconstruction

ABSTRACT. Cross-section curves play an important role in many fields. Analytically representing cross-section curves can greatly reduce design variables and related storage costs and facilitate other applications. In this paper, we propose composite generalized elliptic curves to approximate open and closed cross-section curves, present their mathematical expressions, and derive the mathematical equations of surface reconstruction from composite generalized elliptic curves. The examples given in this paper demonstrate the effectiveness and high accuracy of the proposed method. Due to the analytical nature of composite generalized elliptic curves and the surfaces reconstructed from them, the proposed method can reduce design variables and storage requirements and facilitate other applications such as level of detail.

16:30
Supporting Driver Physical State Estimation by Means of Thermal Image Processing

ABSTRACT. In the paper we address the problem of estimating the physical state of an observed person by analysing a facial portrait captured in the thermal spectrum. The algorithm consists of facial region detection combined with tracking and individual feature classification. We focus on eye and mouth state estimation. The detectors are based on Haar-like features and AdaBoost, previously applied to visible-band images. The returned face region is then subjected to eye and mouth detection. Further, the extracted regions are filtered using a Gabor filter bank and the resultant features are classified. Finally, the classifiers' responses are integrated and the decision about the driver's physical state is made. By using thermal images we are able to capture eye and mouth states in very adverse lighting conditions, in contrast to visible-light approaches. Experiments performed on manually annotated video sequences have shown that the proposed approach is accurate and can be a part of current Advanced Driver Assistance Systems.

16:50
Smart Events in Behavior of Non-player characters in Computer Games

ABSTRACT. This work contains a solution improvement for Smart Events, which are one of the ways to guide the behavior of NPCs in computer games. The improvement consists of three aspects: introducing the possibility of group actions by agents, i.e. cooperation between them; extending SE with the possibility of handling ordinary events, not only emergencies; and introducing the possibility of taking random (but predetermined) actions as part of participation in the event.

In addition, two event scenarios were designed that allowed the Smart Events operation to be examined. The study consisted of comparing the performance of the SE with another well-known algorithm (FSM) and of comparing different runs of the same event determined by the improved algorithm.

Comparing the performance required proposing measures that would allow for the presentation of quantitative differences between the runs of different algorithms or the same algorithm in different runs. Three measures were proposed: the time needed by the AI subsystem in one simulation frame, the number of decisions per frame, and the number of frames per second of simulation.

17:10
Place Inference via Graph-based One-class Decisions on Deep Embeddings and Blur Detections

ABSTRACT. Current approaches to visual place recognition for loop closure do not provide information about the confidence of decisions. In this work we present an algorithm for place recognition based on graph-based one-class decisions on deep embeddings and blur detections. The graph, constructed in advance, together with information about the room category, enables inference on the usefulness of place recognition and, in particular, the evaluation of the confidence of the final decision. We demonstrate experimentally that, thanks to the proposed blur detection, the accuracy of scene categorization is much higher. We evaluate the performance of place recognition on the basis of manually selected places for recognition with corresponding sets of relevant and irrelevant images. The algorithm has been evaluated on a large dataset for visual place recognition that contains both images with severe (unknown) blurs and sharp images. Images with 6-DOF viewpoint variations were recorded using a humanoid robot.

17:30
Football Players Movement Analysis in Panning Videos

ABSTRACT. In this paper, we present an end-to-end application that performs automatic multiple-player detection, unsupervised labelling, and a semi-automatic approach to finding transformation matrices, achieved by integrating available computer vision approaches. The proposed approach consists of two stages. The first stage is an analysis of camera movement in the input video by calculating dense optical flow. Characteristic frames are chosen based on the camera movement analysis and user-assisted calibration is performed on these characteristic frames. Transformation matrices are interpolated for every angle of the camera view. From the input video, 10 evenly spaced frames are collected and passed to the YOLOv3 detector to obtain bounding boxes describing player positions. For each detected player we calculate a visual feature vector which is a concatenation of hue and saturation histograms. The collected samples are then used to learn a classification model using unsupervised clustering.

At the detection stage, for each input frame we perform soccer pitch segmentation using color ranges. Then players are detected using the YOLOv3 detector and classified with the model built at the initial stage. Player positions are transformed to pitch model coordinates. The experimental results demonstrate that our method reliably generates heatmaps from player positions in the case of moderate camera movement.
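
A small OpenCV sketch of the visual feature vector described above (concatenated hue and saturation histograms of a detected player patch); the bin count and normalisation are assumptions:

```python
import cv2
import numpy as np

def player_feature_vector(bgr_patch, bins=16):
    """Concatenated hue and saturation histograms of a player's bounding-box crop."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    sat_hist = cv2.calcHist([hsv], [1], None, [bins], [0, 256])
    feature = np.concatenate([hue_hist, sat_hist]).flatten()
    return feature / (feature.sum() + 1e-9)   # normalise so patch size does not matter
```

Such vectors could then be clustered (e.g. with k-means into team/referee groups) to obtain the unsupervised labels used for the classification model.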

17:50
Shape reconstruction from point clouds using closed form solution of a fourth-order partial differential equation

ABSTRACT. Partial differential equation (PDE) based geometric modelling has a number of advantages such as fewer design variables, avoidance of stitching adjacent patches together to achieve the required continuities, and its physics-based nature. Although many papers have investigated PDE-based shape creation, shape manipulation, surface blending and volume blending as well as surface reconstruction using implicit PDE surfaces, there is little work investigating PDE-based shape reconstruction using explicit PDE surfaces, especially surfaces satisfying the constraints on the four boundaries of a PDE surface patch. In this paper, we propose a new method of using an accurate closed form solution of a fourth-order partial differential equation to reconstruct 3D surfaces from point clouds. It includes selecting a fourth-order partial differential equation, obtaining the closed form solutions of the equation, and investigating the errors of using one of the obtained closed form solutions to reconstruct PDE surfaces from different numbers of 3D points.

16:10-17:50 Session 16D: SPU 3
16:10
Optimization of Resources Allocation in High Performance Computing under Utilization Uncertainty

ABSTRACT. In this work, we study resource co-allocation approaches for the dependable execution of parallel jobs in high performance computing systems with heterogeneous hosts. Complex computing systems often operate under conditions of resource availability uncertainty caused by job-flow execution features, local operations, and other static and dynamic utilization events. At the same time, there is a high demand for reliable computational services ensuring an adequate QoS level. Thus, it is necessary to maintain a trade-off between the available scheduling services (for example, guaranteed resource reservations) and the overall resource usage efficiency. The proposed solution can optimize the resource allocation and reservation procedure for parallel jobs' execution, considering static and dynamic features of the resources' utilization by using the resources' availability as a target criterion.

16:30
A comparison of the Richardson extrapolation and the approximation error estimation on the ensemble of numerical solutions

ABSTRACT. The epistemic uncertainty quantification concerning the estimation of the approximation error using the differences between numerical solutions, treated in the Inverse Problem statement, is addressed and compared with the Richardson extrapolation. The Inverse Problem is posed in the variational statement with zero-order Tikhonov regularization. The ensemble of numerical results, obtained by the OpenFOAM solvers for the inviscid compressible flow with a shock wave, is analyzed. The approximation errors obtained by the Richardson extrapolation and the Inverse Problem are compared with the true error. The Inverse Problem based approach is demonstrated to be an inexpensive alternative to the Richardson extrapolation.

16:50
Predicted Distribution Density Estimation for Streaming Data

ABSTRACT. Recent growth in interest concerning streaming data has been forced by the expansion of systems successively providing current measurements and information, which enables their ongoing, consecutive analysis. The subject of this research is the determination of a density function characterizing the potentially changeable distribution of streaming data. Stationary and nonstationary conditions, as well as both appearing alternately, are allowed. Within the distribution-free procedure investigated here, when the data stream becomes nonstationary, the procedure begins to be supported by a forecasting apparatus. Atypical elements are also detected, after which the meaning of those connected with new tendencies strengthens, while diminishing elements weaken. The final result is an effective procedure, ready for use without studies and laborious research.

17:10
LSTM processing of experimental time series with varied quality

ABSTRACT. Automatic processing and verification of data obtained in experiments have an essential role in modern science. In the paper, we discuss the assessment of data obtained in meteorological measurements conducted in Biebrza National Park in Poland. The data is essential for understanding complex environmental processes, such as global warming. The measurements of CO2 flux bring a vast amount of data but suffer from drawbacks like high uncertainty. Part of the data has a high level of credibility, while other parts are not reliable. A method for the automatic evaluation of data with varied quality is proposed. We use LSTM networks with a weighted mean square error loss function. This approach allows incorporating the information on data reliability in the training process.
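
A minimal sketch of the idea of weighting the squared error by data reliability, expressed here through Keras per-sample weights (with a plain "mse" loss this amounts to a weighted square error, up to normalisation); the network size and wiring are assumptions, not the paper's configuration:

```python
from tensorflow.keras import layers, models

def train_weighted_lstm(X, y, reliability, timesteps, n_features):
    """Train an LSTM regressor where low-credibility samples contribute less to the loss.

    X           : array of shape (n_samples, timesteps, n_features)
    y           : targets, shape (n_samples,)
    reliability : one weight per sample, e.g. 1.0 for credible data, <1.0 otherwise
    """
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.LSTM(32),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, sample_weight=reliability, epochs=10, batch_size=32, verbose=0)
    return model
```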

17:30
Sampling method for the robust single machine scheduling with uncertain parameters

ABSTRACT. For many years most of the research related to optimization problems focused on deterministic models which assume well-defined parameters. Unfortunately, many real problems are defined in uncertain environments, and during modeling we need to handle parameters which are not deterministic anymore. In the paper we consider a single machine scheduling problem with uncertain parameters modeled by random variables with the normal distribution. We propose a sampling method which we investigate as an extension of the tabu search algorithm. Sampling provides very promising results and it is also a very universal method which can be easily adapted to many other optimization algorithms, not only tabu search. Conducted computational experiments confirm that the results obtained by the proposed method are much more robust than the ones obtained by the deterministic approach.

16:10-17:50 Session 16E: CompHealth 6
16:10
Fuzzy ontology for patient emergency department triage

ABSTRACT. Triage in the emergency department (ED) is an adopted procedure in several countries using different emergency severity index systems. The objective is to subdivide patients into categories of increasing acuity to allow for prioritization and reduce emergency department congestion. However, while several studies have focused on improving the triage system and managing medical resources, the classification of patients depends strongly on the nurse's subjective judgment and thus is prone to human errors. So, it is crucial to set up a system able to model, classify and reason about vague, incomplete and uncertain knowledge. Thus, we propose in this paper a novel fuzzy ontology based on a new Fuzzy Emergency Severity Index (F-ESI_2.0) to improve the accuracy of current triage systems. Therefore, we model some relevant fuzzy medical subdomains that influence the patient's condition. Our approach is based on continuous structured and unstructured textual data collected over more than two years during patient visits to the ED of the Lille University Hospital Center (LUHC) in France. The resulting fuzzy ontology is able to model uncertain knowledge and organize the patient's passage through the ED by treating the most serious patients first. Evaluation results show that the resulting fuzzy ontology is a complete domain ontology which can address current triage system failures.

16:30
TBox - Risk Classification Models for Kidney Graft Failure

ABSTRACT. End-stage renal disease affects more than 10% of the German population and requires replacement therapy. Kidney transplantation is the therapy of choice due to lower costs, better survival and quality of life. Despite improved short-term transplant survival, long-term survival remains stagnant. A prediction model to distinguish patients at risk for graft failure was presented recently. We aim to use our transplant database "TBase" to create a prediction model that stratifies patients at risk for transplant failure. Using demographic data from kidney-only transplanted patients between 2000 and 2019, we create an end-to-end data pipeline applying exploratory data analysis to increase insight into the data as well as to detect and fix data issues. A strong focus on data homogenization and data cleansing is applied to ensure the best level of data quality. Finally, data transformations (e.g. data standardization, data normalization, ordinal encoding, and one-hot encoding) are applied to use the curated data of TBase. Two classification models, k-nearest neighbours (KNN) and the Random Forest ensemble method, are compared. The data was split into a 75% training and 25% test set. For each experiment, ten runs were performed and then the average accuracy score and the standard deviation were computed. The selected cohort (N=1570, 62% male recipients, 32% living donation, 51±14 years, 18% graft loss) was reduced to 1483 patients after data preprocessing, with a total of 84 features. The classification models achieved an (average) accuracy (AUC) score of 82.8% with a standard deviation of 1.7% for the KNN model, and an impressive 97.3% with 0.7% for the Random Forest model. In KNN, the sensitivity, specificity, accuracy, and F1-score are 58.8%, 86.4%, 85.2%, and 26.6%, respectively, whereas, in Random Forest, the sensitivity, specificity, accuracy, and F1-score are 100%, 96.8%, 97.3%, and 92.2%, respectively. These first results are both encouraging and promising. Both models will help to determine graft failure risk and support individual decision making for follow-up care after renal transplantation.

16:50
Ontology-based decision support system for dietary recommendations for type 2 diabetes mellitus

ABSTRACT. With the growing influence of computational technologies in healthcare and medicine, the wide-scale generation of personalized patient, clinical and laboratory test data at various stages makes intelligent information systems increasingly important for the correct monitoring and advising of individual patients. In this context, decision support systems (DSS) play an increasingly important role in medical practice. By assisting physicians in making clinical decisions and subsequent recommendations, medical DSS are expected to improve the quality of healthcare. The target area of this study is the role of DSS in diabetes treatment and, in particular, in post-clinical treatment through an improved regime of food balance and patient diets. Based on the Diabetes Mellitus Treatment Ontology (DMTO), the developed DSS for dietary recommendations for patients with diabetes mellitus aims to improve patient care. Taking into account the clinical history and lab test profiles of the patients, the diet recommendations are rule-based decisions that use the DMTO sub-ontologies for the improvement of the patient's lifestyle and are based on reasoning over the data from the patient records. The principal tasks of the work focus on the intelligent integration of all data related to a particular patient and on reasoning over them in order to generate personalized diet recommendations. A special-purpose knowledge base has been created, which enriches the DMTO with a set of newly developed production rules and supports the elaboration of broader and more precise personalized dietary recommendations within the scope of the EHR services.
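To illustrate the rule-based style of recommendation described above (hypothetical thresholds and advice strings of our own, not the DMTO production rules), a few dietary rules over a patient profile might look like:

    def dietary_recommendations(patient):
        """Return diet advice from simple if-then rules over lab tests and history."""
        advice = []
        if patient.get("hba1c", 0) > 7.0:        # poor glycaemic control
            advice.append("reduce intake of high-glycaemic-index carbohydrates")
        if patient.get("bmi", 0) >= 30:          # obesity
            advice.append("adopt a calorie-restricted meal plan")
        if patient.get("egfr", 100) < 60:        # impaired kidney function
            advice.append("limit dietary protein and sodium")
        return advice

    print(dietary_recommendations({"hba1c": 8.1, "bmi": 31, "egfr": 75}))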

16:10-17:50 Session 16F: CCI 3
16:10
An Intelligent Social Collective with Facebook-based Communication

ABSTRACT. This paper describes a model of an intelligent social collective based on the Facebook social network. It consists of three main elements: social agents with a specified knowledge structure, a list of communication modes describing how agents send outbound messages, and a list of integration strategies describing how agents react to incoming messages. The model is described in detail, with examples given for important subalgorithms. The model is then analyzed in comparison to epidemic SI models on knowledge diffusion tasks and tested in a simulated environment. The tests show that it behaves in line with expectations for real-world groups.
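For context on the epidemic baseline used for comparison, a minimal SI-style knowledge-diffusion simulation on a random social graph (our own sketch, assuming networkx and a fixed transmission probability; not the paper's agent model) is:

    import random
    import networkx as nx

    def si_knowledge_diffusion(G, seeds, p=0.05, steps=50):
        """SI dynamics: each step, every agent holding the knowledge passes it to
        each susceptible neighbour with probability p (no forgetting/recovery)."""
        knowing = set(seeds)
        history = [len(knowing)]
        for _ in range(steps):
            newly = set()
            for u in knowing:
                for v in G.neighbors(u):
                    if v not in knowing and random.random() < p:
                        newly.add(v)
            knowing |= newly
            history.append(len(knowing))
        return history

    G = nx.barabasi_albert_graph(1000, 3)  # scale-free stand-in for a Facebook-like network
    print(si_knowledge_diffusion(G, seeds=[0], p=0.05, steps=30))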

16:30
Multi-Agent Spatial SIR-Based Modeling and Simulation of Infection Spread Management

ABSTRACT. This paper proposes a multi-agent system for the modeling and simulation of epidemic spread management strategies. The core of our modeling is a simple spatial Susceptible-Infected-Recovered stochastic discrete system. Our model aims to evaluate the effect of prophylactic and mobility-limitation measures on the impact and magnitude of the epidemic spread. The paper introduces our modeling approach and then proceeds to the development of a multi-agent simulation system. The proposed system is implemented and evaluated on the GAMA multi-agent platform across several simulation scenarios, and the experimental results are discussed in detail. Our model is abstract and well defined, making it very suitable as a starting point for extension and application to more detailed models of specific problems.
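A minimal spatial stochastic SIR step of the kind the model builds on (our own grid-based sketch with assumed parameter names; the actual system is implemented on the GAMA platform):

    import numpy as np

    S, I, R = 0, 1, 2
    rng = np.random.default_rng(0)

    def sir_step(grid, beta=0.3, gamma=0.1, mobility=1.0):
        """One stochastic update on a torus grid: susceptible cells are infected by
        their 4-neighbourhood at a rate scaled by a mobility-limitation factor in
        [0, 1]; infected cells recover with probability gamma."""
        n_inf = sum(np.roll(grid == I, s, axis=a) for a in (0, 1) for s in (-1, 1))
        p_inf = 1.0 - (1.0 - beta * mobility) ** n_inf
        infect = (grid == S) & (rng.random(grid.shape) < p_inf)
        recover = (grid == I) & (rng.random(grid.shape) < gamma)
        new = grid.copy()
        new[infect], new[recover] = I, R
        return new

    grid = np.zeros((100, 100), dtype=int)
    grid[50, 50] = I
    for _ in range(100):
        grid = sir_step(grid, mobility=0.5)   # e.g. 50% mobility limitation
    print("infected:", (grid == I).sum(), "recovered:", (grid == R).sum())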

16:50
Multi-Criteria Seed Selection for Targeted Influence Maximization within Social Networks

ABSTRACT. Online platforms have evolved from basic technical systems into complex social networks gathering billions of users. With 49% of the global population being social media users, viral marketing campaigns in social media have become an important branch of online marketing. While the majority of existing research focuses on maximizing global coverage in the social network, this paper proposes a novel approach with multi-attribute targeted influence maximization (MATIM), in which multi-criteria decision analysis tools are used to target specified groups of users in the network, described by multiple attributes such as age, gender, etc. The proposed approach is verified on a real network and compared to the classic approaches; target coverage superior by as much as 7.14% is achieved.
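A rough sketch of multi-criteria, attribute-targeted seed selection in the spirit described above (our own simplified scoring with made-up attribute weights, assuming networkx; not the MATIM procedure itself):

    import networkx as nx

    def select_seeds(G, weights, k=10):
        """Rank nodes by a weighted sum of criteria: normalised degree (network
        position) plus indicator scores for the targeted audience attributes."""
        max_deg = max(d for _, d in G.degree())
        def score(n):
            attrs = G.nodes[n]
            return (weights["degree"] * G.degree(n) / max_deg
                    + weights["age"] * (1.0 if 18 <= attrs.get("age", 0) <= 30 else 0.0)
                    + weights["gender"] * (1.0 if attrs.get("gender") == "f" else 0.0))
        return sorted(G.nodes, key=score, reverse=True)[:k]

    # Hypothetical weighting of the criteria:
    # seeds = select_seeds(G, weights={"degree": 0.5, "age": 0.3, "gender": 0.2}, k=20)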

17:10
How Attachment to your Primary Caregiver Influences your First Adult Relationship: An Adaptive Network Model of Attachment Theory

ABSTRACT. The interactions between a person and their primary caregiver shape the attachment-pattern blueprint of how this person behaves in intimate relationships later in life. While this attachment pattern has a lifelong effect on an individual, few studies have been conducted on how this attachment style evolves throughout a person's life. In this paper, an adaptive temporal-causal network was designed and simulated to provide insight into how an attachment pattern is created and how this pattern then evolves as the person develops new intimate relationships at an older age. Two scenarios were simulated: the first concerns interactions with securely attached persons, and the second simulates an individual with an anxious-avoidant attachment pattern who encounters a securely attached person.
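For reference, adaptive temporal-causal network models of this kind typically update each state \(Y\) from the impacts of connected states \(X_1, \ldots, X_k\) via the standard update rule of that modelling framework (an assumption on our part; the paper's specific model may differ):
\[
Y(t+\Delta t) = Y(t) + \eta_Y \left[\, \mathbf{c}_Y\big(\omega_{X_1,Y} X_1(t), \ldots, \omega_{X_k,Y} X_k(t)\big) - Y(t) \,\right] \Delta t ,
\]
where \(\eta_Y\) is the speed factor and \(\mathbf{c}_Y\) the combination function; the connection weights \(\omega_{X_i,Y}\) are themselves states that evolve over time (e.g. by Hebbian learning), which is what makes the network adaptive.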

16:10-17:50 Session 16G: MLDADS 4
16:10
A GPU algorithm for Outliers detection in TESS light curves

ABSTRACT. In recent years, Machine Learning (ML) algorithms have proved to be very helpful in several research fields, such as engineering, health science, physics, etc. Among these fields, Astrophysics has also started to develop a stronger need for ML techniques to manage the big data collected by ongoing and future all-sky surveys (e.g. Gaia, LAMOST, LSST, etc.). NASA's Transiting Exoplanet Survey Satellite (TESS) is a space-based all-sky time-domain survey searching for planets outside of the solar system by means of the transit method. During its first two years of operations, TESS collected hundreds of terabytes of photometric observations at a two-minute cadence. ML approaches allow fast recognition of planet candidates in TESS light curves, but they require assimilated data. Therefore, different pre-processing operations need to be performed on the light curves. In particular, cleaning the data from inconsistent values is a critical initial step, but because of the large number of TESS light curves, this process requires a long execution time. In this context, high-performance computing techniques can significantly accelerate the procedure, thus dramatically improving the efficiency of the outlier rejection. Here, we demonstrate that the GPU-parallel algorithm that we developed improves the efficiency, accuracy and reliability of the outlier rejection in TESS light curves.
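A common way to implement this kind of outlier rejection is iterative sigma clipping around a robust median; the sketch below (our own, not the authors' algorithm) uses numpy, and because CuPy mirrors the numpy API the same code can be pushed to a GPU by swapping the import:

    import numpy as xp  # drop-in replacement: "import cupy as xp" runs the same code on a GPU

    def sigma_clip(flux, sigma=3.0, iters=5):
        """Iteratively flag points farther than `sigma` robust deviations from the median."""
        mask = xp.ones(flux.shape, dtype=bool)
        for _ in range(iters):
            med = xp.median(flux[mask])
            mad = xp.median(xp.abs(flux[mask] - med))          # median absolute deviation
            keep = xp.abs(flux - med) < sigma * 1.4826 * mad   # 1.4826*MAD ~ sigma for Gaussian noise
            if bool(xp.all(keep == mask)):
                break
            mask = keep
        return mask

    # clean_flux = flux[sigma_clip(flux)]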

16:30
Data-driven deep learning emulators for geophysical forecasting

ABSTRACT. We perform a comparative study of different supervised machine learning time-series methods for short-term and long-term temperature forecasts on a real-world dataset for the daily maximum temperature over North America given by DayMET. DayMET showcases a stochastic and high-dimensional spatio-temporal structure and is available at exceptionally fine resolution (a 1 km grid). We apply projection-based reduced-order modeling to compress this high-dimensional data while preserving its spatio-temporal structure. We use variants of time-series-specific neural network models on this reduced representation to perform multi-step weather predictions. We also use a Gaussian-process-based error correction model to improve the forecasts from the neural network models. From our study, we learn that the recurrent-neural-network-based techniques can accurately perform both short-term and long-term forecasts, at minimal computational cost compared to the convolution-based techniques. We also see that simple kernel-based Gaussian processes can predict the neural network model errors, which can then be used to improve the long-term forecasts.
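The projection-based compression step mentioned above is commonly realised as proper orthogonal decomposition via an SVD of the mean-subtracted snapshot matrix; a minimal sketch under our own assumptions about the data layout (not the authors' code):

    import numpy as np

    def pod_compress(snapshots, r):
        """snapshots: (n_space, n_time) matrix of temperature fields.
        Returns the temporal mean, the first r POD modes and the reduced coefficients."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        modes = U[:, :r]                        # dominant spatial structures
        coeffs = modes.T @ (snapshots - mean)   # (r, n_time) reduced time series
        return mean, modes, coeffs

    # A time-series model (e.g. an LSTM) is then trained on `coeffs`; forecast fields
    # are reconstructed as mean + modes @ predicted_coeffs.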

16:50
NVIDIA SimNet™: An AI-Accelerated Multi-Physics Simulation Framework

ABSTRACT. We present SimNet, an AI-driven multi-physics simulation framework, to accelerate simulations across a wide range of disciplines in science and engineering. Compared to traditional numerical solvers, SimNet addresses a wide range of use cases: coupled forward simulations without any training data, inverse problems and data assimilation problems. SimNet offers fast turnaround time by enabling a parameterized system representation that solves for multiple configurations simultaneously, as opposed to the traditional solvers that solve for one configuration at a time. SimNet is integrated with parameterized constructive solid geometry as well as STL modules to generate point clouds. Furthermore, it is customizable with APIs that enable user extensions to geometry, physics and network architecture. It has advanced network architectures that are optimized for high-performance GPU computing, and offers scalable performance for multi-GPU and multi-node implementations with accelerated linear algebra as well as FP32, FP64 and TF32 computations. In this paper we review the neural network solver methodology, the SimNet architecture, and the various features that are needed for effective solution of the PDEs. We present real-world use cases that range from challenging forward multi-physics simulations with turbulence and complex 3D geometries, to industrial design optimization and inverse problems that are not addressed efficiently by the traditional solvers. Extensive comparisons of SimNet results with open source and commercial solvers show good correlation. The SimNet source code is available at https://developer.nvidia.com/simnet.
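As background on the neural network solver methodology, a generic physics-informed loss penalises the PDE residual at sampled collocation points; the PyTorch sketch below for a 1D Poisson problem u'' = f is our own illustration of that idea, not the SimNet API:

    import torch

    # Small fully connected network representing the solution u(x)
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1))

    def pde_residual_loss(x, f):
        """Mean squared residual of u''(x) - f(x) at the collocation points x."""
        x = x.requires_grad_(True)
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
        return ((d2u - f(x)) ** 2).mean()

    x = torch.rand(128, 1)                               # interior collocation points
    loss = pde_residual_loss(x, lambda x: torch.sin(x))  # source term f(x) = sin(x)
    loss.backward()   # boundary-condition losses would be added to the total loss similarly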

16:10-17:50 Session 16H: UNEQUIvOCAL 2
16:10
Comparison of polynomial chaos expansion and stochastic collocation methods: A case study in forced migration

ABSTRACT. Stochastic collocation (SC) and polynomial chaos expansion (PCE) methods are sampling-based approaches for uncertainty quantification (UQ) that mathematically represent stochastic variability in modelling and simulation. Specifically, SC generates a polynomial approximation of a quantity of interest in the stochastic space by interpolating the obtained output at structured collocation point sets, while PCE projects the output onto orthogonal stochastic polynomials in the random input parameters. Both approaches can serve the same purpose, but there are differences in their performance and efficacy, which we aim to investigate.
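In outline (standard textbook formulations, independent of the Flee/EasyVVUQ setup), SC interpolates the quantity of interest q at collocation nodes while PCE expands it in polynomials orthogonal with respect to the input distribution:
\[
\text{SC: } \tilde{q}(\boldsymbol{\xi}) = \sum_{j=1}^{N} q(\boldsymbol{\xi}_j)\, L_j(\boldsymbol{\xi}),
\qquad
\text{PCE: } \tilde{q}(\boldsymbol{\xi}) = \sum_{k=0}^{P} c_k\, \Phi_k(\boldsymbol{\xi}),
\quad
c_k = \frac{\mathbb{E}[\,q\,\Phi_k\,]}{\mathbb{E}[\,\Phi_k^2\,]},
\]
where the \(L_j\) are interpolation (e.g. Lagrange) polynomials through the collocation points and the \(\Phi_k\) are orthogonal polynomials in the random inputs \(\boldsymbol{\xi}\); Sobol sensitivity indices for either method follow from the variance decomposition of \(\tilde{q}\).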

We conduct a comparative study by performing a sensitivity analysis on the input parameters of forced migration simulations. To predict arrivals of forcibly displaced people in neighbouring countries, we use the Flee agent-based code, the FabFlee plugin of the FabSim3 automation toolkit [1,2], and the VECMA toolkit component for verification, validation and UQ, namely EasyVVUQ [3]. Suleimenova et al. [4] proposed a sensitivity-driven simulation development approach, performed a baseline investigation using the SC method and identified assumptions that are particularly pivotal to the validation result of forced migration prediction. Here, we perform the analysis using PCE and compare the obtained results against the baseline study for seven input parameters, namely max_move_speed, max_walk_speed, camp_move_chance, conflict_move_chance, default_move_chance, camp_weight and conflict_weight, across four conflict situations: Mali, Burundi, South Sudan and the Central African Republic.

References
1. Suleimenova, D., Groen, D.: How policy decisions affect refugee journeys in South Sudan: A study using automated ensemble simulations. Journal of Artificial Societies and Social Simulation 23(1) (2020)
2. Groen, D., Bhati, A.P., Suter, J., Hetherington, J., Zasada, S.J., Coveney, P.V.: FabSim: Facilitating computational research through automation on large-scale and distributed e-infrastructures. Computer Physics Communications 270 (2016) 375–385
3. Richardson, R., Wright, D., Edeling, W., Jancauskas, V., Lakhlili, J., Coveney, P.: EasyVVUQ: A library for verification, validation and uncertainty quantification in high performance computing. Journal of Open Research Software 8(11) (2020) 1–8
4. Suleimenova, D., Arabnejad, H., Edeling, W., Groen, D.: Sensitivity-driven simulation development: A case study in forced migration. Philosophical Transactions of the Royal Society A (in press)

16:30
Gaussian Process surrogate models for uncertainty quantification in multiscale fusion simulations

ABSTRACT. One of the challenges in understanding fusion plasma is quantifying the effects of stochastic microscale turbulence on transport processes in a tokamak. Within the VECMA project, an analysis using the Multiscale Fusion Workflow is performed in which the transport coefficients are calculated from the time-averaged energy flux values computed by the turbulence code GEM. The analysis of the uncertainty in the transport coefficients is computationally expensive and requires numerous simulation runs of a microscale model.

Utilising the Polynomial Chaos Expansion method implemented in the EasyVVUQ library, such an analysis requires around 24 000 core-hours of computation time on the Marconi supercomputer. Furthermore, a simulation workflow for the core profile evolution of an ASDEX Upgrade tokamak over 1 500 time-steps, a time scale of around 0.2 real-life seconds, takes about 20 000 core-hours. Reducing the number of turbulence code runs and the total computational effort for these scenarios would require using a surrogate model for the statistical representation of microscale effects on transport.

Using EasySurrogate, a part of the VECMA toolkit, we developed functionality for training, testing and utilising such models based on Gaussian Process Regression (GPR). Additionally, a closed-loop active learning scheme to reduce the maximal uncertainty of the model was implemented. The properties of Gaussian Process surrogates allow target value distributions to be calculated efficiently and optimization to be performed to reduce uncertainty in key features.

A sensitivity analysis using GEM performed by EasyVVUQ shows that around 42% of the variance in the electron transport fluxes is explained by perturbations of the electron temperature gradient profile. First tests of a surrogate model for the response of the simple analytic code GEM0 in this parameter indicate that only 9 turbulence simulations are required to give a satisfactory fit. This talk will present the methods for adaptive sampling and active surrogate model training that were implemented during this work, as well as the effort on software integration of surrogates into a simulation workflow. This would allow further studies on reducing the computational time for uncertainty quantification in plasma models and on localising regions of interest in the plasma profile parameter space, and would allow the research to be extended by utilising physically more accurate turbulence models.
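A minimal sketch of GPR surrogate training with a max-uncertainty active-learning loop (our own scikit-learn version under assumed names such as `simulate`; not the EasySurrogate implementation):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def active_gpr(simulate, X_init, y_init, X_candidates, budget=9):
        """Fit a GP surrogate and repeatedly run the expensive simulation at the
        candidate point with the largest predictive standard deviation."""
        X, y = list(X_init), list(y_init)
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
        for _ in range(budget):
            gp.fit(np.array(X), np.array(y))
            _, std = gp.predict(np.array(X_candidates), return_std=True)
            x_new = X_candidates[int(np.argmax(std))]
            X.append(x_new)
            y.append(simulate(x_new))   # e.g. one flux evaluation of the microscale code
        gp.fit(np.array(X), np.array(y))
        return gp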

16:50
Second Order Moments of Multivariate Hermite Polynomials

ABSTRACT. Polynomial chaos methods can be used to estimate solutions of partial differential equations under uncertainty described by random variables. The stochastic solution is represented by a polynomial chaos expansion, whose deterministic coefficient functions are recovered through Galerkin projections. In the presence of multiple uncertainties, the projection step introduces products (second-order moments) of the basis polynomials. When the input random variables are correlated Gaussians, calculating these products (which involve multivariate Hermite polynomials) is not straightforward and can become computationally expensive. We present a new expression for the products by introducing multiset notation for the polynomial indexing, which allows simple and efficient evaluation of the second-order moments of correlated multivariate Hermite polynomials.
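For context, in the uncorrelated standard Gaussian case the required second-order moments reduce to the classical orthogonality relation (the paper's contribution concerns the correlated generalisation):
\[
\mathbb{E}\big[\mathrm{He}_m(\xi)\,\mathrm{He}_n(\xi)\big] = n!\,\delta_{mn},
\qquad \xi \sim \mathcal{N}(0,1),
\]
and for independent components the multivariate moment factorises as \(\mathbb{E}\big[\mathbf{He}_{\boldsymbol{\alpha}}(\boldsymbol{\xi})\,\mathbf{He}_{\boldsymbol{\beta}}(\boldsymbol{\xi})\big] = \prod_i \alpha_i!\,\delta_{\alpha_i \beta_i}\). Correlation between the \(\xi_i\) breaks this factorisation, which is the case the multiset-indexed expression addresses.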