ICCS 2022: INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE
PROGRAM FOR WEDNESDAY, JUNE 22ND

09:00-09:50 Session 8: Keynote Lecture 3
09:00
Modeling Economies at Full-Scale: Every Worker, Every Firm

ABSTRACT. The growing availability of micro-level data has made it possible to construct high-fidelity models of economic processes at the full scale of actual economies. In this talk I will describe such a model of the U.S. private sector, involving some 120 million workers employed by nearly 6 million firms. A large number of gross patterns and regularities exist in the micro-data on firms and workers, and I will describe how my group has built a large-scale agent-based model (ABM) that reproduces a substantial number of the features present in the data. An important part of creating such a model has been efficient parallelization, and I will describe both what works and what does not work for parallelizing ABMs. Specifically, the importance of randomization and asynchronous agent activation for the suppression of computational artifacts will be emphasized. This model is described at length in my forthcoming book, "Dynamics of Firms: Data, Theories, and Models".

09:50-10:20 Coffee Break
10:20-12:00 Session 9A: MT 7
10:20
Classification methods based on fitting logistic regression to positive and unlabeled data

ABSTRACT. In our work, we consider classification methods based on the logistic regression model for positive and unlabeled data. We examine the following four methods of posterior probability estimation, in which the risk of the logistic loss function is optimized: the naive approach, the weighted likelihood approach, and two quite recently proposed methods - the joint approach due to Teisseyre et al. \cite{Tes} and the LassoJoint method from Furmańczyk et al. \cite{F1}. The objective of our study is to evaluate the accuracy, recall, precision and F1-score of the considered classification methods. The corresponding assessments have been carried out through numerical experiments on selected real datasets.

10:40
Characterizing Wildfire Perimeter Polygons from QUIC-Fire

ABSTRACT. The wildfire perimeter, characterized by a closed polygon, is one of the most important inputs to wildfire modeling software for predicting wildfire progression. However, most measured wildfire perimeters are still obtained manually, and some wildfire modeling tools cannot produce simulated perimeters at all. Although a variety of image analyses have been applied to wildfire perimeter detection, most of them yield only a set of unordered boundary points, and little research has been done on ordering these points to produce a polygon. As a contribution to this topic, this paper introduces two algorithms, the iterative minimum distance algorithm (IMDA) and the quadriculation algorithm (QA), for automatically obtaining the polygon of a wildfire perimeter. Both algorithms are applied to the same raster simulations from QUIC-Fire to illustrate their effectiveness. The iterative minimum distance algorithm is based on repeatedly connecting the two closest points in the set of unordered boundary points; a threshold value assists in determining whether two points are closely located. From a completely different point of view, the quadriculation algorithm creates the polygon by recursively dividing the raster image into indivisible squares or rectangles whose internal pixels are all of the same color, and then merging adjacent squares or rectangles of the same color. Both algorithms worked well in establishing the polygons of the wildfire perimeters, and the computational time was less than one second per image. The produced polygon can then be used in data assimilation to better estimate and predict wildfire progression. This work therefore contributes to the development of an affordable and fully automated process for characterizing the polygon of a wildfire perimeter.
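The core of the iterative minimum distance idea, greedily connecting the closest remaining boundary points, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the real IMDA's threshold and multi-chain handling are simplified here to a single chain:

```python
import math

def order_boundary_points(points, threshold=float("inf")):
    """Greedy nearest-neighbour ordering of unordered boundary points.

    Starting from the first point, repeatedly append the closest
    remaining point, stopping the chain if the closest point is
    farther away than `threshold`.
    """
    remaining = list(points[1:])
    polygon = [points[0]]
    while remaining:
        last = polygon[-1]
        nearest = min(remaining, key=lambda p: math.dist(last, p))
        if math.dist(last, nearest) > threshold:
            break
        polygon.append(nearest)
        remaining.remove(nearest)
    return polygon

# Unordered corner points of a unit square boundary.
pts = [(0, 0), (1, 1), (0, 1), (1, 0)]
print(order_boundary_points(pts))  # a valid traversal of the square
```

Closing the chain back to the start point then yields the polygon.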

11:00
Adaptive Regularization of B-Spline Models for Scientific Data

ABSTRACT. B-spline models are a powerful way to represent scientific data sets with a functional approximation. However, these models can suffer from spurious oscillations when the data to be approximated are not uniformly distributed. Model regularization (i.e., smoothing) has traditionally been used to minimize these oscillations; unfortunately, it is sometimes impossible to sufficiently remove unwanted artifacts without smoothing away key features of the data set. In this article, we present a method of model regularization that preserves significant features of a data set while minimizing artificial oscillations. Our method varies the strength of a smoothing parameter throughout the domain automatically, removing artifacts in poorly-constrained regions while leaving other regions unchanged. The behavior of our method is validated on a collection of two- and three-dimensional data sets produced by scientific simulations.
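A one-dimensional discrete analogue illustrates the idea of a spatially varying smoothing parameter: a per-location weight penalizes curvature strongly in one region and barely at all in another. This is a hedged sketch of the general principle only, not the paper's B-spline formulation:

```python
import numpy as np

def smooth_varying_lambda(y, lam):
    """Minimize ||y - f||^2 + sum_i lam[i] * (f[i] - 2 f[i+1] + f[i+2])^2
    by solving the normal equations (I + D^T diag(lam) D) f = y,
    where D is the second-difference operator."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + D.T @ (np.diag(lam) @ D)
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
# Strong smoothing on the right half only; left half is left untouched.
lam = np.where(x[1:-1] > 0.5, 100.0, 0.01)
f = smooth_varying_lambda(y, lam)
```

The right half of `f` comes out much smoother than the left, mimicking artifact removal in a poorly-constrained region while other regions stay close to the data.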

11:20
Hierarchical Ensemble based Imbalance Classification

ABSTRACT. In this paper, we propose a hierarchical ensemble method for improved imbalance classification. Specifically, we perform the first-level ensemble based on bootstrap sampling with replacement. Then, the second-level ensemble is generated based on two different weighting strategies, of which the better-performing one is selected for subsequent analysis. Next, the third-level ensemble is obtained by combining two methods for obtaining the mean and covariance of a multivariate Gaussian distribution, with oversampling then realized via the fitted multivariate Gaussian distribution. Here, different subsets are created from (1) the cluster that the current instance belongs to, and (2) the current instance and its k nearest minority neighbors. Furthermore, Euclidean distance-based sample optimization is developed for improved imbalance classification. Finally, late fusion based on majority voting is used to obtain the final predictions. Experimental results on 15 KEEL datasets demonstrate the effectiveness of our proposed method.
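The Gaussian-based oversampling step can be illustrated in isolation: fit a mean and covariance to the minority class and draw synthetic samples from the fitted distribution. The per-cluster and k-nearest-neighbour subset variants described above are omitted in this sketch:

```python
import numpy as np

def gaussian_oversample(minority, n_new, rng=None):
    """Fit a multivariate Gaussian to the minority class and draw
    synthetic samples from it. (The paper also builds per-cluster
    and per-neighbourhood fits; this shows only the basic variant.)"""
    if rng is None:
        rng = np.random.default_rng()
    mean = minority.mean(axis=0)
    cov = np.cov(minority, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_new)

rng = np.random.default_rng(4)
minority = rng.normal([0, 5], [1, 2], size=(40, 2))   # small minority class
synthetic = gaussian_oversample(minority, 100, rng)   # synthetic instances
```

The synthetic points then rebalance the training set before fitting the base classifiers.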

11:40
Simple and Efficient Acceleration of the Smallest Enclosing Ball for Large Data Sets in E2: Analysis and Comparative Results

ABSTRACT. Finding the smallest enclosing circle of given points in E2 is a seemingly simple problem. However, previously proposed algorithms have high memory requirements, require special handling due to great recursion depth, or have computational complexity unacceptable for large data sets. This paper presents a simple and efficient method, with a speed-up of over 100 times, based on reduction of the processed data. It relies on efficient preprocessing that significantly reduces the number of points used in the final processing. It also significantly reduces the recursion depth and memory requirements, which are limiting factors for large data processing. The proposed algorithm is easy to implement and is extensible to the E3 case, too. It was tested for up to 10^9 points using the Halton and "salt and pepper" distributions.
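One standard, exact form of such data reduction is to keep only the convex hull of the input, since the smallest enclosing circle is determined by hull vertices alone. The paper's preprocessing differs, but the sketch below illustrates the reduce-then-solve pattern (with a naive circle search over hull points, suitable for small hulls only):

```python
import itertools
import math

def convex_hull(pts):
    """Andrew's monotone chain: reduces the input to its hull vertices,
    the only points that can determine the enclosing circle."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return half(pts)[:-1] + half(pts[::-1])[:-1]

def circle_from(pts):
    """Circle through 2 points (as diameter) or 3 points (circumcircle)."""
    if len(pts) == 2:
        (x1, y1), (x2, y2) = pts
        c = ((x1 + x2) / 2, (y1 + y2) / 2)
        return c, math.dist(c, pts[0])
    (ax, ay), (bx, by), (cx, cy) = pts
    d = 2 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2+ay**2)*(by-cy) + (bx**2+by**2)*(cy-ay) + (cx**2+cy**2)*(ay-by)) / d
    uy = ((ax**2+ay**2)*(cx-bx) + (bx**2+by**2)*(ax-cx) + (cx**2+cy**2)*(bx-ax)) / d
    return (ux, uy), math.dist((ux, uy), (ax, ay))

def smallest_enclosing_circle(points):
    hull = convex_hull(points)  # the data-reduction step
    best = None
    for k in (2, 3):
        for combo in itertools.combinations(hull, k):
            result = circle_from(list(combo))
            if result is None:
                continue
            c, r = result
            if all(math.dist(c, p) <= r + 1e-9 for p in hull):
                if best is None or r < best[1]:
                    best = (c, r)
    return best

c, r = smallest_enclosing_circle([(0, 0), (2, 0), (0, 2), (2, 2), (1, 1), (1, 0.5)])
print(c, r)  # (1.0, 1.0) and radius sqrt(2)
```

For large inputs the naive search would be replaced by Welzl's algorithm run on the reduced point set.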

10:20-12:00 Session 9B: MT 8
Location: Newton South
10:20
Designing a training set for musical instruments identification

ABSTRACT. This paper presents research on one of the most challenging branches of music information retrieval: musical instrument identification. Millions of songs are available online, so recognizing and tagging instruments by a human being is nearly impossible. Therefore, it is crucial to develop methods that can automatically assign the instrument to a given sound sample. Unfortunately, the number of well-prepared datasets for training such algorithms is very limited. Here, a series of experiments has been carried out to examine how the training data for such methods should be composed. The tests focused on assessing decision confidence, the impact of sound characteristics (different dynamics and articulation), the influence of training data volume, and the impact of data type (real instruments versus digitally created sound samples). The outcomes of the tests described in the paper can help in creating new training datasets and boost research on accurately classifying the instruments audible in given recordings.

10:40
Which Visual Features Impact the Performance of Target Task in Self-supervised Learning?

ABSTRACT. Self-supervised methods have gained popularity by achieving results on par with supervised methods while using fewer labels. However, their explanation techniques ignore the general semantic concepts present in the picture, limiting themselves to local features at the pixel level. An exception is the visual probing framework, which analyzes the visual concepts of an image using probing tasks. However, it does not explain whether the analyzed concepts are critical for target task performance. This work fills this gap by introducing amnesic visual probing, which removes information about particular visual concepts from image representations and measures how this affects target task accuracy. Moreover, it applies Marr's computational theory of vision to examine the biases in visual representations. As a result of experiments and user studies conducted for multiple self-supervised methods, we conclude, among other things, that removing information about 3D forms from the representation decreases classification accuracy much more significantly than removing textures.

11:00
Human-level Melodic Line Harmonization

ABSTRACT. This paper examines the potential applicability and efficacy of Artificial Intelligence (AI) methods in automatic music generation. Specifically, we propose an Evolutionary Algorithm (EA) capable of constructing a melodic line harmonization with given harmonic functions, based on rules of music composing which are applied in the fitness function. Harmonizations constructed in accordance with these rules are expected to be formally correct in terms of music theory and, additionally, to follow less-formalised aesthetic requirements and expectations. The fitness function is composed of several modules, with each module consisting of smaller parts. This design allows for its flexible modification and extension. The way the fitness function is constructed and tuned towards better quality harmonizations is discussed in the context of music theory and technical EA implementation. In particular, we show how generated harmonizations can be shaped by adjusting the relevance of particular fitness function components. The proposed method generates solutions which are technically correct (i.e. in line with music harmonization theory) and also "nice to listen to" (i.e. aesthetically plausible), as assessed by an expert - a harmony teacher.

11:20
Ultrafast Focus Detection for Automated Microscopy

ABSTRACT. Technological advancements in modern scientific instruments, such as scanning electron microscopes (SEMs), have significantly increased data acquisition rates and image resolutions, enabling new questions to be explored; however, the resulting data volumes and velocities, combined with automated experiments, are quickly overwhelming scientists, as crucial steps still require human intervention, for example reviewing image focus. We present a fast out-of-focus detection algorithm for electron microscopy images collected serially and demonstrate that it can be used to provide near-real-time quality control for neuroscience workflows. Our technique, Multi-scale Histologic Feature Detection, adapts classical computer vision techniques and is based on detecting various fine-grained histologic features. We exploit the inherent parallelism in the technique and employ GPU primitives to accelerate characterization. We show that our method can detect out-of-focus conditions within just 20 ms. To make these capabilities generally available, we deploy our feature detector as an on-demand service and show that it can determine the degree of focus in approximately 230 ms, enabling near-real-time use.
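For context, a classical single-scale focus measure that multi-scale methods improve upon is the variance of the Laplacian; sharp images have strong second derivatives, blurred ones do not. This is a minimal baseline sketch, not the authors' Multi-scale Histologic Feature Detection:

```python
import numpy as np

def laplacian_focus_score(image):
    """Variance-of-Laplacian focus measure via a 5-point stencil."""
    img = image.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def box_blur(image, k=5):
    """Simple box blur to emulate an out-of-focus image."""
    img = image.astype(float)
    out = np.zeros_like(img)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / k**2

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
blurred = box_blur(sharp)
assert laplacian_focus_score(sharp) > laplacian_focus_score(blurred)
```

Thresholding such a score per image is the simplest form of automated focus triage.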

11:40
PRISM: Principal Image Sections Mapping

ABSTRACT. Rapid progress in machine learning (ML) and artificial intelligence (AI) has brought increased attention to the potential vulnerability and reliability of AI technologies. To counter this issue, a multitude of methods have been proposed. Most of them rely on Class Activation Maps (CAMs), which highlight the most important areas of the analyzed image according to the given model. In this paper we propose another look at the problem. Instead of detecting salient areas, we aim to identify the features that were recognized by the model and compare this insight across images, telling us which parts of the picture are common and which are unique to a given class. The proposed method has been implemented using PyTorch and is publicly available on GitHub: https://github.com/szandala/TorchPRISM.

10:20-12:00 Session 9C: AIHPC4AS 3
Location: Darwin
10:20
MgNet with Hat Activation Function

ABSTRACT. The activation function plays an important role in neural networks. We propose the hat activation function, namely the first-order B-spline function, for MgNet, a special class of CNN. Unlike commonly used activation functions such as ReLU, the hat function has compact support. Through classification experiments on MNIST, CIFAR-10/100 and ImageNet, we first show that MgNet and ResNet with the hat function obtain better generalization performance than networks with the ReLU function. In the talk, we will provide both theoretical analysis and further numerical experiments to demonstrate the advantages of the hat function over other activation functions.
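On [0, 2], the first-order B-spline activation is simply a piecewise-linear "hat" with compact support; a minimal sketch (the exact support and scaling used in the talk may differ):

```python
import numpy as np

def hat(x):
    """First-order B-spline ("hat") activation: piecewise linear,
    peaking at 1 for x = 1 and zero outside the compact support [0, 2].
    Equivalent to ReLU(x) - 2*ReLU(x - 1) + ReLU(x - 2)."""
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

x = np.array([-1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
print(hat(x))  # [0.  0.  0.5 1.  0.5 0.  0. ]
```

The compact support means each neuron responds only to a bounded input range, unlike the unbounded ReLU.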

10:50
Isogeometric Analysis of Bound States of a Quantum Three-Body Problem in 1D

ABSTRACT. In this paper, we initiate the study of isogeometric analysis (IGA) of a quantum three-body problem that is well known to be difficult to solve. In the IGA setting, we represent the wavefunctions by linear combinations of B-spline basis functions and solve the problem as a matrix eigenvalue problem. The eigenvalue gives the eigenstate energy while the eigenvector gives the coefficients of the B-splines that lead to the eigenstate. The major difficulty of isogeometric or other finite-element-method-based analyses lies in the lack of boundary conditions and the large number of degrees of freedom required for accuracy. For a typical many-body problem with attractive interaction, there are bound and scattering states, where bound states have negative eigenvalues. We focus on bound states and start with the analysis of a two-body problem. We demonstrate through various numerical experiments that IGA provides a promising technique for solving three-body problems.

11:10
Neural-Network based Adaptation of Variation Operators' Parameters for Metaheuristics

ABSTRACT. The paper presents the idea of training an artificial neural network to learn a relation between different parameters observed for a population in a metaheuristic algorithm. Such a trained network may then be used to control other algorithms, provided it is trained in such a way that the gathered knowledge becomes agnostic with regard to the problem. The paper focuses on presenting the idea and also provides selected experimental results obtained after applying the proposed algorithm to popular benchmark problems in different dimensions.

11:30
Recursive Singular Value Decomposition compression of refined isogeometric analysis matrices as a tool to speedup iterative solvers performance

ABSTRACT. Isogeometric analysis (IGA) uses base functions of higher order and continuity compared to the traditional finite element method. IGA has many applications in simulations of time-dependent problems. These simulations are often performed using an explicit time-integration scheme, which requires the solution of a system of linear equations with the mass matrix, constructed with high-order and high-continuity base functions. Iterative solvers are most commonly applied for large problems simulated over complex geometry. This paper focuses on a recursive decomposition of the mass matrix using the Singular Value Decomposition (SVD) algorithm. We build a recursive tree, in which sub-matrices are expressed as multi-columns multiplied by multi-rows. When the mass matrix is kept compressed in this way, the multiplication of the matrix by a vector, as performed by an iterative solver, can be carried out in O(Nr) instead of O(N^2) computational cost. Next, we focus on refined isogeometric analysis (rIGA). We introduce the C0 separators into the IGA sub-matrices and analyze the SVD recursive compression and the computational cost of an iterative solver when increasing the patch size and the order of the B-spline base functions.
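The claimed O(Nr) matrix-vector product follows directly from keeping the factors of a rank-r decomposition instead of the dense matrix; a minimal sketch with a single (non-recursive) truncated SVD:

```python
import numpy as np

def truncated_svd(A, r):
    """Rank-r factors of A via SVD; storing U_r, s_r, Vt_r costs
    O(N r) memory instead of O(N^2) for the dense matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r, :]

def compressed_matvec(U, s, Vt, x):
    """y = A x in O(N r): two thin products instead of one dense one."""
    return U @ (s * (Vt @ x))

rng = np.random.default_rng(2)
# A symmetric low-rank example (rank 5 by construction), standing in
# for one compressible off-diagonal block of a mass matrix.
B = rng.random((200, 5))
A = B @ B.T
U, s, Vt = truncated_svd(A, 5)
x = rng.random(200)
assert np.allclose(A @ x, compressed_matvec(U, s, Vt, x))
```

The paper applies this block-wise and recursively; this sketch compresses one matrix globally to show the cost argument.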

11:50
Performance of Computing Hash-codes with Chaotically-trained Artificial Neural Networks

ABSTRACT. The main goal of the research presented in this paper is to estimate the performance of neural networks trained using a chaotic model so that they may serve as hashing functions. The Lorenz attractor chaotic model was used for training data preparation, and Scaled Conjugate Gradient was used as the training algorithm. The networks consist of two layers: a hidden layer of sigmoid neurons and an output layer of linear neurons. An algorithm for bonding the input message with the chaotic formula is presented. The created networks can return 256 or 512 bits of hash; this parameter can be easily adjusted before the training process. The performance of the networks (that is, the time of hash computation) is analyzed in comparison with the popular standards SHA-256 and SHA-512 under the MATLAB environment. Further research may include analysis of the networks' training parameters (such as mean squared error or gradient) or of the results of statistical tests performed on the networks' output. The presented solution may be used as a security algorithm complementary to a certificated one (for example, for additional data integrity checking).
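The training-data preparation step can be illustrated by integrating the Lorenz system; this sketch uses a simple Euler scheme and omits the paper's message-bonding algorithm and MATLAB training pipeline:

```python
import numpy as np

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3,
                  start=(1.0, 1.0, 1.0)):
    """Euler integration of the Lorenz system, a common way to
    generate chaotic sequences for training data."""
    x, y, z = start
    out = np.empty((n, 3))
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = (x, y, z)
    return out

series = lorenz_series(1000)  # 1000 chaotic (x, y, z) samples
```

The chaotic sensitivity to initial conditions is what makes such sequences attractive as a hashing ingredient: a tiny change in the bonded input yields a very different trajectory.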

10:20-12:00 Session 9D: COMS 4
Location: Newton North
10:20
Validation of migration models in the face of inconsistent data: a perspective from four conflicts

ABSTRACT. Over the last decade, as UNHCR reports, the number of displaced people has doubled, with almost 40% of them forced to cross borders and settle in neighbouring countries. Furthermore, the initial cross-border displacement can sometimes be the first step toward further migration along migratory routes to Europe, where, according to EU reports, the number of people arriving from different regions to seek protection has increased dramatically. As a partner in the ITFLOWS project, we aim to develop, run and validate refugee movement simulations for four conflict scenarios on three continents (Syria, Mali, Nigeria, and Venezuela) and help decision-makers anticipate refugee arrivals and plan possible humanitarian actions in support of host countries. We estimate the distribution of incoming refugees across destination camps through an agent-based modelling and simulation approach. To this end, we obtain the required data from different sources to construct the models. One of these sources, the UNHCR data portal for refugee situations, often provides the total number of registered refugees and, in some cases, the number of registered refugees in each camp in the neighbouring countries. However, depending on each situation's characteristics, the provided data, time span and frequency of observations can differ. Hence, we have to extract the required data, especially camp registration data, from the published reports for each situation. As a result, we have two sets of validation data: the total number of refugees reported by UNHCR, and the sum of registered refugees found in each camp. In this paper, we examine both sets for validation and compare the results to see which one provides a better prediction. To do so, we calculate and compare the averaged relative difference for both validation sets. Due to data limitations for conflicts and disasters in 2021, we use May 2021 as the cut-off date in this study.
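The averaged relative difference can be computed as below. This is a hedged sketch: the exact normalisation used in the paper (and in the Flee toolkit) may differ:

```python
import numpy as np

def averaged_relative_difference(simulated, observed):
    """Sum of absolute camp-count differences, normalised by the total
    number of observed refugees at each time step, averaged over time."""
    simulated = np.asarray(simulated, float)  # shape: (timesteps, camps)
    observed = np.asarray(observed, float)
    per_step = np.abs(simulated - observed).sum(axis=1) / observed.sum(axis=1)
    return per_step.mean()

# Two time steps, two camps (hypothetical counts):
sim = [[100, 50], [120, 80]]
obs = [[90, 60], [130, 70]]
print(averaged_relative_difference(sim, obs))  # 0.1166...
```

Running the metric once against total-count data and once against per-camp registration data gives the two validation scores the paper compares.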

10:40
Responding to Current and Future Pandemics using Agent-Based Simulation

ABSTRACT. The world is currently going through one of the worst pandemics in history, with over 420 million cases and five million deaths in a span of about two years. In addition to the loss of lives, COVID-19 has also impacted the economy, mental health and education of people worldwide. Hence, it is essential to mitigate the damage of the ongoing pandemic as well as to be better prepared for any future pandemics. One of the interesting aspects of the response to COVID-19 is its local nature. National and local governments have implemented various measures at various points in time according to the severity of the disease in that location, economic constraints and input from experts. Moreover, similar sets of measures can vary in their ability to contain the spread of infection in different locations. Arguably, this is due to the differences among regions in terms of demographics, the economy, the density and behaviour of the population, as well as the degree of compliance with public health preventive measures. Therefore, it is essential to evaluate the efficiency of these measures for a given region. To that end, we present the Flu and Coronavirus Simulator (FACS), a highly adaptable Agent-Based Simulation (ABS) software that uses geospatial, demographic and disease-specific data to model the temporal and spatial progression of COVID-19 and other similar airborne diseases in a specific geographical location. It identifies the amenities and houses in the location and creates a local spatial network of agents and amenities. It then simulates the movement of the population across the amenities according to their age and needs, which results in the spread of infection. FACS has the capability to simulate the effects of various preventive measures, such as lockdowns and vaccination strategies, as well as the impact of the emergence of new variants of the virus and of the distribution of amenities in the region.
FACS can be configured using human-readable files, which makes it flexible and easily adaptable to changes in the characteristics of the pandemic. We have also developed a Graphical User Interface (GUI) to facilitate its use by users with limited simulation expertise. FACS is currently being used in the STAMINA project, an EU-funded project that develops a suite of solutions for Pandemic Crisis Management. The STAMINA Consortium consists of a wide range of technology experts, first responders and policy makers from across Europe and neighbouring countries. FACS is being demonstrated in a variety of pandemic crisis trial scenarios in Lithuania, Romania, Turkey and the United Kingdom. To show the flexibility and diverse utility of the model, we present initial simulation results for two exemplar cases. We show how FACS can be used to analyse various "what if" scenarios and evaluate the efficacy of various lockdown measures in two diverse geographical areas.

11:00
CHARM: A Discrete-Event Simulation for Dynamic ICU Bed Capacity Management for Covid-19

ABSTRACT. Hospitals across the globe face capacity challenges due to the coronavirus pandemic. Large numbers of Covid-19 patients require admission to an Intensive Care Unit (ICU), very often for a long period of time. Another big challenge is the highly infectious nature of the virus. Hospitals make huge ward rearrangements in order to prevent nosocomial infections as well as to create surge bed capacity for anticipated Covid-19 admissions. During the first wave, hospitals cancelled all elective surgeries and were able to deal with only a small number of emergency incidents. When the first crisis started to ease, and by the time it was apparent that there would be future waves, hospitals' main question was how to manage future Covid-19 outbreaks while continuing normal operation. Arguably, cancellation of all scheduled surgeries is not a viable strategy. The backlog of elective operations has a massive impact on the care that healthcare systems can provide to the population. Consequently, this has a negative impact on quality of life and the economy. In an attempt to support hospitals in planning their ICU bed capacity, we developed the dynamiC Hospital wARd Management (CHARM) model. CHARM is a Discrete-Event Simulation (DES) of the ICU admission process that allows for dynamic reconfiguration of hospital wards. It considers three types of admissions: emergency, elective and Covid-19 arrivals. A routing logic allocates the patients to the respective wards. Covid-19 capacity can be pooled from elective and emergency capacity when there is a surge of Covid-19 admissions. The resources are reverted to their original configuration when the surge eases. CHARM is built in Python using SimPy (https://gitlab.com/team-simpy/simpy). It is used in the STAMINA project, an EU-funded project that develops a suite of solutions for Pandemic Crisis Management. The STAMINA Consortium consists of a wide range of technology experts, first responders and policy-makers from across Europe and neighbouring countries. CHARM is being demonstrated in a variety of pandemic crisis trial scenarios in Lithuania, Romania, Turkey and Tunisia.
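A minimal discrete-event sketch of the bed-capacity logic, using only the standard library rather than SimPy and omitting CHARM's routing between admission types and ward reconfiguration:

```python
import heapq

def simulate_icu(arrivals, length_of_stay, beds):
    """Minimal DES of ICU bed occupancy: patients arrive at given
    times, occupy a bed for a fixed stay, and are turned away if all
    beds are full at the moment of arrival."""
    discharges = []  # min-heap of discharge times for occupied beds
    admitted, rejected = 0, 0
    for t in arrivals:
        while discharges and discharges[0] <= t:
            heapq.heappop(discharges)  # free beds whose patients left
        if len(discharges) < beds:
            heapq.heappush(discharges, t + length_of_stay)
            admitted += 1
        else:
            rejected += 1
    return admitted, rejected

# 6 patients arriving hourly, 10-hour stays, 3 beds:
print(simulate_icu(arrivals=range(6), length_of_stay=10, beds=3))  # (3, 3)
```

CHARM layers stochastic arrivals, lengths of stay, and dynamic pooling of elective/emergency capacity on top of this basic admit-or-reject mechanism.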

11:20
Partitioning Dense Graphs with Hardware Accelerators

ABSTRACT. Graph partitioning is a fundamental combinatorial optimization problem that attracts a lot of attention from theoreticians and practitioners due to its broad applications. From multilevel graph partitioning to more general-purpose optimization solvers such as Gurobi and CPLEX, a wide range of approaches have been developed. The limitations of these approaches are important to study in order to break the computational optimization barriers of this problem. As we approach the limits of Moore's law, there is now a need to explore ways of solving such problems with special-purpose hardware such as quantum computers or quantum-inspired accelerators. In this work, we experiment with solving the graph partitioning problem on the Fujitsu Digital Annealer (special-purpose hardware designed for solving combinatorial optimization problems) and compare it with existing top solvers. We demonstrate the limitations of existing solvers on many dense graphs, as well as those of the Digital Annealer on sparse graphs, which opens an avenue to hybridize these approaches.
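Annealers such as the Digital Annealer take problems in QUBO form. A balanced two-way partition can be encoded as below; this is an illustrative encoding with a brute-force solve on a toy graph, not the paper's experimental setup:

```python
import itertools
import numpy as np

def partition_qubo(adj, penalty):
    """QUBO for balanced 2-partition: binary x_i is node i's side.
    Each edge (i, j) contributes x_i + x_j - 2 x_i x_j (1 iff cut);
    the balance term penalty * (sum_i x_i - n/2)^2 is expanded into
    linear and quadratic coefficients using x_i^2 = x_i."""
    n = len(adj)
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i][j]:
                Q[i, i] += 1; Q[j, j] += 1; Q[i, j] -= 2
    for i in range(n):
        Q[i, i] += penalty * (1 - n)
        for j in range(i + 1, n):
            Q[i, j] += 2 * penalty
    return Q  # constant penalty * (n/2)^2 omitted

def brute_force(Q):
    """Exact minimizer of x^T Q x over {0,1}^n (toy sizes only)."""
    n = len(Q)
    return min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))

# Two triangles joined by a single edge; the optimum cuts only that edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = np.zeros((6, 6), int)
for i, j in edges:
    adj[i][j] = adj[j][i] = 1
Q = partition_qubo(adj, penalty=2.0)
print(brute_force(Q))  # (0, 0, 0, 1, 1, 1)
```

On real hardware the same Q matrix would be handed to the annealer instead of the brute-force loop.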

11:40
Calibration window selection based on change-point detection for forecasting electricity prices

ABSTRACT. We employ a recently proposed change-point detection algorithm, the Narrowest-Over-Threshold (NOT) method, to select subperiods of past observations that are similar to the currently recorded values. Then, contrary to the traditional time series approach, in which the most recent τ observations are taken as the calibration sample, we estimate autoregressive models only for data in these subperiods. We illustrate our approach using a challenging dataset - day-ahead electricity prices in the German EPEX SPOT market - and observe a significant improvement in forecasting accuracy compared to commonly used approaches, including the Autoregressive Hybrid Nearest Neighbors (ARHNN) method.
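The overall pattern, detecting a change-point and calibrating an autoregressive model only on the most recent regime, can be sketched as follows. NOT itself handles multiple change-points and richer signal features than this single mean-shift version:

```python
import numpy as np

def single_changepoint(y):
    """Least-squares single change-point in the mean: choose the split
    minimizing total within-segment squared error."""
    n = len(y)
    def sse(seg):
        return ((seg - seg.mean()) ** 2).sum()
    return min(range(2, n - 1), key=lambda k: sse(y[:k]) + sse(y[k:]))

def ar1_forecast(y):
    """Fit y_t = a + b * y_{t-1} by OLS and forecast one step ahead."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a + b * y[-1]

rng = np.random.default_rng(3)
# Synthetic prices with a regime shift at t = 120:
y = np.concatenate([30 + rng.normal(0, 1, 120), 60 + rng.normal(0, 1, 80)])
k = single_changepoint(y)       # detected break near 120
forecast = ar1_forecast(y[k:])  # calibrate only on the recent regime
```

Calibrating on the full sample would pull the forecast toward the stale pre-break level; restricting to the post-break subperiod keeps it near the current regime.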

10:20-12:00 Session 9E: CompHealth 2
Location: Cavendish
10:20
Neural Additive Models for Explainable Heart Attack Prediction

ABSTRACT. A heart attack (HA) is a sudden health event in which the flow of blood to the heart is blocked, causing damage to the heart. According to the World Health Organization (WHO), heart attack is one of the greatest causes of death and disability globally. Early recognition of the various warning signs of an HA can help reduce its severity. Different machine learning (ML) models have been developed to predict heart attacks. However, patients with arterial hypertension (AH) are especially prone to this disorder and have a number of features that distinguish them from other groups of patients. We use these features to develop a dedicated model for people suffering from AH. Moreover, we contribute to this field by bringing more transparency to the modelling using interpretable machine learning. We also compare the patterns learned by the methods with prior information used in heart attack scales and evaluate their efficiency.

10:40
Explainable AI with Domain Adapted FastCAM for Endoscopy Images

ABSTRACT. The enormous potential of artificial intelligence can be seen today in numerous products and services, especially in healthcare and medical technology. One of the most important success factors in the market is the explainability of the decisions made by AI applications. Explainability strengthens the acceptance of the technology among users. Above all, however, it is also a central prerequisite for licensing and certification procedures around the world and for the fulfilment of transparency obligations. SHAP, Integrated Gradients, LRP, DeepLift, GradCAM and FastCAM are explainability tools that increase the comprehensibility of object recognition in images using Convolutional Neural Networks, but they lack precision.

This paper uses FastCAM as an XAI tool and optimizes the FastCAM method for a specific domain: the detection of medical instruments in endoscopy images. The results show that Domain Adapted FastCAM (DA-FastCAM) provides better results for the focus of the model than standard FastCAM and therefore offers the possibility to assess CNNs through plausibility checks on clinical endoscopy images.

11:00
Hybrid Modeling for Predicting Inpatient Treatment Outcome: COVID-19 Case

ABSTRACT. This study presents two methods for supporting the treatment process of COVID-19 inpatients. The first method predicts treatment outcomes for COVID-19 patients. It is based on machine learning models, probabilistic graph models, and patient clusterization. The method shows high quality in terms of predictive metrics, and the structure of the model is confirmed by prior medical knowledge and other research. This method serves as the base for the second method, a practical tool for searching for an optimal plan of interventions for severe patients. This plan is a set of interventions for a particular patient that is optimal in terms of minimizing the probability of a lethal outcome. We have validated this method in a virtual experiment (item 4.5): for 30 percent of the patients with observed lethal outcomes, the method found an intervention plan that, according to the prediction estimate, leads to recovery as the treatment outcome. Both methods show high quality, and after validation by physicians they can be used as part of Decision Support Systems for medical specialists who work with COVID-19 patients.

11:20
Knowledge Discovery in Databases: Comorbidities in Tuberculosis Cases

ABSTRACT. Unlike the primary condition under investigation, the term comorbidity defines coexisting medical conditions that influence patient care during detection, therapy, and outcome. Tuberculosis continues to be one of the 10 leading causes of death globally. The aim of the study is to explore classic data mining techniques to find relationships between the outcome of TB cases (cure or death) and the comorbidities presented by the patient. The data are provided by TBWEB and represent TB cases in the state of São Paulo, Brazil, from 2006 to 2016. Feature selection techniques and classification models were explored. As the results show, high relevance was found for AIDS and alcoholism as comorbidities in the outcome of TB cases. Although classifier performance did not show a statistically significant difference, there was a great reduction in the number of attributes and in the number of rules generated, further highlighting the relevance of the attributes age group, AIDS, and other immunology in the classification of the outcome of TB cases. The explored techniques proved promising for uncovering unclear relationships in the TB context, providing, on average, 73% accuracy in predicting the outcome of the cases according to the characteristics analyzed.

10:20-12:00 Session 9F: MMS 1
10:20
An agent-based forced displacement simulation: A case study of the Tigray crisis

ABSTRACT. Agent-based models (ABMs) simulate individual, micro-level decision making to predict macro-scale emergent behaviour patterns. In this paper, we use an ABM for forced displacement to predict the distribution of refugees fleeing from northern Ethiopia to Sudan. Since Ethiopia has more than 950,000 internally displaced persons (IDPs) and is home to 96,000 Eritrean refugees in four camps situated in the Tigray region, we model refugees, IDPs and Eritrean refugees. This is the first time we attempt such an integration, but we believe it is important because IDPs and Eritrean refugees could become refugees fleeing to Sudan. To provide more accurate predictions, we review and revise the key assumptions in the Flee simulation code that underpin the model, and draw on new information from data collection activities. Our initial simulation predicts more than 75\% of the movements of forced migrants correctly in absolute terms, with an average relative error of ~0.45. Finally, we aim to forecast movement patterns, destination preferences among displaced populations, and emerging trends for destinations in Sudan.

10:40
Automating and Scaling Task-Level Parallelism of Tightly Coupled Models via Code Generation

11:00
Multipoint meshless FD schemes applied to nonlinear and multiscale analysis

ABSTRACT. The paper presents computational schemes of the multipoint meshless method – a numerical modelling tool that allows accurate and effective solution of boundary value problems. The main advantage of the multipoint general version is its generality – the basic relations of the derivatives of the unknown function depend only on the domain discretization and are independent of the type of problem being solved. This feature allows the multipoint computational strategy to be divided into two stages and is advantageous from the point of view of calculation efficiency. The multipoint method algorithms applied to such engineering problems as numerical homogenization of heterogeneous materials and nonlinear analysis are developed and briefly presented. The paper is illustrated by several examples of the multipoint numerical analysis.

11:20
Neutron scattering calculation for coarse grain models

ABSTRACT. The combined use of X-ray and neutron scattering experiments with molecular simulation is increasingly being utilised to study multiscale structures in molecular biology and soft matter physics [1-4]. Despite the progress in methods and force fields for all-atom models, sufficient sampling is computationally expensive for micellar systems such as CTAB surfactants, due to the large time and length scales involved in the aggregation dynamics [3]. Furthermore, Small-Angle Neutron Scattering (SANS) can provide data at length scales of hundreds of nanometres, an order of magnitude larger than typical atomistic simulations. For such systems, coarse grain (CG) models are often utilised to reduce computational cost and to explore global structures at larger length scales. Following the work by Soper and Edler [4], the differential cross-section for CG systems, FCG(Q), has been calculated for CG simulations and compared to the curves F(Q) obtained from atomistic ones. To compare and validate the method, a CG trajectory is generated by replacing groups of atoms in the atomistic trajectory with CG beads (see the right panel of figure 1), thereby ensuring that differences in the calculated scattering come only from the loss of atomistic resolution, not from differences in structure. Variables in the scattering calculation include the number of atoms per bead, the bead radius, and the scattering form factor for the bead. Benchmark calculations are performed on fully atomistic polyamide-66 and C10TAB surfactant in water (example cases are shown in the left panel of figure 1). We obtain the scattering curves for CG trajectories with different bead sizes and form factors, which are compared to the atomistic benchmark calculations. Our results in general show excellent agreement between F(Q) and FCG(Q) in the low-Q region (Q<1.0), with increasing deviation at higher Q.
We have developed and assessed the efficiency of this new computational tool for the calculation of neutron scattering curves from CG simulations allowing direct comparison of the CG simulations to a rigorous benchmark of structural experimental data. Our calculation will help in assessing the molecular force fields in CG simulation by comparison to the experimental scattering data. The resulting scientific output will be used for the improved analysis of the experimental scattering data for large and complex systems beyond the length and time scales available to atomistic simulation.

References: 1. Max C. Watson and Joseph E. Curtis, J. Appl. Cryst. (2013). 46, 1171–1177 2. David W. Wright and Stephen J. Perkins, J. Appl. Cryst. (2015). 48, 953–961 3. Daniel T. Bowron and Karen J. Edler, Langmuir 2017, 33, 262−271 4. Alan K. Soper, Karen J. Edler, Biochim. Biophys. Acta 2017, 1861, 1652
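The comparison of F(Q) and FCG(Q) rests on computing an isotropic scattering curve from particle coordinates and scattering lengths. A minimal sketch of such a calculation via the standard Debye formula (the paper's per-bead form factors are not reproduced here; uniform point scatterers are an assumption of this illustration) could look like:

```python
import numpy as np

def debye_scattering(positions, b, q_values):
    """Isotropic scattering via the Debye formula:
    I(Q) = sum_ij b_i * b_j * sin(Q r_ij) / (Q r_ij).
    positions: (N, 3) coordinates of atoms or CG beads; b: (N,) scattering lengths."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))      # pairwise distances
    bb = b[:, None] * b[None, :]               # products of scattering lengths
    intensity = np.empty_like(q_values, dtype=float)
    for k, q in enumerate(q_values):
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r);
        # it also handles the r_ij = 0 diagonal correctly (limit = 1).
        intensity[k] = (bb * np.sinc(q * r / np.pi)).sum()
    return intensity

# Two unit scatterers a distance 1 apart: I(Q) = 2 + 2 sin(Q)/Q
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
I = debye_scattering(pos, np.ones(2), np.array([np.pi]))
```

At Q = π the cross term sin(π)/π vanishes, leaving only the two self terms, so I ≈ 2; swapping atomistic coordinates for bead centres is what changes FCG(Q) relative to F(Q).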

10:20-12:00 Session 9G: QCW 1
Location: Telford
10:20
Quantum-Classical Solution Methods for Binary Compressive Sensing Problems

ABSTRACT. Compressive sensing is a signal processing technique used to acquire and reconstruct sparse signals using significantly fewer measurement samples. Compressive sensing requires finding the sparsest solution to an underdetermined linear system, which is an NP-hard problem and as a consequence is, in practice, only solved approximately. In our work we restrict ourselves to the compressive sensing problem for the case of binary signals. For that case we have defined an equivalent formulation in terms of a quadratic unconstrained binary optimisation (QUBO) problem, which we solve using classical and hybrid quantum-classical techniques based on quantum annealing. Phase transition diagrams show that this approach significantly increases the number of problem types that can be successfully reconstructed compared to a more conventional L1 optimisation method. A challenge that remains is how to select optimal penalty parameters in the QUBO formulation, which, as shown, can heavily impact the quality of the solution.
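The reduction described above can be illustrated with a standard penalty construction (the paper's exact formulation is not given here, so this is an assumed textbook variant): minimise ||x||_0 + λ||Ax − y||² over x ∈ {0,1}^n, folding linear terms onto the QUBO diagonal since x_i² = x_i for binary x. A brute-force solver stands in for the annealer on a toy instance:

```python
import itertools
import numpy as np

def binary_cs_qubo(A, y, lam):
    """QUBO matrix Q with x^T Q x = ||x||_0 + lam * ||A x - y||^2
    (up to a constant), valid for binary x because x_i^2 = x_i."""
    Q = lam * (A.T @ A)
    # Linear part: 1 (sparsity) - 2 * lam * (A^T y)_i, folded onto the diagonal
    Q[np.diag_indices_from(Q)] += 1.0 - 2.0 * lam * (A.T @ y)
    return Q

def solve_qubo_brute(Q):
    """Exhaustive minimiser; a stand-in for quantum or simulated annealing."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])
x_true = np.array([1.0, 0.0, 0.0, 0.0])
y = A @ x_true
x_hat = solve_qubo_brute(binary_cs_qubo(A, y, lam=10.0))  # recovers x_true
```

As the abstract notes, the penalty weight λ matters: too small and the data-fit term no longer dominates the sparsity term, so infeasible sparse vectors can win.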

10:40
Reducing Memory Requirements of Quantum Optimal Control

ABSTRACT. Quantum optimal control problems are commonly solved with the GRAPE algorithm, which suffers from exponential growth in storage with an increasing number of qubits and linear growth in memory requirements with an increasing number of timesteps. These memory requirements are a barrier to simulating larger models or longer time spans. We have created a non-standard automatic differentiation technique that computes the gradients needed by GRAPE by exploiting the fact that the inverse of a unitary matrix is its conjugate transpose. Our approach significantly reduces the memory requirements of GRAPE, at the cost of a reasonable amount of recomputation. We present our implementation in JAX, as well as benchmark results.
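The memory trick hinges on a single linear-algebra fact: for a unitary timestep propagator, U⁻¹ = U†, so intermediate states can be regenerated during the backward pass instead of stored. A minimal numpy sketch of that idea (an illustration, not the authors' JAX implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary Q
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

steps = [random_unitary(4) for _ in range(8)]  # per-timestep propagators
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0

# Forward pass: keep only the final state -- O(1) state storage
psi = psi0
for U in steps:
    psi = U @ psi

# Backward pass: rebuild each earlier state on the fly via U^{-1} = U^dagger,
# exactly when a gradient computation would need it
for U in reversed(steps):
    psi = U.conj().T @ psi

# After undoing every step we are back at the initial state
assert np.allclose(psi, psi0)
```

This trades the O(timesteps) buffer of stored states for one extra matrix-vector product per step in the backward sweep, which is the recomputation cost the abstract refers to.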

11:00
Quantum approaches for WCET-related optimization problems

ABSTRACT. This paper explores the potential of quantum computing for a WCET\footnote{Worst-Case Execution Time (of a program).}-related combinatorial optimization problem on a set of specific polynomially solvable cases. We consider the maximization problem of determining the most expensive path in a control flow graph, in which vertices represent blocks of code whose execution times are fixed and known in advance. We port the optimization problem to the quantum framework by expressing it as a QUBO. We then experimentally compare the performance of (classic) Simulated Annealing (SA), Quantum Annealing (QA), and QAOA in solving the problem.

11:20
Benchmarking D-Wave Quantum Annealers: spectral gap scaling of maximum cardinality matching problems

ABSTRACT. Quantum computing, in particular Quantum Annealing (QA), provides a theoretically promising alternative to classical methods for solving combinatorially difficult optimization problems. QA is especially suitable for problems that can be formulated as a Quadratic Unconstrained Binary Optimization (QUBO) problem, such as SAT, graph colouring and the travelling salesman problem. With commercially available QA hardware, like that offered by D-Wave Systems (D-Wave), reaching scales capable of tackling real-world problems, it is timely to assess and benchmark the performance of this current generation of hardware. This paper empirically investigates the performance of D-Wave's 2000Q (2048 qubits) and Advantage (5640 qubits) quantum annealers in solving a specific instance of the maximum cardinality matching problem, building on the results of a prior paper that investigated the performance of earlier QA hardware from D-Wave. We find that the Advantage quantum annealer is able to produce optimal solutions to larger problem instances than the 2000Q. We further consider the problem's structure and its implications for suitability to QA by utilizing the Landau-Zener formula to explore the potential scaling of the diabatic transition probability. We propose a method to investigate the behaviour of minimum energy gaps for scalable problems deployed to quantum annealers. We find that the minimum energy gap for our target QA problem does not scale favourably. This behaviour raises questions as to the suitability of this problem for benchmarking QA hardware, as it potentially lacks the nuance required to identify meaningful performance improvements between generations.

11:40
Efficient constructions for simulating Multi Controlled Quantum Gates

ABSTRACT. Multi-controlled gates, with the multi-controlled Toffoli as the primary example, are a building block for many complex quantum algorithms in the domains of discrete arithmetic, cryptography, machine learning, and image processing. However, these gates cannot be physically implemented in quantum hardware and therefore need to be decomposed into many smaller elementary gates. In this work we analyse previously proposed circuit constructions for MCT gates and describe 6 new methods for generating MCT circuits with efficient costs, fewer restrictions, and improved applicability.

12:00-12:30 Session 10: Poster Session

The paper lineup is the same for all three Poster Sessions.

Location: Newton South
12:30-13:30Lunch
13:30-14:20 Session 11: Keynote Lecture 4
13:30
Quantum Simulations of Materials and Molecules on Hybrid Quantum-Classical Architectures

ABSTRACT. We present theoretical and computational strategies based on quantum mechanical calculations, aimed at predicting the properties of materials and molecules with desired characteristics for quantum technologies and energy applications. We discuss computational challenges related to the development and use of interoperable codes to compute multiple properties of complex systems.

14:30-16:10 Session 12A: MT 9
14:30
Efficient computational algorithm for stress analysis in hydro-sediment-morphodynamic models

ABSTRACT. Understanding complex stress distributions in lake beds and river embankments is crucial for many designs in civil and geotechnical engineering. We propose an accurate and efficient computational algorithm for stress analysis in hydro-sediment-morphodynamic models. The governing equations consist of the linear elasticity equations in the bed topography coupled to the shallow water hydro-sediment-morphodynamic equations. Transfer conditions at the bed interface between the water surface and the bedload are also developed using frictional forces and hydrostatic pressures. A hybrid finite volume/finite element method is implemented for the numerical solution of the proposed model. A well-balanced discretization of the gradient fluxes and source terms is formulated for the finite volume method, and the treatment of dry areas in the model is discussed in the present study. The finite element method uses quadratic elements on unstructured meshes, and interfacial forces are sampled at the nodes common to the finite volume and finite element grids. Numerical results are presented for a dam-break problem in hydro-sediment-morphodynamic models, and the computed solutions demonstrate the ability of the proposed model to accurately capture the stress distributions for erosional and depositional deformations. In addition, the coupled model is accurate, very efficient, well-balanced, and able to handle complex geometries.

14:50
Peridynamic Damage Model Based on Absolute Bond Elongation

ABSTRACT. A bond-based peridynamic damage model is proposed to incorporate the deformation and damage processes into a unified framework. The new model is based on absolute bond elongation, and both the elastic and damage parameters of the material are embedded in the constitutive relationship, which makes the model better characterize the process of material damage. Finally, different phenomena for various damage patterns are observed in numerical experiments; these rich damage patterns make the model well suited for damage simulation.

15:10
Incremental dynamic analysis and fragility assessment of buildings with different structural arrangements experiencing earthquake-induced structural pounding

ABSTRACT. Structural pounding is considered one of the most critical phenomena occurring during earthquakes. This paper presents the incremental dynamic analysis and fragility assessment of buildings experiencing earthquake-induced pounding. Three 3-D buildings with different numbers of storeys and different structural arrangements have been considered. Three pounding scenarios have been taken into account, i.e. pounding between the 5-storey and 7-storey buildings, between the 5-storey and 9-storey buildings, and between the 7-storey and 9-storey buildings. The incremental dynamic analysis and fragility assessment have been performed for these three buildings in the three pounding scenarios as well as for the no-pounding case. The results of both analyses illustrate that pounding can be beneficial or destructive, depending on the structural response and the ground motion shift over time. No clear relation has been observed because pounding is a highly complicated phenomenon.

15:30
Investigating an optimal computational strategy to retrofit buildings with implementing viscous dampers

ABSTRACT. Civil engineering structures may seriously suffer from different damage states as a result of earthquake loading. Nowadays, retrofitting existing buildings is a serious need among designers. Two important factors, the required performance level and the cost of retrofitting, play a crucial role in the retrofitting approach. In this study, a new optimal computational strategy to retrofit structures by implementing linear Viscous Dampers (VDs) is investigated to achieve a higher performance level with lower implementation cost. Regarding this goal, a Tcl programming code was developed with the capability of considering a structure damaged by earthquake-induced structural pounding. The code allows us to improve structural models to take into account the real condition of buildings using both MATLAB and Opensees software simultaneously. To present the capability of this strategy, 3- and 6-story colliding Steel Moment-Resisting Frames (SMRFs) were selected. Incremental Dynamic Analysis (IDA) was performed based on the interstory drift ratio of floor levels as the engineering demand parameter, and Sa(T1) as the intensity measure. Interstory median IDAs of the floor levels of the colliding SMRFs were plotted to find the floor level prone to damage and to retrofit only this floor level instead of all stories. The results show that implementing only two linear VDs with a cost of two units can achieve a higher Life Safety (LS) performance level in the case of the 3- and 6-story SMRFs. Moreover, the proposed computational strategy can be used for any structure (with and without pounding conditions) and at all performance levels prescribed in the FEMA 356 code.

15:50
Phase-field modelling of brittle fracture using time-series forecasting

ABSTRACT. Crack propagation behavior can be considered a time-series forecasting problem and can be observed through the changes of the phase-field variable. In this work, we study the behavior of the isotropic Brittle Fracture Model (BFM) and propose a hybrid computational technique that involves a time-series forecasting method to find results faster when solving variational equations with a fine-grained mesh. Importantly, we use this case study to compare and contrast two different time-series forecasting approaches: a statistical method, namely ARIMA, and a neural network learning-based method, namely LSTM. The study shows that both methods come with different strengths and limitations. However, the ARIMA method stands out due to its robustness and flexibility, especially when training data is limited, because it can exploit a priori knowledge.
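The statistical branch of such a comparison can be made concrete with the autoregressive core of ARIMA: an AR(p) model fitted by least squares and iterated forward to forecast. This is an illustrative numpy-only sketch, not the authors' pipeline, and the AR(1) example series is invented:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of x_t = c + a_1 x_{t-1} + ... + a_p x_{t-p}."""
    n = len(series)
    X = np.ones((n - p, p + 1))
    for k in range(1, p + 1):
        X[:, k] = series[p - k:n - k]          # lag-k predictor column
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef                                 # [c, a_1, ..., a_p]

def forecast(series, coef, steps):
    """Iterate the fitted recurrence to produce future values."""
    p = len(coef) - 1
    hist = list(series)
    for _ in range(steps):
        hist.append(coef[0] + sum(coef[k] * hist[-k] for k in range(1, p + 1)))
    return hist[len(series):]

# Deterministic AR(1) process x_t = 1 + 0.5 * x_{t-1}: the fit is exact
x = [0.0]
for _ in range(9):
    x.append(1.0 + 0.5 * x[-1])
coef = fit_ar(np.array(x), p=1)   # approximately [1.0, 0.5]
pred = forecast(x, coef, steps=1)
```

An LSTM would instead learn the same mapping from lagged windows to the next value, but needs far more data; with a handful of samples, the parametric AR fit above is already exact.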

14:30-16:10 Session 12B: MT 10
Location: Newton South
14:30
Is Context All You Need? Non-Contextual vs Contextual Multiword Expressions Detection

ABSTRACT. Effective methods for the detection of multiword expressions are important for many technologies related to Natural Language Processing. Most contemporary methods are based on the sequence labeling scheme, while traditional methods use statistical measures. In our approach, we integrate the concepts of these two approaches. In this paper, we present a novel weakly supervised multiword expression extraction method which focuses on their behaviour in various contexts. Our method uses a lexicon of Polish multiword units as the reference knowledge base and leverages neural language modelling with deep learning architectures. In our approach, we do not need a corpus annotated specifically for the task; the only required components are a lexicon of multiword units, a large corpus, and a general contextual embeddings model. Compared to the method based on non-contextual embeddings, we obtain gains of 15 percentage points in the macro F1-score for both classes and 30 percentage points in the F1-score for incorrect multiword expressions. The proposed method can be quite easily applied to other languages.

14:50
Compiling Linear Algebra Expressions into Efficient Code

ABSTRACT. In textbooks, linear algebra expressions often use indices to specify the elements of variables. Such index-form expressions cannot be directly translated into efficient code, since optimized linear algebra libraries and frameworks require expressions in index-free form. To address this problem, we developed Lina, a tool that automatically converts textbook-like linear algebra expressions into index-free ones, which we map efficiently to NumPy and Eigen code.
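As an illustration of the gap such a tool bridges (this is not Lina's actual output), the textbook index form C_ij = Σ_k A_ik B_kj can be transcribed literally with an einsum specification, while the index-free form that optimized libraries expect is a plain matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
x = rng.normal(size=4)

# Index form, transcribed directly: C_ij = sum_k A_ik * B_kj
C_index = np.einsum('ik,kj->ij', A, B)
# Index-free form, what a BLAS-backed library wants: one matmul call
C_free = A @ B

# Same for a matrix-vector product y_i = sum_j A_ij * x_j
y_index = np.einsum('ij,j->i', A, x)
y_free = A @ x

assert np.allclose(C_index, C_free) and np.allclose(y_index, y_free)
```

The conversion is what enables the performance win: `A @ B` dispatches to an optimized GEMM kernel, whereas a naive triple loop over the indices would not.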

15:10
MHGEE: Event Extraction via Multi-Granularity Heterogeneous Graph

ABSTRACT. Event extraction is a key task of information extraction. Existing methods are not effective due to two challenges of this task: 1) most previous methods consider only single-granularity information, which is often insufficient to distinguish the ambiguity of triggers for some event types; 2) the correlation among intra-sentence and inter-sentence events is non-trivial to model. Previous methods are weak in modeling the interdependency among correlated events and have never modeled this problem for the whole event extraction task. In this paper, we propose a novel Multi-granularity Heterogeneous Graph-based event extraction model (MHGEE) to solve the two problems simultaneously. For the first challenge, MHGEE constructs multi-granularity nodes, including word, entity and context nodes, and captures interactions among nodes with R-GCN. This strengthens semantics and helps distinguish the ambiguity of triggers. For the second, MHGEE uses a heterogeneous graph neural network to aggregate the information of relevant events and hence capture the interdependency among events. The experimental results on the ACE 2005 dataset demonstrate that our proposed MHGEE model achieves competitive results compared with state-of-the-art methods in event extraction. We then demonstrate its effectiveness on trigger ambiguity and event interdependency with experiments.

15:30
Cyberbullying Detection with Side Information: A Real-World Application of COVID-19 News Comment in Chinese Language

ABSTRACT. Cyberbullying is aggressive and intentional behavior committed by groups or individuals, mainly manifested as offensive or hurtful comments on social media. Existing research on cyberbullying detection underuses natural language processing technology and is limited to extracting features of the comment content. Meanwhile, existing datasets for cyberbullying detection are non-standard and unbalanced, and their content is relatively outdated. In this paper, we propose a novel Hybrid deep Model based on Multi-feature Fusion (HMMF), which models the content of news comments and the side information related to net users and comments simultaneously, to improve the performance of cyberbullying detection. In addition, we present JRTT: a new, publicly available benchmark dataset for cyberbullying detection. All the data are collected from social media platforms and contain Chinese comments on COVID-19 news. To evaluate the effectiveness of HMMF, we conduct extensive experiments on the JRTT dataset with six existing pre-trained language models. Experimental results and analyses show that HMMF achieves state-of-the-art performance on cyberbullying detection. To facilitate research in this direction, we release the dataset and the project code at https://github.com/xingjian215/HMMF.

15:50
Deep Neural Sequence to Sequence Lexical Substitution for Polish Language

ABSTRACT. This paper investigates the applicability of language models to the problem of lexical substitution in a low-resource language setting. For this purpose, we focus on pre-trained models based on transformer architectures, in particular BERT and BART. We present a solution in the form of a BART-based sequence-to-sequence model. The paper illustrates the use of semi-supervised approaches in the form of generated datasets to improve particular aspects of lexical substitution. Moreover, we explore the possibility of ranking the substitution model's proposals and adapt a linguistic error dataset for evaluating the quality of lexical substitution. We focus on Polish as an example of a strongly inflected language.

14:30-16:10 Session 12C: BBC 1
Location: Darwin
14:30
Continuous-to-continuous data model vs. discrete-to-discrete data model for the statistical iterative reconstruction method - optimization of the computational complexity

ABSTRACT. The article presents a comparison of two statistical approaches to the problem of image reconstruction from projections: the widely known concept based on a discrete-to-discrete data model and our original idea based on a continuous-to-continuous data model. Both reconstruction approaches are formulated taking into account the statistical properties of signals obtained by CT scanners. The main goal of this strategy is to significantly improve the quality of the reconstructed images, thus allowing a reduction in the x-ray dose absorbed by a patient during CT examinations. In our concept, the reconstruction problem is formulated as a shift-invariant system. Consequently, this significantly improves the quality of the subsequently reconstructed images and reduces the computational complexity compared to the reference method. Our experiments have shown that the original reconstruction method outperforms the referential approach regarding both the obtained image quality and the necessary calculation time.

14:50
Modeling contrast perfusion and adsorption phenomena in the human left ventricle

ABSTRACT. This work presents a mathematical model to describe perfusion dynamics in cardiac tissue. The new model extends a previous one and can reproduce clinical exams of contrast-enhanced cardiac magnetic resonance imaging (MRI) of the left ventricle obtained from patients with cardiovascular diseases, such as myocardial infarct. The model treats the extra- and intravascular domains as different porous media where Darcy's law is adopted. Reaction-diffusion-advection equations are used to capture the dynamics of contrast agents that are typically used in MRI perfusion exams. The identification of the myocardial infarct region is modeled via adsorption of the contrast agent on the extracellular matrix. Different scenarios were simulated and compared with clinical images: normal perfusion, endocardial ischemia due to stenosis, and myocardial infarct. Altogether, the results obtained suggest that the models can support the process of non-invasive cardiac perfusion quantification.

15:10
Large Scale Study of Absolute Ligand-Protein Binding Free Energy Predictions

ABSTRACT. Introduction. The accurate and reliable prediction of protein-ligand binding affinities plays an important role in the drug discovery process. Relative binding free energy (RBFE) calculations have started to be used seriously in pharmaceutical companies in the last few years, thanks to a powerful combination of software and hardware, including the use of graphical processing units. A substantial limitation of RBFE is that it requires the compounds to be structurally and chemically similar to each other. To overcome this limitation, other approaches have been proposed, including the absolute binding free energy (ABFE) method. In the alchemical ABFE approach, the two physical states – one with a ligand bound in the binding site of the protein and one without – are linked by intermediate states along an alchemical path. Because of the larger differences between the two endpoints, ABFE faces even more demanding convergence challenges than RBFE does.

We have shown in our previous studies that ensemble approaches are required to get statistically significant results. This need holds regardless of the approach used or the duration of the simulations performed. We have recently performed a detailed and systematic analysis of various factors affecting RBFE predictions, and provided definitive recommendations for the implementation of RBFE calculations. In the current study, we report the findings of ABFE calculations from a large dataset, and provide statistically robust analyses of the accuracy, precision and reproducibility of the ABFE calculations.

Method. We use the thermodynamic integration with enhanced sampling (TIES) method to calculate the absolute binding free energy corresponding to an alchemical transformation. The ABFE calculation is based on a double annihilation method. In this study, a thermodynamic cycle approach is used, which involves five nonphysical processes; the physical bound and unbound states are linked through these nonphysical processes. Ensemble molecular dynamics simulations with different numbers of replicas are used for the different nonphysical processes: some require 10 replicas whereas others require only 5 for the same level of precision. The uncertainties of the ABFE calculations are computed using the standard TIES analysis method. The physical binding free energy of the ligand can then be determined from all of the free energy changes in the nonphysical processes and compared with experimental measurements. It should be noted that both the calculated and experimental binding free energies are associated with uncertainties, which means that caution should be exercised when comparing them.

In the current study, we report the findings from a large dataset comprising 231 ligands binding to a diverse set of 12 protein targets. A 10 ns production run is used for each alchemical process. Simulations for a subset of states are extended to 50 ns to investigate the convergence of the predictions.

Results and Conclusions. The predicted ABFEs are in good agreement with experimental results for the well-studied benchmark molecular systems. In some specific cases, where significant structural differences exist between the apo and holo states, limited sampling does not provide accurate results, albeit with good rankings. In a drug development project, the ranking is more important than the absolute values for the selection of compounds for further investigation.

A critical aspect for the accuracy of ABFE calculations is the level of conformational sampling, which needs to generate sufficiently comprehensive representations of both the bound and unbound states. The approach used here initiates simulations from the bound state, making the sampling of the unbound state less sufficient. We have attempted to overcome this general issue through increased sampling. Extending the simulations increases the conformational sampling at or near the unbound state and improves the ABFE predictions. Including the unbound state explicitly in the calculations could further improve the ABFE predictions.

15:30
Resting-state EEG classification for PNES diagnosis

ABSTRACT. Psychogenic non-epileptic seizures (PNES) represent a neurological disorder often diagnosed and pharmacologically treated as epilepsy. PNES subjects show the same symptoms as epileptic patients but do not have an EEG characterized by ictal patterns during psychogenic seizures. Diagnosis requires video-EEG, but this methodology is very expensive in both time and cost. Our paper aims to define a novel methodology to support clinical PNES diagnosis by analyzing electroencephalographic signals obtained in resting conditions; in this case, it is unnecessary to induce seizures in the subjects. A software pipeline was implemented based on robust feature extraction methods used in quantitative EEG analysis in the clinical setting, integrating them with machine learning classifiers. Unlike other similar works, the methodology was tested on a large dataset consisting of 225 EEGs (75 healthy, 75 PNES and 75 people with epilepsy), showing a classification accuracy greater than 85%.

15:50
POTHER: Patch-Voted Deep Learning-based Chest X-ray Bias Analysis for COVID-19 Detection

ABSTRACT. A critical step in the fight against COVID-19, which continues to have a catastrophic impact on people's lives, is the effective screening of patients presenting in clinics with severe COVID-19 symptoms. Chest radiography is one of the promising screening approaches. Many studies have reported accurately detecting COVID-19 in chest X-rays using deep learning. A serious limitation of many published approaches is insufficient attention paid to explaining the decisions made by deep learning models. Using explainable artificial intelligence methods, we demonstrate that model decisions may rely on confounding factors rather than medical pathology. After an analysis of potential confounding factors found in chest X-ray images, we propose a novel method to minimise their negative impact. We show that our proposed method is more robust than previous attempts to counter confounding factors such as ECG leads in chest X-rays that often influence model classification decisions. In addition to being robust, our method achieves results comparable to the state-of-the-art. The source code and pre-trained weights are publicly available (https://github.com/tomek1911/POTHER).

14:30-16:10 Session 12D: COMS 5
Location: Newton North
14:30
Automatic Generation of Individual Fuzzy Cognitive Maps from Longitudinal Data

ABSTRACT. Fuzzy Cognitive Maps (FCMs) are computational models that represent how factors (nodes) change over discrete interactions based on causal impacts (weighted directed edges) from other factors. This approach has traditionally been used as an aggregate, similarly to System Dynamics, to depict the functioning of a system. There has been a growing interest in taking this aggregate approach at the individual-level, for example by equipping each agent of an Agent-Based Model with its own FCM to express its behavior. Although frameworks and studies have already taken this approach, an ongoing limitation has been the difficulty of creating as many FCMs as there are individuals. Indeed, current studies have been able to create agents whose traits are different, but whose decision-making modules are often identical, thus limiting the behavioral heterogeneity of the simulated population. In this paper, we address this limitation by using Genetic Algorithms to create one FCM for each agent, thus providing the means to automatically create a virtual population with heterogeneous behaviors. Our algorithm builds on prior work from Stach and colleagues by introducing additional constraints into the process and applying it over longitudinal, individual-level data. A case study from a real-world intervention on nutrition confirms that our approach can generate heterogeneous agents that closely follow the trajectories of their real-world human counterparts. Future works include technical improvements such as lowering the computational time of the approach, or case studies in computational intelligence that use our virtual populations to test new behavior change interventions.
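A common FCM update rule (one of several conventions in the literature; the paper may use a variant, e.g. with a self-memory term) iterates each node through a squashing function of its weighted inputs. The fitness a genetic algorithm would minimise is then the gap between the simulated and the observed longitudinal trajectory of one individual; the weights and states below are invented for illustration:

```python
import numpy as np

def fcm_step(state, W, lam=1.0):
    """One FCM iteration: a_i(t+1) = sigmoid(lam * sum_j w_ji * a_j(t))."""
    return 1.0 / (1.0 + np.exp(-lam * (W.T @ state)))

def trajectory_error(W, observed):
    """Candidate GA fitness: squared error between the simulated trajectory
    and one individual's observed longitudinal data points."""
    state = observed[0]
    err = 0.0
    for target in observed[1:]:
        state = fcm_step(state, W)
        err += float(((state - target) ** 2).sum())
    return err

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 3))        # hypothetical causal weight matrix
s0 = np.array([0.2, 0.8, 0.5])          # initial factor activations
observed = [s0]
for _ in range(4):                       # synthetic longitudinal data
    observed.append(fcm_step(observed[-1], W_true))

# The generating weights score (near) zero; a perturbed candidate scores worse
assert trajectory_error(W_true, observed) < 1e-12
assert trajectory_error(W_true + 0.5, observed) > trajectory_error(W_true, observed)
```

A GA in this setting evolves candidate weight matrices W (crossover and mutation on the entries) against exactly this kind of per-individual error, yielding one fitted FCM per person.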

14:50
Adaptive Surrogate-Assisted Optimal Sailboat Path Search Using Onboard Computers

ABSTRACT. A new surrogate-assisted, dynamic-programming-based optimal path search algorithm, studied in the context of high-performance sailing, is shown to be both effective and (energy) efficient. The key elements in achieving this are presented in detail: the fast and accurate physics-based surrogate model, the integrated refinement of the solution space and simulation model fidelity, and the OpenCL-based SPMD parallelisation of the algorithm. The included numerical results show the high accuracy of the surrogate model (relative approximation error medians smaller than 0.85%), its efficacy in terms of computing-time reduction (from 39.2 to 45.4 times), and the high speedup of the parallel algorithm (from 5.5 to 54.2). Combining these effects gives up to 2461 times faster execution. The proposed approach can also be applied to other domains. It can be considered a dynamic-programming-based optimal path planning framework parameterised by a problem-specific (potentially variable-fidelity) cost-function evaluator (surrogate).
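The framework described in the last sentence can be illustrated with a generic dynamic-programming path search parameterised by a pluggable cost-function evaluator. The Dijkstra-style grid search below is a hypothetical stand-in for the paper's algorithm, and the uniform unit-cost evaluator is an assumption for demonstration; in the paper's setting the evaluator would be the (variable-fidelity) sailing surrogate:

```python
import heapq

def optimal_path_cost(grid_size, cost_fn, start, goal):
    """Dijkstra-style dynamic-programming path search on a grid, parameterised
    by a pluggable cost-function evaluator (the role played by the surrogate)."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == goal:
            return d
        if d > dist.get((x, y), float("inf")):
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_size and 0 <= ny < grid_size:
                nd = d + cost_fn((x, y), (nx, ny))
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(heap, (nd, (nx, ny)))
    return float("inf")

# With a uniform unit cost, the optimal cost equals the Manhattan distance.
cost = optimal_path_cost(5, lambda a, b: 1.0, (0, 0), (4, 4))
```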

15:10
Local Search in Selected Crossover Operators

ABSTRACT. The purpose of the paper is to analyze the incorporation of local search mechanisms into five different crossover operators (KPoint, AEX, HGreX, HProX and HRndX) used in genetic algorithms, compare the results depending on various parameters, and draw conclusions. The local search is used randomly, with some probability, instead of the standard crossover procedure in order to generate a new individual. We analyze injecting the local search in two situations: to resolve conflicts, and, with a certain probability, also without a conflict. The discussed mechanisms improve the obtained results and significantly accelerate the calculations. Moreover, we show that there exists an optimal degree of the local search component, and that it depends on the particular crossover operator.
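As an illustration of injecting local search in place of crossover with some probability, the sketch below uses a bit-string encoding with one-point crossover and bit-flip hill climbing; the paper's operators (AEX, HGreX, etc.) act on permutations, so this is a simplified stand-in, not the authors' code:

```python
import random

def one_point_crossover(a, b):
    """Standard one-point (KPoint with k=1) crossover on bit strings."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def local_search(x, fitness):
    """Hill-climb by single-bit flips until no flip improves fitness."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:i] + [1 - x[i]] + x[i + 1:]
            if fitness(y) > fitness(x):
                x, improved = y, True
    return x

def offspring(a, b, fitness, p_local=0.2):
    """With probability p_local, replace crossover by local search on a parent."""
    if random.random() < p_local:
        return local_search(list(a), fitness)
    return one_point_crossover(a, b)

fitness = sum  # toy ones-counting objective
child = offspring([0, 1, 0, 1], [1, 0, 1, 0], fitness)
```

The `p_local` parameter corresponds to the "optimal degree of the local search component" that the paper investigates per operator.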

15:30
Numerical and Statistical Probability Distribution Transformation for Modeling Traffic in Optical Networks

ABSTRACT. For optical network operators, it is very important, considering budget planning reasons, to forecast network traffic. This is related to network expansion and equipment purchases. The underlying motivation is the constant increase in the demand for network traffic due to the development of new access technologies (5G, FTTH), which require particularly large amounts of bandwidth. The aim of this paper is to numerically calculate a transformation that allows determining probability distributions of network edge traffic based on known probability distributions of demand matrix elements. Statistical methods confirmed the proposed transformation. The study is performed for a practically relevant network within selected scenarios determined by realistic traffic demand sets.

15:50
Global Surrogate Modeling by Neural Network-Based Model Uncertainty

ABSTRACT. This work proposes a novel adaptive global surrogate modeling algorithm which uses two neural networks, one for prediction and the other for the prediction model uncertainty. Specifically, given an initial data set for each neural network, the algorithm proceeds in cycles and adaptively enhances the neural network-based prediction model by selecting the next sampling point using an auxiliary neural network approximation of the spatial error in the prediction model. The proposed algorithm is tested numerically on the one-dimensional Forrester function and the two-dimensional Branin function. The results demonstrate that global modeling using neural network-based function prediction can be guided efficiently and adaptively by a neural network approximation of the prediction model uncertainty.
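The adaptive cycle described above can be sketched by substituting simple stand-ins for the two neural networks: a nearest-neighbour predictor and a distance-to-data uncertainty proxy (both are assumptions for illustration; the paper trains neural networks for these roles). The Forrester function is the paper's one-dimensional benchmark:

```python
import math

def forrester(x):
    """One-dimensional Forrester benchmark used in the paper's experiments."""
    return (6 * x - 2) ** 2 * math.sin(12 * x - 4)

def nearest_prediction(x, samples):
    """Stand-in prediction model: value of the nearest sampled point."""
    return min(samples, key=lambda s: abs(s[0] - x))[1]

def uncertainty(x, samples):
    """Stand-in uncertainty model: distance to the nearest sampled point."""
    return min(abs(s[0] - x) for s in samples)

# Adaptive loop: repeatedly sample where the uncertainty proxy is largest.
samples = [(0.0, forrester(0.0)), (1.0, forrester(1.0))]
candidates = [i / 100 for i in range(101)]
for _ in range(8):
    x_next = max(candidates, key=lambda x: uncertainty(x, samples))
    samples.append((x_next, forrester(x_next)))
```

Each cycle adds the point the uncertainty model flags as least trusted, which is the same acquisition logic the paper applies with its auxiliary network.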

14:30-16:10 Session 12E: CompHealth 3
Location: Cavendish
14:30
Patient- and Ventilator-Specific Modeling to Drive the Use and Development of 3D Printed Devices for Rapid Ventilator Splitting During the COVID-19 Pandemic

ABSTRACT. In the early days of the COVID-19 pandemic, there was a pressing need for an expansion of ventilator capacity. Reserved for dire situations, ventilator splitting is complex and has previously been limited to patients with similar pulmonary compliances and tidal volume requirements. To address this need, we developed a system to enable rapid and efficacious splitting between two or more patients with varying lung compliances and tidal volume requirements. We present here a computational framework to both drive device design and inform patient-specific device tuning. By creating a patient- and ventilator-specific airflow model, we were able to identify pressure-controlled splitting as preferable to volume-controlled splitting, as well as to create a simulation-guided framework to identify the optimal airflow resistor for a given patient pairing. In this work, we present the computational model, validation of the model against benchtop test lungs and standard-of-care ventilators, and the methods that enabled simulation of over 200 million patient scenarios using 800,000 compute hours in a 72-hour period.

14:50
National Network for Rare Diseases in Brazil: The Computational Infrastructure and Preliminary Results

ABSTRACT. According to the World Health Organization, rare diseases currently represent a global public health priority. Although each such condition has a low prevalence in the general population, collectively they affect up to 10% of the entire world population. These pathologies are numerous and of a diverse nature, and several factors pose significant challenges for public health, such as the lack of structured and standardized knowledge about rare diseases in health units, the need for communication between multidisciplinary teams to understand phenomena and define accurate diagnoses, and the scarcity of experience with specific treatments. In addition, the often chronic and degenerative nature of these diseases generates a significant social and economic impact. This paper therefore presents an initiative to develop a network of specialized reference centers for rare diseases in Brazil, covering all regions of the country. We propose collecting, mapping, and analyzing data, and supporting effective communication between such centers to share clinical knowledge, disease evolution, and patient needs through well-defined and standardized processes. We used validated structures to ensure data privacy and protection for participating health facilities when creating this digital system. We also applied systems lifecycle methodologies, data modeling techniques, and quality management. Currently, the retrospective stage of the project is in its final phase, and some preliminary results can already be verified. We developed an intuitive web portal for consulting the collected information, offering filters for personalized queries on rare diseases in Brazil to support evidence-based public decision-making.

15:10
Classification of Uterine Fibroids in Ultrasound Images Using a Deep Learning Model

ABSTRACT. Uterine fibroids are abnormal growths that develop in the female uterus. Sometimes these fibroids may cause severe problems such as miscarriage. If not detected, they ultimately grow in size and number. Among imaging modalities, ultrasound is the most efficient for detecting uterine fibroids. This paper proposes a deep learning model for fibroid detection with several advantages. The proposed model overcomes the drawbacks of existing fibroid-detection methodologies at all stages: noise removal, contrast enhancement, and classification. Here, speckle noise in the input image is removed using an improved DWT method, and contrast is enhanced using the EMD-GCLAHE method. After contrast enhancement, the image is classified into two classes, fibroid and non-fibroid, using the MBF-CDNN method. The method is validated using the parameters sensitivity, specificity, accuracy, precision, and F-measure. The sensitivity is found to be 94.44%, the specificity 95%, and the accuracy 94.736%. The proposed classifier effectively detects fibroids, as demonstrated experimentally by comparison with existing classifiers.

15:30
Super-Resolution Convolutional Network for Image Quality Enhancement in Remote Photoplethysmography based Heart Rate Estimation

ABSTRACT. Remote photoplethysmography allows optical physiological measurement from face videos. Recently, convolutional neural network (CNN) based imaging photoplethysmography (iPPG) has achieved better accuracy and computational efficiency than state-of-the-art photoplethysmography methods. This paper provides an algorithmic framework with a preprocessing super-resolution network for measuring vital information, such as heart rate, from camera images and videos. The method helps process low-resolution images to obtain more accurate physiological information.

15:50
A hybrid modeling framework for city-scale dynamics of multi-strain influenza epidemics

ABSTRACT. In this paper we present a hybrid modeling framework for simulating the co-circulation of influenza strains in urban settings. It comprises a detailed agent-based model coupled with a SEIR-type compartmental model. While the former makes it possible to simulate the initial phase of an outbreak, when the heterogeneity of the contact network is crucial, the latter approximates the disease dynamics after mass infection has occurred, thus dramatically increasing the framework's performance. Numerical experiments with the model are presented and their results are discussed.
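The compartmental half of such a hybrid framework can be sketched with a forward-Euler step of the standard SEIR model; the parameter values below are illustrative placeholders, not the paper's calibration:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One forward-Euler step of the standard SEIR compartmental model
    (a simple stand-in for the compartmental half of the hybrid framework)."""
    n = s + e + i + r
    new_inf = beta * s * i / n   # S -> E transitions
    new_symp = sigma * e         # E -> I transitions
    new_rec = gamma * i          # I -> R transitions
    return (s - dt * new_inf,
            e + dt * (new_inf - new_symp),
            i + dt * (new_symp - new_rec),
            r + dt * new_rec)

# Illustrative run: population of 10,000 with a seeded outbreak.
state = (9900.0, 50.0, 50.0, 0.0)
for _ in range(100):
    state = seir_step(*state, beta=0.3, sigma=0.2, gamma=0.1)
```

In the hybrid scheme, the agent-based model would hand its aggregate compartment counts to such equations once mass infection makes the contact-network detail unnecessary.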

14:30-16:10 Session 12F: MMS 2
14:30
Modelling the Long-Term Impact of Covid-19

ABSTRACT. The increase of global travel and trade, overpopulated habitats and environmental changes are some of the factors that contribute to the spread of infectious diseases. Arguably, Modelling & Simulation (M&S) plays a crucial role in understanding the factors that affect the spread of diseases and helps decision-makers to plan targeted and cost-effective prevention and control actions. Covid-19 has shown that pandemic crises of that scale can have a long-term health and socioeconomic impact on the population. We have also seen that not all population groups are affected in the same manner. Vulnerable populations, such as the elderly and people with comorbidities or lower socioeconomic status, are amongst the groups most negatively affected by the pandemic. In an attempt to address these issues, we created the CoronAvirus Lifelong Modelling and Simulation (CALMS) model. CALMS is an agent-based microsimulation that calculates the risk of Covid-19 infection and severe disease throughout the lifetime of the population based on individual demographic, medical, lifestyle and socioeconomic characteristics. It also calculates health-related quality of life as well as the costs of treatment and preventive measures. These metrics are used to perform health economic evaluation of targeted interventions such as lockdowns and vaccination. CALMS incorporates various sub-models that support its functionalities. These include non-communicable disease risk models for cardiovascular diseases (CVDs) and Type 2 Diabetes (T2D), Covid-19 and Long Covid risk models, a physical activity behavioural model and a health economic model. The combination of these submodels allows for the evaluation of different interventions that can be targeted to defined population groups and for specified time periods. It can therefore provide an evidence base to support short- and longer-term policy-making. CALMS is built in JAVA using the Repast Suite (https://repast.github.io/).
It can be used both as a desktop application with its own Graphical User Interface (GUI) and as a batch application that can be integrated into a distributed computing infrastructure. The model is developed within the EU STAMINA project and is part of a wider predictive modelling suite aiming to provide a holistic approach to pandemic crisis management by providing solutions for different aspects of pandemic preparedness and response.

14:50
Role of mechanotransduction in promoting intracellular and intercellular transmigration of cancer cells through endothelium in microvasculature

ABSTRACT. Local hemodynamics impact mechanotransduction in endothelial cells (ECs) lining the vascular network. On the other hand, cancer cells have been shown to influence the local hemodynamics in their vicinity in the microvasculature. The first objective of the present study is to explore how cancer cell-induced changes in local hemodynamics can impact the forces experienced by intra/inter-cellular organelles of ECs that are believed to play important roles in mechanotransduction. Moreover, extracellular matrix (ECM) stiffening has been shown to correlate with the progression of most cancer types. However, it is still not well understood how ECM stiffness impacts EC mechanosensors. The second objective of this study is to elucidate the role of ECM stiffness in mechanotransduction in ECs. A three-dimensional, multiscale, multicomponent, viscoelastic model of focally adhered ECs is developed to simulate force transmission through EC mechanosensors (actin cortical layer, nucleus, cytoskeleton, focal adhesions (FAs), and adherens junctions (ADJs)). Our results show that cancer cell-altered hemodynamics result in significantly higher forces transmitted to subcellular organelles of ECs. This impact is most drastic on stress fibers (SFs), both centrally located and peripheral ones. Furthermore, we demonstrate that FAs experience higher stresses when attached to stiffer ECM, while stresses transmitted to ADJs are completely independent of the ECM. Cancer cell-induced changes in EC mechanotransduction represent an important potential mechanism for cancer cell transmigration in the microvasculature. The identification of the EC mechanosensors involved in the early stages of EC-cancer cell interaction will help with developing more efficient therapeutic interventions to suppress cancer cell transmigration in the microvasculature.

15:10
Forced displacement in Mali - Analysing the effect of the physical environment on refugee flight routes

ABSTRACT. The twenty-first century has seen an increase in conflict-induced refugees (UNHCR, 2021). Furthermore, although empirical evidence is lacking, a changing climate might further increase the risk of conflict, especially in developing countries (Abel et al., 2019; Mach et al., 2019). Understanding the refugee movements that result from these conflicts could aid policy makers and humanitarian organizations in providing aid to and hosting these forcibly displaced peoples (Suleimenova et al., 2017).

Several studies simulating refugee movements on a city- or community-wide scale have been performed (Anderson et al., 2007; Sokolowski et al., 2014, 2015). Furthermore, several frameworks for aiding the creation of computational refugee models have been developed (De Kock, 2019; Searle & Van Vuuren, 2021). Many papers have been written on large-scale migration modelling (Stillwell, 2005). However, little has been done in terms of computational studies of refugee movements in an armed-conflict setting on a national scale. To this end, Flee was developed (Groen, 2016). Flee is an agent-based social simulation framework for forecasting population displacements in an armed-conflict setting (Anastasiadis et al., 2021; Suleimenova & Groen, 2020).

Currently, the Flee model does not take into account the possibility of refugees taking off-road routes. However, experience from the Ministry of Defence has shown that a significant amount of travel in conflict areas occurs via such routes (Kpt. B. Ooink, personal communication, September 2021). The goal of this study is therefore to implement in Flee the possibility of off-road driving routes, and to test the effectiveness thereof. The main question pertaining to this goal is posed as follows: ‘To what extent does the physical environment determine flight routes for refugees?’. The case study area is Mali, because off-road travel is very common there, and the area was already implemented in the Flee model.

The off-road routes are determined by selecting features of the physical environment relevant to refugee movement and representing these in raster data. This is done using the NATO classification for mobility-altering features. Furthermore, the routes are calculated for four seasons, to represent accessibility changes in the physical environment caused by volume changes in the Niger river. Values are assigned to the selected features to represent the degree of resistance that these features offer. The values are decided on through consultation of literature sources. The modelled resistance is converted into a cost raster, which allows for the plotting of routes of least cumulative resistance between two points. The resulting routes are used as input for Flee. This resulted in changed drive times for the existing routes, but also in new, off-road routes between existing nodes.

There is already a dataset of the real and simulated refugee flow of Mali in 2012. The newly created routes and driving times are added to the model. Flee is run several times, with only the routes between the nodes changing, to allow for comparison.

Overall, the model's refugee allocation error decreased by 16.5% as a result of the changes in routes. However, most of this change is caused by an improvement of allocation in one camp, which influences the total through its relatively large size. When comparing the impact of the routes on the difference in error per location, while weighting the camps equally, the error increases by roughly 7%. Moreover, on a temporal level, the first and last seasons' errors are lower due to border closure and camp capacity mechanics in Flee. The errors in seasons two and three (April–October 2012) are thus the most reliable for testing the differences in model accuracy. These seasons show an overall negligible difference in error.

In conclusion, the addition of routes based on the physical environment does improve the overall accuracy of Flee's refugee allocation in the Mali case study, but the results are too inconsistent to determine whether this will be the case in other case studies as well. The addition of the physical terrain factor increases existing deviations between reality and the current Flee model, but patterns do not change. This indicates that other factors are the root cause of current differences between the model and reality. These root causes include, for example, political factors, such as border restriction policies, and decision-making based on emotional factors, such as the attractiveness of cities over refugee camps.

Sources

Abel, G. J., Brottrager, M., Crespo Cuaresma, J., & Muttarak, R. (2019). Climate, conflict and forced migration. Global Environmental Change, 54, 239–249. https://doi.org/10.1016/j.gloenvcha.2018.12.003
Anastasiadis, P., Gogolenko, S., Papadopoulou, N., Lawenda, M., Arabnejad, H., Jahani, A., Mahmood, I., & Groen, D. (2021). P-Flee: An efficient parallel algorithm for simulating human migration. 2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 1008–1011. https://doi.org/10.1109/IPDPSW52791.2021.00159
Anderson, J., Chaturvedi, A., & Cibulskis, M. (2007). Simulation tools for developing policies for complex systems: Modeling the health and safety of refugee communities. Health Care Management Science, 10(4), 331–339. https://doi.org/10.1007/s10729-007-9030-y
De Kock, C. (2019). A framework for modelling conflict-induced forced migration according to an agent-based approach. https://scholar.sun.ac.za:443/handle/10019.1/107038
Groen, D. (2016). Simulating refugee movements: Where would you go? Procedia Computer Science, 80, 2251–2255. https://doi.org/10.1016/j.procs.2016.05.400
Mach, K. J., Kraan, C. M., Neil Adger, W., Buhaug, H., Burke, M., Fearon, J. D., Field, C. B., Hendrix, C. S., Maystadt, J.-F., O'Loughlin, J., Roessler, P., Scheffran, J., Schultz, K. A., & Von Uexkull, N. (2019). Climate as a risk factor for armed conflict. Nature, 571. https://doi.org/10.1038/s41586-019-1300-6
Searle, C., & Van Vuuren, J. H. (2021). Modelling forced migration: A framework for conflict-induced forced migration modelling according to an agent-based approach. Computers, Environment and Urban Systems, 85, 101568. https://doi.org/10.1016/j.compenvurbsys.2020.101568
Sokolowski, J. A., Banks, C. M., & Hayes, R. L. (2015). Modeling population displacement in the Syrian city of Aleppo. Proceedings of the Winter Simulation Conference, 252–263. https://doi.org/10.1109/WSC.2014.7019893
Sokolowski, J. A., & Banks, C. M. (2014). A methodology for environment and agent development to model population displacement.
Stillwell, J. (2005). Inter-regional migration modelling: A review and assessment. European Regional Science Association (ERSA), 23(27), 23–27. http://hdl.handle.net/10419/117857
Suleimenova, D., Bell, D., & Groen, D. (2017). A generalized simulation development approach for predicting refugee destinations. Scientific Reports, 7(1). https://doi.org/10.1038/s41598-017-13828-9
Suleimenova, D., & Groen, D. (2020). How policy decisions affect refugee journeys in South Sudan: A study using automated ensemble simulations. JASSS, 23(1). https://doi.org/10.18564/jasss.4193
UNHCR. (2021). Global Trends: Forced Displacement in 2020.

14:30-16:10 Session 12G: QCW 2
Location: Telford
14:30
A first attempt at cryptanalyzing a (toy) block cipher by means of QAOA

ABSTRACT. The discovery of quantum algorithms that may have an impact on cryptography is the main reason for the rise of quantum computing. Currently, all quantum cryptanalysis techniques are purely theoretical and none of them can be executed on existing or near-term quantum devices. This paper therefore investigates the capability of already existing quantum computers to attack a toy block cipher using the Quantum Approximate Optimization Algorithm (QAOA). Starting from a known-plaintext key recovery problem, we transform it into an instance of the MAX-SAT problem. Then, we propose two ways to implement it in a QAOA circuit and try to solve it using publicly available IBM Q Experience quantum computers. The results suggest that the limited number of qubits requires the use of exponential algorithms to achieve the transformation of our problem into a MAX-SAT instance and that, despite encouraging simulation results, the corresponding quantum circuit is too deep to work on today's noisy gate-based quantum computers.

14:50
Practical solving of discrete logarithm problem over prime fields using quantum annealing

ABSTRACT. This paper investigates how to reduce the discrete logarithm problem over prime fields to the QUBO problem using as few logical qubits as possible. We show different methods of reduction of the discrete logarithm problem over prime fields to the QUBO problem. In the best case, if $n$ is the bitlength of the characteristic of the prime field $\mathbb F_p$, approximately $2n^2$ logical qubits are required for such a reduction. We present practical attacks on the discrete logarithm problem over the $4$-bit prime field $\mathbb F_{11}$, the $5$-bit prime field $\mathbb F_{23}$, and the $6$-bit prime field $\mathbb F_{59}$. We solved these problems using the D-Wave Advantage QPU. It is worth noting that, to our knowledge, no practical attack on the discrete logarithm problem over a prime field using quantum methods has been made until now.
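For scale, the toy instances can be checked classically by exhaustive search. The sketch below solves a discrete logarithm in $\mathbb F_{11}$ by brute force (the specific instance $2^x \equiv 9 \pmod{11}$ is an assumed example, not necessarily the instance attacked in the paper); the QUBO reduction replaces this search with annealing over roughly $2n^2$ logical qubits, i.e. about 32 for the 4-bit field:

```python
def discrete_log(g, h, p):
    """Classical brute-force discrete logarithm: find x with g**x = h (mod p).
    This is the problem the paper encodes as a QUBO for quantum annealing;
    the exhaustive search here only illustrates the toy field sizes used."""
    value = 1
    for x in range(p - 1):
        if value == h:
            return x
        value = (value * g) % p
    raise ValueError("no solution")

# F_11 with generator 2: solve 2**x = 9 (mod 11).
x = discrete_log(2, 9, 11)
```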

15:10
Quantum annealing and algebraic attack on Speck cipher

ABSTRACT. Algebraic attacks using quantum annealing are a new idea in cryptanalysis. This paper shows how to obtain a QUBO problem equivalent to an algebraic attack on the Speck cipher using as small a number of logical variables as possible. The main idea for minimizing the number of variables in the algebraic attack on this ARX cipher was an appropriate cipher partition and the insertion of additional variables. Using this idea, for the most popular variants, Speck-128/128 and Speck-128/256, the equivalent QUBO problem has 19,311 and 33,721 logical variables, respectively. According to our experiments, applying quantum annealing to algebraic attacks on Speck should be much more efficient than the same attack on the AES cipher, where for AES-128 and AES-256 an equivalent QUBO problem consists of 29,770 and 72,597 logical variables, respectively. It is an open question whether this kind of attack may, in some cases, overtake brute-force or Grover's attack. However, assuming that solving a QUBO problem consisting of $N$ variables requires $O\left( e^{\sqrt{N}} \right)$ elementary operations, one can obtain an attack faster than brute force on 31 of the 34 rounds of Speck-128/256, which is better than the best known classical attack on this cipher variant, which works for 25 rounds.

15:30
Challenges and Future Directions in the Implementation of User Authentication Protocols Utilizing Quantum Computing

ABSTRACT. Quantum computing is a powerful concept in the technological world which is critically valued in information security due to its enhanced computational power. Researchers have developed algorithms that allow quantum computers to break information security schemes that were previously considered difficult if not impossible to attack, including asymmetric-key cryptography and elliptic curve cryptography. To counter these vulnerabilities, studies have focused on improving security protocols through quantum computing. One such focus is the topic of quantum authentication (QA). Authentication plays a critical role in protecting communication and exchange online. While a number of QA protocols have been theorized, only a few have been implemented and further tested. Among these protocols, we selected and implemented five quantum authentication protocols to determine their feasibility in a real-world setting. In this late-breaking work, we discuss the difficulties and obstacles developers might face while implementing authentication protocols that use quantum computing. We comment on why these difficulties exist and how they impact the accuracy of these protocols. Finally, we outline future directions for this research: adding user studies to test the efficacy of these protocols.

15:50
Scheduling with Multiple Dispatch Rules: A Quantum Computing Approach

ABSTRACT. Updating the set of Multiple Dispatch Rules (MDRs) for scheduling machines in a Flexible Manufacturing System (FMS) is computationally intensive. It becomes a major bottleneck when these rules have to be updated in real time in response to changes in the manufacturing environment. Machine Learning (ML) based solutions for this problem are considered state-of-the-art. However, their accuracy and correctness depend on the availability of high-quality training data. To address the shortcomings of the ML-based approaches, we propose a novel Quadratic Unconstrained Binary Optimization (QUBO) formulation for the MDR scheduling problem. A novel aspect of our formulation is that it can be efficiently solved on a quantum annealer. We solve the proposed formulation on a production quantum annealer from D-Wave and compare the results with a baseline model based on a single dispatch rule.

16:10-16:40Coffee Break
16:40-18:20 Session 13A: MT 11
16:40
A Highly Customizable Information Visualization Framework

ABSTRACT. The human brain can quickly become overwhelmed by the amounts of data computers can process. Consequently, data abstraction is necessary for a user to grasp information and identify valuable patterns. Usually, data is abstracted in a pictorial or graphical format. In conjunction with interactivity, it is possible to adapt how data is abstracted to obtain more detailed conclusions in real time. Nowadays, users demand more personalization from the systems they use. This work proposes a user-centered framework that aims to ease the creation of visualizations for the developers of a platform while offering the end-user a highly customizable experience. The conceptualized solution was prototyped and tested to ensure that information about the data is transmitted to the user in a quick and effective manner. The results of a user study showed that users are pleased with the usability of the prototype and confirm that they desire control over the configuration of their visualizations. This work not only confirmed the usefulness of previously explored personalization options for visual representations, but also explored promising new ones.

17:00
A Note on Adjoint Linear Algebra

ABSTRACT. A new proof for adjoint systems of linear equations is presented. The argument is built on the principles of Algorithmic Differentiation. Application to scalar multiplication sets the base line. Generalization yields adjoint inner vector, matrix-vector, and matrix-matrix products leading to an alternative proof for first- as well as higher-order adjoint linear systems.

17:20
KP01 Solved by an n-Dimensional Sampling and Clustering Heuristic

ABSTRACT. In the field of optimization, NP-hard problems play an important role in real-world applications, such as resource allocation, scheduling, planning, and logistics. In this paper, we propose a heuristic search algorithm based on Monte Carlo sampling along with a clustering strategy that analyzes density and performs k-means partitions to solve the classic binary Knapsack Problem (KP01). Our heuristic method, which was designed to solve combinatorial optimization problems, has evolved and can adapt to other optimization problems, such as KP01, that can be organized in an n-dimensional search space. Regarding the methodology, we substantially reduced the search space while the areas of interest were located in the clustering stage, which brings us closer to the best solutions. In our experiments, we obtained high-quality solutions, with an average success rate above 90%.
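The sampling stage of such a heuristic can be sketched as repeated random construction of feasible solutions, keeping the best one (the density analysis and k-means clustering stages are omitted here; this Monte Carlo loop is an illustrative simplification, not the authors' algorithm):

```python
import random

def sample_solution(weights, capacity):
    """Draw one random feasible KP01 solution (Monte Carlo sampling):
    visit items in random order and take each one that still fits."""
    items = list(range(len(weights)))
    random.shuffle(items)
    chosen, load = [0] * len(weights), 0
    for i in items:
        if load + weights[i] <= capacity:
            chosen[i], load = 1, load + weights[i]
    return chosen

def monte_carlo_kp01(values, weights, capacity, n_samples=2000):
    """Keep the best of many random feasible solutions; in the full heuristic,
    clustering of the sampled points would then focus the search further."""
    best, best_value = None, -1
    for _ in range(n_samples):
        sol = sample_solution(weights, capacity)
        value = sum(v * c for v, c in zip(values, sol))
        if value > best_value:
            best, best_value = sol, value
    return best, best_value

values = [60, 100, 120]
weights = [10, 20, 30]
sol, val = monte_carlo_kp01(values, weights, capacity=50)
```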

17:40
Acceleration of Optimized Coarse-grid Operators by Spatial Redistribution for Multigrid Reduction in Time

ABSTRACT. The multigrid reduction in time (MGRIT) method is one of the parallel-in-time approaches for time-dependent PDEs and typically uses coarse-grid operators with rediscretization. While its convergence struggles with hyperbolic problems, an optimization method for coarse-grid operators has been proposed to deal with this problem. This method provides improved convergence by using a coarse-grid operator with a slightly increased number of nonzero elements. However, it is desirable for coarse-grid operators to be cheaper than fine-grid operators, and there is room for improvement in terms of parallel implementation. This work combines the spatial redistribution technique for MGRIT, which accelerates coarse-grid solvers using agglomerated idle processors, with the above optimization method. This combination attempts to achieve better scaling performance while maintaining high convergence. Numerical experiments demonstrate up to a 23% runtime reduction among the various assignments tried with a specific amount of parallelism.

18:00
Approximate Function Classification

ABSTRACT. Classification of Boolean functions requires specific software or circuits to determine the class of a function or even to distinguish between two different classes. In order to provide a less costly solution, we study the approximation of NPN function classification by machine learning. For this purpose we train an artificial neural network (ANN) for the classification of four-bit Boolean functions and determine at what configuration of the ANN the NPN classification can be perfectly learned. Then we look at the possibility of learning the classification of four-bit Boolean functions using a set of three-bit Boolean neural classifiers, and determine the scalability. Finally, we also learn a discriminator that can distinguish between two functions and determine the similarity or difference of their NPN classes. As a result, we show that approximate neural function classification is a convenient approach to implementing an efficient classifier and class discriminator directly from the data.

16:40-18:20 Session 13B: MT 12
Location: Newton South
16:40
Out-of-distribution Detection in High-dimensional Data Using Mahalanobis Distance - Critical Analysis

ABSTRACT. Convolutional neural networks used in real-world recognition must be able to detect inputs that are Out-of-Distribution (OoD) with respect to the known or training data. A popular, simple method is to detect OoD inputs using confidence scores based on the Mahalanobis distance from known data. However, this procedure involves estimating the multivariate normal (MVN) density of high-dimensional data from an insufficient number of observations (e.g., the dimensionalities of the features at the last two layers of the ResNet-101 model are 2048 and 1024, with ca. 1000-5000 examples per class for density estimation). In this work, we analyze the instability of parametric estimates of MVN density in high dimensionality and its impact on the performance of Mahalanobis distance-based OoD detection. We show that this effect makes Mahalanobis distance-based methods ineffective for near-OoD data. We show that the minimum distance from known data beyond which outliers are detectable depends on the dimensionality and the number of training samples, and decreases with the growing size of the training dataset. We also analyze modifications of the Mahalanobis distance method intended to minimize density-fitting errors, such as using a common covariance matrix for all classes or diagonal covariance matrices. On OoD benchmarks (CIFAR-10, CIFAR-100, SVHN, and Noise datasets), using representations from the DenseNet or ResNet models, we show that none of these methods should be considered universally superior.
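The confidence score the abstract discusses can be sketched as follows — a minimal NumPy version of the standard tied-covariance Mahalanobis OoD detector, not the authors' implementation (feature extraction from the network is omitted, and the ridge term is an illustrative regularisation choice):

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Per-class means and a tied (shared) covariance on training features."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([feats[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False)
    # Ridge term: sample covariances are ill-conditioned in high dimension
    # with few samples -- exactly the instability the paper analyzes.
    cov += 1e-6 * np.eye(cov.shape[0])
    return means, np.linalg.inv(cov)

def ood_score(x, means, prec):
    """Negative minimum squared Mahalanobis distance: low score -> likely OoD."""
    d2 = [(x - mu) @ prec @ (x - mu) for mu in means.values()]
    return -min(d2)
```

A test input is flagged as OoD when its score falls below a threshold chosen on held-out in-distribution data.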

17:00
Multi-Contextual Recommender using 3D Latent Factor Models and Online Tensor Decomposition

ABSTRACT. Traditional approaches to recommendation systems involve collaborative filtering and content-based techniques, which make use of the similarities between users and between items, respectively. Such approaches evolved into model-based latent factor (LF) algorithms that use matrix decomposition to ingest a user-item matrix of ratings and generate recommendations. In this paper, we propose a novel approach based on a 3D LF model and a tensor decomposition method for devising personalized recommendations driven by additional contextual features. We also present our stacking method for tensor generation prior to incorporating LF models. We validate our proposed personalized recommender on two real-world datasets. Our experimental results show that additional contextual features can help personalize recommendations while maintaining similar or better performance compared to conventional 2D LF methods. Furthermore, our results demonstrate the importance of the quality of the contextual features used in 3D LF models. In addition, our experiments show the effective performance of our new stacking method in terms of computation time and accuracy of the proposed recommender.

17:20
Content-aware generative model for multi-item outfit recommendation

ABSTRACT. Recently, deep learning-based recommender systems have received increasing attention from researchers and demonstrate excellent results at solving various tasks in various areas. One growing trend is learning the compatibility of items in a set and predicting the next item, or several items, from the input ones. Fashion compatibility modeling is one of the areas in which this task is being actively researched. Classical solutions train on existing sets and learn to recommend items that have been combined with each other before, which severely limits the number of possible combinations. GAN models have proved the most effective at decreasing the impact of this problem and generating unseen combinations of items, but they also have several limitations. They use a fixed number of input and output items, whereas real outfits contain a variable number of items. They also use unimodal or multimodal data to generate only visual features, an approach that is not guaranteed to preserve the content attributes of items during generation. We propose a multimodal transformer-based GAN with cross-modal attention to simultaneously explore visual features and textual attributes. We also propose to represent a set of items as a sequence, allowing the model to decide how many items should be in the set. Experiments on the FOTOS dataset on the fill-in-the-blank task show that our method outperforms such strong baselines as Bi-LSTM-VSE, MGCM, HFGN, and others. Our model reaches 0.878 accuracy versus 0.724 for Bi-LSTM-VSE, 0.822 for MGCM, and 0.826 for HFGN.

17:40
Facial mask impact on human age and gender classification

ABSTRACT. The human face carries important information, such as age and gender, that enables the social identification of its owner. In technical systems, the face likewise contains information that enables the identification of a person. The COVID-19 pandemic made it necessary to cover the face with a mask and thus hide a significant part of the information content of the face that is important for social or technical purposes. The paper analyses how covering the face with a mask impedes the identification of a person in terms of age and gender determination. Analyses employing state-of-the-art models based on deep neural networks are performed, and their effectiveness is investigated in the context of the limited information available when the face is covered with a mask.

16:40-18:20 Session 13C: BBC 2
Location: Darwin
16:40
Dense Temporal Subgraphs in Protein-Protein Interaction Networks

ABSTRACT. Temporal networks have been successfully applied to represent the dynamics of protein-protein interactions. In this paper we focus on the identification of dense subgraphs in temporal protein-protein interaction networks, a problem relevant to finding groups of proteins related to a given functionality. We address a drawback of an existing approach for this problem, namely that it produces large time intervals over which temporal subgraphs are defined. We formulate a problem to deal with this issue and design (1) an exact algorithm, based on dynamic programming, which solves the problem in polynomial time, and (2) a heuristic based on a segmentation of the time domain and the computation of a refinement. The experimental results we present on seven protein-protein interaction networks show that in many cases our heuristic is able to reduce the time intervals with respect to those computed by the existing methods.

17:00

ABSTRACT. Third-generation nanopore sequencing technologies, along with portable devices such as the MinION Nanopore and Jetson Xavier NX, allow performing cost-effective metagenomic analysis in a portable manner. At the same time, we observe the growth of the serverless computing paradigm, which offers high scalability with limited maintenance overhead for the underlying infrastructure. Recent advancements in serverless offerings make it a viable choice for performing operations such as basecalling. This paper aims to evaluate whether a combination of edge and serverless computing paradigms can be successfully used to perform the basecalling process, with a focus on accelerating offline edge-based processing with serverless-based infrastructure. For the experiments, we propose a workflow in which DNA sequence reads are processed simultaneously at the edge with the Jetson Xavier NX and in the cloud with AWS Lambda under different network conditions. The results of our experiments show that with such a hybrid approach, we can reduce the processing time and energy consumption of the basecalling process compared to fully offline or fully online processing. While the adoption of serverless computing for bioinformatics applications has so far been limited, we believe the recent improvements to platforms such as AWS Lambda make it a compelling choice for an increasing number of bioinformatics workflows.

17:20
Tissue Damage Control Algorithm for Hyperthermia Based Cancer Treatments

ABSTRACT. Cancer is a worldwide health problem. The fatality rate of some types of cancer motivates the scientific community to improve the standard techniques used to fight against this disease, as well as to investigate new forms of treatments. One of these emerging treatments is hyperthermia using the injection of magnetic nanoparticles into the tumour area. Its basic idea is to heat the target tumour tissue leading to its necrosis. This study simulates the bioheat processes using Pennes' model to evaluate the tissue damage in silico. Furthermore, the differential evolution optimisation technique is applied to suggest the optimal location of injection considering the minimisation of damage to the healthy tissue and the maximisation of the tumour necrosis. The results suggest that the proposed algorithm is a promising tool for aiding hyperthermia-based treatment planning.
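For reference, a commonly used form of Pennes' bioheat equation, together with the Arrhenius damage integral frequently paired with it in hyperthermia studies, is given below; the exact formulation, source terms, and coefficients used in the paper may differ:

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla\cdot\left(k \nabla T\right)
  + \omega_b \rho_b c_b \,(T_a - T)
  + Q_m + Q_r,
\qquad
\Omega(t) = \int_0^t A \, e^{-E_a / (R\,T(\tau))} \, d\tau,
```

where $T$ is the tissue temperature, $\rho, c, k$ are tissue density, specific heat and conductivity, $\omega_b, \rho_b, c_b, T_a$ describe blood perfusion, $Q_m$ is metabolic heating, $Q_r$ is the heat deposited by the magnetic nanoparticles, and $\Omega$ is the accumulated thermal damage ($\Omega \ge 1$ is commonly taken as necrosis).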

17:40
Musculoskeletal model of human lower limbs in gait simulation

ABSTRACT. One of the most important issues in contemporary research on human body biomechanics is musculoskeletal modelling. The need for biomechanically correct movement models is enormous not only in medicine and rehabilitation, but also in animation synthesis with computer game engines, where unsightly artefacts often occur between animation keys. A musculoskeletal model that could be efficiently adapted to a game development environment would speed up work and ease the difficulties in controlling movement sequence synthesis. This article proposes a musculoskeletal model as the basis for solving these problems, while improving on analogous existing musculoskeletal solutions. The proposed model, using inverse dynamics simulation, includes Usik's equations, which have been added to the 6DoF skeletal module. Usik's approach uses the thermomechanics of a continuous medium, taking into account the cross-effects of mechanical, electrical, chemical and thermodynamic phenomena in muscle tissue, which makes it possible to describe the reaction of muscles during movement more faithfully (closer to nature). The model is validated against the ground reaction force (GRF), where the simulated values are compared to the measured ones. This work concerns human gait and is the basis for further development in the context of the identified problem in the field of computer game engines. Given its more accurate GRF results compared with existing solutions, the proposed model is a promising basis for further work.

18:00
Machine Learning Approaches in Inflammatory Bowel Diseases

ABSTRACT. Machine Learning (ML) methodologies can manage the great flow of clinical data with efficiency and effectiveness, improving the speed of interpretation of information and helping to overcome the barriers present in the diagnosis and treatment of patients, such as those affected by Inflammatory Bowel Disease (IBD). In this paper we survey relevant ML applications used for managing this large flow of clinical data, with special focus on IBD. In IBD settings, the main data sources include cohort study data, administrative databases, e-Health applications, Electronic Health Records (EHR), medical image data, omics data, clinical trial data and social media. We also discuss the strengths and limitations of the potential data sources that big data analytics could draw from in the field of IBD.

16:40-18:20 Session 13D: COMS 6
Location: Newton North
16:40
Analysis of Agricultural and Engineering Processes using Simulation Decomposition

ABSTRACT. This paper focuses on the analysis of agricultural and engineering processes using simulation decomposition (SD). SD is a technique that utilizes Monte Carlo simulations and distribution decomposition to visually evaluate the source and the outcome of different portions of data. Here, SD is applied to three distinct simulation-based problems: a simple analytical function problem (for illustration purposes), a nondestructive evaluation engineering problem, and an agricultural food-water-energy system problem. The results demonstrate successful implementations of SD for this range of problems and illustrate the potential of SD to support new understanding of cause-and-effect relationships in complex systems.
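The core of SD can be sketched on a toy analytical model — a hypothetical stand-in for the paper's illustration function, not its actual test cases: run a Monte Carlo, then split the output sample by low/high states of the inputs, giving one sub-distribution per scenario.

```python
import numpy as np

# Monte Carlo on a made-up two-input model
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, 10_000)
x2 = rng.uniform(0, 1, 10_000)
y = x1**2 + 2 * x2                       # simulated outcome (illustrative)

# 4 scenarios: {low, high} of x1 crossed with {low, high} of x2
state = (x1 > 0.5).astype(int) * 2 + (x2 > 0.5).astype(int)
scenarios = {s: y[state == s] for s in range(4)}
```

Stacking the histograms of these sub-samples, coloured by scenario, is the SD visualisation: it shows which input regions produce which portions of the output distribution.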

17:00
Neural Network-Based Sequential Global Sensitivity Analysis Algorithm

ABSTRACT. Global sensitivity analysis (GSA) can be used to quantify the effects of input parameters on the outputs of simulation-based computations. Performing GSA can be challenging due to the combined effect of the high computational cost of the simulation models, a large number of input parameters, and the need to perform repetitive model evaluations. To reduce this cost, neural networks (NNs) are used to replace the expensive simulation model in the GSA process, which introduces the additional challenge of finding the minimum number of training data samples required to train the NNs accurately. In this work, we improve a recently proposed NN-based GSA algorithm for accurately quantifying sensitivities. The algorithm iterates over the number of samples required to train the NNs and terminates using an outer-loop sensitivity convergence criterion. The iterative surrogate-based GSA yields converged values for the Sobol' indices and, at the same time, alleviates the specification of arbitrary accuracy metrics for the NN-based approximation model. In this paper, the algorithm is improved by enhanced NN modeling, which leads to an overall acceleration of the GSA process. The improved algorithm is tested numerically on problems involving an analytical function with three input parameters, and a simulation-based nondestructive evaluation problem with three input parameters.
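The Sobol' indices the algorithm converges to can be estimated with a standard Saltelli-style Monte Carlo scheme. Below is a minimal sketch on the Ishigami function, a common GSA benchmark with known analytic indices — the paper's own test functions and NN surrogate are not reproduced here:

```python
import numpy as np

def sobol_first_order(f, d, n, rng, lo=-np.pi, hi=np.pi):
    """Saltelli-style Monte Carlo estimator of first-order Sobol' indices."""
    A = rng.uniform(lo, hi, (n, d))
    B = rng.uniform(lo, hi, (n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # A with column i taken from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

def ishigami(X, a=7.0, b=0.1):
    """Benchmark on [-pi, pi]^3; analytic indices are ~0.314, ~0.442, 0."""
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2
            + b * X[:, 2]**4 * np.sin(X[:, 0]))
```

In the surrogate-based setting, `f` would be the trained NN (cheap to evaluate), and the estimator would be re-run as the training set grows until the indices converge.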

17:20
A Taxonomy Guided Method to Identify Metaheuristic Components

ABSTRACT. A component-based view of metaheuristics has recently been promoted to deal with several problems in the field of metaheuristic research. These problems include inconsistent metaphor usage, non-standard terminology and a proliferation of metaheuristics that are often insignificant variations on a theme. These problems make the identification of novel metaheuristics, performance-based comparisons, and selection of metaheuristics difficult. The central problem for the component-based view is the identification of components of a metaheuristic. This paper proposes the use of taxonomies to guide the identification of metaheuristic components. We developed a general and rigorous method, TAXONOG-IMC, that takes as input an appropriate taxonomy and guides the user to identify components. The method is described in detail, an example application of the method is given, and an analysis of its usefulness is provided. The analysis shows that the method is effective and provides insights that are not possible without the proper identification of the components.

17:40
DSCAN for Geo-Social Team Formation

ABSTRACT. Nowadays, geo-based social group activities have become popular because of the availability of geo-location information. In this paper, we propose a novel Geo-Social Team Formation framework using DSCAN, named DSCAN-GSTF, for impromptu activities, which aims to quickly find the group of individuals closest to a location where a service is required. The group should be socially cohesive for better collaboration and spatially close to minimize the preparation time. To imitate real-world scenarios, the DSCAN-GSTF framework considers various criteria that yield effective geo-social groups, including a required list of skills, the minimum number of each skill, contribution capacity, and the weight of each user's skills. Existing geo-social models ignore the expertise level of individuals and fail to process large geo-social networks efficiently, which is highly important for urgent service requests. In addition to considering expertise level, our model utilizes the DSCAN method to create clusters on parallel machines, which makes the search process very fast in large networks. We also propose a polynomial parametric network flow algorithm to check the skills criteria, which further boosts the search speed of our model. Finally, extensive experiments on real datasets show that our solution is competitive with existing state-of-the-art methods.

18:00
Numerical Stability of Tangents and Adjoints of Implicit Functions

ABSTRACT. We investigate errors in tangents and adjoints of implicit functions resulting from errors in the primal solution due to approximations computed by a numerical solver.

Adjoints of systems of linear equations turn out to be unconditionally numerically stable. Tangents of systems of linear equations can become unstable, as can both tangents and adjoints of systems of nonlinear equations; this extends to optima of convex unconstrained objectives. Sufficient conditions for numerical stability are derived.
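The unconditionally stable case — the adjoint of a system of linear equations — follows the textbook rule for x = A⁻¹b; a minimal numerical sketch (not the paper's derivation) with a finite-difference check:

```python
import numpy as np

def solve_with_adjoint(A, b, xbar):
    """Primal solve x = A^{-1} b plus adjoint propagation of xbar.

    Standard adjoint rule for a linear system:
        lam  = A^{-T} xbar,   bbar = lam,   Abar = -lam x^T
    """
    x = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, xbar)
    return x, lam, -np.outer(lam, x)
```

The adjoint inherits the conditioning of the primal solve, which is the intuition behind its unconditional stability; the tangent and nonlinear cases analysed in the talk do not share this property.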

16:40-18:20 Session 13E: SPU 1
Location: Cavendish
16:40
Derivation and Computation of Integro-Riccati Equation for Ergodic Control of Infinite-dimensional SDE

ABSTRACT. Optimal control of infinite-dimensional stochastic differential equations (SDEs) is a challenging topic. In this contribution, we consider a new control problem for an infinite-dimensional jump-driven SDE with long (sub-exponential) memory arising in river hydrology. We deal with the case where the dynamics follow a superposition of Ornstein–Uhlenbeck processes having distributed reversion speeds (called a supOU process for short), as found in real problems. Our stochastic control problem is of an ergodic type, minimizing a long-run linear-quadratic objective. We show that solving the control problem reduces to finding a solution to an integro-Riccati equation and that the optimal control is infinite-dimensional as well. The integro-Riccati equation is numerically computed by discretizing the phase space of reversion speeds. We use the supOU process with actual river discharge data from a mountainous river environment. The computational performance of the proposed numerical scheme is examined against different discretization parameters, and convergence of the scheme is verified with a manufactured solution. Our paper thus serves as new modeling, computation, and application of an infinite-dimensional SDE.

17:00
Towards Mitigating the Eye Gaze Tracking Uncertainty in Virtual Reality

ABSTRACT. We propose a novel algorithm to evaluate and mitigate the uncertainty of data reported by eye gaze tracking devices embedded in virtual reality head-mounted displays. Our algorithm is first calibrated by leveraging unit quaternions to encode the angular differences between reported and ground-truth gaze directions; it then interpolates these quaternions for each gaze sample, and finally corrects gaze directions by rotating them with the interpolated quaternions. The real part of the interpolated quaternion is used as the certainty factor for the corresponding gaze direction sample. The proposed algorithm is implemented in the VRSciVi Workbench within the ontology-driven SciVi visual analytics platform and can be used to improve eye gaze tracking quality in different virtual reality applications, including ones for Digital Humanities research. Tests of the proposed algorithm revealed its capability to increase eye tracking accuracy by 25% and precision by 32% compared with the raw output of the Tobii tracker embedded in the Vive Pro Eye head-mounted display. In addition, the calculated certainty factors help to assess the quality of reported gaze directions in the subsequent data analysis stages. Due to the ontology-driven software generation, the proposed approach enables high-level adaptation to the specifics of the experiments in virtual reality.
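The correction step described above can be sketched with standard quaternion operations. This is a schematic reconstruction under stated assumptions, not the VRSciVi implementation — the calibration-grid handling and interpolation weighting are omitted:

```python
import numpy as np

def quat_between(u, v):
    """Unit quaternion (w, x, y, z) rotating unit vector u onto unit vector v.

    Encodes the angular difference between a reported and a ground-truth
    gaze direction (assumes u and v are not antipodal).
    """
    q = np.concatenate([[1.0 + np.dot(u, v)], np.cross(u, v)])
    return q / np.linalg.norm(q)

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions."""
    d = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if d < 0:                       # take the shorter arc
        q1, d = -q1, -d
    th = np.arccos(d)
    if th < 1e-8:
        return q0
    return (np.sin((1 - t) * th) * q0 + np.sin(t * th) * q1) / np.sin(th)

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x = q[0], q[1:]
    return v + 2.0 * np.cross(x, np.cross(x, v) + w * v)
```

A corrected gaze sample is then `rotate(q_interp, reported_direction)`, with `q_interp[0]` (the real part) serving as the certainty factor.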

17:20
Getting formal Ontologies closer to final users through Knowledge Graph Visualization: interpretation and misinterpretation

ABSTRACT. Knowledge Graphs are extensively adopted in a variety of disciplines to support knowledge integration, visualization, unification, analysis and sharing at different levels. On the other hand, ontology has gained significant popularity within machine-processable environments, where it is extensively used to formally define knowledge structures. Additionally, the progressive development of the Semantic Web has further contributed to consolidation at a conceptual level and to the consequent standardisation of languages as part of Web technology. This work focuses on customizable visualization of, and interaction with, Knowledge Graphs resulting from formal ontologies. While the proposed approach is in itself considered scalable via customization, the current implementation of the research prototype assumes detailed visualizations for relatively small data sets, with progressively decreasing detail as the amount of information increases. Finally, issues related to possible misinterpretations of ontology-based knowledge graphs from a final user perspective are briefly discussed.

17:40
On the use of Sobol’ sequence for high dimensional simulation

ABSTRACT. When used in simulations, quasi-Monte Carlo methods utilize specially constructed sequences in order to improve on the respective Monte Carlo methods, mainly in terms of accuracy. Their advantage comes from the possibility of devising sequences of numbers that are better distributed in the corresponding high-dimensional unit cube than the randomly sampled points of the typical Monte Carlo method. Perhaps the most widely used family of sequences are the Sobol' sequences, due to their excellent equidistribution properties. These sequences are determined by sets of so-called direction numbers, where researchers have significant freedom to tailor the set being used to the problem at hand. Advances in scientific computing lead to ever-increasing dimensionality of the problems under consideration; on the other hand, due to the increased computational cost of the simulations, the number of trajectories that can be used is limited. In this work we concentrate on optimising the direction numbers of the Sobol' sequences in such situations, when the constructive dimension of the algorithm is relatively high compared to the number of points of the sequence being used. We propose an algorithm that provides such sets of numbers, suitable for a range of problems. We then show how the resulting sequences perform in numerical experiments, compared with other well-known sets of direction numbers. The algorithm has been efficiently implemented on servers equipped with powerful GPUs and is applicable to a wide range of problems.
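For readers who want to experiment with the regime the talk addresses (few points, many dimensions), SciPy ships Sobol' sequences with the default Joe & Kuo direction numbers; the optimised direction-number sets the paper proposes are not included here:

```python
import numpy as np
from scipy.stats import qmc

# 2^6 = 64 Sobol' points in 16 dimensions -- a high constructive
# dimension relative to the point count.
sampler = qmc.Sobol(d=16, scramble=False)
points = sampler.random_base2(m=6)       # shape (64, 16), values in [0, 1)

# Equidistribution property of the unscrambled sequence: each 1D
# projection of the first 2^m points hits every interval
# [k/2^m, (k+1)/2^m) exactly once.
```

Replacing the default direction numbers (e.g. via the `bits`/custom initialisation mechanisms of a Sobol' implementation) is where the tailoring described in the abstract takes place.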

16:40
Generative Networks Applied to Model Fluid Flows

ABSTRACT. The production of numerous high-fidelity simulations has been a key aspect of research for many-query problems in fluid dynamics. The computational resources and time required to generate these simulations can be so large as to be impractical. Following several successes of generative models, we explore the performance and the powerful generative capabilities of both the generative adversarial network (GAN) and the adversarial autoencoder (AAE) in predicting the evolution in time of a highly nonlinear fluid flow. These generative models are incorporated within a reduced-order model framework. The test case comprises two-dimensional Gaussian vortices governed by the time-dependent Navier–Stokes equations. We show that both the GAN and the AAE are able to predict the evolution of the positions of the vortices forward in time, generating new samples that have never before been seen by the neural networks.

17:00
Turbulent Mean Flow Reconstruction Using Physics-Informed Neural Networks

ABSTRACT. Motivation: Data-assimilation methods, in combination with the governing laws, can be used to infer information about a flow from sparse mean flow measurements, unavailable in the underlying dataset. However, modelling the Reynolds stress terms, which are under-determined in the governing Reynolds-Averaged Navier-Stokes (RANS) equations, can present a challenge for data-assimilation methods. The aim of this work is to investigate and compare different approaches to finding these problematic RANS closures when using data-assimilation methods. Specifically, this work applies these methods to construct Physics-Informed Neural Network (PINN) models. These are trained on sparse flow velocity measurements (first and second order statistics), and successfully infer the unknown pressure field and super-resolve the mean flow field.

Approach & Methods: In this work, PINNs are trained to model the mean flow of the fully turbulent periodic hill flow. This flow field is governed by the RANS equations:

∇·U = 0,    (U·∇)U = −∇P + (1/Re) ∇²U − ∇·τ,

where U, P are the non-dimensionalised mean velocity and pressure fields, Re is the Reynolds number and τ is the Reynolds stress tensor. The mean, first-order statistics U, P depend on the unclosed, second-order statistics τ.

Several methods to determine and apply the Reynolds stresses are created by differing reformulations of the governing RANS equations. These are used to train the PINNs. As a baseline, an explicit Reynolds stress tensor approach is used. After training the model with sparse first and second-order velocity statistics, the PINN directly predicts the terms in the Reynolds stress tensor.

The approaches suggested in [3, 1] were proposed to better condition the solution of data-driven turbulence models by modifying the way the RANS closure terms are included in the underlying equations. These techniques have been applied to the PINN framework and include decomposing the Reynolds stress terms into a linear and a non-linear component (the Implicit Eddy Viscosity approach), and the use of the Reynolds Forcing Vector, as proposed by [1], to feed the closure terms into the equations.

The next class of approaches involves direct use of the Reynolds stress transport equations. The mean flow depends on the unclosed second-order statistics found in the Reynolds stress tensor. Similar to the derivation of the RANS equations, additional equations for the second-order statistics can be formulated, which depend on unclosed third-order terms. Thus, the problem of closure and under-determination shifts to these third-order statistics. The aim of this approach is to improve prediction accuracy by introducing more physics into the model optimisation, at the cost of greater complexity.

Main Findings: The explicit Reynolds stress approach, shown in Figure 1, is an early example of the use of PINNs for turbulent mean flow reconstruction. Trained on a sparse 20-by-10 rectilinear grid (shown) with first and second order velocity statistics, the model successfully super-resolved the velocity fields and extracted the pressure field. However, near walls and separations, and between training points, velocity prediction error increases, which is common across the literature [2]. Additional analysis shows that areas with higher prediction error exhibit little difference in conservation error. This indicates that this approach with the data provided, whilst sufficient to close the solution, is poorly conditioned to model the correct physics in some areas, as a result of the under-determination.

A comparison and evaluation of the performance of various approaches to solving this problem will be presented. This will include the relative predictive accuracy of each method, how they tackle the issues seen in the explicit method and the advantages and disadvantages of the given approach.

This framework is applicable in a wide range of high-dimensional dynamical systems which require statistical manipulation (i.e. coarse-graining/homogenisation and thus closure modelling) due to the intractable number of degrees of freedom of the underlying fully-resolved system.

17:20
Supervised machine learning to estimate instabilities in chaotic systems: estimation of local Lyapunov exponents

ABSTRACT. In chaotic dynamical systems such as the weather, prediction errors grow faster in some situations than in others. Real-time knowledge about the error growth could enable strategies to adjust the modelling and forecasting infrastructure on the fly to increase accuracy and/or reduce computation time, for example by changing the spatio-temporal resolution of the numerical model, locally increasing the data availability, etc. Local Lyapunov exponents are known indicators of the rate at which very small prediction errors grow over a finite time interval. However, their computation is very expensive: it requires maintaining and evolving a tangent linear model, orthogonalisation algorithms and storing large matrices. Our work investigates the capability of supervised machine learning to estimate the imminent local Lyapunov exponents from input of current and recent time steps of the system trajectory, as an alternative to the classical method. Thus machine learning is not used here to emulate a physical model or some of its components, but non-intrusively, as a complementary tool. We present results from investigations in two settings. In the first setting, we test the accuracy of four popular supervised learning algorithms (regression trees, multilayer perceptrons, convolutional neural networks and long short-term memory networks) in two three-dimensional chaotic systems of ordinary differential equations, the Lorenz 63 and the Rössler models. We find that on average the machine learning algorithms predict the stable local Lyapunov exponent accurately, the unstable exponent reasonably accurately, and the neutral exponent only somewhat accurately. We show that greater prediction accuracy is associated with local homogeneity of the local Lyapunov exponents on the system attractor. Importantly, the situations in which (forecast) errors grow fastest are not necessarily the same as those where it is more difficult to predict local Lyapunov exponents with machine learning. In the second setting, we apply the same approach to the Lorenz 96 model, a spatially extended chaotic system. We discuss the performance of various ML methods, their specific skill, and whether they rank differently from what is observed on the Lorenz 63 and Rössler systems. We finally discuss the challenges and opportunities that come with moving to higher-dimensional systems.

17:40
Reduced order surrogate modelling and Latent Assimilation for dynamical systems

ABSTRACT. For high-dimensional dynamical systems, running high-fidelity physical simulations can be computationally expensive. Much research effort has been devoted to developing efficient algorithms which can predict the dynamics in a low-dimensional reduced space. In this paper, we develop a modular approach which makes use of different reduced-order models for data compression. Machine learning methods are then applied in the reduced space to learn the dynamics of the physical systems. Furthermore, with the help of data assimilation, the proposed modular approach can also incorporate observations to perform real-time corrections at a low computational cost. We apply this modular approach to the forecasting of wildfire, air pollution and fluid dynamics. Using the machine learning surrogate model instead of physics-based simulations speeds up the forecast process towards a real-time solution while maintaining prediction accuracy. The data-driven algorithm schemes introduced in this work can easily be applied or extended to other dynamical systems.

18:00
Using Complex Networks to Simplify a Marine Biogeochemical Model for ML Emulation

ABSTRACT. The European Regional Seas Ecosystem Model (ERSEM) is a complex biogeochemical model with O(10^8) degrees of freedom. Using conventional approaches, it is difficult to adequately address questions about the ecosystem’s sensitivity and behaviour due to the high computational cost of running ensemble simulations with varying model parameters and/or external forcing. However, through the lens of complex network theory, it may be possible to work around this constraint with creative representations of data that are better suited to the problem context. Network-informed machine learning techniques would ‘shortcut’ the time and computational cost of full-complexity models.

Complex networks represent a young and active area that has been used to great effect in capturing the interactions and behaviour of high-dimensional systems. Within the context of this work, complex networks will serve as an informed reduction of the problem, in a representation that is highly human-interpretable and actionable. Further, the complex network approach will allow for smarter use of the data as input to other techniques such as machine learning and data assimilation.

We will show recent work focused on extracting networks from a 3-year free run of the ERSEM model in the NWES region. Each network is constructed using correlation metrics to link variables and/or spatial locations that exhibit similar temporal behaviour. Interpretation and analysis of these networks alone gives us a unique perspective from which to understand the behaviour of the NWES. Given the large quantity of data in this analysis, we will discuss work done to understand the robustness of length scales across spatial domains for each variable. This key insight not only allows for effective coarsening of the spatial domain (model reduction), but also has practical uses beyond the scope of the project.
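The correlation-based network construction described above can be sketched with synthetic data. The six toy time series, the 0.8 threshold, and the use of a plain adjacency matrix are illustrative assumptions, not details from the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic time series for 6 "variables/locations": two strongly
# correlated groups plus small independent noise.
base_a = rng.standard_normal(300)
base_b = rng.standard_normal(300)
series = np.stack(
    [base_a + 0.1 * rng.standard_normal(300) for _ in range(3)]
    + [base_b + 0.1 * rng.standard_normal(300) for _ in range(3)])

# Build the network: link nodes whose temporal correlation exceeds
# a threshold, excluding self-loops.
corr = np.corrcoef(series)                        # (6, 6) correlation matrix
adjacency = (np.abs(corr) > 0.8) & ~np.eye(6, dtype=bool)

# Each node should be linked exactly to the two other members of its group.
print(adjacency.sum(axis=1))
```

Connected components or high-degree hubs of such a network then identify groups of variables/locations that move together, which is what enables the coarsening mentioned in the abstract.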

18:20
Assimilating Sky Camera Cloud Information via Convolutional Neural Networks

ABSTRACT. The growing number of sky cameras offers high-resolution cloud information that can be used to improve the prediction of cloudiness evolution, which in turn affects the forecasting of highly relevant quantities such as solar irradiance and precipitation. Nonetheless, sky camera cloud observations have so far not been assimilated into operational numerical weather prediction (NWP) models. We present a novel data assimilation (DA) approach in which convolutional neural networks (CNNs) are used to assimilate cloud camera observations into the German regional NWP model ICON-LAM. On the one hand, a CNN is trained to detect clouds in pictures, and the resulting network is used to generate estimated cloud cover (CLC) observations in the camera space. On the other hand, we developed a CLC forward operator that maps an ICON-LAM model state into the CLC camera space. We construct a three-dimensional grid from the camera point of view and interpolate ICON-LAM variables onto this grid. We then model the pixels of the picture as rays originating at the camera location and take the maximum interpolated CLC along each ray. Finally, we embedded our developments into the Kilometre-scale Ensemble Data Assimilation (KENDA) system of the German Weather Service and performed DA experiments for a visible sky camera. Our results are very promising, showing good performance of the forward operator as well as forecast error reduction for temperature and wind.
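The ray-maximum step of the forward operator can be sketched as follows. The random CLC field, the nearest-neighbour sampling, and the step/sample counts are illustrative assumptions standing in for the interpolated ICON-LAM grid described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

# Cloud cover fraction on a coarse 3-D grid (z, y, x), values in [0, 1],
# standing in for ICON-LAM CLC interpolated onto the camera-centred grid.
clc = rng.uniform(0.0, 1.0, size=(20, 30, 30))

def pixel_clc(clc, origin, direction, n_samples=50, step=0.5):
    """Sample CLC along a ray from `origin` in `direction` (nearest-
    neighbour lookup) and return the maximum value along the ray."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, dtype=float)
    best = 0.0
    for k in range(n_samples):
        p = origin + k * step * direction
        idx = np.round(p).astype(int)
        if np.any(idx < 0) or np.any(idx >= clc.shape):
            break                      # ray has left the model domain
        best = max(best, clc[tuple(idx)])
    return best

# One "camera pixel": a ray from the camera location, slanted upward.
value = pixel_clc(clc, origin=(0, 15, 15), direction=(1.0, 0.3, 0.1))
print(0.0 <= value <= 1.0)
```

Repeating this for every pixel direction yields a model-equivalent camera image that can be compared with the CNN-derived CLC observations during assimilation.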

16:40-18:20 Session 13G: QCW 3
Location: Telford
16:40
Quantum Variational Multi-Class Classifier for the Iris Data Set

ABSTRACT. Recent advances in machine learning on quantum computers have been made possible mainly by two discoveries. First, mapping the features into exponentially large Hilbert spaces can make them linearly separable, since quantum circuits perform only linear operations. Second, the parameter-shift rule allows objective function gradients to be computed easily on quantum hardware, so that a classical optimizer can be used to find the minimum. Together, these allow us to build a binary variational quantum classifier that shows some advantages over its classical counterpart. In this paper we extend this idea to a multi-class classifier and apply it to real data. A systematic study involving several feature maps and classical optimizers, as well as different numbers of repetitions of the parametrized circuits, is presented. The accuracy of the model is compared between a simulated environment and a real IBM quantum computer. A considerable accuracy drop is found on the real quantum computer, due to the noisy nature of current quantum hardware and the additional gates added to the circuit by the transpiler when mapping logical qubits onto physical ones.
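The parameter-shift rule mentioned in the abstract can be verified on the simplest possible circuit. The single-qubit RY circuit and state-vector simulation below are an illustrative minimal case, not the classifier circuit of the paper:

```python
import numpy as np

def expectation_z(theta):
    """<Z> for the state RY(theta)|0>, computed by explicit state-vector
    simulation (standing in for a hardware measurement). Analytically,
    this equals cos(theta)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: the exact gradient is obtained from two
    evaluations of the same circuit at shifted parameter values, which
    is also feasible on real quantum hardware."""
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta = 0.7
grad = parameter_shift_grad(expectation_z, theta)
print(np.isclose(grad, -np.sin(theta)))
```

Unlike finite differences, the shifted evaluations give the exact gradient (for gates generated by Pauli operators), which is why the rule pairs well with classical optimizers in the variational loop.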

17:00
Distributed Quantum Annealing on D-Wave for the single machine total weighted tardiness scheduling problem

ABSTRACT. In this work, we propose a new distributed quantum annealing approach to algorithm construction for solving an NP-hard scheduling problem. We diversify the computation by dividing the space of feasible solutions, exploiting the fact that the quantum annealer of the D-Wave machine is currently able to solve only small-size subproblems to optimality. The proposed methodology was tested on a difficult instance of the single machine total weighted tardiness scheduling problem proposed by Lawler. An optimal solution to the problem under consideration was obtained in a repeatable manner.
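The objective function and the split-the-feasible-space idea can be sketched classically. The four-job instance below is illustrative (not Lawler's instance), the subspaces are partitioned by the first scheduled job, and brute-force enumeration stands in for the D-Wave subproblem solves:

```python
from itertools import permutations

# Single-machine jobs as (processing time p, weight w, due date d);
# illustrative data, not the hard instance used in the paper.
jobs = [(3, 2, 4), (2, 3, 5), (4, 1, 7), (1, 4, 3)]

def twt(order):
    """Total weighted tardiness of a job sequence: sum of w * max(0, C - d)
    over jobs, where C is the job's completion time."""
    t, cost = 0, 0
    for j in order:
        p, w, d = jobs[j]
        t += p
        cost += w * max(0, t - d)
    return cost

# "Distributed" search: partition the feasible set by the first scheduled
# job and solve each subspace independently (brute force here, standing in
# for one small annealer run per subspace), then combine the partial optima.
def solve_subspace(first):
    rest = [j for j in range(len(jobs)) if j != first]
    return min(twt((first,) + perm) for perm in permutations(rest))

best_split = min(solve_subspace(first) for first in range(len(jobs)))
best_global = min(twt(perm) for perm in permutations(range(len(jobs))))
print(best_split == best_global)
```

Because the subspaces cover the full feasible set, combining their optima recovers the global optimum; the gain is that each subproblem is small enough for the annealer to handle.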

17:20
Studying the cost of n-qubit Toffoli gates

ABSTRACT. There are several Toffoli gate designs for quantum computers in the literature. Each design targets a specific technology or optimises one or more metrics (T-count, number of qubits, etc.), and therefore has its own advantages and disadvantages. While there is some consensus in the state of the art on the best implementations of the Toffoli gate, scaling this gate to three or more control qubits is not trivial. In this paper, we analyse the known techniques for constructing an n-qubit Toffoli gate, as well as the existing state-of-the-art designs for the two-control version, which is an indispensable building block for the larger gates. In particular, we are interested in constructions of the temporary logical-AND gate with more than two control qubits. This gate is widely used in the literature due to the T-count and qubit reductions it provides; however, its use with more than two control qubits has not been analysed in detail in any prior work. The resulting information is presented in figures of our own creation and comparative tables that make it easy for researchers and other interested readers to choose the design that best suits their needs. As part of this work, the studied implementations have been reproduced and tested on both quantum simulators and real quantum devices.
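The defining action of an n-control Toffoli gate (flip the target if and only if all controls are 1) can be checked directly on its unitary matrix. The dense-matrix construction below is a specification-level sketch for verification, not one of the circuit decompositions the paper compares:

```python
import numpy as np

def multi_controlled_x(n_controls):
    """Unitary of an X gate with n control qubits (controls first, target
    last): the identity everywhere except that the last two basis states,
    |1...10> and |1...11>, are swapped."""
    dim = 2 ** (n_controls + 1)
    u = np.eye(dim)
    u[dim - 2:, dim - 2:] = np.array([[0.0, 1.0], [1.0, 0.0]])
    return u

# Sanity check on the two-control case (the standard Toffoli gate).
toffoli = multi_controlled_x(2)
ket = np.zeros(8)
ket[0b110] = 1.0                  # controls = 11, target = 0
out = toffoli @ ket
print(int(np.argmax(out)))        # index of the resulting basis state
```

Any candidate decomposition (including temporary logical-AND based ones, up to the uncomputation step) can be validated by multiplying out its gate sequence and comparing against this reference unitary.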