
09:20-10:10 Session 2: Keynote Lecture 1
Location: 100
An Overview of High Performance Computing and Future Requirements

ABSTRACT. In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had and will continue to impact our numerical scientific software significantly. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

10:10-10:40 Coffee Break
10:40-12:20 Session 3A: MT 1
Location: 100
Numerical simulation of the octorotor flying car in sudden rotor stop

ABSTRACT. Manned drones, also known as flying cars, are currently attracting attention, but legal restrictions and the fear of accidents make them difficult to fly. We therefore present a numerical simulation of the sudden stop of a rotor of an octorotor flying car. In this paper, we consider the interaction between fluid and rigid body in a six-degrees-of-freedom flight simulation of a flying car. For this purpose, the attitude of the aircraft is determined from the force generated by the flow field around the aircraft due to the rotation of the rotors. The motion of the aircraft is obtained from the equations of motion for translation and rotation: Newton's equation of motion and Euler's equation of rotation. A multi-axis sliding mesh is adopted for the rotation of the rotors, and calculations with multiple rotating bodies in the computational grid are performed. In addition, we use the moving computational domain (MCD) method to represent the free motion of the octorotor flying car through the motion of the computational domain itself. Using the above methods, we identify an appropriate rotation strategy among various rotor-stop patterns, demonstrate the safety of the octorotor flying car, and clarify the behavior of the aircraft and the surrounding flow field.

Automated identification and location of three dimensional atmospheric frontal systems

ABSTRACT. We present a novel method to identify and locate weather fronts at various pressure levels, creating a three-dimensional structure, using weather data over the North Atlantic. It provides statistical evaluations of the slope and of weather phenomena correlated with the identified three-dimensional structure. Our approach first uses a deep neural network to locate 2D surface fronts, which then serve as an initialization for extending them to various height levels. We show that our method is able to detect frontal locations between 500 hPa and 1000 hPa.

Downscaling WRF-Chem: Analyzing urban air quality in Barcelona city

ABSTRACT. Improving air quality in highly polluted cities is a challenge for today's society. Most of the proposed strategies include green policies whose objective is to introduce green infrastructures that help improve air quality. In order to design these new cities, the WRF-Chem model is used as an analysis tool, which makes it possible to predict the evolution of the most common pollutants, as well as their dispersion under the prevailing meteorology. However, most studies are not at the urban scale (hundreds of meters of resolution), and those that manage to simulate the meteorology at this resolution do not take the morphology of the city into account. Using the city of Barcelona as a case study, this paper confirms that the modeling methodology used up to now must be revised in order to design green cities. To this end, certain limitations of the WRF-Chem model have been analyzed, including BEP-BEM as the urban canopy layer, and the reasons for these limitations are discussed.

Turning Flight Simulation with Fluid-rigid Body Interaction for Flying Car with Contra-rotating Propellers

ABSTRACT. Toward the realization of Digital Flight for next-generation vehicles, a numerical flight simulation of the turning flight of a flying car was performed with fluid-rigid-body interaction taken into account. The vehicle is of the electric vertical takeoff and landing (eVTOL) octorotor type with four contra-rotating propeller units, and it has successfully performed a manned flight test. In this simulation, the flying car flies as in the real world, driven only by the force generated by its eight rotating propellers. The moving computational domain method was adopted to realize the free movement of the vehicle in three-dimensional space. The whole computational grid is divided into eight domains to reproduce the rotation of the propellers. The propeller rotations are achieved without any simplification by applying the multi-axis sliding mesh approach. Moreover, to fly the flying car as intended, the attitude of the body in flight is properly controlled by a PID controller. As a result, the vehicle flies under only the lift generated by the propellers, and the turning flights of a flying car with coaxial propellers are reproduced. In addition, the simulation results differ from analytical results based on simplified aerodynamic forces. This suggests that the method is an effective one for numerical flight tests of multi-rotor aircraft on a computer.

SLAM methods for Augmented Reality systems for flight simulators

ABSTRACT. In this paper we present a review and a practical evaluation, in flight simulators, of Simultaneous Localization and Mapping (SLAM) methods. We review recent research and development in SLAM applications across a wide range of domains, such as autonomous driving, robotics, and augmented reality (AR). We then focus on methods selected from the perspective of their usefulness in AR systems for training with and servicing flight simulators. Localization and mapping in such an environment is much more complex than in others, since a flight simulator is a relatively small, enclosed area. Our previous experiments showed that the built-in SLAM system in HoloLens is insufficient for such areas and has to be enhanced with additional elements, such as QR codes. Therefore, the presented study of other methods can improve the localization and mapping of AR systems in flight simulators.

10:40-12:20 Session 3B: MT 2-ol
Location: 303
First-Principles Calculation to N-type Beryllium related Co-doping and Beryllium Doping in Diamond

ABSTRACT. Beryllium-doped (Be-doped) diamond and beryllium-related (Be-X) co-doped diamond have been carefully investigated with density functional theory (DFT) to explore the possibility of achieving effective, shallow n-type doping in diamond. Although the ionization energy and formation energy of interstitial/substitutional Be-doped diamond are not ideal, the introduction of Be-related co-doping techniques (Be-N/O/S) greatly improves the electrical properties of diamond. We found, for the first time, that n-type diamond doping can be realized in Be-N, Be-O, and Be-S co-doped systems, among which Be-N3 performs best. Be-N3 has the advantages of low ionization energy (0.25 eV), low formation energy (-1.59 eV), and a direct bandgap. The N-2p states play a crucial role in the conduction band edge of Be-N3 co-doped diamond. Hence, Be-N3 can be expected to become a promising alternative for n-type shallow doping in diamond.

Introducing a computational method to retrofit damaged buildings under seismic mainshock-aftershock sequence

ABSTRACT. Retrofitting damaged buildings is a challenge for engineers, since commercial software lacks the ability to consider the local damage and deformed shape of a building resulting from the mainshock record of an earthquake before the aftershock record is applied. Therefore, in this research, a computational method for retrofitting damaged buildings under seismic mainshock-aftershock sequences is proposed, and the computational strategy is developed using Tcl programming code in OpenSees and MATLAB. Since the developed programming code can conduct nonlinear dynamic analysis and Incremental Dynamic Analysis (IDA), different types of steel and reinforced concrete structures, assuming different intensity measures and engineering demands, can benefit from this study. To demonstrate the ability of the method, 4-story and 6-story steel structures were selected, and local damage was observed after the seismic mainshock record. Then, linear Viscous Dampers (VDs) were used to retrofit the damaged structures, and analyses were performed under the aftershock record of the earthquake. The results of the study show that the proposed method and computational program can provide the seismic performance level of damaged frames based on mainshock-aftershock sequences. In addition, the damaged floor level of the building is recognized by the programming code and can be effectively considered for local retrofit schemes.

Adsorption Characteristics and Thermal Stability of Hydrogen Termination on Diamond Surface: A first-principles study

ABSTRACT. In this paper, we systematically investigated the adsorption characteristics, electronic structure (DOS), band structure, and thermal stability of diamond surfaces with hydrogen terminals. We found that the most stable adsorption occurs on the (100) surface; the adsorption stability of a hydrogen atom on the (110) plane is second, and it is worst on the (111) plane. A very shallow acceptor level is introduced through hydrogen termination, explaining the ideal p-type diamond characteristics. The stability of the hydrogen-terminal structure decreases as temperature rises: the structure deteriorates significantly above 400 K, and the instability of the hydrogen-terminated surface structure is the root cause of the decrease in the hole concentration of hydrogen-terminated diamond at high temperature.

Fast Electromagnetic Field Pattern Calculations with Fourier Neural Operator Networks

ABSTRACT. Calculating the field patterns arising from an array of radiating sources is a central problem in Computational ElectroMagnetics (CEM) and a critical operation for designing and developing antenna systems. Yet, it is a very time-consuming and computationally expensive operation when using traditional numerical approaches, including finite differences in the time and spectral domains. To address this issue, we develop a new data-driven surrogate model for fast and accurate calculation of the field radiation pattern. The method is based on the Fourier Neural Operator: the Fourier basis kernel is a natural match for the wave propagation model. We show that we achieve a 30x performance improvement over the MEEP code running on a commodity laptop CPU, at the cost of a small accuracy loss.
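As a rough, self-contained illustration of the spectral mixing at the heart of a Fourier Neural Operator layer (the field, mode count, and identity weights below are illustrative placeholders, not the paper's trained model):

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """Core FNO operation on a 1D field: transform to Fourier space, apply a
    learned complex multiplier to the lowest n_modes, transform back.

    The wave-like structure of radiated fields is what makes this Fourier
    parameterization a natural fit."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # truncated spectral mixing
    return np.fft.irfft(out_hat, n=len(u))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)                                       # single-mode input field
# Identity weights on the retained modes pass this low-mode field through.
out = fourier_layer(u, np.ones(4, dtype=complex), n_modes=4)
```

In a real FNO the `weights` are trained per mode (and per channel), and several such layers are stacked with pointwise nonlinearities between them.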

10:40-12:20 Session 3C: AIHPC4AS 1
Location: 319
Combining Deep Learning and Computational Mechanics in structural damage assessment under environmental and operational variability

ABSTRACT. Structural Health Monitoring (SHM) is critical to maintaining infrastructure safety. Traditional assessment methods have relied on visual inspections, which are time-consuming and often insufficient for detecting hidden damage. Given the significant improvements achieved in monitoring, and thanks to the advent of Artificial Intelligence, large civil infrastructures are undergoing a new paradigm in assessment, maintenance, and safety assurance.

One of the main limitations of data-driven approaches in real SHM practice is the lack of measurements corresponding to the possible damage scenarios that may occur (only the healthy or reference state is available in an in-service structure). This situation prevents supervised learning approaches and thus restricts the assessment to a binary classification problem: healthy or unknown. However, if a more insightful diagnostic is desired, we need to complement the available monitoring measurements with an additional source of information: computer simulations. For that purpose, a simplified parametrization representing the target structure is required. It introduces new sources of uncertainty and error but allows for recreating hypotheses and scenarios that are unfeasible in real practice. These simulations often focus on introducing damage situations to be compared with a single healthy state. Hence, they disregard the fact that environmental and operational conditions (EOCs) are constantly changing and that various healthy scenarios exist. These varying EOCs may mask the presence of damage or even result in false alarms if they are not adequately embedded in the synthetic damage scenarios used for training.

In this talk, we will present a combination of Deep Learning and computer simulations from a simplified Finite Element parametrization to efficiently incorporate physical knowledge about damage scenarios into a diagnostic-intended supervised Deep Neural Network for SHM. We incorporate environmental and operational variability by identifying representative experimental measurements through a clustering technique (Gaussian Mixture Model, GMM). This technique allows for efficiently generating synthetic scenarios that capture most of the operational and environmental variability. By incorporating these synthetic scenarios into the Deep Neural Network training phase, we can improve its accuracy in identifying different types of damage. We validate our approach on a real full-scale case study of a reinforced concrete bridge in Porto. Our results demonstrate that neglecting environmental and operational variability can lead to inadequate damage detection under unseen conditions.

Actor-based Scalable Simulation of N-body Problem

ABSTRACT. Efficient solutions of the n-body problem make it possible to conduct large-scale physical research on the rules governing our universe. The vast amount of communication needed to make each body aware of the positions of all other bodies quickly renders accurate solutions inefficient and impractical. Many approximate approaches have been proposed; the one introduced in this paper relies on actor-based concurrency, making the whole design and implementation significantly easier than using, e.g., MPI. In addition to presenting three methods, we provide the reader with tangible preliminary results that pave the way for future development of the constructed simulation system.
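The all-pairs force evaluation whose O(N^2) cost motivates such approximate schemes can be sketched as follows (a generic direct-summation routine, not the paper's actor-based implementation; the units and the softening length `eps` are illustrative):

```python
import math

def accelerations(positions, masses, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations: every body needs every
    other body's position, which is exactly the quadratic communication and
    computation load that approximate methods try to reduce.

    eps is a softening length that avoids singularities in close encounters."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps
            f = G * masses[j] / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc

# Two unit masses one unit apart: equal and opposite accelerations.
a = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
```

In an actor-based design, each actor would own a subset of bodies and exchange position messages with the others instead of running this loop sequentially.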

Least-squares space-time formulation for advection-diffusion problem with efficient adaptive solver based on matrix compression

ABSTRACT. We present hierarchical matrix compression algorithms to speed up computations for a difficult, unstable space-time finite element method. Namely, we focus on a non-stationary, advection-dominated diffusion problem solved with the space-time finite element method. We formulate the problem on a space-time mesh, where two axes of the coordinate system denote the spatial dimensions and the third axis denotes the temporal dimension. By employing the space-time mesh, we avoid time iterations and solve the problem "at once" by calling the solver once for the entire mesh. This problem, however, is challenging and requires special stabilization methods. We propose a stabilization method based on least squares. We derive the space-time formulation and solve it using an adaptive finite element method. To speed up the solution process, we compress the matrix of the space-time formulation using a low-rank compression algorithm. We show that the compressed matrix allows for matrix-vector multiplication at quasi-linear computational cost. Thus, we apply the GMRES solver with hierarchical matrix-vector multiplications. Summing up, we propose a quasi-linear computational cost solver for stabilized space-time formulations of the advection-dominated diffusion problem.
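The low-rank compression underlying hierarchical matrix techniques can be illustrated with a truncated SVD of a smooth-kernel block (a generic sketch; the kernel and tolerance below are illustrative, not taken from the paper):

```python
import numpy as np

def low_rank_approx(block, tol=1e-8):
    """Compress a matrix block by truncated SVD, keeping singular values above
    tol * sigma_max, and return the two factors of the approximation.

    Hierarchical-matrix solvers apply this to admissible (e.g. well-separated)
    off-diagonal blocks, so a matrix-vector product with the compressed block
    costs O(rank * (rows + cols)) instead of O(rows * cols)."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    return u[:, :rank] * s[:rank], vt[:rank]        # block ~= A @ B

# A smooth-kernel block, as arises from distant mesh interactions, is
# numerically low-rank even though it is stored as a dense 200x200 array.
x = np.linspace(0, 1, 200)
block = 1.0 / (2.0 + np.subtract.outer(x, x))
A, B = low_rank_approx(block)
```

A GMRES iteration then replaces `block @ v` with the two cheap products `A @ (B @ v)`.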

Deep Learning Operator Network for Geometric Nonlinear Deformation with PyFEM

ABSTRACT. Physics-informed neural networks have attracted interest in deep neural networks as universal approximators of solutions in various scientific and engineering communities. The drawback of neural networks in most existing approaches, however, is that they can only provide solutions for a fixed set of input parameters, such as material properties, loads, and boundary and initial conditions. Changing these parameters requires re-training, and the computational cost of this re-training increases especially when the numerical simulations to be approximated involve nonlinear (material or geometric) mechanical analysis. In this context, the newly introduced deep learning operator network (DeepONet) is particularly useful. DeepONet approximates linear and nonlinear solution operators by taking parametric functions (infinite-dimensional objects) as inputs and mapping them to solution functions in other output spaces. In this work, we investigate the effectiveness of DeepONets in computing large-displacement solutions of geometric deformation problems, with variable load magnitudes and directions as parameters. The numerical solutions of a test case of a cantilever subjected to large deformations are computed in a finite strain continuum using Newton-Raphson iterations in a finite element code, PyFEM. The DeepONet formulation is used to tackle the challenge of learning the spatial distribution of displacements on a 2D domain under variable load magnitudes and directions.

Fast solver for advection dominated diffusion using residual minimization and neural networks

ABSTRACT. Advection-dominated diffusion is a challenging computational problem that requires special stabilization efforts. Unfortunately, the numerical solution obtained with the commonly used Galerkin method exhibits unexpected oscillations, resulting in an inaccurate numerical solution. The theoretical background, resulting from the famous inf-sup condition, tells us that the finite-dimensional test space employed by the Galerkin method does not allow us to reach the supremum necessary for problem stability. To overcome this problem, we enlarge the test space while keeping the trial space fixed. The method that allows us to do so is the residual minimization method. This method, however, requires the solution of a much larger system of linear equations than the standard Galerkin method. We represent the larger test space by its set of optimal test functions, forming a basis of the same dimension as the trial space in the Galerkin method. The resulting Petrov-Galerkin method stabilizes our challenging advection-dominated problem. To speed up the computations, we train the optimal test functions offline with a neural network. We also observe that the optimal test functions, usually global, can be approximated with locally supported functions, resulting in a low computational cost for the solver and a stable numerical solution.

10:40-12:20 Session 3D: BBC 1
Location: 220
Resting State Brain Connectivity analysis from EEG and FNIRS signals

ABSTRACT. Contemporary neuroscience is highly focused on the synergistic use of machine learning and network analysis. Indeed, network neuroscience analysis intensively capitalizes on clustering metrics and statistical tools. In this context, the integrated analysis of functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) provides complementary information about the electrical and hemodynamic activity of the brain. Evidence supports the view that the mechanism of neurovascular coupling mediates brain processing. However, it is not well understood how specific patterns of neuronal activity are represented by these techniques. Here we have investigated the topological properties of resting-state functional brain networks between synchronous EEG and fNIRS connectomes, across frequency bands, using source-space analysis and graph-theoretical approaches. At the global level, we observed small-world topology network features for both modalities. The edge-wise analysis pointed out increased inter-hemispheric connectivity for oxy-hemoglobin compared to EEG, with no differences across frequency bands. Our results show that graph features extracted from fNIRS can reflect both short- and long-range organization of neural activity and can characterize the large-scale network in the resting state. Further development of integrated analyses of the two modalities is required to fully benefit from the added value of each modality. However, the present study highlights that multimodal source-space analysis approaches can be adopted to study brain functioning in healthy resting states, thus serving as a foundation for future work during tasks and in pathology, with the possibility of obtaining novel comprehensive biomarkers for neurological diseases.

Phase Correction and Noise-to-Noise Denoising of Diffusion Magnetic Resonance Images using Neural Networks

ABSTRACT. Diffusion magnetic resonance imaging (dMRI) is an important technique used in neuroimaging. It features a relatively low signal-to-noise ratio (SNR), which poses a challenge, especially at stronger diffusion weighting. A common solution to the resulting poor precision is to average the signal from multiple identical measurements. Indeed, averaging the magnitude signal is sufficient if the noise is sampled from a distribution with zero mean value. However, at low SNR, the magnitude signal is increased by the rectified noise floor, such that accuracy can only be maintained if averaging is performed on the complex signal. Averaging of the complex signal is straightforward in non-diffusion-weighted images; however, in the presence of diffusion encoding gradients, any motion of the tissue will incur a phase shift in the signal, which must be corrected prior to averaging. In practice, images are instead averaged in the modulus image space, which is associated with Rician bias. Moreover, repeated acquisitions further increase acquisition times, which, in turn, exacerbates the challenges of patient motion. In this paper, we propose a method to correct phase variations using a neural network trained on synthetic MR data. Then, we train another network using the Noise2Noise paradigm to denoise real dMRI of the brain. We show that phase correction made Noise2Noise training possible and that the latter improved the denoising quality over averaging modulus domain images.
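The rectified-noise-floor effect that motivates complex-signal averaging can be demonstrated with a tiny synthetic experiment (illustrative signal and noise levels, not the paper's data or networks):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.5          # weak diffusion-weighted signal (arbitrary units)
sigma = 1.0                # per-channel Gaussian noise std, i.e. low SNR
n_avg = 100_000            # number of repeated acquisitions to average

# Complex measurements: signal plus Gaussian noise in real and imaginary parts.
noise = rng.normal(0, sigma, n_avg) + 1j * rng.normal(0, sigma, n_avg)
measurements = true_signal + noise

# Magnitude averaging: |signal + noise| follows a Rician distribution whose
# mean sits above the true signal (the rectified noise floor).
mag_avg = np.abs(measurements).mean()

# Complex averaging (possible only once phases are aligned): noise cancels.
cpx_avg = np.abs(measurements.mean())
```

With motion-induced phase shifts, each measurement would carry a different phase factor, which is why phase correction must precede the complex average.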

Anomaly detection of motion capture data based on the autoencoder approach

ABSTRACT. Anomalies of gait sequences are detected on the basis of an autoencoder strategy in which input data are reconstructed from their embeddings. Denoising dense low-dimensional and sparse high-dimensional autoencoders are applied to segments of time series representing 3D rotations of the skeletal body parts. The outliers, i.e., misreconstructed time segments, are determined and classified as abnormal gait fragments. In the validation stage, motion capture data registered in the virtual reality of the Human Dynamics and Multimodal Interaction Laboratory of the Polish-Japanese Academy of Information Technology, equipped with Motek CAREN Extended hardware and software, are used. Scenarios with audio and visual stimuli are prepared to enforce anomalies during a walk. The acquired data are labeled by a human, which results in both visible and invisible anomalies being extracted. The neural network representing the autoencoder is trained using anomaly-free data and validated on the complete data. AP (Average Precision) and ROC-AUC (Receiver Operating Characteristic - Area Under Curve) measures are calculated to assess detection performance. The influences of the number of neurons in the hidden layer, the length of the analyzed time segments, and the variance of the injected Gaussian noise are investigated. The obtained results, with AP=0.46 and ROC-AUC=0.71, are promising.
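The reconstruction-error principle behind autoencoder anomaly detection can be sketched with its simplest linear instance, PCA (a hypothetical toy stand-in for the paper's denoising autoencoders and gait data):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" training segments: points on a 2-dimensional subspace of R^10,
# standing in for anomaly-free gait segments.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))

# A linear autoencoder is PCA: encode into k components, decode back.
center = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - center, full_matrices=False)
components = vt[:2]                        # k = 2 latent dimensions

def reconstruction_error(x):
    z = (x - center) @ components.T        # encode
    x_hat = z @ components + center        # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Threshold calibrated on anomaly-free data, mirroring training on
# anomaly-free sequences only.
threshold = np.quantile(reconstruction_error(normal), 0.99)

anomaly = rng.normal(size=(1, 10))         # an off-subspace segment
is_anomaly = reconstruction_error(anomaly) > threshold
```

A trained nonlinear autoencoder replaces the encode/decode projections with neural networks, but the detection rule (large reconstruction error means anomaly) is the same.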

Influence of the capillaries bed in hyperthermia for cancer treatment

ABSTRACT. This work presents the computational modeling of solid tumor treatments with hyperthermia using magnetic nanoparticles, considering the bioheat transfer model proposed by Pennes (1948). The simulations consider a tumor seated in a muscle layer. The model is described by a partial differential equation, and its solution is approximated using the finite difference method in a heterogeneous porous medium with a Forward Time Centered Space scheme. Moreover, the Monte Carlo method is employed to quantify the uncertainties of the quantities of interest (QoI) considered in the simulations. The analysis considers uncertainties in three different parameters: 1) the angulation of the blood vessels, 2) the magnitude of the blood flow, and 3) the number of blood vessels per tissue unit. Since Monte Carlo demands several executions of the model, and solving a partial differential equation in a two-dimensional domain demands significant computational time, we use the OpenMP parallel programming API to speed up the simulations. The results of the in silico experiments show that, considering the uncertainties present in the three parameters studied, it is possible to plan a hyperthermia treatment that ensures the entire tumor area reaches the target temperature that leads to damage.
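The Monte Carlo loop described above can be sketched generically (a toy surrogate stands in for the bioheat solver; the model, parameter range, and numbers are illustrative, not the paper's):

```python
import random
import statistics

random.seed(0)

def peak_temperature(perfusion):
    """Toy stand-in for a bioheat solver: higher blood perfusion carries
    more heat away, lowering the simulated peak tumor temperature (deg C)."""
    return 37.0 + 8.0 / (1.0 + 10.0 * perfusion)

# Monte Carlo: sample the uncertain parameter, run the model once per sample.
# In the paper each sample is a full PDE solve, hence the OpenMP speed-up.
samples = [peak_temperature(random.uniform(0.05, 0.15)) for _ in range(10_000)]
qoi_mean = statistics.fmean(samples)   # expected value of the QoI
qoi_std = statistics.stdev(samples)    # spread induced by the uncertain input
```

The same loop extends to several uncertain parameters by drawing each from its own distribution before every model run, and the runs are embarrassingly parallel.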

Investigating the Sentiment in Italian Long-COVID Narrations

ABSTRACT. Through an overview of the history of a disease, Narrative Medicine (NM) aims to define and implement an effective, appropriate, and shared treatment path. In the context of COVID-19, several blogs were produced; among them, "Sindrome Post COVID-19" contains narratives related to the COVID-19 pandemic. In the present study, different analysis techniques were applied to a dataset extracted from the "Sindrome Post COVID-19" blog. The first step of the analysis was to test the VADER polarity extraction tool. The analysis was then extended through the application of Topic Modeling, using Latent Dirichlet Allocation (LDA). The results were compared to verify the correlations between the polarity scores obtained through VADER and the topics extracted through LDA. The results showed a predominantly negative polarity, consistent with the mostly negative topics represented by words on post-virus symptoms.

10:40-12:20 Session 3E: QCW 1
Location: 120
Software aided approach for constrained optimization based on QAOA modifications

ABSTRACT. We present two variants of QAOA modifications for solving constrained combinatorial problems. The results were obtained with the QHyper framework, designed for this purpose and described in the paper. More specifically, we use the created framework to compare the QAOA results with its two modifications, namely Weight-Free QAOA (WF-QAOA) and Hyper QAOA (H-QAOA). Additionally, we compare the Basin-hopping global method for subsequent sampling of the initial points of the proposed QAOA modifications with a simple random search. The presented results for the Knapsack Problem show that the proposed solution outperforms the original QAOA algorithm and can be promising for QUBO problems where adjusting the relative importance of the cost function and the constraints is not a trivial issue.

Solving (Max) 3-SAT via Quadratic Unconstrained Binary Optimization

ABSTRACT. We introduce a novel approach to translate arbitrary 3-SAT instances to Quadratic Unconstrained Binary Optimization (QUBO), as used by quantum annealing (QA) or the quantum approximate optimization algorithm (QAOA). Our approach requires fewer couplings and fewer physical qubits than the current state of the art, which results in higher solution quality. We verified the practical applicability of the approach by testing it on a D-Wave quantum annealer.
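The objective that such QUBO formulations target can be illustrated with a brute-force minimizer for tiny instances (a generic sketch; the example matrix is illustrative and is not the paper's 3-SAT encoding):

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force minimize x^T Q x over binary vectors (tiny instances only).

    Q is a dict {(i, j): weight} with i <= j; diagonal entries are the linear
    biases. Annealers and QAOA target exactly this objective at scale, where
    exhaustive search over 2^n assignments is no longer possible."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(w * x[i] * x[j] for (i, j), w in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: penalize x0 and x1 taking the same value, reward setting x2.
# A SAT-to-QUBO translation builds Q so that minima correspond to satisfying
# (or maximally satisfying) assignments.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2, (2, 2): -1}
x, e = solve_qubo(Q)
```

Fewer couplings (off-diagonal entries) and fewer variables, as the abstract claims for the proposed encoding, directly reduce the hardware resources an annealer needs to embed `Q`.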

GCS-Q: Quantum Graph Coalition Structure Generation

ABSTRACT. The problem of generating an optimal coalition structure for a given coalition game of rational agents is to find a partition that maximizes their social welfare, and it is known to be NP-hard. Though algorithmic solutions with high computational complexity are available for this combinatorial optimization problem, it is unknown whether quantum-supported solutions may outperform classical algorithms.

In this paper, we propose a novel quantum-supported solution for coalition structure generation in Induced Subgraph Games (ISGs). Our hybrid classical-quantum algorithm, called GCS-Q, iteratively splits a given $n$-agent graph game into two nonempty subsets in order to obtain a coalition structure with a higher coalition value. The GCS-Q solves the optimal split problem $\mathcal{O}(n)$ times, exploring $\mathcal{O}(2^n)$ partitions at each step. In particular, the optimal split problem is reformulated as a QUBO and executed on a quantum annealer, which is capable of providing the solution in linear time with respect to $n$. We show that GCS-Q outperforms the currently best classical and quantum solvers for coalition structure generation in ISGs with its runtime in the order of $n^2$ and an expected approximation ratio of $93\%$ on standard benchmark datasets.

10:40-12:20 Session 3F: MMS 1
Location: B103
Convolutional Recurrent Autoencoder for Molecular-Continuum Coupling

ABSTRACT. Molecular-continuum coupled flow simulations are used in many applications to build a bridge across spatial or temporal scales. Hence, they make it possible to investigate effects beyond flow scenarios modeled by any single-scale method alone, such as a discrete particle system or a partial differential equation solver. On the particle side of the coupling, molecular dynamics (MD) is often used to obtain trajectories based on pairwise molecule interaction potentials. However, since MD is computationally expensive and macroscopic flow quantities sampled from MD systems often fluctuate strongly due to thermal noise, the applicability of molecular-continuum methods is limited. If machine learning (ML) methods can learn and predict MD-based flow data, this can be used as a noise filter or even to replace MD computations -- both generate potential for tremendous speed-up of molecular-continuum simulations, aiming to enable new applications emerging on the horizon.

In this paper, we develop an advanced hybrid ML model for MD data in the context of coupled molecular-continuum flow simulations: a convolutional autoencoder deals with the spatial extent of the flow data, while a recurrent neural network is used to capture its temporal correlation. We use the open source coupling tool MaMiCo to generate MD datasets for ML training and implement the hybrid model as a PyTorch-based filtering module for MaMiCo. It is trained with real MD data from different flow scenarios, including a Couette flow validation setup and a three-dimensional vortex street. Our results show that the hybrid model is able to learn and predict smooth flow quantities, even for very noisy MD input data. We furthermore demonstrate that the more complex vortex street flow data can also be accurately reproduced by the ML module.

Developing an Agent-Based Simulation Model to Forecast Flood-Induced Evacuation and Internally Displaced Persons

ABSTRACT. Each year, natural disasters force millions of people to evacuate their homes and become internally displaced. Mass evacuations following a disaster can make it difficult for humanitarian organizations to respond properly and provide aid. To help predict the number of people who will require shelter, this study uses agent-based modelling to simulate flood-induced evacuations. We modified the Flee modelling toolkit, which was originally developed to simulate conflict-based displacement, to be used for flood-induced displacement. We adjusted the simulation parameters, updated the rule set, and changed the development approach to address the specific requirements of flood-induced displacement. We developed a test model, called DFlee, which includes new features, such as the simulation of internally displaced persons and returnees. We tested the model on a case study of a 2022 flood in Bauchi state, Nigeria, and validated the results against data from the International Organization for Migration’s Displacement Tracking Matrix. The model's goal is to help humanitarian organizations prepare and respond more effectively to future flood-induced evacuations.

Epistemic and Aleatoric Uncertainty Quantification and Surrogate Modelling in High-Performance Multiscale Plasma Physics Simulations

ABSTRACT. This work suggests several methods of uncertainty treatment in multiscale modelling and describes their application to a system of coupled turbulent transport simulations of a tokamak plasma. We propose a method to quantify the usually aleatoric uncertainty of a system in a quasistationary state, estimating the mean values and their errors for quantities of interest, which in the case of turbulence simulations are the average heat fluxes. The method defines the stationarity of the system and suggests a way to balance the computational cost of simulation against the accuracy of estimation. This, contrary to many approaches, allows aleatoric uncertainties to be incorporated in the analysis of the model and yields a quantifiable decision for simulation runtime. Furthermore, the paper describes methods for quantifying the epistemic uncertainty of a model and the results of such a procedure for turbulence simulations, identifying the model's sensitivity to particular input parameters and to uncertainties in total. Finally, we introduce a surrogate model approach based on Gaussian Process Regression and present a preliminary result of training and analysing the performance of such a model on turbulence simulation data. Such an approach shows the potential to significantly decrease the computational cost of uncertainty propagation for the given model, making it feasible on current HPC systems.
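Estimating the mean and its error for a quasistationary series with correlated samples is commonly done with the batch-means method; the sketch below illustrates that generic idea (it is not necessarily the authors' exact procedure, and the synthetic "heat flux" signal is purely illustrative):

```python
import numpy as np

def batch_means(series, n_batches=10):
    """Estimate the mean of a (quasi)stationary series and its standard
    error via batch means, which tolerates autocorrelated samples."""
    n = (len(series) // n_batches) * n_batches
    batches = np.asarray(series[:n]).reshape(n_batches, -1).mean(axis=1)
    # Standard error of the mean from the spread of the batch averages.
    sem = batches.std(ddof=1) / np.sqrt(n_batches)
    return batches.mean(), sem

# Example: a noisy quasistationary "heat flux" signal around 1.0.
rng = np.random.default_rng(0)
flux = 1.0 + 0.1 * rng.standard_normal(10_000)
mean, sem = batch_means(flux)
```

The error estimate can then drive the runtime decision: simulate until `sem` drops below a prescribed tolerance.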

10:40-12:20 Session 3G: CMCM
Location: B115
Predicting cell stress and deformation during bioprinting

ABSTRACT. 3D printing of living cells holds great promise for medical applications. A major current obstacle is the mechanical damage that cells suffer when flowing from the reservoir through the printing needle into the fabricated construct. Here, we present novel models to perform computer simulations of individual cell stress and deformation during bioprinting.

Simulating the Deformation and Stress Distribution of High-shear Platelet Aggregates Under Multiple Shear Flow Conditions

ABSTRACT. In order to understand the initial stages of thrombus formation, the first step, the formation of a shear-induced platelet (PLT) aggregate, needs to be investigated. This initial stage of rapid platelet accumulation plays an important role in the development of the final clot. The mechanical properties of such aggregates are therefore of interest, since they might have an influence on the formation and stability of the whole clot. In this work, an incompressible neo-Hookean solid mechanical model is used to evaluate the small deformation and accompanying stress distribution inside platelet aggregates. The aggregate shapes and internal densities are recorded from novel in vitro experiments, and these data are translated to a computational model including internal porosity using our recent methods in [2]. The force interaction between the platelet aggregates and the surrounding fluid flow was investigated under three different wall shear rates (WSRs). These different WSRs also lead to variations in the aggregate shapes, which in turn had an influence on the stress distribution. The current work seeks to achieve an initial understanding of the mechanics of the shear-induced PLT aggregation process and the subsequent differences in aggregate shape and porosity under different WSR conditions.
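For reference, the constitutive law of an incompressible neo-Hookean solid in its standard textbook form (the paper's exact parameterization may differ) relates the Cauchy stress to the left Cauchy-Green tensor:

```latex
\sigma = -p\,I + \mu\,(B - I), \qquad B = F F^{\mathsf{T}}, \qquad \det F = 1,
```

where $F$ is the deformation gradient, $\mu$ the shear modulus, and $p$ the pressure acting as a Lagrange multiplier enforcing incompressibility.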

Simulating initial steps of platelet aggregate formation in a cellular blood flow environment

ABSTRACT. The mechano-chemical process of clot formation is relevant in both hemostasis and thrombosis. The initial phase of thrombus formation in arterial thrombosis can be described by the mechanical process of platelet adhesion and aggregation via hemodynamic interactions with von Willebrand factor (VWF) molecules. Understanding the formation and composition of this initial blood clot is crucial to evaluate differentiating factors between hemostasis and thrombosis. In this work, a cell-based platelet adhesion and aggregation model is presented to study the initial steps of aggregate formation. Its implementation upon the pre-existing cellular blood flow model HemoCell is explained in detail, and the model is tested in a simple case study of initial aggregate formation under arterial flow conditions. The model is based on a simplified constraint-dependent platelet binding process that coarse-grains the most influential processes into a reduced number of probabilistic thresholds. In contrast to existing computational platelet binding models, the present method places the focus on the mechanical environment that enables the formation of the initial aggregate. Recent studies highlighted the importance of elongational flows on VWF-mediated platelet adhesion and aggregation. The cell-resolved scale used for this model makes it possible to account for important hemodynamic phenomena such as the formation of a red blood cell free layer and platelet margination. This work focuses on the implementation details of the model and presents its characteristic behavior at various coarse-grained threshold values.

Estimating Parameters of 3D Cell Model using a Bayesian Recursive Global Optimizer (BaRGO)

ABSTRACT. In the field of Evolutionary Strategy, parameter estimation for functions with multiple minima is a difficult task when interdependencies between parameters have to be investigated. Most of the current routines used to estimate such parameters leverage state-of-the-art machine learning approaches to identify the global minimum, ignoring the relevance of the potential local minima. In this paper, we present a novel Evolutionary Strategy routine that uses sampling tools derived from the Bayesian field to find the best parameters according to a given loss function. The Bayesian Recursive Global Optimizer (BaRGO) presented in this work explores the parameter space, identifying both local and global minima. Applications of BaRGO to 2D minimization problems and to parameter estimation of a Red Blood Cell model are reported.

Towards unresolved RBC – a deformable particle model for the unresolved CFD-DEM simulation of blood flow

ABSTRACT. The design and optimization of bio-microfluidic devices and lab-on-a-chip applications require a global picture of the blood flow motion and the detection of potential risks of cellular damage such as hemolysis and thrombosis. This global picture is commonly obtained by numerical simulations; however, the choice of computational tool remains dependent on the physical scale of interest. While cell-resolved techniques for the simulation of blood flow can provide a picture of cellular mechanics, their applicability is limited by the number of resolvable biological cells and by computational cost. This study proposes an unresolved computational fluid dynamics-discrete element method (unresolved CFD-DEM) approach for the Eulerian-Lagrangian simulation of blood flow as a suspension of red blood cells (RBCs) in blood plasma. While the plasma flow is resolved as a continuum medium, the RBCs are modelled as Lagrangian particles whose dynamics are governed by Newton's second law of motion. To preserve the main characteristic behavior of the RBCs, i.e. their deformability, as well as their interaction with the carrier fluid flow, we pursue a deformable particle model that adopts proper closure models for the under-resolved deformation-induced lift forces as source terms in the governing equations. Such a force model is derived from a series of resolved simulations of single-RBC migration in a channel, and depends on local flow properties such as the shear rate, which remains resolved by the computational grid during the simulation. As a first step, a workflow is established based on the particle focusing problem known as the Segrè-Silberberg effect, in which rigid spherical particles focus at an equilibrium position in a laminar Poiseuille flow. From a series of immersed-boundary simulations for different particle Reynolds numbers, shear rates, and particle-channel confinement ratios, a lift force is derived and implemented for the unresolved CFD-DEM.
Then, the model is tested for particles smaller than the grid size, and the results demonstrate that the unresolved CFD-DEM with the new force model can reproduce the particle focusing pattern obtained by the immersed-boundary method, but with a huge speed-up in simulation time. The extension of this approach to our previously proposed reduced-order deformable particle model is currently being investigated in order to develop a force model for RBC particles that depends on the shear rate and the deformation index. The latter will enable the unresolved RBC model to be used for hemolysis prediction. This approach will allow the simulation of millions of deformable particles at affordable computational cost and paves the path toward blood flow simulations on larger computational domains where cellular-level information is required for pathological analysis.

A novel high-throughput framework to quantify spatio-temporal tumor clonal dynamics

ABSTRACT. Clonal proliferation dynamics within a tumor channels the course of tumor growth, drug response and activity. A high-throughput image screening technique is required to analyze and quantify the spatiotemporal variations in cell proliferation and the influence of chemotherapy on clonal colonies. We present two protocols for generating spatial, Lentiviral Gene Ontology Vector (LeGO) based, mono- and co-culture systems with provisions for temporal tracking of clonal growth at the nucleus- and cytoplasm-level. The cultured cells are subjected to drug treatment and analyzed with a novel image processing framework. This framework enables alignment of cell positions based on motion capture techniques, tracking through time, and investigation of drug actions on individual cell colonies. Finally, utilizing this framework, we develop agent-based models to simulate and predict the effects of the microenvironment and clonal density on cell proliferation. The model and experimental findings suggest growth-stimulating effects of local clonal density irrespective of overall cell confluency.

10:40-12:20 Session 3H: CSCx 1
Location: B11
Toxicity in Evolving Twitter Topics

ABSTRACT. This paper investigates the relationship between topic evolution and speech toxicity on Twitter. We construct a dynamic topic evolution model based on a corpus of collected tweets. A combination of traditional static topic modelling approaches and sBERT sentence embeddings is leveraged to build a topic evolution model that is then represented as a directed graph. Furthermore, we propose a hashtag-based method to validate the consistency of a topic evolution model and provide guidance for hyperparameter selection. We identify five evolutionary steps: topic stagnation, topic merge, topic split, topic disappearance, and topic emergence. Utilizing a speech toxicity classification model, we analyze the dynamics of toxicity in the evolution of topics. In particular, we compare the aforementioned topic transition types in terms of their toxicity. Our results indicate a positive correlation between the popularity of a topic and its toxicity. The different transition types do not show any statistically significant difference in the presence of inflammatory speech.

Longitudinal Analysis of the Topology of Criminal Networks using a Simple Cost-Benefit Agent-Based Model

ABSTRACT. Recently, efforts have been made in computational criminology to study the dynamics of criminal organisations and improve law enforcement measures. To understand the evolution of a criminal network, current literature uses social network analysis and agent-based modelling as research tools. However, these studies only explain the short-term adaptation of a criminal network with a simplified mechanism for introducing new actors. Moreover, most studies do not consider the spatial factor, i.e. the underlying social network of a criminal network and the social environment in which it is active. This paper presents a computational modelling approach to address this literature gap by combining an agent-based model with an explicit social network to simulate the long-term evolution of a criminal organisation. To analyse the dynamics of a criminal organisation in a population, different social networks were modelled. A comparison of the evolution between the different networks was carried out, including a topological analysis (secrecy, flow of information and size of largest component). This paper demonstrates that the underlying structure of the network does make a difference in its development. In particular, with a preferentially structured population, the prevalence of criminal behaviour is very pronounced, giving the criminal organisation a certain efficiency.

Manifold Analysis for High-Dimensional Socio-Environmental Surveys

ABSTRACT. Recent studies on anthropogenic climate change demonstrate a disproportionate effect on agriculture in the Global South and North. Questionnaires have become a common tool to capture the impact of climatic shocks on household agricultural income and consequently, on farmers’ adaptation strategies. These questionnaires are high-dimensional and contain data on several aspects of an individual (household) such as spatial and demographic characteristics, socio-economic conditions, farming practices, adaptation choices, and constraints. The extraction of insights from these high-dimensional datasets is far from trivial. Standard tools such as Principal Component Analysis, Factor Analysis, and Regression models are routinely used in such analysis. However, the above methods either rely on a pairwise correlation matrix, assume specific (conditional) probability distributions in its construction, or assume that the high-dimensional survey data lies in a linear subspace. Recent advances in manifold learning techniques have demonstrated better detection of different behavioural regimes from surveys. This paper uses Bangladesh Climate Change Adaptation Survey data to compare three non-linear manifold techniques: Fisher Information Non-Parametric Embedding (FINE), Diffusion Maps and t-SNE. Using a simulation framework, we show that FINE appears to consistently outperform the other two methods except for questionnaires with high multi-partite information. While not being limited by the need to impose a grouping scheme on data, t-SNE and Diffusion Maps require some tuning and thus more computational effort since they are sensitive to the choice of hyperparameters, unlike FINE which is non-parametric. Finally, we show that FINE is able to detect adaptation regimes and corresponding key drivers from high-dimensional data.
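Of the three manifold techniques compared, Diffusion Maps admits a compact sketch; the following generic implementation (with an illustrative kernel bandwidth `eps`, not a tuned value) shows the core steps of kernel construction, normalization, and spectral embedding:

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Minimal diffusion-map embedding: Gaussian affinity kernel,
    symmetric normalization, then the leading non-trivial eigenvectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.exp(-d2 / eps)                                # affinity kernel
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                      # D^{-1/2} W D^{-1/2}
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]                       # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]                     # eigvecs of D^{-1} W
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return psi[:, 1:n_components + 1] * vals[1:n_components + 1]

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))   # stand-in for survey responses
Y = diffusion_map(X)
```

As the abstract notes, the bandwidth `eps` is exactly the kind of hyperparameter these methods are sensitive to, in contrast to the non-parametric FINE.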

Building Agent-Based Models for policy support based on Qualitative Inquiry. The case of Disaster Information Management in Jakarta, Indonesia

ABSTRACT. Qualitative research is a powerful means to capture human interactions and behavior. Although there are different methodologies to develop models based on qualitative research, a methodology is missing that makes it possible to strike a balance between the comparability across cases provided by methodologies that rely on a common, context-independent framework and the flexibility to study any policy problem provided by methodologies that capture a case study without relying on a common framework. In this keynote, I will present a methodology targeting this gap for ABMs in two stages. First, a novel conceptual framework centered on a particular policy problem is developed based on existing theories and qualitative insights from one or more case studies. Second, empirical or theoretical ABMs are developed based on the conceptual framework and generic models. The methodology is illustrated by an example application to disaster information management in Jakarta, resulting in an empirical descriptive agent-based model.

10:40-12:20 Session 3I: NMA 1
Location: B10
Parallel Triangles and Squares Count for Multigraphs using Vertex Covers

ABSTRACT. Triangle and square counts are widely used graph-analytic metrics providing insights into the connectivity of a graph. While the literature has focused on algorithms for global counts in simple graphs, this paper presents parallel algorithms for counting triangles and squares in large multigraphs. The algorithms support global and per-node counts, with linear improvements in computational complexity as the number of cores increases. The triangle count algorithm has the same complexity as the best-known algorithm in the literature, and the square count algorithm has a lower execution time than previous methods. The proposed algorithms are evaluated on large-scale real-world graphs and multigraphs.
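The per-node triangle count that such algorithms parallelize can be sketched sequentially for a simple undirected graph (a baseline only; the paper's contribution is the multigraph and multi-core treatment):

```python
from itertools import combinations

def triangles_per_node(adj):
    """Count triangles incident to each node of a simple undirected graph.
    adj: dict mapping node -> set of neighbours."""
    counts = {}
    for v, nbrs in adj.items():
        # Each pair of adjacent neighbours of v closes a triangle through v.
        counts[v] = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return counts

# Toy graph: triangle 0-1-2 plus a pendant node 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
tri = triangles_per_node(adj)
# The global count is sum(tri.values()) // 3, each triangle being seen
# once from each of its three vertices.
```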

On filtering the noise in consensual communities

ABSTRACT. Community detection is a tool to understand how networks are organised. Many real-world networks, whether social, technological, information or biological, exhibit a community structure. Consensual community detection fixes some of the issues of classical community detection, such as non-determinism, often through what is called a consensus matrix. We show that this consensus matrix does not contain only relevant information: it is noisy. We then show how to filter out some of the noise and how doing so could benefit existing algorithms.
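The consensus matrix at the heart of such methods is straightforward to build, and the simplest noise filter is a threshold on weak co-assignments; the sketch below is generic (partitions and threshold are illustrative, not the paper's filtering scheme):

```python
import numpy as np

def consensus_matrix(partitions):
    """C[i, j] = fraction of partitions placing nodes i and j in the same
    community. partitions: list of label arrays, one per detection run."""
    P = np.asarray(partitions)
    n = P.shape[1]
    C = np.zeros((n, n))
    for labels in P:
        C += labels[:, None] == labels[None, :]
    return C / len(P)

def filter_noise(C, tau=0.5):
    """Zero out co-assignments occurring in fewer than tau of the runs."""
    return np.where(C >= tau, C, 0.0)

# Three runs of a (non-deterministic) detector on a 4-node network.
runs = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
C = filter_noise(consensus_matrix(runs), tau=0.6)
```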

Strokes and the brain's criticality viewed through the lens of network models

ABSTRACT. In recent tests of brain criticality in stroke patients, it was proposed that lesions cause a non-critical state of neural dynamics, and the critical state may subsequently be restored in parallel with a patient's post-stroke behavioral recovery. Our study suggests instead that the brain remains critical despite the injury. A stroke may, however, result in a decrease of integrity of the connectome, which can be quantified using graph-theoretical methods, and which may conceal criticality as measured by some of the commonly used indicators. We propose an explanation for this behavior and corroborate our interpretation with simulations of a modified Ising model and a more realistic Haimovici-Tagliazucchi-Chialvo model based on the Hagmann et al. connectome, with "artificial strokes" performed by removing connections between two subsystems. In these models, we find standard indicators of criticality to behave similarly to those in models based on real-world MRI scans of stroke patients. The apparent loss of criticality is an artifact of the division of the original connectome into weakly connected parts and does not result from genuine non-critical behavior. Because of the ubiquity of emergent critical behaviors and their universality, our findings are relevant not only to neurobiology but also to the analysis of network models of other complex systems operating at or near critical points.
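The baseline that such "modified Ising model" studies build on is the standard 2D Ising model with Metropolis dynamics; a minimal sketch follows (illustrative lattice size and temperature; the paper's models replace the lattice with an empirical connectome):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model, periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy change of flipping (i, j): dE = 2 * s_ij * sum of neighbours.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(50):
    metropolis_sweep(spins, beta=0.5, rng=rng)
m = abs(spins.mean())   # |magnetization|, one common criticality indicator
```

"Artificial strokes" in this picture amount to deleting couplings between two blocks of the interaction graph, after which indicators such as the magnetization distribution can mimic non-critical behavior.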

12:20-12:50 Session 4: Poster Session

The paper lineup is the same for all three Poster Sessions.

Location: Atrium
Transferable Keyword Extraction and Generation with Text-to-Text Language Models

ABSTRACT. This paper explores the performance of the T5 text-to-text transfer-transformer language model together with some other generative models on the task of generating keywords from abstracts of scientific papers. Additionally, we evaluate the possibility of transferring keyword extraction and generation models tuned on scientific text collections to labelling news stories. The evaluation is carried out on the English component of POSMAC, a new corpus of scientific publications acquired from the Polish Library of Science. We compare the intrinsic and extrinsic performance of the models tested, i.e. T5 and mBART, which seem to perform similarly, although the former yields better results when transferred to the domain of news stories. A combination of the POSMAC and InTechOpen corpora seems optimal for the task at hand. We also make a number of observations about the quality and limitations of datasets used for keyword extraction and generation.

Low-Cost Behavioral Modeling of Antennas by Dimensionality Reduction and Domain Confinement

ABSTRACT. Behavioral modeling has been playing an increasing role in modern antenna design. It is primarily employed to reduce the computational cost of procedures involving massive full-wave electromagnetic (EM) simulations, such as optimization or uncertainty quantification. Unfortunately, the construction of data-driven surrogates is impeded by the curse of dimensionality and the need to cover broad ranges of geometry and material parameters, as well as frequencies; the latter is important to ensure the design utility of the model. This paper proposes a novel approach to reduced-cost surrogate modeling of antenna structures. Our methodology focuses the modeling process on parameter-space regions containing high-quality designs, identified by randomized pre-screening. This allows for a considerable confinement of the model domain volume while leaving out the parts containing poor-quality (and therefore useless) designs. A supplementary dimensionality reduction is subsequently applied using the spectral analysis of the random observable set. The reduction process identifies the most important directions from the point of view of geometry-parameter correlations and spans the domain along a small subset thereof. As demonstrated using several examples of microstrip antennas, domain confinement as outlined above permits a dramatic improvement of the predictive power of the surrogates as compared to state-of-the-art modeling approaches.

Feasibility and performance benefits of directional force fields for the tactical conflict management of UAVs

ABSTRACT. As we move towards scenarios where the adoption of unmanned aerial vehicles (UAVs) becomes massive, smart solutions are required to efficiently solve conflicts in the flight trajectories of aircraft so as to avoid potential collisions. Among the different possible approaches, the adoption of virtual force fields is acknowledged for being simple, distributed, and yet effective. In this paper, we study the feasibility of a directional force field protocol (D-FFP), making a preliminary assessment of its performance benefits compared to a standard force field protocol (FFP) using Matlab simulations. Results show that, in typical scenarios associated with aerial traffic corridors, the proposed approach can reduce the flight time overhead by 32% (on average), while maintaining the required flight safety distances between aircraft.
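The basic repulsive virtual-force computation underlying both FFP and D-FFP can be sketched as follows (a generic formulation with hypothetical gain and safety radius; the directional variant would additionally weight the force by the encounter geometry):

```python
import numpy as np

def repulsive_force(p_own, p_other, d_safe=50.0, k=1.0):
    """Repulsive virtual force pushing a UAV away from a conflicting
    aircraft once it enters the safety radius d_safe (metres)."""
    delta = p_own - p_other
    d = np.linalg.norm(delta)
    if d >= d_safe or d == 0.0:
        return np.zeros_like(delta)
    # Magnitude grows linearly as the intruder gets closer.
    return k * (d_safe - d) * delta / d

# Intruder 30 m away on the x-axis: force pushes own aircraft away.
f = repulsive_force(np.array([0.0, 0.0]), np.array([30.0, 0.0]))
```

In a distributed protocol, each aircraft sums the forces from all conflicting neighbours and steers along the resultant.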

Hierarchical Classification of Adverse Events Based on Consumer’s Comments

ABSTRACT. In today’s fast-developing world, people strive to improve their quality of life and safety. This paper focuses on classifying adverse events based on consumers’ comments concerning health and hygiene products. The data, over 152,000 comments, were gathered from e-commerce sources and social media. In the present research, the authors propose a language-independent approach allowing the analysis of comments written in various languages, in contrast with the majority of studies in the literature, which consider comments written in only one language, usually English. Moreover, the presented study reflects industry applicability, so it may be helpful for subsequent researchers and businesses. Another differentiator of our approach is the efficient modelling of colloquial language, whereas other solutions primarily rely on professional jargon describing adverse events. Hierarchical and non-hierarchical classification approaches were tested based on Random Forest and XGBoost classifiers. The feature extraction and selection process allowed us to consider tokens from minority classes. In order to assess the quality of classification quantitatively, the F1 score was applied. The hierarchical approach allows for the adjustment of thresholds and for expanding the classification in the future, adding another class to the classification process without repeating the complete learning process. Furthermore, hierarchical classification is faster than the non-hierarchical approach for the XGBoost classifier. We obtained promising results for XGBoost; however, further analysis with a greater number of classes is required.

Weighted Hamming Metric and KNN Classification of Nominal-Continuous Data

ABSTRACT. The purpose of the article is to develop a new kNN-based classification algorithm for nominal-continuous data. We start with the Euclidean metric for the continuous part and the Hamming metric for the nominal part of the data. The impact of specific features is modeled with corresponding weights in the metric definition, and an algorithm for automatic weight detection is proposed. The weighted metric is then used in the standard kNN classification algorithm. A series of numerical experiments shows that the algorithm can successfully classify raw, non-normalized data.
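The mixed metric described above, Euclidean on continuous features and Hamming on nominal ones with per-feature weights, can be sketched directly (the weights here are fixed and illustrative; the paper's contribution is detecting them automatically):

```python
import numpy as np
from collections import Counter

def mixed_dist(x, y, num_idx, nom_idx, w_num, w_nom):
    """Weighted distance: Euclidean on continuous features,
    Hamming (0/1 mismatch) on nominal ones."""
    num = sum(w * (float(x[i]) - float(y[i])) ** 2
              for i, w in zip(num_idx, w_num))
    nom = sum(w * (x[i] != y[i]) for i, w in zip(nom_idx, w_nom))
    return np.sqrt(num) + nom

def knn_predict(X, y, query, k, **metric):
    """Standard kNN majority vote under the weighted mixed metric."""
    order = sorted(range(len(X)),
                   key=lambda i: mixed_dist(query, X[i], **metric))
    return Counter(y[i] for i in order[:k]).most_common(1)[0][0]

# Toy dataset: one continuous feature, one nominal feature.
X = [(0.1, "red"), (0.2, "red"), (5.0, "blue"), (5.1, "blue")]
y = ["A", "A", "B", "B"]
label = knn_predict(X, y, (0.15, "red"), k=3,
                    num_idx=[0], nom_idx=[1], w_num=[1.0], w_nom=[1.0])
```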

Black Box Optimization Using QUBO and the Cross Entropy Method

ABSTRACT. Black-box optimization (BBO) can be used to optimize functions whose analytic form is unknown. A common approach to realising BBO is to learn a surrogate model which approximates the target black-box function and which can then be solved via white-box optimization methods. In this paper, we present our approach, BOX-QUBO, where the surrogate model is a QUBO matrix. However, unlike in previous state-of-the-art approaches, this matrix is not trained entirely by regression, but mostly by classification between 'good' and 'bad' solutions. This better accounts for the low capacity of the QUBO matrix, resulting in significantly better solutions overall. We tested our approach against the state of the art on four domains, and in all of them BOX-QUBO showed better results. A second contribution of this paper is the idea of also solving white-box problems, i.e. problems which could be formulated directly as QUBO, by means of black-box optimization, in order to reduce the size of the QUBOs to the information-theoretic minimum. Experiments show that this significantly improves the results for MAX-k-SAT.
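A QUBO surrogate is just a quadratic form over bit vectors; evaluating and brute-force minimising a tiny instance is a few lines (a toy white-box illustration of what a QUBO is, not the BOX-QUBO training loop):

```python
import itertools
import numpy as np

def qubo_energy(Q, x):
    """Energy of bit vector x under QUBO matrix Q: x^T Q x."""
    x = np.asarray(x)
    return float(x @ Q @ x)

def brute_force_min(Q):
    """Exhaustively minimise x^T Q x over all bitstrings (small n only).
    Real solvers (annealers, QAOA) replace this step."""
    n = Q.shape[0]
    return min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_energy(Q, x))

# Toy instance: each bit alone is rewarded, the pair is penalised.
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
best = brute_force_min(Q)
```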

Quantum Factory Method: A Software Engineering Approach to Deal with Incompatibilities in Quantum Libraries

ABSTRACT. In this paper, we survey the current context of Quantum Computing with regard to the technologies available for developing solutions. The extensive variety of tools and the lack of methodologies can lead to incompatibilities across platforms, which end up as inconsistencies in the final result. Our objective is therefore to solve these issues, for which we propose a design based on Software Engineering, and specifically on Design Patterns. The results provided by the examples show that the proposed solution is suitable for developing different cases, and we conclude with how this approach can be extended in future work.
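The Factory Method pattern applied to quantum backends can be sketched as follows; all class and backend names here are hypothetical placeholders, not real quantum-library APIs:

```python
from abc import ABC, abstractmethod

class QuantumBackend(ABC):
    """Common interface hiding per-library incompatibilities."""
    @abstractmethod
    def run(self, circuit: str) -> str: ...

class SimulatorBackend(QuantumBackend):
    def run(self, circuit: str) -> str:
        return f"simulated({circuit})"

class HardwareBackend(QuantumBackend):
    def run(self, circuit: str) -> str:
        return f"queued({circuit})"

def backend_factory(kind: str) -> QuantumBackend:
    """Factory Method: callers depend only on the abstract interface,
    so swapping the underlying platform cannot change client code."""
    backends = {"simulator": SimulatorBackend, "hardware": HardwareBackend}
    return backends[kind]()

result = backend_factory("simulator").run("bell_pair")
```

Because the selection logic is centralised in the factory, platform-specific inconsistencies are confined to the concrete backend classes.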

Estimating Chlorophyll Content from Hyperspectral Data Using Gradient Features

ABSTRACT. Non-invasive estimation of chlorophyll content in plants plays an important role in precision agriculture, as it allows us to understand the plants' stress and nutrient status. This task may be tackled using hyperspectral imaging, which acquires numerous narrow bands of the electromagnetic spectrum that may reflect subtle plant features, and which inherently offers spatial scalability. Such imagery is, however, highly dimensional; it is therefore challenging to transfer from the imaging device, to store, and to investigate manually. In this paper, we propose a data-driven machine learning pipeline for estimating chlorophyll content from hyperspectral data. It benefits from Savitzky-Golay filtering to smooth the (potentially noisy) spectral curves, and from gradient-based features extracted from the smoothed signal. An experimental study performed over a benchmark dataset revealed that our approach significantly outperforms the state of the art according to widely established estimation quality metrics obtained for four chlorophyll-related parameters.
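The smoothing-plus-gradient feature extraction can be sketched with scipy's standard Savitzky-Golay filter (the window length, polynomial order, and synthetic spectrum below are illustrative, not the paper's tuned values):

```python
import numpy as np
from scipy.signal import savgol_filter

def gradient_features(spectrum, window=11, polyorder=3):
    """Smooth a (noisy) spectral curve and return the smoothed signal
    plus first- and second-derivative features, all via Savitzky-Golay."""
    smooth = savgol_filter(spectrum, window, polyorder)
    d1 = savgol_filter(spectrum, window, polyorder, deriv=1)
    d2 = savgol_filter(spectrum, window, polyorder, deriv=2)
    return smooth, d1, d2

# Synthetic noisy reflectance curve over 200 spectral bands.
rng = np.random.default_rng(3)
bands = np.linspace(0, 1, 200)
spectrum = np.sin(2 * np.pi * bands) + 0.05 * rng.standard_normal(200)
smooth, d1, d2 = gradient_features(spectrum)
```

The derivative features `d1` and `d2` would then feed the downstream regressor estimating the chlorophyll-related parameters.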

PIES in multi-region elastic problems including body forces

ABSTRACT. The paper presents the formulation of a parametric integral equation system (PIES) for problems with piecewise homogeneous media and body forces. The multi-region approach is used, in which each region is treated separately and modeled globally by a Bézier surface. Each subregion can have different properties, and different forces can act on it. Finally, the regions are connected by dedicated conditions. Two examples are solved to confirm the effectiveness of the proposed approach. The results are compared with analytical solutions and with those obtained from the boundary element method (BEM).

r-softmax: Generalized Softmax with Controllable Sparsity Rate

ABSTRACT. Nowadays, artificial neural network models achieve remarkable results in many disciplines. Functions mapping the representation provided by the model to a probability distribution are an inseparable aspect of deep learning solutions. Although softmax is a commonly accepted probability mapping function in the machine learning community, it cannot return sparse outputs and always spreads positive probability to all positions. In this paper, we propose r-softmax, a modification of softmax that outputs a sparse probability distribution with a controllable sparsity rate. In contrast to the existing sparse probability mapping functions, we provide an intuitive mechanism for controlling the output sparsity level. We show on several multi-label datasets that r-softmax outperforms other sparse alternatives to softmax and is highly competitive with the original softmax. We also apply r-softmax to the self-attention module of a pre-trained transformer language model and demonstrate that it leads to improved performance when fine-tuning the model on different natural language processing tasks.
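For context, sparsemax (Martins & Astudillo, 2016) is one of the existing sparse alternatives such papers contrast against; it projects logits onto the probability simplex, zeroing out small entries (this is the standard sparsemax, not the r-softmax proposed here):

```python
import numpy as np

def sparsemax(z):
    """Sparse probability mapping: Euclidean projection of logits z
    onto the probability simplex (Martins & Astudillo, 2016)."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    # Support size: largest k with 1 + k * z_(k) > sum of the top-k logits.
    support = k[1 + k * z_sorted > cssv][-1]
    tau = (cssv[support - 1] - 1) / support
    return np.maximum(z - tau, 0.0)

p = sparsemax([3.0, 1.0, -1.0])   # puts zero mass on the small logits
```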

Solving uncertainly defined curvilinear potential 2D BVPs by the IFPIES

ABSTRACT. The paper presents the interval fast parametric integral equations system (IFPIES) applied to model and solve uncertainly defined curvilinear potential 2D boundary value problems with complex shapes. Contrary to previous research, the IFPIES is used to model the uncertainty of both boundary shape and boundary conditions. The IFPIES uses interval numbers and directed interval arithmetic with some modifications previously developed by the authors. Curvilinear segments in the form of Bézier curves of the third degree are used to model the boundary shape. However, the curves also required some modifications connected with applied directed interval arithmetic. It should be noted that simultaneous modelling of boundary shape and boundary conditions allows for a comprehensive approach to considered problems. The reliability and efficiency of the IFPIES solutions are verified on 2D complex potential problems with curvilinear domains. The solutions were compared with the interval solutions obtained by the interval PIES. All performed tests indicated the high efficiency of the IFPIES method.

Memory-based Monte Carlo integration for solving Partial Differential Equations using Neural Networks

ABSTRACT. Monte Carlo integration is a widely used quadrature rule to solve Partial Differential Equations with neural networks due to its ability to guarantee overfitting-free solutions and high-dimensional scalability. However, this stochastic method produces noisy losses and gradients during training, which hinders a proper convergence diagnosis. Typically, this is overcome using an immense (disproportionate) amount of integration points, which deteriorates the training performance. This work proposes a memory-based Monte Carlo integration method that produces accurate integral approximations without requiring the high computational costs of processing large samples during training.
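The idea of carrying information across noisy minibatch Monte Carlo estimates can be illustrated with an exponential moving average of small-sample integral estimates (a generic sketch of memory-based estimation on a 1D toy integral, not the authors' exact scheme for PDE losses):

```python
import numpy as np

def ema_mc_integral(f, n_steps=500, batch=32, decay=0.99, seed=0):
    """Estimate the integral of f over [0, 1] from small Monte Carlo
    batches, smoothing the noisy per-batch estimates with an EMA
    'memory' instead of one enormous sample."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for step in range(1, n_steps + 1):
        batch_est = f(rng.random(batch)).mean()   # noisy small-sample estimate
        est = decay * est + (1 - decay) * batch_est
        est_corr = est / (1 - decay ** step)      # bias correction for warm-up
    return est_corr

approx = ema_mc_integral(lambda x: x ** 2)        # true value: 1/3
```

The same smoothing applied to the training loss gives a far less noisy convergence signal than the raw per-batch estimates, which is the diagnostic problem the abstract describes.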

Fuzzy solutions of boundary problems using interval parametric integral equations system

ABSTRACT. This paper investigated the possibility of obtaining fuzzy solutions to boundary problems using the interval parametric integral equations system (IPIES) method. It focused on the IPIES method because, thanks to the analytical modification of the boundary integral equations (BIE), it does not require classical discretization. In this method, an original modification of directed interval arithmetic was also proposed. Solutions obtained using classical and directed interval arithmetic (known from the literature) were also presented for comparison. The IPIES method was extended to obtain fuzzy solutions by dividing the fuzzy number into α-cuts (depending on the assumed confidence level). Such α-cuts were then represented as interval numbers. Preliminary tests were carried out in which the influence of boundary condition uncertainty on the fuzzy solutions (obtained using IPIES) was investigated. The analysis of solutions was presented on examples described by Laplace's equation. Verifying the accuracy of the fuzzy PIES solutions required a modification of known, exactly defined analytical solutions: they were defined using intervals and calculated using appropriate interval arithmetic in α-cuts to finally obtain fuzzy analytical solutions. The research showed the high accuracy of fuzzy solutions obtained using IPIES and confirmed the high potential of the method in obtaining such solutions.
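The α-cut decomposition used above can be sketched for a triangular fuzzy number: each confidence level α yields an interval, which is exactly the form an interval solver such as IPIES consumes. The triangular shape and the numbers are illustrative assumptions:

```python
def alpha_cut(a, m, b, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b): the interval of
    values whose membership degree is at least alpha."""
    return (a + alpha * (m - a), b - alpha * (b - m))

# A fuzzy boundary condition "about 10" with support [9, 11].
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(9.0, 10.0, 11.0, alpha)
    print(f"alpha={alpha}: [{lo}, {hi}]")
# alpha=0.0 gives the support [9, 11]; alpha=1.0 collapses to the core {10}.
```

Solving the boundary problem once per α-cut interval and stacking the interval results reassembles a fuzzy solution.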

A Robust Machine Learning Protocol for Prediction of Prostate Cancer Survival at Multiple Time-Horizons

ABSTRACT. Prostate cancer is one of the leading causes of cancer death in men in Western societies. Predicting patients' survival using clinical descriptors is important for stratification in the risk classes and selecting appropriate treatment. Current work is devoted to developing a robust Machine Learning (ML) protocol for predicting the survival of patients with metastatic castration-resistant prostate cancer. In particular, we aimed to identify relevant factors for survival at various time horizons.

To this end, we built ML models for eight different predictive horizons, starting at three and up to forty-eight months. The model building involved the identification of informative variables with the help of the MultiDimensional Feature Selection (MDFS) algorithm; the entire modelling procedure was performed in multiple repeats of cross-validation. We evaluated the application of 5 popular classification algorithms for this task: Random Forest, XGBoost, logistic regression, k-NN and naive Bayes. The best modelling results for all time horizons were obtained with Random Forest. Good prediction results and stable feature selection were obtained for six horizons, excluding the shortest and longest ones. The informative variables differ significantly between predictive time horizons. Different factors affect survival rates over different periods; however, four clinical variables, ALP, LDH, HB and PSA, were relevant for all stable predictive horizons. The modelling procedure, which involves computationally intensive multiple repeats of cross-validated modelling, allows for robust identification of the relevant features and for a much-improved estimation of the uncertainty of the results.
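The repeated cross-validation protocol can be sketched as below; a trivial nearest-centroid classifier on synthetic data stands in for the Random Forest and clinical variables of the paper, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid_predict(X_tr, y_tr, X_te):
    """Tiny stand-in classifier: assign each test point to the class
    whose training-set centroid is closest."""
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def repeated_cv_accuracy(X, y, k=5, repeats=10):
    """Repeated k-fold cross-validation: reshuffle, split, and average
    out-of-fold accuracy over all repeats for a robust estimate."""
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        for fold in np.array_split(idx, k):
            tr = np.setdiff1d(idx, fold)
            pred = nearest_centroid_predict(X[tr], y[tr], X[fold])
            accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# Two well-separated synthetic "risk groups".
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(repeated_cv_accuracy(X, y))  # high accuracy on separable classes
```

Averaging over many reshuffled splits is what stabilizes both the accuracy estimate and, in the paper's protocol, the set of selected features.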

A Hypergraph Model and Associated Optimization Strategies for Path Length-Driven Netlist Partitioning

ABSTRACT. Dividing modern circuits into multiple subcircuits is common when prototyping on multi-FPGA platforms. While existing partitioning algorithms focus on minimizing cut size, the critical path length can degrade when long paths are mapped across multiple FPGAs. We propose a hypergraph model to address this issue. We use it to design partitioning algorithms and a refinement method, which we combine in a multilevel framework with existing min-cut solvers to tackle path-length and cut-size objectives. We observe a significant reduction in critical path degradation, by 12%-40%, at the expense of a moderate increase in cut size, compared to path-agnostic min-cut methods.
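The cut-size objective that path-driven partitioning trades off against can be sketched directly on a toy netlist; the small hypergraph below is an illustrative assumption:

```python
def cut_size(hyperedges, part):
    """Number of hyperedges (nets) spanning more than one block of the
    partition. `hyperedges` is a list of vertex sets; `part` maps each
    vertex to its block (FPGA) id."""
    return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

def connectivity(hyperedges, part):
    """Connectivity (lambda - 1) metric: for each net, the number of
    blocks it touches minus one, summed over all nets."""
    return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

nets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}  # two "FPGAs"
print(cut_size(nets, part), connectivity(nets, part))  # -> 2 2
```

A path-aware objective would additionally penalize partitions that force a single timing path to cross the block boundary many times.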

TwitterEmo: Annotating Emotions and Sentiment in Polish Twitter

ABSTRACT. This article introduces a new dataset, TwitterEmo, which can be used for emotion and sentiment analysis tasks in the Polish language. So far, only domain-specific or multi-domain emotion/sentiment-annotated datasets have been available for Polish. TwitterEmo is meant to broaden the application of emotion recognition methods in Polish to non-domain-specific and colloquial text data, with special regard to social media. The collection contains 36,280 Twitter entries (tweets) from a one-year period, covering politics, social issues, and general topics. Each entry is annotated with Plutchik’s eight basic emotions and sentiment; additionally, a ‘sarcasm’ category is included. Each entry was annotated by at least four annotators. Annotations were later unified, partially automatically and through group annotation. We evaluated annotation consistency by calculating Positive Specific Agreement (PSA). We also present the results of an evaluation using several available language models, including HerBERT and TrelBERT.
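Positive Specific Agreement for a pair of binary annotators can be computed as 2a / (2a + b + c), where a counts items both marked positive and b, c count the two kinds of disagreement; the toy annotations below are illustrative:

```python
def positive_specific_agreement(ann1, ann2):
    """PSA between two binary annotators. Negative-negative pairs are
    ignored, which suits sparse labels such as rarely occurring emotions."""
    a = sum(1 for x, y in zip(ann1, ann2) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(ann1, ann2) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(ann1, ann2) if x == 0 and y == 1)
    return 2 * a / (2 * a + b + c) if (2 * a + b + c) else 1.0

# Two annotators labelling "joy" on six tweets.
print(positive_specific_agreement([1, 1, 0, 0, 1, 0],
                                  [1, 0, 0, 0, 1, 0]))  # -> 0.8
```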

Discovering Process Models from Patient Notes

ABSTRACT. Process Mining typically requires event logs where each event is labelled with a process activity. That’s not always the case, as many process-aware information systems store process-related information in the form of text notes. An example is patient information systems (PIS), which store much of their information in the form of free-text patient notes. Labelling text-based events with their activity is not trivial, because of the amount of data involved, but also because the activity represented by a text note can be ambiguous. Depending on the requirements of a process analyst, we might need to label events with more or fewer unique activities: two similar events could represent the same activity (e.g. screen referral) or two different activities (e.g. screen adult ADHD referral and screen depression referral). We can therefore view activities as ontologies with an arbitrary number of entries. This paper proposes a method that produces an ontology for the activities of a process by analysing a text-based event log. We implemented an interactive tool that generates process models based on this ontology and the text-based event log. We demonstrate the proposed method’s usefulness by discovering a mental health referral process model from real-world data.

Automatic structuring of topics for natural language generation in community question answering in programming domain

ABSTRACT. This article describes the automatic generation of Stack Overflow answers using GPT-Neo. The process of forming a dataset and samples for the experiments is described. We compare generation quality across various topics obtained by topic modeling of question titles and tags. The experiments involved only questions whose answers are plain text. Without taking the structure and themes of the texts into account, training such models can be difficult, so we investigate whether topic modeling of the questions can help solve the problem. Fine-tuning of GPT-Neo for each topic has also been conducted.

Improving LocalMaxs Multiword Expression Statistical Extractor

ABSTRACT. The LocalMaxs algorithm extracts relevant Multiword Expressions from text corpora based on a statistical approach. It selects n-grams according to their relative cohesion values in the context of their neighbourhood, and uses no language-specific tools. Using a statistical extractor may become particularly useful when linguistic tools are not available. However, statistical extractors face an increased challenge in obtaining good practical results, compared to linguistic approaches which benefit from language-specific syntactic and/or semantic knowledge. This paper makes two main contributions. First, an improvement to the LocalMaxs algorithm is proposed, based on two modifications to the criterion for selecting relevant Multiword Expressions, namely: a more selective evaluation of the cohesion of each candidate with respect to its neighbourhood; and a filtering criterion guided by the location of stopwords within each candidate. Second, a new language-independent method is presented for the automatic self-identification of stopwords in corpora, requiring no external stopword lists or linguistic tools.

The obtained results for LocalMaxs reach Precision values of about 80% for the tested languages (English, French, German and Portuguese), corresponding to an increase of around 12-13% compared to the previous LocalMaxs version. The self-identification of stopwords reaches high Precision for top-ranked stopword candidates.
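The cohesion ("glue") measure underlying LocalMaxs can be sketched with the SCP statistic on a toy corpus; the full algorithm additionally compares each n-gram's glue against its (n-1)- and (n+1)-gram neighbourhood, which this minimal sketch omits:

```python
from collections import Counter

tokens = "new york is big . new york is far . i love new york".split()
N = len(tokens)

# Count all 1-, 2- and 3-grams of the toy corpus.
freq = Counter()
for n in (1, 2, 3):
    for i in range(len(tokens) - n + 1):
        freq[tuple(tokens[i:i + n])] += 1

def p(g):
    return freq[g] / N

def scp_glue(g):
    """SCP glue: p(g)^2 divided by the average product of probabilities
    over all binary splits of the n-gram; higher glue = stronger cohesion."""
    n = len(g)
    avg = sum(p(g[:i]) * p(g[i:]) for i in range(1, n)) / (n - 1)
    return p(g) ** 2 / avg

print(scp_glue(("new", "york")))  # strongly cohesive candidate MWE
print(scp_glue(("york", "is")))   # weaker: "is" co-occurs with many words
```

LocalMaxs then keeps an n-gram only if its glue is a local maximum relative to the glue of the n-grams it contains and is contained in.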

Similarity-based Memory Enhanced Joint Entity and Relation Extraction

ABSTRACT. Document-level joint entity and relation extraction is a challenging information extraction task that requires a unified approach where a single neural network performs four sub-tasks: mention detection, coreference resolution, entity classification, and relation extraction. Existing methods often utilize a sequential multi-task learning approach, in which an arbitrary decomposition causes the current task to depend only on the previous one, missing possibly more complex relationships between tasks. In this paper, we present a multi-task learning framework with bidirectional memory-like dependency between tasks to address those drawbacks and perform the joint task more accurately. Our empirical studies show that the proposed approach outperforms the existing methods and achieves state-of-the-art results on the BioCreative V CDR corpus.

Compiling Tensor Expressions into Einsum

ABSTRACT. Tensors are a widely used representation of multidimensional data in scientific and engineering applications. However, efficiently evaluating tensor expressions is still a challenging problem, as it requires a deep understanding of the underlying mathematical operations. While many linear algebra libraries provide an Einsum function for tensor computations, it is rarely used, because Einsum is not yet common knowledge. Furthermore, tensor expressions in textbooks and scientific articles are often given in a form that can be implemented directly using nested for-loops. As a result, many tensor expressions are evaluated using inefficient implementations. To make the direct evaluation of tensor expressions multiple orders of magnitude faster, we present a tool that automatically maps tensor expressions to highly tuned linear algebra libraries by leveraging the power of Einsum. Our tool is designed to simplify the process of implementing efficient tensor expressions, thus making it easier to work with complex multidimensional data.
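The loop-nest-versus-Einsum contrast can be shown with NumPy's `einsum`; the shapes and the contraction c_i = Σ_j Σ_k A_ij B_jk x_k are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 30))
B = rng.random((30, 40))
x = rng.random(40)

# Textbook form: c_i = sum_j sum_k A_ij * B_jk * x_k, as nested loops.
c_loops = np.zeros(20)
for i in range(20):
    for j in range(30):
        for k in range(40):
            c_loops[i] += A[i, j] * B[j, k] * x[k]

# The same contraction as one Einsum call; with optimize=True the
# contraction is factorized into two BLAS-backed matrix-vector products
# instead of an O(i*j*k) loop nest.
c_einsum = np.einsum("ij,jk,k->i", A, B, x, optimize=True)

print(np.allclose(c_loops, c_einsum))  # -> True
```

On realistic sizes the factorized Einsum evaluation is orders of magnitude faster than the direct loop nest, which is exactly the gap the paper's tool closes automatically.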

TAQOS: A Benchmark Protocol for Quantum Optimization Systems

ABSTRACT. The growing availability of quantum computers raises questions about their ability to solve concrete problems. Existing benchmark protocols still lack problem diversity and attempt to summarize quantum advantage in a single metric that measures the quality of found solutions. Unfortunately, the solution quality metric is insufficient for measuring quantum algorithm performance and should be presented along with time and instance coverage metrics. This paper aims to establish the TAQOS protocol to perform a Tight Analysis of Quantum Optimization Systems. The combination of metrics considered by this protocol helps to identify problems and instances liable to produce quantum advantage on Noisy-Intermediate Scale Quantum (NISQ) devices for useful applications. The methodology used for the benchmark process is detailed and an illustrative short case study on the Max-Cut problem is provided.
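The Max-Cut case study can be grounded with its standard QUBO encoding; a brute-force classical optimum on a toy graph is the kind of reference a benchmark protocol compares quantum solutions against. The graph below is an illustrative assumption:

```python
import itertools

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # a small 4-node graph
n = 4

def cut_value(x):
    """Number of edges crossing the cut defined by the 0/1 assignment x."""
    return sum(x[i] != x[j] for i, j in edges)

def qubo_energy(x):
    """Max-Cut as a QUBO: minimizing sum(2*x_i*x_j - x_i - x_j) over edges
    equals maximizing the cut, since x_i + x_j - 2*x_i*x_j = [x_i != x_j]."""
    return sum(2 * x[i] * x[j] - x[i] - x[j] for i, j in edges)

# Brute-force classical reference: the exact optimum for solution-quality
# and instance-coverage metrics.
best = max(itertools.product([0, 1], repeat=n), key=cut_value)
print(best, cut_value(best), qubo_energy(best))
```

On this graph the maximum cut has value 3 (the triangle contributes at most 2 edges, plus the pendant edge), with QUBO energy -3.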

How to select a superior Neural Network simulating inner-city contaminant transport? Verification and validation techniques.

ABSTRACT. Artificial neural networks (ANNs) can learn via experience to solve almost every problem. However, applying an ANN to a new task entails some additional adaptations. The first is fitting the ANN type or structure, by varying the number of hidden layers and the neurons in them, the activation functions, or other parameters, so that the ANN can learn the stated task. The second is a validation and verification method for ANN quality that is suited to the stated task. Occasionally the differences between the ANNs' outputs are significant, and it is easy to choose the best network. Sometimes, however, the differences pronounced by standard performance parameters are minor, and it is difficult to distinguish which ANN has reached the best level of training. In such situations, a more detailed analysis is required to judge the validity of a given ANN model. This paper presents the results of training ANNs to predict the spatial and temporal evolution of an airborne contaminant over a city domain. Statistical performance measures have validated the trained ANNs' performance. Finally, new measures allowing one to judge both the temporal and spatial distribution of the ANN output have been proposed and used to select the superior ANN. The selected ANN can be used as a surrogate model in a real-time localization system.

Korpusomat.eu: A Multilingual Platform for Effortless Building and Analysing Linguistic Corpora

ABSTRACT. The paper presents Korpusomat, a new, free web-based platform for effortless building and analysing of linguistic data sets (corpora). The aim of Korpusomat is to bridge the gap between corpus linguistics, which requires tools for corpus analysis based on various linguistic annotations, and modern multilingual machine learning-based approaches to text processing. A special focus is placed on multilinguality: the platform currently serves 29 languages, but more can be easily added per user request. We discuss architectural patterns developed to build multilinguality into the platform, e.g. using interchangeable natural language processing tools, flexible extraction, and tracking of the tag sets used in tagging text. We provide a performance benchmark showing that the platform is fast enough for large-scale research. We discuss the use of Korpusomat in multidisciplinary research and present a case study located at the intersection of discourse analysis and migration studies, based on a corpus generated and queried in the application. This provides a general framework for using the platform for research based on automatically annotated corpora, and demonstrates the usefulness of Korpusomat for supporting domain researchers in applying computational science in their fields.

Knowledge hypergraph-based multidimensional analysis for natural language queries: application to medical data

ABSTRACT. In recent years, data has been continuously evolving not only in volume but also in types and sources, which makes multidimensional analysis using traditional approaches complex and difficult. In this paper, we propose a three-layer architecture to perform multidimensional analysis of natural language queries on health data: (1) a Treatment Layer, aiming at xR2RML mapping generation and knowledge hypergraph building; (2) a Storage Layer, mainly storing the RDF triples returned by querying NoSQL databases; and (3) a Semantic Layer, based on a domain ontology that constitutes the knowledge base for the generation of the mappings and the building of the knowledge hypergraph. The originality of our proposal lies in the knowledge hypergraph and its capacity to support multidimensional queries. A prototype was developed, and the experiments have shown the relevance of the returned multidimensional query results as well as an improvement over traditional approaches.

Graph TopoFilter: a method for noisy labels detection for graph-structured classes

ABSTRACT. Detection of incorrectly conducted failure repairs is not a trivial task for companies manufacturing large volumes of goods. Extensive data sets of service calls are periodically updated, and subject matter experts would not be efficient at manually annotating the data. Symptoms described in free-text form might be caused by different components, not necessarily the most obvious one. Classes are imbalanced due to the different times to failure of particular components, and thus actions taken for some rare failures might be noted as incorrect ones. The presented problem is similar to the problem of learning in the presence of noisy labels, which are caused by human errors, annotator-to-annotator variation in perception, faults made by annotating algorithms, or other reasons. There are multiple techniques to prevent neural networks from overfitting to noisy data, but to the best of our knowledge none of them considers relationships between classes, which is crucial in engineering systems built from multiple components connected in a specific way. A novel approach to selecting clean data samples in an unsupervised manner is presented in this paper. It is based on a topological approach exploring the deep representation of the features in the hidden space, enriched with knowledge graphs reflecting the structure of the classes. We present a case study of the algorithm applied to a service-call data set for home appliances.

Multi-Granular Computing Can Predict Prodromal Alzheimer’s Disease Indications in Normal Subjects

ABSTRACT. The processes of neurodegeneration related to Alzheimer’s disease (AD) begin several decades before the first symptoms. We have used granular computing to classify cognitive data from the BIOCARD study, which was started over 20 years ago with 354 normal subjects. Patients were evaluated every year by a team of neuropsychologists and neurologists and classified as normal, with MCI (mild cognitive impairment), or with dementia. As the decision attribute, we have used CDRSUM (Clinical Dementia Rating Sum of Boxes), a more quantitative measure than the above classification. Based on 150 stable subjects at different stages of AD, and on a group of 40 AD patients, we have found sets of different rules (related to multi-granular computing) that classify cognitive attributes with CDRSUM as the disease stage. By applying these rules to 21 normal (CDRSUM = 0) subjects we have predicted that one subject might get mild dementia (CDRSUM > 4.5), one very mild dementia (CDRSUM > 2.25), four might get very mild dementia or questionable impairment, and one other might get questionable impairment (CDRSUM > 0.75). AI methods can find patterns in the cognitive attributes of normal subjects, invisible to neuropsychologists, that might indicate a pre-dementia stage.

Quantifying Parking Difficulty with Transport and Prediction Models for Travel Mode Choice Modelling

ABSTRACT. Transportation planning and promoting sustainable transportation necessitate an understanding of what makes people select individual travel modes. Hence, classifiers are trained to predict travel modes, such as the use of a private car vs a bike, for individual journeys in cities. However, what data should be used as input for such models, and how to transform these data into features, is partly an open issue. In this work, we focus on parking-related factors and propose how survey data, including spatial data and origin-destination matrices of the transport model, can be transformed into features. Such features are used to complement the input for travel mode choice prediction. Next, we propose how the impact of the newly proposed features on classifiers trained with different machine learning methods can be evaluated. This includes comparing the importance of parking-related features to other features used for travel mode prediction.

The methods and features proposed in this work were tested with datasets containing data on real journeys and the transport modes used for them, reported by over 2000 respondents in three surveys performed in the City of Warsaw, Poland, in 2022. Results of the extensive evaluation show that the features proposed in this study can significantly increase the accuracy of travel mode choice predictions.

Cloud Native approach to the implementation of an environmental monitoring system for Smart City based on IoT devices

ABSTRACT. In this paper, we present the architecture and implementation of an environmental monitoring system, one of the main elements of a Smart City system, deployed in a small town in Poland -- Boguchwała, Podkarpacie. The system is based on Internet of Things devices and Cloud Native techniques, which allow for measuring several environmental parameters such as pollution, electrosmog and acoustic threats. In addition to these parameters, characteristic of environmental monitoring, the system has been enhanced with video monitoring techniques, such as evaluating traffic load on the main roads and crowd detection. In particular, a front-end application was implemented to visualize the results on a city map. The system is deployed on Raspberry Pi and NVidia Jetson platforms using Kubernetes as the resource orchestrator. We managed to design, implement, and deploy a system that makes measurements and predicts the indicated parameters. The proposed solution has no significant impact on the energy consumption of the measuring stations while increasing the scalability and extensibility of the system.

Exploring the Capabilities of Quantum Support Vector Machines for Image Classification on the MNIST Benchmark

ABSTRACT. Quantum computing is a rapidly growing field of science with many potential applications expected to emerge in the near future. One such field is machine learning, which is applied in many areas of science and industry. Machine learning approaches can be enhanced using quantum algorithms and work even more effectively, as demonstrated in this paper. We present our experimental attempts to explore the capabilities of Quantum Support Vector Machines (QSVM) and test their performance on the well-known MNIST benchmark of handwritten digit images. A variational quantum circuit was adopted to build the quantum kernel matrix and successfully applied to the classical SVM algorithm. The proposed model obtained relatively high accuracy when tested on noiseless quantum simulators. Finally, we performed computational experiments on real, recently set up IBM Quantum systems and achieved promising results, demonstrating and discussing the QSVM's applicability and possible future improvements.
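The quantum-kernel pipeline can be sketched classically: a QSVM fills a Gram matrix with state overlaps measured on a quantum circuit and hands it to a classical kernel method. Below, an ordinary RBF kernel stands in for the quantum kernel and a simple kernel ridge classifier stands in for the SVM; data and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(X1, X2, gamma=0.5):
    """Classical stand-in for a quantum kernel: in a QSVM this Gram matrix
    would be filled with measured circuit-state overlaps instead."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Two classes of synthetic 2-feature samples (a toy stand-in for MNIST).
X_tr = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
y_tr = np.array([-1.0] * 40 + [1.0] * 40)
X_te = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(4, 1, (10, 2))])
y_te = np.array([-1.0] * 10 + [1.0] * 10)

# Kernel ridge classifier on the precomputed kernel:
# solve (K + lam*I) alpha = y, then predict by the sign of K_test @ alpha.
K = rbf_kernel(X_tr, X_tr)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y_tr)), y_tr)
pred = np.sign(rbf_kernel(X_te, X_tr) @ alpha)
print((pred == y_te).mean())  # near-perfect on well-separated classes
```

Swapping the kernel function for one evaluated on a quantum device leaves the rest of the pipeline unchanged, which is precisely why precomputed-kernel interfaces are the natural entry point for QSVMs.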

Determination of lower bounds of the goal function for a single machine scheduling problem on D-Wave quantum annealer

ABSTRACT. The fundamental problem of using metaheuristics and almost all other approximation methods for difficult discrete optimization problems is the lack of knowledge about the quality of the obtained solution. It can be very close to the optimum, or even optimal (but with no guarantee of optimality), or several hundred percent worse. In this paper, we propose a methodology for efficiently estimating the quality of such approaches by rapidly -- practically in constant time -- generating good lower bounds (for minimization problems) on the optimal value of the objective function, using a quantum machine implementing a quantum annealing procedure. Such bounds can serve as an excellent benchmark for comparing approximate algorithms. Another natural application is to use the proposed approach in the construction of exact algorithms based on the Branch and Bound method to obtain truly optimal solutions.

Dynamic Data Replication for Short Time-to-Completion in a Data Grid

ABSTRACT. Science collaborations use a computing grid to run expensive computational tasks on large data sets. Tasks run as jobs across the network and demand data, and thereby workload management and data allocation, to maintain the computational workflow. Data allocation includes data placement with different replication factors (multiplicities) of data. The proposed data replication model can place many subsets of a data population in a distributed system, such as a computer cluster or computing grid. A stochastic simulation with a data and computing example from the ATLAS Physics Collaboration, which uses one of the largest computing grids, shows the model's effectiveness and usability. This paper showcases data allocation with different replica factors and various numbers of subsets to improve the overall situation in a computer network.

Oscillatory behaviour of the RBF-FD approximation accuracy under increasing stencil size

ABSTRACT. When solving partial differential equations on scattered nodes using the radial basis function generated finite difference (RBF-FD) method, one of the parameters that must be chosen is the stencil size. Focusing on Polyharmonic Spline RBFs with monomial augmentation, we observe that the stencil size affects the approximation accuracy in a particularly interesting way: the dependence of the solution error on stencil size has several local minima. We find that we can connect this behaviour with the spatial dependence of the signed approximation error. Based on this observation we are then able to introduce a numerical quantity that indicates whether a given stencil size is close to one of those local minima.

Detection of Anomalous Days in Energy Demand using Leading Point Multi-Regression Model

ABSTRACT. In this paper, the Leading Point Multi-Regression Model was utilized to detect days with anomalous energy consumption profiles. The analyzed data came from the Polish energy system and contained hourly energy demands. Days with untypical daily profiles were identified based on a statistical analysis of the relative errors of the model. The distributions of the model’s absolute and relative log-errors were Gaussian. Based on an analysis of the compatibility of the distributions, two ranges of error values were identified: 3.88%–4.98% and above 4.98%. Days with anomalous energy consumption profiles were identified as major religious holidays in Poland (Easter, All Saints, and Christmas Eve) as well as days related to the celebration of the new year (New Year’s Eve and New Year).
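The underlying detection scheme, flagging days whose model error falls in the tail of an otherwise Gaussian error distribution, can be sketched on simulated data; the error magnitudes, holiday indices, and 3-sigma threshold below are illustrative assumptions, not the paper's fitted ranges:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated relative model errors (%) for 365 daily profiles; a few
# holiday-like days get much larger errors than the Gaussian bulk.
errors = np.abs(rng.normal(2.0, 1.0, 365))
holidays = [0, 100, 305, 358, 364]
errors[holidays] += 10.0

# Flag days whose error exceeds a threshold derived from the fitted
# error distribution (here: mean + 3 standard deviations).
threshold = errors.mean() + 3 * errors.std()
anomalous_days = np.flatnonzero(errors > threshold)
print(threshold, anomalous_days)
```

Days surviving this cut are then inspected; in the paper they turn out to coincide with major holidays.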

Numerical method for 3D quantification of glenoid bone loss

ABSTRACT. Let a three-dimensional ball intersect a three-dimensional polyhedron given by its triangulated boundary with outward unit normals. We propose a numerical method for the approximate computation of the intersection volume using voxelization of the interior of the polyhedron. The approximation error is verified by comparison with the exact volume of the polyhedron provided by the Gauss divergence theorem. Voxelization of the polyhedron's interior is achieved with the aid of an indicator function, which is very similar to the signed distance to the boundary of the polyhedron. The proposed numerical method can be used for 3D quantification of glenoid bone loss.
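The voxel-counting idea can be sketched on the simplest indicator function, a ball, where the exact volume 4/3·π·r³ is known for checking the error; the grid resolution is an illustrative choice:

```python
import numpy as np

def ball_volume_voxelized(radius, h):
    """Approximate the volume of a ball by counting voxels of edge h whose
    centres satisfy the indicator function |x| <= radius."""
    r = radius + h                            # pad the sampling box slightly
    g = np.arange(-r, r, h) + h / 2           # voxel centres along one axis
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    inside = X**2 + Y**2 + Z**2 <= radius**2  # the indicator function
    return inside.sum() * h**3

exact = 4.0 / 3.0 * np.pi
approx = ball_volume_voxelized(1.0, h=0.02)
print(approx, abs(approx - exact) / exact)  # small relative error
```

In the paper's setting the indicator additionally tests membership in the polyhedron, and the same counting is validated against the divergence-theorem volume.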

Solving Higher Order Binary Optimization Problems on NISQ Devices: Experiments and Limitations

ABSTRACT. With the recent availability of Noisy Intermediate-Scale Quantum devices, the potential of quantum computers to impact the field of combinatorial optimization lies in quantum variational and annealing-based methods. This paper further compares Quantum Annealing (QA) and the Quantum Approximate Optimization Algorithm (QAOA) in solving Higher Order Binary Optimization (HOBO) problems. This case study considers the hypergraph partitioning problem, which is used to generate custom HOBO problems. Our experiments show that D-Wave systems quickly reach their limits when solving dense HOBO problems. Although the QAOA demonstrates better performance in exact simulations, noisy simulations reveal that the gate error rate should remain under \(10^{-5}\) to match D-Wave systems' performance for low-density problems.

On the Impact of Noisy Labels on Supervised Classification Models

ABSTRACT. The amount of data generated daily grows tremendously in virtually all domains of science and industry, and its efficient storage, processing and analysis pose significant practical challenges nowadays. To automate the process of extracting useful insights from raw data, numerous supervised machine learning algorithms have been researched so far. They benefit from annotated training sets, which are fed to a training routine that elaborates a model that is then deployed for a specific task. The process of capturing real-world data may lead to acquiring noisy observations, ultimately affecting the models trained from such data. The impact of label noise is, however, under-researched, and the robustness of classic learners against such noise remains unclear. We tackle this research gap and not only thoroughly investigate the classification capabilities of an array of widely adopted machine learning models over a variety of contamination scenarios, but also suggest new metrics that could be utilized to quantify such models' robustness. Our extensive computational experiments shed more light on the impact of training set contamination on the operational behavior of supervised learners.
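A standard way to build such contamination scenarios is symmetric label-noise injection: each label is replaced, with a chosen probability, by a different class drawn uniformly at random. The sketch below shows this construction; the noise rate and class count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def flip_labels(y, noise_rate, n_classes):
    """Symmetric label-noise contamination: each label is replaced, with
    probability noise_rate, by a different class chosen uniformly."""
    y = y.copy()
    mask = rng.random(len(y)) < noise_rate
    shift = rng.integers(1, n_classes, mask.sum())  # never maps to itself
    y[mask] = (y[mask] + shift) % n_classes
    return y

y_clean = rng.integers(0, 3, 10_000)
y_noisy = flip_labels(y_clean, noise_rate=0.2, n_classes=3)
print((y_noisy != y_clean).mean())  # close to the requested 0.2
```

Training the same learner on `y_clean` and on `y_noisy` at increasing noise rates yields the robustness curves such studies analyse.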

Solving Complex Sequential Decision-Making Problems by Deep Reinforcement Learning with Heuristic Rules

ABSTRACT. Deep reinforcement learning (RL) has demonstrated great capabilities in dealing with sequential decision-making problems, but its performance is often bounded by suboptimal solutions in many complex applications. This paper proposes the use of human expertise to increase the performance of deep RL methods. Human domain knowledge is characterized by heuristic rules, which are utilized adaptively to alter either the reward signals or the environment states during the learning process of deep RL. This prevents deep RL methods from being trapped in locally optimal solutions and in a computationally expensive training process, thus allowing them to maximize their performance when carrying out designated tasks. The proposed approach is evaluated on two different video games developed using the Arcade Learning Environment. With the extra information provided at the right time by human experts via heuristic rules, deep RL methods show greater performance compared to circumstances where human knowledge is not used. This implies that our approach of utilizing human expertise has helped to increase the performance of deep RL, and it has great potential to be generalized and applied to solve complex real-world decision-making problems efficiently.

A New Algorithm for the Closest Pair of Points for Very Large Data Sets using Exponent Bucketing and Windowing

ABSTRACT. In this contribution, a simple and efficient algorithm for the closest-pair problem is described, using preprocessing based on exponent bucketing and windowing and respecting the accuracy of the floating-point representation. The preprocessing is of O(N) complexity.

Experiments made for the uniform distribution showed a significant speedup. The proposed approach is directly applicable to the E2 (planar Euclidean) case.
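The exponent-bucketing idea can be sketched in one dimension, which is not the paper's algorithm but shows the O(N) preprocessing step: for positive floats, grouping by the binary exponent from `math.frexp` orders the buckets by magnitude, so concatenating sorted buckets yields a globally sorted sequence to scan:

```python
import math
from collections import defaultdict

def closest_pair_1d(values):
    """Closest pair of positive floats via exponent bucketing: group values
    by their binary exponent in O(N), sort each small bucket, then scan
    adjacent elements of the concatenated (globally sorted) sequence."""
    buckets = defaultdict(list)
    for v in values:
        _, e = math.frexp(v)          # v = m * 2**e with 0.5 <= m < 1
        buckets[e].append(v)
    ordered = []
    for e in sorted(buckets):         # exponent order = magnitude order
        ordered.extend(sorted(buckets[e]))
    best = min((b - a, (a, b)) for a, b in zip(ordered, ordered[1:]))
    return best[1]

data = [3.1, 0.02, 17.0, 3.11, 250.0, 0.019, 64.0]
print(closest_pair_1d(data))  # -> (0.019, 0.02)
```

The planar (E2) variant additionally uses a window of nearby candidates instead of a single adjacent-element scan.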

Computational Steering: Interactive Design-through-Analysis for Simulation Sciences

ABSTRACT. Computational steering has seen regular incarnations in the Computational Science and Engineering (CSE) domain with every leap forward in computing and visualization technologies. While often associated with the ability to interact with large-scale simulations running on high-performance compute (HPC) clusters, this poster will introduce a novel computational steering approach: interactive design-through-analysis (DTA) through visual demonstration.

The DTA paradigm means the seamless integration of computer-aided design and (simulation-based) analysis tools so that scientists, engineers & researchers can go back and forth between product design, analysis, and optimization. While coined already in the late 1970s [1], the DTA paradigm got new impetus with the advent of Isogeometric Analysis (IgA) [2], which emerged from the vision of bridging the gap between CAD and CAE by resorting to a common mathematical framework, Non-Uniform Rational B-Splines (NURBS), for modeling geometries and representing solution fields to PDE models.

The novelty of the proposed approach consists in replacing traditional simulation-based (isogeometric) analysis, which often hinders rapid design-through-analysis workflows due to its high computational costs, with our recently developed IgANets [3]: the embedding of physics-informed machine learning as proposed in [4] into the IgA paradigm. More precisely, we train parameterized deep neural networks to predict solution coefficients of B-Spline/NURBS representations in a compute-intensive offline stage. Problem configurations and geometries are encoded as B-Spline/NURBS objects and passed to the network as inputs, providing a mechanism for user interaction. Evaluation of IgANets is instantaneous, thereby enabling interactive DTA feedback loops.

Note to organizers: We plan to complement the poster with an interactive demonstration of our approach. We anticipate having a VR prototype ready by the time of the conference, which we intend to bring to the poster session. In addition, we aim to present the demonstrator on a tablet or laptop to demonstrate multi-user interaction. We would therefore appreciate it if the organizers could provide access to power and internet if this abstract is accepted.

References
[1] J. A. Augustitus, M. M. Kamal, and L. J. Howell. Design through analysis of an experimental automobile structure. SAE Transactions, 86:2186–2198, 1977.
[2] T.J.R. Hughes, J.A. Cottrell, and Y. Bazilevs. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering, 194(39):4135–4195, 2005.
[3] M. Möller, D. Toshniwal, F. van Ruiten. Physics-informed machine learning embedded into isogeometric analysis. In: Mathematics: Key enabling technology for scientific machine learning. Platform Wiskunde, 57–59, 2021.
[4] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.

13:50-14:40 Session 5: Keynote Lecture 2
Location: 100
Building Robust Simulation-based Forecasts During Emergencies

ABSTRACT. Many of today’s global crises, such as the 2015 migration crisis in Syria and the 2020 COVID pandemic, have a sudden evolution that complicates the preparation of a community response. Simulation-based forecasts for such crises can help to guide the development of mitigation policies or inform a more efficient distribution of support. However, the time required to develop, validate and execute these models can often be intractably long, causing many of these forecasts to only become accurate after the damage has occurred. In this talk I will share the experiences within our group in developing and delivering forecasting reports for two types of emergency situations: conflict-driven migration, and COVID-19 infectious disease outbreaks. We try to achieve this using open-source agent-based models, high performance computing and generic tools for automation, verification, validation and ensemble forecasting with uncertainty. It’s a feat that is extremely difficult to accomplish even with large and dedicated teams. We only have a small and partially dedicated team. Nevertheless, I’ll share the approaches we use to handle the challenge of rapidly developing simulation-based emergency forecasts. These approaches helped us to perform better and deliver more than widely feared, so we believe they could benefit other research teams too.

14:50-16:30 Session 6A: MT 3
Location: 100
Towards Automatic Generation of Digital Twins: Graph-based Integration of Smart City Datasets

ABSTRACT. This paper presents a graph-based approach to modelling and analysis of spatial (GIS) datasets supporting the deployment of *smart city* solutions. The presented approach is based on the *spatially-triggered graph transformations* (STGT) methodology, which allows for *materialisation* of spatial relationships detected using suitable tools, as well as performing measurements and modifications of geometries. The theory is illustrated using a real-world example concerning street lighting. It shows how an existing traffic sensor network can be used to enable dynamic dimming of lamps, which can result in significant energy usage savings. Network analysis is also applied to broaden the coverage of such systems, even in the case of sensor sparsity. The presented results have been obtained in a real-world project and are due for larger-scale validation in the near future.

Digital twin simulation development and execution on HPC infrastructures

ABSTRACT. The Digital Twin paradigm in medical care has recently gained popularity among proponents of translational medicine, as it enables a medical care professional to make informed treatment choices on the basis of digital simulations. In this paper, we present an overview of the functional and non-functional requirements of the IT solutions which enable such simulations – including the need to ensure repeatability and traceability of results – and propose an architecture which satisfies these requirements. We then describe a computational platform which facilitates digital twin simulations and validate our approach in the context of a real-life medical use case: the BoneStrength application.

Improving the Resiliency of Decentralized Crowdsourced Blockchain Oracles

ABSTRACT. The emergence of blockchain technologies has created the possibility of transforming business processes in the form of immutable agreements called smart contracts. Smart contracts suffer from a major limitation: they cannot authenticate the trustworthiness of real-world data sources, creating the need for intermediaries called oracles. Oracles are trusted entities that connect on-chain systems with off-chain data, allowing smart contracts to operate on real-world inputs in a trustworthy manner. A popular oracle protocol is the crowdsourced oracle, where unrelated individuals attest to facts through voting mechanisms in smart contracts. Crowdsourced oracles have unique challenges: the trustworthiness and correctness of outcomes cannot be explicitly verified. These problems are aggravated by inherent vulnerabilities to attacks, such as Sybil attacks. To address this weakness, this paper proposes a reputation-based mechanism, where oracles are given a reputation value depending on the implied correctness of their actions over time. This reputation score is used to eliminate malicious agents from the participant pool. Additionally, two reputation-based voting mechanisms are proposed. The effectiveness of the proposed mechanism is evaluated using an agent-based simulation of a crowdsourced oracle platform in which a pool of witnesses evaluates Boolean queries.
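A toy version of such a reputation mechanism can be sketched as follows. The reward/penalty constants, the weighted-majority rule, and the elimination threshold below are illustrative assumptions, not the paper's actual voting mechanisms:

```python
def run_oracle_rounds(pool, truths, threshold=0.2, reward=0.05, penalty=0.3):
    """Toy reputation mechanism for a crowdsourced Boolean oracle.

    pool: dict name -> (strategy, reputation), where strategy maps the
    ground truth to that agent's vote.  Each round the outcome is the
    reputation-weighted majority; agents matching the outcome gain
    reputation (capped at 1.0), dissenters lose it, and agents whose
    reputation falls below `threshold` are eliminated from the pool.
    """
    pool = dict(pool)  # do not mutate the caller's pool
    outcomes = []
    for truth in truths:
        votes = {name: strat(truth) for name, (strat, _rep) in pool.items()}
        yes = sum(pool[n][1] for n, v in votes.items() if v)
        no = sum(pool[n][1] for n, v in votes.items() if not v)
        outcome = yes >= no
        outcomes.append(outcome)
        for name, vote in votes.items():
            strat, rep = pool[name]
            rep = min(1.0, rep + reward) if vote == outcome else rep - penalty
            pool[name] = (strat, rep)
        pool = {n: sr for n, sr in pool.items() if sr[1] >= threshold}
    return outcomes, pool

honest = lambda truth: truth          # always attests correctly
malicious = lambda truth: not truth   # always attests incorrectly
pool = {**{f"h{i}": (honest, 0.5) for i in range(5)},
        **{f"m{i}": (malicious, 0.5) for i in range(2)}}
outcomes, survivors = run_oracle_rounds(pool, [True, True, True])
```

With five honest and two malicious witnesses of equal starting reputation, the honest majority wins every round and the malicious agents are eliminated after repeated penalties.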

Characterization of pedestrian contact interaction trajectories

ABSTRACT. A spreading process can be observed when a particular behavior, substance, or disease spreads through a population over time in social and biological systems. It is widely believed that contact interactions among individual entities play an essential role in the spreading process. Although contact interactions are often influenced by geometrical conditions, little attention has been paid to understanding their effects, especially on contact duration among pedestrians. To examine how pedestrian flow setups affect the contact duration distribution, we have analyzed trajectories of pedestrians in contact interactions collected from pedestrian flow experiments with uni-, bi- and multi-directional setups. Based on the standardized maximal distance, we have classified the types of motion observed in the contact interactions. We have found that almost all motion in the unidirectional flow setup can be characterized as subdiffusive, suggesting that the empirically measured contact duration tends to be longer than that estimated under a ballistic motion assumption. However, Brownian motion is more frequently observed in the other flow setups, indicating that the contact duration estimated under the ballistic motion assumption agrees well with the empirically measured one. Furthermore, when the difference in relative speed distributions between the experimental data and the ballistic motion assumption is larger, more subdiffusive motions are observed. This study also has practical implications. For instance, it highlights that geometrical conditions yielding a smaller difference in the relative speed distributions are preferred when diseases can be transmitted through face-to-face interactions.

Predicting ABM Results with Covering Arrays and Random Forests

ABSTRACT. Simulation is a useful and effective way to analyze and study complex, real-world systems. It allows researchers, practitioners, and decision makers to make sense of the inner workings of a system that involves many factors, often resulting in some sort of emergent behavior. The number of parameter value combinations grows exponentially, and it quickly becomes infeasible to test them all or even to explore a suitable subset of them. How does one then efficiently identify the parameter value combinations that matter for a particular simulation study? In addition, is it possible to train a machine learning model to predict the outcome of an agent-based model without running the agent-based model for all parameter value combinations? We propose utilizing covering arrays to create $t$-way ($t$ = 2, 3, 4, etc.) combinations of parameter values to significantly reduce the parameter value exploration space for agent-based models. In our prior work we showed that covering arrays were useful for systematically decreasing the parameter space in an agent-based model. We now build on that work by applying it to Wilensky's HeatBugs model and training a random forest machine learning model to predict simulation results, using the covering arrays to select our training and test data. Our results show that a 2-way covering array provides sufficient training data to train our random forest to predict three different simulation outcomes. Our process of using covering arrays to decrease the parameter space and then predict ABM results using machine learning is successful.
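The covering-array idea can be illustrated with a small greedy 2-way (pairwise) generator. This is a sketch on made-up parameter names; real studies use dedicated covering-array tools and much larger parameter spaces:

```python
from itertools import combinations, product

def pairwise_covering_array(params):
    """Greedy construction of a 2-way (pairwise) covering array.

    params: dict parameter -> list of values.  Returns a list of rows
    (one value per parameter) such that every pair of values of every
    pair of parameters occurs in at least one row.  Greedy selection
    over the full Cartesian product, so only suitable for small demos.
    """
    names = list(params)
    uncovered = set()
    for i, j in combinations(range(len(names)), 2):
        for va in params[names[i]]:
            for vb in params[names[j]]:
                uncovered.add((i, va, j, vb))
    candidates = list(product(*(params[n] for n in names)))
    rows = []
    while uncovered:
        # pick the candidate row covering the most still-uncovered pairs
        def gain(row):
            return sum(1 for (i, va, j, vb) in uncovered
                       if row[i] == va and row[j] == vb)
        best = max(candidates, key=gain)
        rows.append(best)
        uncovered = {(i, va, j, vb) for (i, va, j, vb) in uncovered
                     if not (best[i] == va and best[j] == vb)}
    return rows

# four hypothetical ABM parameters with three levels each
params = {"p1": [0, 1, 2], "p2": [0, 1, 2], "p3": [0, 1, 2], "p4": [0, 1, 2]}
rows = pairwise_covering_array(params)
```

The resulting array covers all value pairs with far fewer runs than the full 3^4 = 81 product, which is exactly the reduction the abstract exploits before training the random forest.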

14:50-16:30 Session 6B: MT 4-ol
Location: 303
Vecpar – A Framework for Portability and Parallelization

ABSTRACT. Complex particle reconstruction software used by High Energy Physics experiments already pushes the edges of computing resources with demanding requirements for speed and memory throughput, but future experiments pose an even greater challenge. Although many supercomputers have already reached petascale capacities using many-core architectures and accelerators, numerous scientific applications still need to be adapted to make use of these new resources. To ensure a smooth transition to a platform-agnostic code base, we developed a prototype of a portability and parallelization framework named vecpar. In this paper, we introduce the technical concepts and main features, and we demonstrate the framework's potential by comparing the runtimes of the single-source vecpar implementation (compiled for different architectures) with native serial and parallel implementations; the comparison reveals significant speedup over the former and competitive performance versus the latter. Further optimizations and extended portability options are currently under investigation and are the focus of future work.

Online Runtime Environment Prediction for Complex Colocation Interference in Distributed Stream Processing

ABSTRACT. To improve system resource utilization, multiple operators are co-located in distributed stream processing systems. In colocation scenarios, the node runtime environment and the co-located operators affect each other. Existing methods mainly study the impact of the runtime environment on operator performance. However, there is still a lack of in-depth research on the interference of operator colocation with the runtime environment. This interference affects the performance prediction of co-located operators, and in turn the effectiveness of operator placement. To solve these problems, we propose an online runtime environment prediction method based on operator portraits for complex colocation interference. The experimental results show that, compared with existing works, our method can not only accurately predict the runtime environment online, but also has strong scalability and continuous learning ability. It is worth noting that our method exhibits excellent online prediction performance for runtime environments in large-scale colocation scenarios.

Development of 3D viscoelastic crustal deformation analysis solver with data-driven method on GPU

ABSTRACT. In this paper, we developed a 3D viscoelastic analysis solver with a data-driven method on GPUs for fast computation of highly detailed 3D crustal structure models. Here, the initial solution is obtained with high accuracy using a data-driven predictor based on previous time-step results, which reduces the number of multi-grid solver iterations and thus the computation cost. To save memory and attain high performance on GPUs, the previous time-step results are compressed by multiplication with a random matrix, and multiple Green's functions are solved simultaneously to improve the memory-bound matrix-vector product kernel. The developed GPU-based solver attained an 8.6-fold speedup over the state-of-the-art multi-grid solver when measured on compute nodes of the AI Bridging Cloud Infrastructure at the National Institute of Advanced Industrial Science and Technology. The fast analysis method enabled calculating 372 viscoelastic Green's functions for a large-scale 3D crustal model of the Nankai Trough region with $4.2\times10^9$ degrees of freedom within 333 s per time step using 160 A100 GPUs, and such results were used to estimate coseismic slip distribution.

Optimization and Comparison of Coordinate- and Metric-Based Indexes on GPUs for Distance Similarity Searches

ABSTRACT. Distance similarity searches are a fundamental operation in large-scale data analytics, as they are used to find all points (or feature vectors) that are within a search distance of a query point. Given that new scientific instruments are generating a tremendous amount of data, it is critical that these searches are highly efficient. In recent years, GPU algorithms have been proposed to parallelize distance similarity searches. While most work shows that GPU algorithms largely outperform parallel multi-core CPU algorithms, there is no single GPU algorithm that outperforms all other state-of-the-art approaches. Algorithms have their niches, and it is not clear which algorithm should be selected based on the input dataset.

We compare two GPU distance similarity search algorithms that index the input dataset differently: (i) one that indexes points based on their data/representational coordinates; and (ii) a metric-based index that uses the distances between data points and a set of reference points to index the data. We compare performance across seven real-world datasets with varying properties and make several observations. First, a counterintuitive finding is that the data dimensionality is not a good indicator of which algorithm should be used on a given dataset. Second, we find that the intrinsic dimensionality (ID), which quantifies structure in the data, can be used to tune the algorithms' parameters to improve performance over the baselines reported in prior work. Third, combining the data/representational dimensionality and the ID can be used to select the best-performing GPU algorithm for a dataset. This allows datasets to be processed by the algorithm that achieves the best performance given data-dependent characteristics.
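A coordinate-based index of the first kind can be illustrated with a minimal uniform-grid sketch: hash each point into a cell of side ε, so every neighbour within ε of a query lies in the 3×3 block of cells around it. This is a pure-Python CPU toy, not the GPU algorithms compared in the paper:

```python
import math
import random
from collections import defaultdict

def build_grid(points, eps):
    """Coordinate-based index: bucket each 2D point into a square cell
    of side eps keyed by its integer cell coordinates."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(math.floor(x / eps), math.floor(y / eps))].append(idx)
    return grid

def range_query(grid, points, q, eps):
    """Return indices of all points within eps of query q, scanning only
    the 3x3 block of cells around q's cell."""
    cx, cy = math.floor(q[0] / eps), math.floor(q[1] / eps)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for idx in grid.get((cx + dx, cy + dy), ()):
                if math.dist(points[idx], q) <= eps:
                    result.append(idx)
    return result

random.seed(7)
points = [(random.random(), random.random()) for _ in range(300)]
eps = 0.1
grid = build_grid(points, eps)
hits = sorted(range_query(grid, points, (0.5, 0.5), eps))
brute = sorted(i for i, p in enumerate(points)
               if math.dist(p, (0.5, 0.5)) <= eps)
```

A metric-based index would instead store, for each point, its distances to a few reference points and prune candidates by the triangle inequality.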

Efficiency Analysis for AI Applications in HPC Systems. Case Study: k-means

ABSTRACT. Currently, AI applications require large-scale computing and memory to solve problems. The combination of AI applications and HPC sometimes does not use resources efficiently. Furthermore, much idle time is caused by communication, resulting in increased computing time. This paper describes a methodology for parallel AI applications that analyses the performance and efficiency of these applications running on HPC resources in order to make decisions and select the most appropriate system resources. We validate our proposal by analysing the efficiency of the k-means application, obtaining an efficiency of 99% on a target machine.
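For reference, the k-means application analysed above can be summarised by Lloyd's algorithm: assign each point to its nearest centroid, then move each centroid to the mean of its cluster. The 1D pure-Python sketch below is a toy, not the parallel HPC implementation the paper profiles:

```python
def kmeans(points, centroids, iters=20):
    """Minimal 1D Lloyd's k-means."""
    for _ in range(iters):
        # assignment step: nearest centroid for every point
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda k: abs(p - centroids[k]))
            clusters[nearest].append(p)
        # update step: move each centroid to its cluster mean
        centroids = [sum(c) / len(c) if c else centroids[k]
                     for k, c in enumerate(clusters)]
    return centroids

# two well-separated 1D clusters around 1.0 and 10.0
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
centroids = kmeans(data, [0.0, 5.0])
```

In a parallel setting, the assignment step is embarrassingly parallel while the update step requires a reduction, which is where the communication-induced idle time discussed in the abstract arises.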

14:50-16:30 Session 6C: AIHPC4AS 2-ol
Location: 319
Intracellular Material Transport Simulation in Neurons Using Isogeometric Analysis and Deep Learning

ABSTRACT. Neurons exhibit striking complexity and diversity in their morphology, which is essential for neuronal functions and biochemical signal transmission. However, it also brings challenges to mediating intracellular material transport, since most essential materials for neurons have to undergo long-distance transport along axons and dendrites after synthesis in the cell body. In particular, the neuron relies heavily on molecular motors for the fast transport of various materials along cytoskeletal structures like microtubules (MTs). The disruption of this long-distance transport can induce neurological and neurodegenerative diseases like Huntington's, Parkinson's, and Alzheimer's disease. Therefore, it is essential to study the intracellular transport process in neurons. Several mathematical models have been proposed to simulate and explain certain phenomena during transport, but they are limited to simple 1D or 2D domains and do not consider the complex neuron morphology. Here, we simulate intracellular material transport within complex neuron morphologies using isogeometric analysis (IGA) and deep learning (DL). An IGA-based simulation platform is first developed to reconstruct complex 3D neuron geometries and obtain high-fidelity velocity and concentration results during material transport. Built upon the simulation platform, we develop a DL-based surrogate model to improve the computational efficiency of the IGA simulation. We also solve a PDE-constrained optimization (PDE-CO) problem to study the transport control mechanisms and explain the traffic jam phenomenon within abnormal neurons. We then develop a novel IGA-based physics-informed graph neural network (PGNN) that learns from the PDE-CO transport model and effectively predicts complex normal and abnormal material transport phenomena such as MT-induced traffic jams. Our results provide key insights into how material transport in neurons is mediated by their complex morphology and MT distribution, and help to understand the formation of complex traffic jams.

Towards understanding of Deep Reinforcement Learning Agents used in Cloud Resource Management

ABSTRACT. Cloud computing resource management is a critical component of modern cloud computing platforms, aimed at managing computing resources for a given application by minimizing the cost of the infrastructure while maintaining Quality-of-Service (QoS) conditions. This task is usually solved using rule-based policies. Due to their limitations, more complex solutions, such as Deep Reinforcement Learning (DRL) agents, are being researched. Unfortunately, deploying such agents in a production environment can be seen as risky because of the lack of transparency of DRL decision-making policies: there is no way to know why a certain decision was made. To foster trust in DRL-generated policies, it is important to provide means of explaining why certain decisions were made given a specific input. In this paper we present a tool applying the Integrated Gradients (IG) method to the Deep Neural Networks used by DRL algorithms. This allows us to obtain feature attributions that show the magnitude and direction of each feature's influence on the agent's decision. We verify the viability of the proposed solution by applying it to a number of sample use cases with different DRL agents.

Long-term prediction of cloud resource usage in high-performance computing

ABSTRACT. Cloud computing is gaining popularity in the context of high-performance computing applications. Among other things, the use of cloud resources allows advanced simulations to be carried out in circumstances where local computing resources are limited. At the same time, the use of cloud computing may increase costs. This article presents an original approach which uses anomaly detection and machine learning for predicting cloud resource usage in the long term, making it possible to optimize resource usage (through an appropriate resource reservation plan) and reduce its cost. The solution developed uses the XGBoost model for long-term prediction of cloud resource consumption, which is especially important when these resources are used for advanced long-term simulations. Experiments conducted using real-life data from a production system demonstrate that the use of the XGBoost model developed for prediction allowed the quality of predictions to be improved (by 16%) compared to statistical methods. Moreover, techniques using the XGBoost model were able to predict chaotic changes in resource consumption as opposed to statistical methods.

A Deep r-Adaptive First-Order Least Squares Method for Solving Elliptic PDEs

ABSTRACT. In recent years, there has been a rise in the use of Deep Learning (DL) to solve Partial Differential Equations (PDEs). Mesh-based techniques are particularly beneficial for solving low-dimensional PDEs (up to three dimensions). In contrast, mesh-free methods based on Monte Carlo integration are used to solve high-dimensional PDEs due to the curse of dimensionality. In this work, we aim to improve numerical solutions of low-dimensional first-order formulations of elliptic PDEs using a DL adaptive strategy. In particular, the method combines the advantages of the r-adaptive DL mesh-based method presented in [1] with the Deep First-Order Systems Least Squares (FOSLS) method proposed in [2]. During the presentation, we will explain the particularities of the method and the architectures used. We will also present numerical results for advection-reaction problems and convection-dominated diffusion problems in one and two dimensions with continuous and discontinuous materials. The numerical results will show the advantages of the Deep r-Adaptive First-Order Least Squares method in cases where uniform mesh-based techniques fail.

1. Omella, Á.J., Pardo, D.: r-Adaptive Deep Learning Method for Solving Partial Differential Equations. arXiv preprint arXiv:2210.10900 (2022).
2. Cai, Z., Chen, J., Liu, M., Liu, X.: Deep least-squares methods: An unsupervised learning-based numerical method for solving elliptic PDEs. Journal of Computational Physics, 420, 109707 (2020).

14:50-16:30 Session 6D: BBC 2
Location: 220
Multifractal organization of EEG signals in Multiple Sclerosis

ABSTRACT. In this contribution, we present a multifractal analysis of the electroencephalography (EEG) data obtained from patients with multiple sclerosis (MS) and the control group. We compared the complexity of the EEG time series, paying particular attention to analysing the correlations between the degree of multifractality, disease duration, and level of disease progression quantified by the Expanded Disability Status Scale (EDSS).

We used Multifractal Detrended Fluctuation Analysis, a generalisation of Detrended Fluctuation Analysis, which has been successfully deployed as a robust tool facilitating the multilevel characterisation of time series and, specifically, of other types of brain signals. Furthermore, based on the generalised Hurst exponents, we obtained the multifractal/singularity spectrum of the Hölder exponents, f(alpha). To confirm the multifractality of the data, we compared it to Fourier surrogates of the data. To quantify the coupling between brain regions, we used Detrended Cross-Correlation Analysis. By the coefficient rho(q,s) we denote here the detrended cross-correlations between a pair of time series on the particular time scale "s" and with the amplitude of fluctuations filtered by the exponent "q" of the fluctuation function.

Our results reveal a significant correspondence between the complexity of the time series and the stage of multiple sclerosis progression. Namely, we identified brain regions whose EEG signals were characterised by well-developed multifractality: the estimated multifractal spectra take the shape of asymmetrical parabolas with larger widths ($\Delta \alpha$) and lower persistence of the time series (spectra localised above but closer to $\alpha = 0.5$) for patients with a higher level of disability, whereas for the control group and patients with low-level EDSS they were characterised by monofractality and higher persistence. A link between multifractality and disease duration has not been observed, indicating that the multifractal organisation of the data is a hallmark of the development of the disease. Our conclusions are supported by the analysis of the cross-correlations between EEG signals. The most significant difference in the coupling of brain areas has been identified between the cohort of patients with EDSS > 1 and the combined group of patients with EDSS <= 1 and the control group.
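The fluctuation function at the core of (MF)DFA can be sketched for a single scale s: integrate the mean-subtracted signal into a profile, split it into windows of length s, remove a least-squares linear trend from each window, and take the RMS of the residuals. The monofractal toy below omits the q-filtering and the multifractal spectrum steps described in the abstract:

```python
import math
import random

def dfa_fluctuation(series, s):
    """Detrended Fluctuation Analysis fluctuation F(s) at one scale s."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:               # integrate the mean-subtracted signal
        acc += x - mean
        profile.append(acc)
    n_win = len(profile) // s
    sq = 0.0
    for w in range(n_win):
        seg = profile[w * s:(w + 1) * s]
        # least-squares line y = a*t + b over t = 0..s-1
        t_mean = (s - 1) / 2.0
        y_mean = sum(seg) / s
        num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
        den = sum((t - t_mean) ** 2 for t in range(s))
        a = num / den
        b = y_mean - a * t_mean
        sq += sum((y - (a * t + b)) ** 2 for t, y in enumerate(seg))
    return (sq / (n_win * s)) ** 0.5

# white noise has F(s) ~ s^0.5, i.e. a Hurst-like exponent near 0.5
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
f8, f64 = dfa_fluctuation(noise, 8), dfa_fluctuation(noise, 64)
alpha = math.log(f64 / f8) / math.log(64 / 8)
```

In MFDFA the squared residuals are additionally raised to the power q/2 before averaging, which is what separates persistent, antipersistent, and multifractal behaviour across fluctuation magnitudes.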

A Federated Search Workflow Engine for Text Analytics Using Large-Scale BioMedical Literature

ABSTRACT. Modern domain-specific scientific literature is very often packed into extremely large and complex data sets. For scientists parsing these large repositories for a specific term and its corresponding articles, standard methods can be impractical due to complexity and computational time. The scope of this project is to create a solution suitable for modern researchers and scientists that helps their work by parsing data gathered from the scientific literature in an easy and quick way. This work presents the design, development, and solution to the problem. It is provided as a federated Full-Text Search architecture built with the help of the Pegasus Scientific Workflow Management System (WMS). Such a powerful environment supports scientists in their research with a tool for processing the biomedical literature known as PubMed, whose size is close to 40 million individual records. The core solution we propose is based on federated OpenSearch instances, created and maintained with the help of Pegasus for its flexibility and ability to quickly adapt to various infrastructure architectures for user comfort. With this principle in mind, potential users can define their own computing infrastructure according to their needs and capabilities, which could greatly reduce the time and resources spent on research. This work provides a complete overview of the system's infrastructure, capabilities, and potential usage for biomedical research. The project is continuously evolving to improve its features and use cases. The future direction of this work is to test the proposed solution using different computing infrastructures and software settings, to identify a well-optimized option for a drug-repurposing knowledge graph use case.

CNN-based Quantification of Blood Vessel Lumen in 3D Images

ABSTRACT. This work is aimed at the development of a method for automated, fast and accurate geometric modeling of blood vessels from 3D images, robust to limited image resolution, noise and artefacts. Within the centerline-radius paradigm, convolutional neural networks (CNNs) are used to approximate the mapping from image cross-sections to vessel lumen parameters. A six-parameter image formation model is utilized to analyse the noise-induced uncertainty of the estimated parameters and to define conditions for the mapping to exist. Noisy images are computer-synthesized for CNN training, validation and testing. The trained networks are applied to real-life time-of-flight (TOF) magnetic resonance images (MRI) acquired for a blood-flow quality-assurance phantom. Excellent agreement is observed between the predictions made by the CNN and those obtained via model fitting as the reference method. The latter is a few orders of magnitude slower than the CNN and suffers from the local minima problem. The CNN is also trained and tested on publicly available contrast-enhanced (CE) computed tomography angiography (CTA) datasets. It accurately predicts the coronary lumen parameters in seconds, compared to the hours needed by human experts. The method can be an aid to vascular diagnosis and to automated annotation of images -- relieving medical professionals of this tedious and time-consuming task, otherwise crucial to making advances in machine learning applications in biomedicine.

Tensor Train Subspace Analysis for Classification of Hand Gestures with Surface EMG Signals

ABSTRACT. Processing and classification of surface EMG signals is a challenging computational problem that has received increasing attention for at least two decades. By transforming multi-channel EMG signals into spectrograms, classification can be performed using multi-linear features extracted from a set of spectrograms by various tensor decomposition methods. In this study, we propose to use one of the most efficient tensor network models, the tensor train decomposition, and to combine it with tensor subspace analysis to extract more discriminant 2D features from multi-way data. Numerical experiments, carried out on surface EMG signals registered during hand gesture actions, demonstrated that the proposed feature extraction method outperforms well-known tensor decomposition methods in terms of classification accuracy.

A new statistical approach to image reconstruction with rebinning for the x-ray CT scanners with flying focal spot tube

ABSTRACT. This paper presents an original approach to the image reconstruction problem for spiral CT scanners in which the FFS (Flying Focal Spot) technology is implemented in the x-ray tube. The geometry of those scanners causes problems for CT systems based on traditional (FDK) reconstruction methods. Therefore, we propose a rebinning strategy, i.e. a scheme with an abstract parallel geometry of x-rays, in which these problems do not occur. As a consequence, we can reconstruct an image from projections with a non-equiangular distribution, as present in the Flying Focal Spot technology. Our method is based on statistical model-based iterative reconstruction (MBIR), where the reconstruction problem is formulated as a shift-invariant system (a continuous-to-continuous data model). The statistical fundamentals of the proposed method allow for a reduction of the x-ray dose absorbed by patients during examinations. Our approach is flexible, and it is possible to conduct reconstruction using projections obtained from scanners with any topology of focal spots in the x-ray tube. Thanks to its unique formulation, it can systematically deliver selected scans in a fast procedure, which is especially important for ambulatory diagnostics. The simulations performed showed that our method outperforms the traditional approach in terms of the quality of the obtained images and the x-ray dose needed to complete an examination procedure.

Hierarchical relative expression analysis in multi-omics data classification

ABSTRACT. This study aims to develop new classifiers that can effectively integrate and analyze biomedical data obtained from various sources through high-throughput technologies. The use of explainable models is particularly important as they offer insights into the relationships and patterns within the data, which leads to a better understanding of the underlying processes.

The objective of this research is to examine the effectiveness of decision trees combined with Relative eXpression Analysis (RXA) for classifying multi-omics data. Several concepts for integrating separated data are verified, based on different pair relationships between the features. Within the study, we propose a multi-test approach that combines linked top-scoring pairs from different omics in each internal node of the hierarchical classification model. To address the significant computational challenges raised by RXA, the most time-consuming aspects are parallelized using a GPU. The proposed solution was experimentally validated using single and multi-omics datasets. The results show that the proposed concept generates more accurate and interpretable predictions than commonly used tree-based solutions.
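The top-scoring-pair building block of Relative eXpression Analysis can be sketched as a search for the feature pair whose ordering best separates two classes. This is a single-pair toy on synthetic data; the paper's multi-test, multi-omics, GPU-parallel variant is much richer:

```python
from itertools import combinations

def top_scoring_pair(samples, labels):
    """Find the feature pair (i, j) maximising the RXA-style score
    |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)|."""
    n_features = len(samples[0])
    best = (-1.0, None)
    for i, j in combinations(range(n_features), 2):
        probs = []
        for cls in (0, 1):
            members = [s for s, y in zip(samples, labels) if y == cls]
            probs.append(sum(1 for s in members if s[i] < s[j]) / len(members))
        score = abs(probs[0] - probs[1])
        if score > best[0]:
            best = (score, (i, j))
    return best

# synthetic data: class 0 has feature0 < feature1, class 1 the reverse;
# feature2 is uninformative
samples = [(1, 2, 5), (0, 3, 5), (2, 4, 5), (1, 5, 5),
           (2, 1, 5), (3, 0, 5), (4, 2, 5), (5, 1, 5)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
score, pair = top_scoring_pair(samples, labels)
```

Because the score depends only on the relative ordering of two features, it is invariant to per-sample scaling, which is what makes such pair relationships attractive for integrating heterogeneous omics data.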

14:50-16:30 Session 6E: QCW 2
Location: 120
Learning QUBO Models for Quantum Annealing: A Constraint-based Approach

ABSTRACT. Quantum Annealing is an optimization process taking advantage of quantum tunneling to search for the global optimum of an optimization problem. Optimization problems solved by a Quantum Annealer machine are modeled as Quadratic Unconstrained Binary Optimization (QUBO) problems. Combinatorial optimization problems, where variables take discrete values and the optimization is subject to constraints, can also be modeled as QUBO problems to benefit from the power of Quantum Annealing. However, defining quadratic penalty functions representing constraints within the QUBO framework can be a complex task. In this paper, we propose a method to learn constraint representations from data, as combinations of patterns we isolated in Q matrices modeling optimization problems and their constraint penalty functions. We model this learning problem as a combinatorial optimization problem itself. We propose two experimental protocols to illustrate the strengths of our method: its scalability, where correct pattern combinations learned over data from a small constraint instance scale to large instances of the same constraint, and its robustness, where correct pattern combinations can be learned over very scarce data, composed of only about 10 training elements.
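
To make the QUBO setting concrete, the following minimal sketch (illustrative only, not the authors' learned patterns; the one-hot constraint and penalty weight are assumptions) encodes the constraint $\sum_i x_i = 1$ as the quadratic penalty $P(\sum_i x_i - 1)^2$ and checks by brute force that exactly the feasible assignments minimize the energy:

```python
from itertools import product

def one_hot_qubo(n, penalty=2.0):
    """Q matrix for penalty * (sum(x) - 1)^2, dropping the constant term:
    each diagonal entry gets -penalty, each off-diagonal pair +2*penalty."""
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = -penalty
        for j in range(i + 1, n):
            Q[i][j] = 2.0 * penalty
    return Q

def energy(Q, x):
    """QUBO energy x^T Q x over the upper triangle (including diagonal)."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

# Brute-force check: the minimizers are exactly the one-hot assignments.
n = 4
Q = one_hot_qubo(n)
energies = {x: energy(Q, x) for x in product((0, 1), repeat=n)}
best = min(energies.values())
minimizers = [x for x, e in energies.items() if e == best]
```

Real constraint penalties can be far less obvious than this textbook case, which is what motivates learning them from data.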

Searching $B$-smooth numbers using quantum annealing: applications to factorization and discrete logarithm problem

ABSTRACT. Integer factorization and the discrete logarithm problem, two cornerstones of classical public-key cryptography, are vulnerable to quantum attacks, especially the polynomial-time Shor's algorithm, which has to be run on a general-purpose quantum computer. On the other hand, one can perform quantum computations using quantum annealing, where every problem has to be transformed into an optimization problem, for example a QUBO problem. Currently, the biggest available quantum annealer, the D-Wave Advantage, has almost 6,000 physical qubits, and can therefore solve bigger problems than general-purpose quantum computers can. Even though it is impossible to run Shor's algorithm on a quantum annealer, several methods allow one to transform factorization or discrete logarithm problems into the QUBO problem. Using a D-Wave quantum annealer, the biggest factored integer had 20 bits, and the biggest field over which it was possible to compute a discrete logarithm using any quantum method had 6 bits. This paper shows how to transform the search for \textit{B}-smooth numbers, an important part of the quadratic sieve method for factorization and of index calculus for solving discrete logarithm problems, into the QUBO problem and then solve it using the D-Wave Advantage quantum solver.
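
For context, $B$-smoothness itself is straightforward to check classically by trial division; the sketch below (illustrative only, not the paper's QUBO encoding) tests whether an integer factors completely over the primes up to $B$:

```python
def smooth_part(n, B):
    """Divide out all prime factors <= B by trial division.
    n is B-smooth exactly when 1 remains."""
    for p in range(2, B + 1):
        # Composite p is harmless: its prime factors were already divided out.
        while n % p == 0:
            n //= p
    return n

def is_b_smooth(n, B):
    return smooth_part(n, B) == 1

# 168 = 2^3 * 3 * 7 is 7-smooth; 22 = 2 * 11 is not.
```

Sieve-based methods need many such smooth values at once, which is why reformulating the search as an optimization problem is attractive for annealing hardware.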

Enabling Non-Linear Quantum Operations through Variational Quantum Splines

ABSTRACT. One of the major issues for building a complete quantum neural network is the implementation of non-linear activation functions in a quantum computer. In fact, the postulates of quantum mechanics impose only unitary transformations on quantum states, which is a severe limitation for quantum machine learning algorithms. Recently, the idea of QSplines has been proposed to approximate non-linear quantum activation functions by means of the HHL algorithm. However, QSplines require the problem to be formulated as a block-diagonal matrix and need a fault-tolerant quantum computer to be correctly implemented. This work proposes two novel methods for approximating non-linear quantum activation functions using variational quantum algorithms. Firstly, we develop the variational QSplines (VQSplines), which overcome the highly demanding requirements of the original QSplines and approximate non-linear functions on near-term quantum computers. Secondly, we propose a novel formulation for QSplines, the Generalized QSplines (GQSplines), which provide a more flexible representation of the problem and are suitable to be embedded in existing quantum neural network architectures. As a third meaningful contribution, we implement VQSplines and GQSplines using PennyLane to show the effectiveness of the proposed approaches in approximating typical non-linear activation functions in a quantum computer.

Qubit: The Game. Teaching Quantum Computing through a Game-Based Approach

ABSTRACT. Quantum computing is a promising and rapidly growing interdisciplinary field that attracts researchers from science and engineering. Based on the hypothesis that traditional teaching is insufficient to prepare people for their introduction to this field, this paper presents Qubit: The Game, an innovative board game to promote both the motivation to learn quantum computing and the understanding of several essential concepts of that field. The reasons for the choice of game type, design and mechanics, and the followed methodology are described in detail here. This paper also includes a preliminary study to determine the effect of the proposed game on the perception, interest and basic knowledge of quantum computing in a group of high school students. The study findings reveal that the designed game is a powerful tool to foster interest and teach essential concepts of a subject as difficult as quantum computing, which can be of great help in introducing more complex concepts.

14:50-16:30 Session 6F: MMS 2
Location: B103
Sensitivity Analysis of Blood Flow in Abdominal Aortic Aneurysms by Lattice Boltzmann Method

ABSTRACT. An abdominal aortic aneurysm (AAA) is a permanent and localised dilation of the abdominal aorta (see Figure 1a) as the end result of a gradual imbalance between synthesis and degradation of tissue constituents within aortic walls. Although AAAs are usually asymptomatic, their growth can increase the risk of rupture, which leads to an overall mortality rate of 86\%. It is widely accepted that wall stresses play an important role in both the growth and rupture of AAAs.

Computational fluid dynamics is useful in analysing wall stresses as it can provide non-invasive quantification in various settings. In this study, we simulate the blood flow in the model shown in Figure 1a by using HemeLB, which is based on the lattice Boltzmann method (LBM). The LBM is an attractive tool for this simulation because its computational performance is highly scalable and, thanks to its multi-scale approach, it is well suited to complex geometries.

To accurately simulate such a complex fluid flow generally requires a model which has many degrees of freedom. As a first step, we perform a sensitivity analysis on the parameters in our computational model to find out their relative importance on the risk factors of AAAs. We use EasyVVUQ to perform this analysis and obtain the moments and Sobol indices of the quantities of interest. Six model parameters are varied, including the Reynolds number (Re), the Womersley number (Wo), the relaxation time $\tau$ of the LBM model, and the three parameters (Murray's law power, $\gamma_R$, $\gamma_C$) in our recently proposed strategy for imposing flow rate ratios on many outlets. The quantities of interest are the flow rate ratios at the outlets and the endothelial cell activation potential (ECAP) on the wall of the AAA. A polynomial chaos expansion of degree 2 is used to represent the quantities of interest in terms of the input variables.

In this analysis, we obtain estimates of the uncertainties of the risk factors due to individual inputs. Our preliminary results suggest that certain outlets can have a significantly greater impact on the risk factors. Moreover, the results show that the variations in the flow rate ratios and ECAP are mainly determined by $\gamma_C$ and $\tau$ (see Figure 1b). These findings enable us to focus on a small subset of input parameters during calibration, with a limited penalty for fixing the other parameters. We will build on these findings to improve the computational model for quantifying the risk factors of AAAs in different medical settings.
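
As a rough illustration of what a first-order Sobol index measures (EasyVVUQ computes these via polynomial chaos; the Monte Carlo pick-freeze estimator and the toy model $y = x_1 + 2x_2$ below are illustrative stand-ins, not the study's setup):

```python
import random

def first_order_sobol(f, n_vars, n_samples=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for f with independent Uniform(0, 1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    yA = [f(a) for a in A]
    meanA = sum(yA) / n_samples
    varA = sum((y - meanA) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_vars):
        # AB_i shares only coordinate i with A; Cov(Y_A, Y_AB_i) = V(E[Y|X_i]).
        yAB = [f([a[j] if j == i else b[j] for j in range(n_vars)])
               for a, b in zip(A, B)]
        meanAB = sum(yAB) / n_samples
        cov = sum(p * q for p, q in zip(yA, yAB)) / n_samples - meanA * meanAB
        indices.append(cov / varA)
    return indices

# Toy model y = x1 + 2*x2: the exact first-order indices are 1/5 and 4/5.
S = first_order_sobol(lambda x: x[0] + 2.0 * x[1], 2)
```

The index for each input is its share of the output variance, which is exactly the quantity used above to rank the importance of the six model parameters.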

Hemodynamics of the coarctation of the aorta: a Lattice Boltzmann Study

ABSTRACT. A constriction or narrowing in the body's primary artery, the aorta, is known as coarctation of the aorta (COA). COA can lead to complications such as high blood pressure, an enlarged heart, or even heart failure. Studying COA through clinical or experimental means alone can be challenging. Relying solely on image processing techniques does not provide high-resolution blood flow details. Isotope injection requires patient consent and can be laborious, thus impeding its use in clinical studies. An alternative approach is to construct a digital COA model utilizing computational fluid dynamics (CFD) methods. These methods facilitate higher resolution for blood flow simulations in COA patients and can offer valuable insights into the underlying hemodynamic mechanisms contributing to the condition. In this study, the lattice Boltzmann method is employed as a fluid solver to study the hemodynamics of COA. To accomplish this, the open-source, highly scalable LBM code HemeLB, which is designed to study sparse fluid domains, is utilized. Previous studies have demonstrated that blood flow in the artery can attain Reynolds numbers exceeding 3000. To address this issue, we implement an LBM-based Smagorinsky sub-grid model to conduct large-eddy simulations (LES) of COA. When COA occurs, the adhesion of the obstruction to the aorta can induce turbulence, leading to increased uncertainty in the simulation results. The present study aims to assess the uncertainty in the quantities of interest (QoI), namely the pressure drop, the wall shear stress, and the changes in the Reynolds shear tensors after the narrowing of the aorta. In particular, we investigate the response of these quantities to changes in the input parameters: the Reynolds number, the volumetric flow rate, and the blockage ratio present in the COA. This work employs EasyVVUQ as the uncertainty quantification tool, which allows for efficient management of a large number of high-resolution simulations.
The goal is to provide a computational reference for the effects of different levels of blockage on the QoI change along the aorta. Our study improves the understanding of the hemodynamics of COA for clinical research and may help to quantify particular risk factors for patients with COA.

Towards a simplified solution of COVID spread in buildings for use in coupled models.

ABSTRACT. We present a prototype agent-based simulation tool, FACSiB, for SARS-CoV-2 to be used in an enclosed environment such as a supermarket. Our model simulates both the movement and breathing patterns of agents, to better understand the likelihood of infection within a confined space, given agent behaviors and room layout. We provide an overview of the conceptual model and its implementation, and showcase it in a modeled supermarket environment. In addition, we demonstrate how the model can be coupled to the Flu and Coronavirus Simulator (FACS), which is currently used to model the spread of SARS-CoV-2 in cities and larger regions.

14:50-16:30 Session 6G: CoDiP 1
Location: B115
Automating the Analysis of Institutional Design in International Agreements

ABSTRACT. This paper explores the automatic knowledge extraction of formal institutional design - norms, rules, and actors - from international agreements. The focus was to analyze the relationship between the visibility and centrality of actors in the formal institutional design in regulating critical aspects of cultural heritage relations. The developed tool utilizes techniques such as collecting legal documents, annotating them with Institutional Grammar, and using graph analysis to explore the formal institutional design. The system was tested against the 2003 UNESCO Convention for the Safeguarding of Intangible Cultural Heritage.

The Emergence of an European Consensus: A Data-Driven Analysis of UN General Assembly Voting Patterns

ABSTRACT. The voting database of the United Nations General Assembly has been studied by many researchers to determine the political similarity between two member states and to gain a better understanding of the dynamics of voting blocs. In this study, we analyse the emergence of a common European diplomacy. We investigate the affinity between countries by applying spectral clustering algorithms to votes, broken down by annual sessions, and measuring the co-location of European countries in the same clusters. The results are then refined by grouping UN votes by themes using natural language processing (NLP) methods. This allows us to extract the main topic categories discussed at the UN, in order to examine which topics promote or hinder European consensus at the UNGA. Our analysis shows that European concordance increased significantly in periods when European expansion was greatest, often preceding the formal enlargement process. We also find evidence of increased thematic coherence among European countries, particularly on issues of peacekeeping, human rights and development. Our findings highlight the importance of international organisations in shaping patterns of cooperation among UN member states, and suggest that unsupervised learning can provide valuable insights into these dynamics. These findings support efforts to promote greater cooperation and unity among European countries at the UN General Assembly, particularly in the context of evolving global challenges and shifting geopolitical alignments. Finally, our approach can be used to analyse other political or economic alliances (e.g. ASEAN, NATO, CIS), to study their dynamics and to compare them with the EU.
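
The affinity matrix such a clustering consumes can be built from session vote vectors; a minimal sketch with hypothetical country codes and votes coded $+1$/$-1$/$0$ (yes/no/abstain), using cosine similarity, might look like:

```python
from math import sqrt

# Hypothetical toy data: one vote vector per country for a single session.
votes = {
    "FR": [1, 1, -1, 1, 0, -1],
    "DE": [1, 1, -1, 1, -1, -1],
    "US": [-1, 1, 1, -1, -1, 1],
}

def cosine(u, v):
    """Cosine similarity between two vote vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

sim_fr_de = cosine(votes["FR"], votes["DE"])  # near +1: aligned voting
sim_fr_us = cosine(votes["FR"], votes["US"])  # negative: opposed voting
```

Spectral clustering then operates on the resulting pairwise similarity matrix, one session at a time.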

Towards a Better Understanding of Multilateral Consensus Making: Analyzing the Complex Network of UN Security Council and WHO Resolutions

ABSTRACT. Since the end of the Second World War and the creation of the United Nations, multilateral diplomacy has become a prominent feature of relations between states. However, this process has been accompanied by an increase in complexity due to the multiplication of forums and levels of discussion and decision-making: UN bodies, international agencies and organisations, regional committees, expert groups, etc. Due to this complexity, the study and understanding of governance in multilateral processes are particularly challenging for researchers. In order to gain insight into consensus building and norm production, we focus on the documents issued by these organisations: resolutions, recommendations, agendas, reports of regional committees, etc.

This abundance of documents provides evidence of the activity of these organisations, but it is difficult to study as a whole using manual methods due to the rigorous and repetitive nature of these texts. In contrast, we propose to use computational methods from graph theory and natural language processing (NLP) to analyse the corpus of texts. Instead of focusing on a selection of documents, we consider all available documents as a complex network.

To demonstrate the feasibility and efficiency of such a method, we analyse a set of 2674 United Nations Security Council resolutions and 3194 World Health Organisation (WHO) resolutions adopted from the founding dates of these institutions to the present day. After content extraction, a complex graph of citations is reconstructed and analysed. We show how topological graph measures can be related to aspects of the decision-making process. Topic modelling performed on the titles and content allows us to refine our conclusions according to the type of object treated. In particular, we show how the clustering coefficient allows us to identify tensions around a given issue. Such a method allows researchers to directly identify the ``hot spots'' of diplomatic activity. Our initial results show the success of our approach, and we propose to scale up to include other levels of publications (such as executive documents), and to scale out to consider links with other organisations.
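
As a small illustration of the clustering coefficient used here, the sketch below (with hypothetical resolution identifiers, not the actual corpus) computes the fraction of a node's neighbour pairs that are themselves connected:

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient of v in an undirected graph:
    the fraction of pairs of v's neighbours that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Toy citation graph, treated as undirected, with hypothetical ids.
adj = {
    "R1": {"R2", "R3", "R4"},
    "R2": {"R1", "R3"},
    "R3": {"R1", "R2"},
    "R4": {"R1"},
}
cc = clustering_coefficient(adj, "R1")  # 1 of 3 neighbour pairs linked
```

High local clustering around a resolution signals a densely cross-citing neighbourhood, which is the structural signature of the contested issues discussed above.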

14:50-16:30 Session 6H: CSCx 2-ol
Location: B11
The social graph based on real data

ABSTRACT. In this paper, we propose a model enabling the creation of a social graph corresponding to a real society. The procedure uses data describing real social relations in the community, such as marital status or number of children. Results show the power-law behavior of the distribution of links and, typically for small worlds, the independence of the clustering coefficient from the size of the graph.

The process of polarisation as a loss of dimensionality.

ABSTRACT. The increasing polarisation in our society, and in particular in our Social Networks, has been the focus of much research, especially during the SARS-CoV-2 pandemic. Polarisation is widely believed to be a risk for our democracies. Understanding and detecting its temporal evolution is, therefore, highly important.

Current approaches to defining and detecting polarisation largely rely on finding evidence of bimodality in a (possibly latent) ideological distribution, often inferred through collective behaviours on Social Networks. This bimodality-based definition makes it hard to detect temporal trends in polarisation, as the relevant tests deliver results that fall into a binary of polarised or non-polarised. Building on a post-structuralist understanding of polarisation processes, we propose here an alternative definition and estimation technique for polarisation: a decrease in the dimensionality of the latent space underpinning a communication network. This gives us a more nuanced definition of polarisation, apt to detect its increase or decrease beyond a binary detection test.

In particular, we exploit the statistical theory of Random Dot Product Graphs to embed networks in metric spaces. A decrease in the optimal dimensionality for the embedding of the network graph, as measured using truncated singular value decomposition of the graph adjacency matrix, is indicative of increasing polarisation in the network. We apply our framework to the communication interactions among New Zealand Twitter users discussing climate change, from 2017 onwards. In line with our expectations, we find that the discussion has become more polarised over time, as shown by a loss in the dimensionality of the communication network and corroborated by a decrease in the Von Neumann complexity of the network. Second, we apply this analysis to discussions of the COP climate change conferences, showing that our methods agree with other researchers' detections of polarisation in this space. Finally, we provide some synthetic examples to demonstrate how an increase in the isolation between distinct communities, or in the predominance of one community over the others, in a communication network is identifiable as a polarisation process.

Structural Validation Of Synthetic Power Distribution Networks Using The Multiscale Flat Norm

ABSTRACT. We study the problem of comparing a pair of geometric networks that may not be similarly defined, i.e., when they do not have one-to-one correspondences between their nodes and edges. Our motivating application is to compare power distribution networks of a region. Due to the lack of openly available power network datasets, researchers synthesize realistic networks resembling their actual counterparts. But the synthetic digital twins may vary significantly from one another and from actual networks due to varying underlying assumptions and approaches. Hence users want to evaluate the quality of such networks in terms of their structural similarity to actual power networks. But the lack of correspondence between the networks renders most standard approaches, e.g., subgraph isomorphism and edit distance, unsuitable. We propose an approach based on the multiscale flat norm, a notion of distance between objects defined in the field of geometric measure theory, to compute the distance between a pair of planar geometric networks. Using a triangulation of the domain containing the input networks, the flat norm distance between two networks at a given scale can be computed by solving a linear program. In addition, this computation automatically identifies the 2D regions (patches) that capture where the two networks differ. We demonstrate our approach on a set of actual power networks from a county in the USA. Our approach can be extended to validate synthetic networks created for multiple infrastructures such as transportation, communication, water, and gas networks.

OptICS-EV: A Data-Driven Model for Optimal Installation of Charging Stations for Electric Vehicles

ABSTRACT. As the demand for electric vehicles continues to surge worldwide, it becomes increasingly imperative for governments to plan for and anticipate its practical impact on society. In particular, any city or state needs to guarantee sufficient and proper placement of charging stations to service all current and future electric vehicle adopters. Furthermore, it needs to consider the inevitable additional strain these charging stations put on the existing power grid. In this paper, we use data-driven models to address these issues by providing an algorithm that finds optimal placement and connections of electric vehicle charging stations in the state of Virginia. Specifically, we found it suffices to build 10,733 additional charging stations to cover 75% of the population within 0.33 miles (and everyone within 5 miles). We also show that optimally connecting the stations to the power grid significantly improves the stability of the network. Additionally, we study 1) the trade-off between the average distance a driver needs to travel to their nearest charging station and the number of stations to build, and 2) the impact on the grid under various adoption rates. These studies provide further insight into the tools policymakers can use to prepare for the evolving future.
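
A common way to reason about such placement problems is greedy set cover: repeatedly pick the candidate site that covers the most still-unserved demand. The sketch below (toy site and demand identifiers are assumptions, not the paper's algorithm or data) illustrates the idea:

```python
def greedy_cover(candidates, demand, k):
    """Pick up to k candidate sites, each step taking the site covering
    the most still-uncovered demand points (classic greedy set cover)."""
    uncovered = set(demand)
    chosen = []
    for _ in range(k):
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            break  # nothing left to gain
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered

# Hypothetical sites mapped to the demand points within charging range.
candidates = {
    "s1": {"a", "b", "c"},
    "s2": {"c", "d"},
    "s3": {"d", "e"},
}
chosen, uncovered = greedy_cover(candidates, {"a", "b", "c", "d", "e"}, 2)
```

Greedy cover carries a well-known approximation guarantee, which is one reason variants of it appear in facility-location planning.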

14:50-16:30 Session 6I: NMA 2-ol
Location: B10
Strengthening structural baselines for graph classification using Local Topological Profile

ABSTRACT. We present an analysis of the topological graph descriptor Local Degree Profile (LDP), which forms a widely used structural baseline for graph classification. Our study focuses on model evaluation in the context of the recently developed fair evaluation framework, which defines rigorous routines for graph classification model selection and evaluation, ensuring reproducibility and comparability of the results. Based on the obtained insights, we propose a new baseline algorithm called Local Topological Profile (LTP), which extends LDP by using additional centrality measures and local vertex descriptors. The new approach provides results outperforming or very close to those of the latest GNNs on all datasets used. Specifically, state-of-the-art results were obtained for 4 out of 9 benchmark datasets. We also consider computational aspects of LDP-based feature extraction and model construction to propose practical improvements affecting execution speed and scalability. This allows for handling modern, large datasets and extends the portfolio of benchmarks used in graph representation learning. As the outcome of our work, we obtain LTP as a simple, fast, scalable, and robust baseline, capable of outcompeting modern graph classification models such as the Graph Isomorphism Network (GIN). We provide an open-source implementation at \href{https://github.com/j-adamczyk/LTP}{GitHub}.
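
For reference, the LDP features that LTP extends are simple per-node statistics of neighbour degrees; a minimal sketch (following the common LDP definition, illustrative rather than the paper's exact implementation) is:

```python
from statistics import mean, pstdev

def local_degree_profile(adj):
    """Per-node LDP features: own degree plus min/max/mean/std
    of the degrees of the node's neighbours."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    features = {}
    for v, nbrs in adj.items():
        dn = [deg[u] for u in nbrs] or [0]
        features[v] = (deg[v], min(dn), max(dn), mean(dn), pstdev(dn))
    return features

# Tiny path-like toy graph: 1 - 0 - 2 - 3.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
ldp = local_degree_profile(adj)
```

Node-level vectors like these are aggregated into a graph-level histogram before being fed to a standard classifier, which is what makes the descriptor so cheap relative to a GNN.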

Applying reinforcement learning to Ramsey problems

ABSTRACT. The paper presents the use of reinforcement learning in the edge coloring of a complete graph, and more specifically in the problem of determining Ramsey numbers. Zhou, Hao and Duval presented the reinforcement learning based local search (RLS) approach for grouping problems, which combines reinforcement learning techniques with a descent-based local search procedure. To evaluate the viability of the proposed RLS method, the authors used the well-known graph vertex coloring problem (GCP) as a case study. To the best of our knowledge, no one has used reinforcement learning to color the edges of a graph. Note that the problem is not simple: a complete graph with only 15 vertices already has 105 edges.

The paper contains an adaptation of the method of Zhou \emph{et al.} to the problem of finding specific Ramsey colorings. The proposed algorithm was tested by successfully finding critical colorings for selected known Ramsey numbers. The results of the proposed algorithm are so promising that we may have a chance to find unknown Ramsey numbers.
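
The feasibility check underlying such a search, namely whether a red/blue edge colouring of $K_n$ avoids a monochromatic $K_k$, can be sketched by brute force (illustrative only, far too slow for the instances the RLS method targets):

```python
from itertools import combinations

def has_mono_clique(n, red_edges, k):
    """Does this red/blue edge colouring of K_n contain a monochromatic K_k?
    red_edges holds frozensets {i, j}; every other edge is blue."""
    for clique in combinations(range(n), k):
        colours = {frozenset(e) in red_edges for e in combinations(clique, 2)}
        if len(colours) == 1:  # all edges of this K_k share one colour
            return True
    return False

# K_5 with red edges forming a 5-cycle is the classic witness that R(3,3) > 5:
cycle = {frozenset({i, (i + 1) % 5}) for i in range(5)}
found = has_mono_clique(5, cycle, 3)  # neither colour class has a triangle
```

A local-search or reinforcement-learning method explores the space of such colourings, using counts of monochromatic cliques as the cost to drive toward critical colourings.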

Deep Learning Attention Model For Supervised and Unsupervised Network Community Detection

ABSTRACT. Network community detection often relies on optimizing partition quality functions, like modularity or blockmodel likelihood. This optimization appears to be a complex unsupervised learning problem, traditionally tackled by various heuristic algorithms, which often fail to reach the optimal partition and, therefore, may require further fine-tuning. We propose a new unsupervised deep learning model which consists of a two-layer bi-partite convolutional graph neural network, stacked with a fully connected attention vanilla neural network. The model can be used to fine-tune network partitions with respect to other quality/objective functions, such as block model likelihood or description length. Furthermore, it can be used for supervised community detection, where one seeks to learn how to extrapolate the community structure provided for a certain part of the network to the rest of the network nodes.

Prediction of Urban Population-Facilities Interactions with Graph Neural Network

ABSTRACT. Urban facility distribution and organization have a great impact on citizens' daily life and force them to travel a certain distance to satisfy their demand. Over the last two decades, various mathematical models have been derived to formalize people's mobility patterns while interacting with urban facilities, so that they can be applied to areas lacking observed data. The authors of the current work propose a new approach to modeling population-facilities interactions with regard to constraints on origins' out-flows and destinations' in-flows. In the proposed approach, graph attention networks are used to learn latent node representations and discover interpretable dependencies in a graph. A one-step normalization technique is used to replace the iterative balancing of doubly-constrained flows. The experiments show that the proposed approach outperforms the constrained versions of the gravity model and could be applicable to a wider range of edge regression tasks.
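
For context, the iterative balancing that the proposed one-step normalization replaces is classical iterative proportional fitting; a minimal sketch (the toy margins and seed matrix are assumptions, not the paper's data) is:

```python
def fit_doubly_constrained(seed, origins, dests, n_iter=100):
    """Iterative proportional fitting: alternately rescale rows to the
    origin out-flows and columns to the destination in-flows."""
    T = [row[:] for row in seed]
    for _ in range(n_iter):
        for i, row in enumerate(T):
            s = sum(row)
            T[i] = [x * origins[i] / s for x in row]
        for j in range(len(dests)):
            s = sum(row[j] for row in T)
            for row in T:
                row[j] *= dests[j] / s
    return T

# Toy 2x2 flow matrix with consistent margins (both totals equal 30).
seed = [[1.0, 1.0], [1.0, 1.0]]
T = fit_doubly_constrained(seed, origins=[10.0, 20.0], dests=[12.0, 18.0])
row_sums = [sum(r) for r in T]
col_sums = [sum(r[j] for r in T) for j in range(2)]
```

Each pass enforces one margin while perturbing the other, so convergence needs several sweeps; collapsing this into a single normalization step is what makes the learned model end-to-end trainable.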

A framework for modelling temporal networks using singular value decomposition and symbolic regression

ABSTRACT. The modelling of temporal networks is an important task in many real-world applications, including symptom interactions for mental health, epidemiology, and protein interactions \cite{jordan2020current,contreras2020temporal,lucas2021inferring,jin2009identifying,masuda2013predicting}. Temporal networks can be seen as dynamical systems: that is, systems of points, in our case nodes in a network, whose states, the edges connecting them, vary in time. Discovering the underlying equations governing these dynamical systems proves challenging, because changes in network structure are typically observed in the form of discrete jumps from one state to another, for example an edge between two nodes not being observed at the first time step and then appearing at the next. Here, we propose a hybrid statistical and deep learning framework that allows us to model temporal networks as continuous-time dynamical systems, discover a fitting set of differential equations describing them, and, exploiting that discovery, predict the time evolution of a network.

Differential equations are useful for modelling systems where the state of one variable can affect the trajectories of other variables. We observe this behavior in temporal networks; nodes' connections within the network can influence the formation and decay of edges between other nodes, for example in the phenomenon of preferential attachment observed in \cite{newman2001clustering,capocci2006preferential}. With this in mind, we might wish to draw on the rich mathematical literature of differential equation modelling.

In the common representation of networks as binary-valued adjacency matrices, the events recorded in a temporal sequence of networks correspond to topological events, such as the appearance or disappearance of a link. Because of the discrete nature of these events, directly modelling temporal networks as dynamical systems would require handling discrete jumps. The topological nature of temporal networks, and the discontinuous character of their temporal evolution, make it challenging to use differential equation techniques.

Here, we overcome the discreteness problem by interpreting networks as Random Dot Product Graphs, a well-established statistical model for complex networks that embeds nodes in a low-dimensional metric space \cite{athreya2017statistical}. In this way we translate the hard problem of modelling discrete events in the space of networks into the easier problem of modelling continuous change in the embedding space. Then, we define and use systems of Neural Network Differential Equations (NNDEs) to approximate the time evolution of the embedding space, and symbolic regression techniques to discover the functional form of the fitted NNDEs. These functional forms are interpretable (as they read as classic differential equations) and allow us to predict the evolution of the temporal networks forward in time.

In this manuscript, we show that the temporal network prediction problem can be successfully re-interpreted as a dynamical system modelling problem. In particular, we apply our proposed framework to three small example temporal networks with the hope of exploring the limitations and strengths of the proposed framework. The framework we are introducing is extremely flexible, and our research into the optimal structure of the Neural Networks used for the NNDEs has only just begun. We are confident that future research can identify more fitting Neural Network structures than the simple one adopted here. For this reason, we have not yet attempted to benchmark our model against other classic temporal network prediction methods. As it is completely general, we believe that the framework we are introducing can be usefully applied to areas of medicine, especially protein interaction networks; population dynamics for network ecology; and social network modelling. In particular, we discuss how specific domain knowledge relative to the prediction scenario can be taken into account, moving from NNDEs to Universal Differential Equations.

Analyzing the Attitudes of Consumers to Electric Vehicles Using Bayesian Networks

ABSTRACT. Road transport, as ‘a producer’ of carbon dioxide (CO2), causes high air pollution, especially in cities. A suggested solution to this situation is the effective diffusion of electric vehicles (EVs). Regulations in the European Union aim to encourage consumers to buy electric cars. In addition, car manufacturers are constantly expanding their range of hybrid vehicles (HEVs) and EVs. Nonetheless, consumers still have many doubts about adopting an EV. Our survey among social media users investigates the attitudes and readiness of consumers to adopt HEVs and EVs. To investigate the factors underlying consumers’ attitudes to such vehicles, Bayesian networks were used as an exploratory tool. This paper presents the results of this analysis.

16:30-17:00Coffee Break
17:00-18:40 Session 7A: MT 5
Location: 100
Ensemble Based Learning for Automated Safety Labeling of Prescribed Fires

ABSTRACT. Prescribed fires are controlled burns of vegetation that follow a burn plan to reduce fuel build-up and mitigate unanticipated wildfire impacts. To understand the risks associated with a prescribed burn, modern fire simulation tools can be used to simulate the progression of a prescribed fire as a function of burn conditions that include ignition patterns, wind conditions, fuel moisture and terrain information. Although fire simulation tools help characterize fire behavior, the unknown non-linear interactions between burn conditions require running multiple fire simulations (ensembles) to formulate an allowable range of burn conditions for a burn plan. Processing the ensembles is often a labor-intensive process run by user-domain experts who interpret the simulation results and carefully label the safety of the prescribed fire. The contribution of this paper is an ensemble-based learning algorithm that automates the safety labeling of ensembles created by a modern fire simulation tool. The automated safety labeling in this algorithm is done by first extracting important prescribed fire performance metrics from the ensembles and learning the allowable range of these metrics from a subset of manually labeled ensembles via a gradient-free optimization. Subsequently, the remaining ensembles can be labeled automatically based on the learned threshold values. The process of learning and automatic safety labeling is illustrated on 900 ensembles created by QUIC-Fire of a prescribed fire in the Yosemite, CA region. The results show that the learned automated safety labels match the manually generated safety labels created by fire domain experts in over 80\% of cases.

Impact of Mixed-Precision: a Way to Accelerate Data-Driven Forest Fire Spread Systems

ABSTRACT. Every year, forest fires burn thousands of hectares of forest around the world and cause significant damage to the economy and the people in the affected zones. For that reason, computational fire spread models are useful tools to minimize the impact of wildfires. It is well known that part of the forest fire forecast error comes from the uncertainty in the input data required by the models. To reduce the impact of this input-data uncertainty, different strategies have been developed in recent years. One of these strategies introduces a data-driven calibration stage in which the input parameters are adjusted according to the actual evolution of the fire using an evolutionary scheme; in particular, the approach described in this work uses a Genetic Algorithm (GA). This calibration strategy is computationally intensive and time-consuming. In the case of natural hazards, it is necessary to maintain a balance between accuracy and the time needed to calibrate the input parameters. Most scientific codes over-engineer the numerical precision required to obtain reliable results. In this paper, we propose a mixed-precision methodology to accelerate the calibration of the input parameters involved in forest fire spread prediction without sacrificing the accuracy of the forecast. The proposed scheme has been tested on a real fire. The results have led us to conclude that the mixed-precision approach does not compromise the quality of the result provided by the forest fire spread simulator and can also speed up the whole evolutionary prediction system.
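The core claim, that single precision leaves the result well within the input-data uncertainty, can be illustrated on a toy spread-rate kernel; the formula below is invented for illustration and is not the simulator's actual model:

```python
import numpy as np

# Hypothetical fire-spread kernel: a Rothermel-like rate of spread as a
# function of wind speed and fuel moisture (illustrative formula only).
def rate_of_spread(wind, moisture, dtype):
    wind = np.asarray(wind, dtype=dtype)
    moisture = np.asarray(moisture, dtype=dtype)
    return dtype(0.3) * (dtype(1.0) + wind) ** dtype(1.5) \
        * np.exp(-dtype(3.0) * moisture)

wind = np.linspace(0.0, 15.0, 100_000)
moisture = np.linspace(0.02, 0.3, 100_000)

ros64 = rate_of_spread(wind, moisture, np.float64)
ros32 = rate_of_spread(wind, moisture, np.float32)

# The relative error introduced by single precision stays near machine
# epsilon, far below the input-data uncertainty the GA has to absorb anyway.
rel_err = np.max(np.abs(ros32 - ros64) / ros64)
print(f"max relative error in float32: {rel_err:.2e}")
```

On hardware with faster single-precision throughput, the float32 evaluation is also where the speed-up of a mixed-precision scheme would come from.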

Wildfire Perimeter Detection via Iterative Trimming Method

ABSTRACT. The perimeter of a wildfire is essential for predicting its spread. Real-time information on an active wildfire can be obtained from Thermal InfraRed (TIR) data collected via aerial surveys or satellite imaging, but such data often lack an actual numerical parametrization of the wildfire perimeter. As such, additional image processing is needed to formulate closed polygons that provide the numerical parametrization of wildfire perimeters. Although a traditional image segmentation method (ISM) relying on image gradients or image continuity can be used to process a TIR image, such methods may fail to accurately represent the perimeter or boundary of an object when pixels representing high infrared values are sparse and not connected. An ISM-processed TIR image with sparse high-infrared pixels often results in multiple disconnected sub-objects rather than a complete object. This paper solves the problem of detecting wildfire perimeters from TIR images in three distinct image processing steps. First, Delaunay triangulation is used to connect the sparse and disconnected high-value infrared pixels. Subsequently, a closed (convex) polygon is created by joining adjacent triangles. The final step is an iterative trimming method that removes redundant triangles to find the closed (non-convex) polygon that parametrizes the wildfire perimeter. The method is illustrated on a typical satellite TIR image of a wildfire, and the result is compared to those obtained by traditional ISMs. The illustration shows that the three image processing steps summarized in this paper yield an accurate representation of the wildfire perimeter.
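The three steps can be sketched with SciPy; the point cloud and the single-pass trimming rule below are simplifications assumed for illustration (the paper trims iteratively on real TIR pixels):

```python
import numpy as np
from scipy.spatial import Delaunay

# Sparse "hot pixel" coordinates sketching a C-shaped (non-convex) burn area:
# two concentric arcs stand in for scattered high-infrared pixels.
theta = np.linspace(0.25 * np.pi, 1.75 * np.pi, 60)
pts = np.vstack([np.c_[r * np.cos(theta), r * np.sin(theta)] for r in (0.8, 1.0)])
pts += np.random.default_rng(1).normal(0.0, 0.01, pts.shape)

tri = Delaunay(pts)   # steps 1-2: triangulate, starting from the convex hull

def longest_edge(simplex):
    a, b, c = pts[simplex]
    return max(np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a))

# Step 3, heavily simplified: where the paper trims redundant triangles
# iteratively, a single pass here drops each triangle with an over-long edge,
# carving the convex hull back into the non-convex C shape.
keep = [s for s in tri.simplices if longest_edge(s) < 0.3]
print(f"{len(tri.simplices)} triangles -> {len(keep)} kept after trimming")
```

The perimeter polygon would then be read off as the boundary edges of the kept triangles.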

ITS4Tsunamis: An Intelligent Transportation System for Tsunamis Combining CEP, CPN and Fuzzy Logic

ABSTRACT. Tsunamis and earthquakes have a great impact on human lives, infrastructure, and the economy. Although preventing tsunamis from occurring is impossible, minimizing their negative effects is in our hands. The aim of ITS4Tsunamis is to provide safer routes for emergency and rescue vehicles. This system must consider the information from the tsunami alert system and the road state, combined with vehicle performance. Complex Event Processing (CEP) technology allows us to gather and process the information provided by authorities to establish the alert level. A Fuzzy Inference System (FIS) can be used to handle the uncertainty in road-status concepts, such as flooding, objects on the road, and alert levels, and to assist authorities in determining whether roads are accessible. The information obtained through these technologies can then be used in a Colored Petri Net (CPN) model to obtain safer routes. This proposal has been applied to the Spanish city of Cádiz, due to its population density and its location on a small peninsula close to an active tectonic rift.
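A minimal sketch of the fuzzy road-status idea, with invented membership breakpoints and a single Mamdani-style rule (the paper's actual FIS is richer):

```python
# Illustrative fuzzy-inference sketch; the breakpoints and the single rule
# below are assumptions made up for this example, not the paper's FIS.
def up(x, a, b):
    """Rising shoulder: 0 below a, 1 above b, linear in between."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def down(x, a, b):
    """Falling shoulder: 1 below a, 0 above b."""
    return 1.0 - up(x, a, b)

def road_accessibility(flood_cm, debris):
    """Degree in [0, 1] to which a road is passable for emergency vehicles."""
    shallow_water = down(flood_cm, 10, 40)   # fordable water depth, in cm
    clear_lane = down(debris, 0.2, 0.8)      # fraction of lane blocked
    # Mamdani-style AND: accessible if water is shallow AND the lane is clear.
    return min(shallow_water, clear_lane)

print(road_accessibility(5, 0.1))    # dry, clear road
print(road_accessibility(60, 0.1))   # deeply flooded road
print(road_accessibility(25, 0.5))   # borderline case
```

The resulting degree, rather than a hard yes/no, is what a downstream CPN routing model could weight edges with.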

The first scientific evidence for the hail cannon

ABSTRACT. The hail cannon has been used to try to prevent hail storms since the 19th century. The idea is to create a sequence of shock waves that prevents the formation of hail clouds before the storm. Modern hail cannons ignite a mixture of acetylene and oxygen in the lower chamber; the resulting sequence of explosions travels through the neck and into the cone of the cannon, creating shock waves. The shock waves propagate upwards and are supposed to prevent the formation of the cloud. According to Wikipedia, there is no scientific evidence for the hail cannon, even though it is commonly used in several countries. In this paper, we propose a numerical simulation to verify the idea of the hail cannon, employing isogeometric analysis and variational splitting methods. We compare our numerical results with experimental data and argue that our numerical simulation indeed provides scientific evidence for the hail cannon. We also compare our numerical simulations with experimental measurements performed with a drone before and after a sequence of generated shock waves.

Implementation of Coupled Numerical Analysis of Magnetospheric Dynamics and Spacecraft Charging Phenomena via Code-To-Code Adapter (CoToCoA) Framework

ABSTRACT. This paper addresses the implementation of a coupled numerical analysis of the Earth's magnetospheric dynamics and spacecraft charging (SC) processes based on our in-house Code-To-Code Adapter (CoToCoA). The basic idea is that the magnetohydrodynamic (MHD) simulation reproduces the global dynamics of the magnetospheric plasma, and its pressure and density data at local spacecraft positions are provided and used for the SC calculations. This allows us to predict spacecraft charging that reflects the dynamic changes of the space environment. CoToCoA defines three types of independent programs, Requester, Worker, and Coupler, which are executed simultaneously in the analysis. Since the MHD side takes the role of invoking the SC analysis, the Requester and Worker roles are assigned to the MHD and SC calculations, respectively. Coupler then supervises the necessary coordination between them. Physical data exchange between the models is implemented using MPI remote memory access functions. The developed program has been tested to ensure that it works properly as a coupled physical model. The numerical experiments also confirmed that adding the SC calculations has a rather small impact on MHD simulation performance in executions with up to about 500 processes.

17:00-18:40 Session 7B: MT 6-ol
Location: 303
Excessive Internet Use in the Organizational Context: A Proposition of the New Instrument to Measure Cyberloafing at Work

ABSTRACT. Cyberloafing, considered non-work-related excessive internet use at work, is embedded in everyday work across organizations. Despite growing concerns about the waste of energy, time, money, and corporate data security caused by cyberloafing, we still do not know whether it is detrimental or beneficial to employees and organizations. The first aim of this study was to examine which behaviors in modern organizations fall within the cyberloafing framework. We developed and empirically verified a new Cyberloafing Scale, CBLS-15, to measure four dimensions and a total score of this phenomenon. The CBLS-15 scale includes 15 items grouped into four dimensions: 1) Information browsing (IB), 2) Social networking (SN), 3) Personal matters (PM), and 4) Gambling/Adult content (GA). We found positive associations of cyberloafing with workload, cognitive demands, role conflict, and stress, and negative associations with work satisfaction and work performance, supporting the external validity of the CBLS-15 measure. The practical implications of the new measure of cyberloafing and its relationships with work-related variables are discussed.

Multi-agent cellular automaton model for traffic flow considering the heterogeneity of human delay and accelerations

ABSTRACT. We propose a multi-agent cellular automaton model for analysing traffic flow with various types of agents (drivers). Agents may differ in their vehicles’ acceleration and deceleration values and in the delay of their decision making. We propose a model in which the main parameters are chosen to reflect different types of driving. Building on previous work, accurate data for possible accelerations and decelerations are used. Additionally, to accurately reflect the cars’ dimensions and their limited movement in a traffic jam, a small-cell cellular automaton is used, in which a set of cells represents one car. We present the results of a numerical simulation showing the influence of the main driving-type factors on the traffic flow.
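A minimal small-cell CA with heterogeneous acceleration and reaction delay can be sketched as follows; all parameter values are illustrative, not the paper's calibrated ones:

```python
import random

# Minimal small-cell CA sketch: each car occupies CAR_LEN cells on a ring
# road, and agents differ in acceleration rate and decision delay.
random.seed(42)
ROAD, CAR_LEN, VMAX, STEPS = 200, 4, 10, 100

cars = [{"pos": i * 25, "v": 0,
         "acc": random.choice([1, 2]),    # heterogeneous acceleration
         "delay": random.choice([0, 1])}  # extra reaction delay, in steps
        for i in range(8)]

def step(t):
    cars.sort(key=lambda c: c["pos"])
    for i, c in enumerate(cars):
        ahead = cars[(i + 1) % len(cars)]
        gap = (ahead["pos"] - c["pos"] - CAR_LEN) % ROAD
        if t % (c["delay"] + 1) == 0:              # delayed drivers react less often
            c["v"] = min(c["v"] + c["acc"], VMAX)  # accelerate at the agent's rate
        c["v"] = min(c["v"], gap)                  # never close more than the gap
    for c in cars:                                 # synchronous position update
        c["pos"] = (c["pos"] + c["v"]) % ROAD

for t in range(STEPS):
    step(t)
mean_speed = sum(c["v"] for c in cars) / len(cars)
print(f"mean speed after {STEPS} steps: {mean_speed:.1f} cells/step")
```

Because speeds are clamped to the current gap before the synchronous move, cars never overlap, which is the invariant the small-cell representation relies on.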

Modelling the interplay between Chronic Stress and Type 2 Diabetes onset

ABSTRACT. Stress has become part of day-to-day life in the modern world. A major pathological repercussion of chronic stress is Type 2 Diabetes (T2D). Modelling T2D as a complex biological system involves combining under-the-skin and outside-the-skin parameters to properly define the dynamics involved. In this study, a compartmental model is built from the various interacting players that constitute the hallmarks of the progression of this disease. The compartments of this model are tested in a glucose-disease progression setting with the help of an adjacent minimal model. Temporal dynamics of the glucose-disease progression were simulated to explore the contribution of different model parameters to T2D onset. The model simulations reveal chronic stress as a critical modulator of T2D disease progression.

Elements of Antirival Accounting with Shareable Non-Fungible Tokens

ABSTRACT. Accounting with antirival tokens, i.e., accounting based on shareable units that gain value with increased use, enables efficient and effective collective action. Cryptocurrencies are ubiquitous and enable a robust decentralisation of the money creation process. However, most cryptocurrencies are rival tokens, which can naturally represent — and be exchanged for — rival goods, such as a cup of coffee. To our knowledge, utilising the open DLT token creation process and the resulting value dynamics has not previously been studied in the context of antirival tokens. Antirival systems of account would be a natural fit for the economy of antirival goods because the logic of value creation and accounting would be compatible. It is especially difficult to find an allocatively efficient price for antirival goods, such as data, measured in rival units of account. We present an antirival accounting system in which the fundamental operation is sharing instead of exchanging, and study it with system dynamics models and simulations. We illustrate our arguments by presenting a system known as Streamr Awards that defines three tokens of a fundamentally novel type, the shareable non-fungible token (sNFT). We simulate the functioning of one of these tokens in the work allocation of a self-directed online community.

Global network of biofuel feedstock commodity futures

ABSTRACT. This study analyzes two types of networks, constructed through correlation and Granger causality, consisting of eight commodity futures that can be used as biofuel feedstock. In the correlation network, futures in emerging markets, for example crude palm oil (CPO) and sunflower futures, are isolated from the other futures, whereas those in mature markets are strongly correlated. In the Granger causality network, however, the CPO and sunflower futures markets are influential: CPO and sunflower futures both provide and receive information, with high degree centrality inside the network. In particular, CPO futures act as an important bridge node connecting other markets, with high betweenness centrality. The findings suggest that the futures markets of biofuel feedstock commodities are significantly integrated, and that the role of each market cannot be ignored regardless of the trading environment.
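The two centrality notions can be illustrated on a toy directed network; the edges below are invented for the example, not the ones estimated from futures data:

```python
import networkx as nx

# Toy directed "Granger causality" network: in this made-up topology, CPO
# receives information from other emerging-market futures and passes it on
# to the mature-market side, making it the bridge node.
G = nx.DiGraph([
    ("sunflower", "CPO"), ("rapeseed", "CPO"),        # emerging side
    ("CPO", "corn"), ("CPO", "soybean_oil"),          # bridge edges
    ("corn", "ethanol"), ("soybean_oil", "ethanol"),  # mature side
])

bc = nx.betweenness_centrality(G)   # bridge role: shortest-path brokerage
deg = dict(G.degree())              # degree centrality: in- plus out-degree

print("most central bridge:", max(bc, key=bc.get))
print("CPO degree:", deg["CPO"])
```

Every emerging-to-mature shortest path runs through CPO here, so its betweenness dominates, mirroring the bridge-node finding in the abstract.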

A Study of Imaging in the Existence of Resonance with Multiple Scattering in Isotropic Point-Like Discrete Random Media

ABSTRACT. When waves propagate through a random medium consisting of many small bodies that can reflect or scatter them, the emitted waves may be scattered multiple times by one or many inhomogeneities of the medium; this is called multiple scattering. The recorded waves therefore carry information about the medium through which they have propagated. However, imaging becomes very difficult to perform in a random medium when multiple scattering is strong due to resonance. Resonance causes image distortion arising from the underlying interactions of multiply scattered waves at resonance frequencies.

This article presents an imaging study that simulates image distortion under multiple scattering with the Foldy-Lax-Lippmann-Schwinger formalism, employed for the multiply scattered waves, in the case of an ensemble of randomly distributed point-like scatterers. The study aims to identify the cause of the imaging problems in discrete random media when resonance occurs in multiple scattering.

We introduce an approach to repair the defective images by pruning the signals whose strength exceeds a permitted threshold. This approach is called the prune-and-average method. Its advantage is that it removes resonance spikes even when the information about the resonance is very complicated and the scattering amplitudes are unknown.

We demonstrate the method with numerical simulations showing that it can be used to reconstruct images of objects in point-like discrete random media that are distorted by resonance in multiple scattering.
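A minimal sketch of the prune-and-average step on synthetic stand-in data (no Foldy-Lax-Lippmann-Schwinger physics, just signals with injected spikes):

```python
import numpy as np

rng = np.random.default_rng(7)

# 50 recorded realizations over random scatterer configurations (synthetic
# stand-in data); a few realizations hit a resonance and carry large spikes.
t = np.linspace(0.0, 1.0, 400)
signals = np.sin(2 * np.pi * 5 * t) + rng.normal(0.0, 0.1, (50, 400))
resonant = rng.choice(50, size=5, replace=False)
signals[resonant] += 40.0 * rng.normal(size=(5, 400))   # resonance spikes

# Prune-and-average: discard every realization whose peak strength exceeds
# the permitted threshold, then average the surviving realizations.
threshold = 5.0
ok = np.max(np.abs(signals), axis=1) < threshold
averaged = signals[ok].mean(axis=0)

print(f"kept {int(ok.sum())} of {len(signals)} realizations")
```

Note the appeal mentioned in the abstract: the pruning criterion needs only a strength threshold, not the scattering amplitudes themselves.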

17:00-18:40 Session 7C: AIHPC4AS 3-ol
Location: 319
ML-based proactive control of industrial processes

ABSTRACT. This paper discusses the use of optimal control to improve the performance of industrial processes. Industry 4.0 technologies play a crucial role in this approach by providing real-time data from physical devices. Additionally, simulations and virtual sensors allow for proactive control of the process by predicting potential issues and taking measures to prevent them. The paper proposes a new methodology for proactive control based on machine learning techniques that combines physical and virtual sensor data obtained from a simulation model. A deep temporal clustering algorithm is used to identify the process stage, and a control scheme directly dependent on this stage determines the appropriate control actions to be taken. The control scheme is created by a human expert, based on best industrial practices, making the whole process fully interpretable. The performance of the developed solution is demonstrated in a case study of gas production from an underground reservoir. The results show that the proposed algorithm can provide proactive control, reducing downtime, increasing process reliability, and improving performance.

Chemical Mixing Simulations with Integrated AI Accelerator

ABSTRACT. In this work, we develop a method for integrating an AI model with a CFD solver to predict the output of chemical mixing simulations. The proposed AI model is based on a deep neural network with a variational autoencoder that is managed by our AI supervisor. We demonstrate that the developed method allows us to accurately accelerate steady-state simulations of chemical reactions performed with the MixIT solver from Tridiagonal Solutions.

In this paper, we investigate the accuracy and performance of AI-accelerated simulations in three different scenarios: i) prediction on the same mesh geometry as used during training; ii) a modified geometry of the tube in which the ingredients are mixed; iii) a modified geometry of the impeller used to mix the ingredients.

Our AI model is trained on a dataset containing 1500 samples of simulated scenarios and can accurately predict the process of chemical mixing under various conditions. We demonstrate that the proposed method achieves accuracy exceeding 90% and reduces the execution time by up to 9 times.

Improving Group Lasso for high-dimensional categorical data

ABSTRACT. Sparse modeling, or model selection, with categorical data is challenging even for a moderate number of variables, because roughly one parameter is needed to encode one category or level. The Group Lasso is a well-known, efficient algorithm for selecting continuous or categorical variables, but all estimates related to a selected factor usually differ, so a fitted model may not be sparse, which makes the model difficult to interpret. To obtain a sparse solution of the Group Lasso, we propose the following two-step procedure: first, we reduce data dimensionality using the Group Lasso; then, to choose the final model, we apply an information criterion to a small family of models prepared by clustering levels of individual factors. Our procedure thus reduces the dimensionality of the Group Lasso solution and strongly improves the interpretability of the final model. Importantly, this reduction leads to little loss in prediction error. We investigate the selection correctness of the algorithm in a sparse high-dimensional scenario. We also test our method on synthetic as well as real data sets and show that it performs better than state-of-the-art algorithms with respect to prediction accuracy, model dimension, and execution time. Our procedure is contained in the R package DMRnet, available in the CRAN repository.

Parallel algorithm for concurrent integration of three-dimensional B-spline functions

ABSTRACT. In this paper we discuss concurrent integration applied to the 3D isogeometric finite element method. It has been proven that integration over individual elements with Gaussian quadrature is independent from element to element, and a concurrent algorithm for integrating a single element has been created. The suboptimal integration algorithm over each element is developed as a sequence of basic atomic computational tasks, and the dependency relation between them is identified. We show how to prepare independent sets of tasks that can be executed concurrently and automatically on a GPU card. This is done with the help of Diekert's graph, which expresses the dependencies between tasks. The execution time of the concurrent GPU integration is compared with that of the sequential integration executed on a CPU and shows (nearly) perfect scalability.
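The element-wise independence the paper exploits can be seen in a 1D sketch: each element's Gaussian quadrature touches only its own data, so the per-element tasks could be dispatched to independent GPU threads (the integrand, rule order, and mesh below are illustrative, not the paper's 3D B-spline setting):

```python
import numpy as np

# 4-point Gauss-Legendre rule on the reference interval [-1, 1].
xg, wg = np.polynomial.legendre.leggauss(4)

def integrate_element(f, a, b):
    """Map the reference rule to element [a, b] and sum the weighted samples."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * np.sum(wg * f(mid + half * xg))

edges = np.linspace(0.0, np.pi, 33)   # 32 elements covering [0, pi]

# Each list entry is an independent task: no element reads another element's
# data, which is exactly what makes the integration embarrassingly parallel.
contribs = [integrate_element(np.sin, edges[i], edges[i + 1]) for i in range(32)]
total = sum(contribs)
print(f"integral of sin on [0, pi] ~ {total:.10f}")
```

In the paper the finer-grained decomposition into atomic tasks inside each element is what the Diekert dependency graph organizes; the sketch only shows the coarser element-level independence.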

17:00-18:40 Session 7D: BBC 3-ol
Location: 220
Replacing the FitzHugh-Nagumo electrophysiology model by physics-informed neural networks

ABSTRACT. This paper presents a novel approach to replace the FitzHugh-Nagumo (FHN) model with physics-informed neural networks (PINNs). The FHN model, a system of two ordinary differential equations, is widely used in electrophysiology and neurophysiology to simulate cell action potentials. However, in tasks such as whole-organ electrophysiology modeling and drug testing, the numerical solution of millions of cell models is computationally expensive. To address this, we propose using PINNs to accurately approximate the two variables of the FHN model at any time instant, any initial condition, and a wide range of parameters. In particular, this eliminates the need for causality after training. We employed time window marching and increased point cloud density on transition regions to improve the training of the neural network due to nonlinearity, sharp transitions, unstable equilibrium, and bifurcations of parameters. The PINNs were generated using NVIDIA's Modulus framework, allowing efficient deployment on modern GPUs. Our results show that the generated PINNs could reproduce FHN solutions with average numerical errors below 0.5%, making them a promising lightweight computational model for electrophysiology and neurophysiology research.
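For reference, the FHN system that the PINNs approximate can be integrated directly; the explicit-Euler loop below uses one standard parameter set, whereas the paper trains over ranges of parameters and initial conditions:

```python
# The classic FitzHugh-Nagumo system, with standard parameter values:
#   dv/dt = v - v**3/3 - w + I_ext
#   dw/dt = eps * (v + a - b*w)
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5

def fhn_rhs(v, w):
    return v - v**3 / 3 - w + I_ext, eps * (v + a - b * w)

# Explicit-Euler reference trajectory from one initial condition; a trained
# PINN replaces this whole time-marching loop with a single network
# evaluation per queried time instant.
dt, steps = 0.01, 50_000
v, w = -1.0, 1.0
for _ in range(steps):
    dv, dw = fhn_rhs(v, w)
    v, w = v + dt * dv, w + dt * dw

print(f"state after {steps * dt:.0f} time units: v={v:.3f}, w={w:.3f}")
```

It is this per-cell time-marching cost, repeated over millions of cells in whole-organ models, that motivates replacing the solver with a network evaluation.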

Sensitivity Analysis of a Two-compartmental Differential Equation Mathematical Model of Multiple Sclerosis using Parallel Programming

ABSTRACT. Multiple Sclerosis (MS) is a neurodegenerative disease that involves a complex sequence of events at distinct spatiotemporal scales, whose cause is not completely understood. Representing such biological phenomena with mathematical models can be useful to gain insights and test hypotheses, improving the understanding of the disease and suggesting new courses of action to either prevent it or treat it with fewer collateral effects. Because they represent all stages of the disease, such mathematical models are frequently computationally demanding. This work compares parallel programming strategies to optimize the execution time of a spatiotemporal two-compartmental mathematical model of plaque formation in MS and applies the best strategy found to perform the sensitivity analysis of the model.

Toward the Human Nanoscale Connectome: Neuronal Morphology Format, Modeling, and Storage Requirement Estimation

ABSTRACT. The human brain is an enormous scientific challenge. Knowledge of the complete map of neural connections (connectome) is essential for understanding how neural circuits encode information and how the brain works in health and disease. Nanoscale connectomes have been created for a few small animals, but not for humans. Moreover, existing models and data formats for neuron morphology description are “merely” at the microscale. This work (1) formulates a complete set of morphologic parameters of the entire neuron at the nanoscale and introduces a new neuronal nanoscale data format; (2) proposes four geometric neuronal models based on the introduced format: straight wireframe, enhanced wireframe, straight polygonal, and enhanced polygonal; and (3) estimates the storage required for these neuronal models and the synaptome (all synapses). The straight wireframe model requires 18PB; the parabolic wireframe model needs 36PB and the cubic model 54PB. The straight polygonal model requires 24PB; the parabolic polygonal model needs 48PB and the cubic model 72PB. The synapses can be calculated from the simplest straight wireframe model, and the storage required for the nanoscale synaptome is 11PB. To the best of my knowledge, this is the first work providing, for the human brain, (1) the complete set of neuronal morphology parameters, (2) a neuronal nanoscale data format, (3) a storage requirement estimation for the nanoscale synaptome, and (4) storage estimates for volumetric and geometric neuronal morphology models at the micro- and nanoscales. This work opens an avenue in human brain nanoscale modeling, enabling the estimation of the computing resources required for the calculation of the nanoscale connectome.

17:00-18:40 Session 7E: QCW 3
Location: 120
Simulating sparse and shallow Gaussian Boson Sampling

ABSTRACT. Gaussian Boson Sampling (GBS) is one of the most popular quantum supremacy protocols: it does not require universal control over the quantum system, which favors current photonic experimental platforms, and there is strong theoretical evidence for its computational hardness. Over the years, several algorithms have been proposed to increase the performance of classically simulating GBS under certain constraints, e.g., a low number of photons or shallow interferometers. Most existing improvements to the classical simulation of GBS speed up the probability calculation, leaving the sampling algorithm itself untouched. This paper provides an asymptotically better sampling algorithm in the case of low squeezing and shallow circuits.

Constructing generalized unitary group designs

ABSTRACT. Unitary designs are essential tools in several quantum information protocols. As with other design concepts, unitary designs are mainly used to facilitate averaging over a relevant space, in this case the unitary group U(d). The most appealing case is when the elements of the design form a group, which is then called a unitary group design. However, the application of group designs as a tool is limited by the fact that there is no trivial construction method for obtaining even a group 2-design in arbitrary dimension. In this paper, we present novel construction methods, based on the representation theory of the unitary group and its subgroups, that allow building higher-order unitary designs from group designs.

Simulation of quantum computers on tensor streaming processors

ABSTRACT. While the compilation of quantum algorithms is an inevitable step towards executing programs on quantum processors, the decomposition of programs into elementary quantum operations poses a serious challenge, mostly because in the NISQ era it is advantageous to compress the executed programs as much as possible. The problem of optimizing quantum circuits becomes especially difficult above 5 qubits due to the exponential scaling of computational complexity and the appearance of barren plateaus in the optimization landscape. In our recent work we proposed the utilization of FPGA-based data-flow engines to partially address one of these limiting aspects. By speeding up quantum computer simulations, we managed to decompose unitaries of up to 9 qubits with an outstanding circuit compression rate. However, the limited resources available on the FPGA chips used in our experiments prevented us from further scaling up our quantum computer simulator implementation. To circumvent this limiting factor of spatial programming on FPGA chips, we found a novel use of the Groq™ Tensor Streaming Processors (TSPs), which, although used broadly for machine learning, can also provide a high-performance quantum computer simulator. We demonstrate that such data-flow hardware is indeed competitive for these particular problem sizes, which are of practical importance and a subject of active research.

17:00-18:40 Session 7F: MMS 3-ol
Location: B103
Scale-bridging workflows in materials science using FireWorks

ABSTRACT. Scientific workflows are nowadays an established method to implement multiscale models in computational materials science. In this work, we evaluate the FireWorks workflow management system for multiscale modelling and simulation of materials and illustrate the main issues based on excerpts from the workflows developed for two application use cases. We find that FireWorks provides all the necessary functionality but requires advanced skills and high development effort.

Integrating remote sensing data in an agent-based modeling of the Tigray conflict

ABSTRACT. The UN International Organization for Migration (IOM) estimated at least 2 million internally displaced people in Tigray, Ethiopia. The vast displacement was due to the conflict that erupted in November 2020. Data from IOM’s Displacement Tracking Matrix (DTM) show that internally displaced persons (IDPs) have established 280 informal settlements across Ethiopia since the beginning of the conflict. However, due to restrictions on physical access, IOM has not been able to collect any data on IDP settlements in Tigray, and has not collected IDP settlement location data elsewhere in Ethiopia since early July 2021. Without knowledge of IDP settlement locations and movements in Tigray, the humanitarian and research communities are considerably restricted in their ability to respond. We used IDP settlement location, establishment timing, and growth data, derived from Landsat and Sentinel-2 satellite retrievals using time-series disturbance detection approaches, to explore model sensitivity and improve simulations of displaced person movements using the agent-based Flee code (https://flee.readthedocs.io). Our results demonstrate a novel approach to informing conflict-migration models with remote sensing retrievals where data are scarce or unavailable.

17:00-18:40 Session 7G: CoDiP 2-ol
Location: B115
An Approach for Probabilistic Modeling and Reasoning of Political Networks

ABSTRACT. This work proposes a methodology for a sounder assessment of centrality, one of the most important concepts of network science, in the context of voting networks, which arise in various situations from politics to online surveys. Here, network nodes can represent members of a parliament, and each edge weight is intended as a probabilistic proxy for the alignment between the edge's endpoints. To achieve this goal, different methods to quantify the agreement between peers based on their voting records were carefully considered and compared from a theoretical as well as an experimental point of view. The results confirm the usefulness of the ideas presented herein, which are flexible enough to be employed in any scenario that can be characterized by the probabilistic agreement of its components.

An Analysis of Political Parties Cohesion based on Congressional Speeches

ABSTRACT. Giving speeches is an intrinsic part of the work of parliamentarians, as they expose facts as well as their points of view and opinions on several subjects. This article details an analysis of the relations between members of the lower house of the National Congress of Brazil during the term of office between 2011 and 2015, based on transcriptions of their house speeches. To accomplish this goal, Natural Language Processing and Machine Learning were used to assess pairwise relationships between members of the congress, which were then observed from the perspective of Complex Networks. Node clustering was used to evaluate multiple speech-based measures of distance between each pair of political peers, as well as the resulting cohesion of their political parties. Experimental results showed that one of the proposed measures, based on aggregating similarities between each pair of speeches, is superior to a previously established alternative that considers concatenations of these elements for each individual when grouping parliamentarians organically.

Leveraging Social Contagion to Foster Consensus in Collective Decision-Making

ABSTRACT. Appropriate social transfer of information among units of a multi-agent system (MAS) is a prerequisite for an effective collective response to changing environmental conditions. From the network perspective, this social information transfer requires understanding the interplay between network topology and agents’ dynamics [1]. Specifically, information propagation through the MAS can either take the form of a simple contagion—associated with pairwise interactions—or a complex contagion—involving social influence and reinforcement [2, 3]. The key role played by the network topology in this social information transfer has been observed and analyzed in a study involving a swarm of robots subjected to a leader-follower consensus protocol [1]. In that work, a nontrivial relationship between the pace of external perturbations and the network degree is reported. The emergent collective response to slow-changing perturbations increases with the degree of the interaction network, while the opposite is true for the response to fast-changing ones. For instance, when considering the task of distributed monitoring of a slow-changing environment, one could implement a high-degree connectivity for the MAS operations. On the other hand, if a decentralized MAS has to contend with a fast target—e.g., faster than the agents themselves [4]—then a drastic reduction in the network degree would be required to ensure an effective collective target tracking. Subsequently, Horsevad et al. [2] revealed the possibility of complex contagion with a leader-follower consensus model of distributed decision-making lacking thresholds and/or nonlinearities. Prior to that work, complex contagions were limited to decision-making models based on a binary decision variable with a threshold [3]. The work by Horsevad et al. [2] highlights that other network properties (beyond the degree distribution) influence the social contagion process.
The existence of a transition from a simple contagion to a complex one hinges on knowing which network property plays a key role. At low frequency of the leader—i.e., when dealing with slow-changing perturbations—the Kirchhoff index has been shown to be a robust metric when it comes to identifying a simple contagion. At high frequency of the leader—i.e., with rapidly evolving circumstances—the Kirchhoff index can no longer be used to characterize the social contagion. Instead, the clustering coefficient has been shown to be highly correlated with the existence of a complex contagion. One serious limitation of these works is the lack of a systematic way of characterizing the type of social contagion for a given collective decision-making protocol. What has been found true for the first-order leader-follower consensus might not hold for other forms of distributed decision-making. Furthermore, the Kirchhoff index and average clustering coefficient may not be the appropriate network properties to decipher which type of social contagion is unfolding. It is worth adding that these network metrics only incorporate features of the network topology, without accounting for the agents’ dynamics taking place over this network. Here, we propose a novel approach to address this issue by considering a generalized metric that embodies network topology along with agents’ dynamics. Spectral graph theory is a powerful tool when considering distributed dynamics of nodes over a complex network. Indeed, the spectrum of the graph Laplacian offers valuable information about both network structure and agents’ dynamics. The eigenvalues of the Laplacian matrix have been used for community detection and spectral clustering. As a matter of fact, the spectrum of the graph Laplacian can reveal information about both global and local properties of the network, such as the number of connected components, the clustering coefficient, and the spectral gap.
Furthermore, the Kirchhoff index can be expressed, up to a factor of the number of nodes, as the sum of the inverses of the nonzero Laplacian eigenvalues. Also, different parts of the spectrum can be associated with community structures, motif multiplication, and bipartiteness of the network graph. This approach has the potential to extend our results to any collective decision-making protocol beyond the simple leader-follower consensus.

References
1. Mateo, D., Horsevad, N., Hassani, V., Chamanbaz, M., & Bouffanais, R. (2019). Optimal network topology for responsive collective behavior. Science Advances, 5(4), eaau0999.
2. Horsevad, N., Mateo, D., Kooij, R. E., Barrat, A., & Bouffanais, R. (2022). Transition from simple to complex contagion in collective decision-making. Nature Communications, 13(1), 1442.
3. Centola, D., Eguíluz, V. M., & Macy, M. W. (2007). Cascade dynamics of complex propagation. Physica A: Statistical Mechanics and its Applications, 374(1), 449-456.
4. Kwa, H. L., Kit, J. L., & Bouffanais, R. (2020). Optimal swarm strategy for dynamic target search and tracking. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 672-680).
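The spectral quantities the abstract relies on are straightforward to compute. The sketch below, using only NumPy and an illustrative 4-node cycle graph (an assumption made here, not an example from the paper), shows the Laplacian spectrum and the Kirchhoff index as N times the sum of inverses of the nonzero eigenvalues, valid for a connected graph.

```python
# Sketch: Laplacian spectrum and Kirchhoff index of a small example graph.
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def kirchhoff_index(adj):
    """Kf = N * sum of inverses of the nonzero Laplacian eigenvalues
    (connected graph assumed)."""
    n = adj.shape[0]
    eig = np.linalg.eigvalsh(laplacian(adj))
    nonzero = eig[eig > 1e-10]
    return n * np.sum(1.0 / nonzero)

# Illustrative example: the 4-node cycle graph C4
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
eig = np.linalg.eigvalsh(laplacian(adj))
print("Laplacian spectrum:", np.round(eig, 6))   # 0, 2, 2, 4 for C4
print("Kirchhoff index:", kirchhoff_index(adj))  # 4 * (1/2 + 1/2 + 1/4) = 5.0
```

The multiplicity of the zero eigenvalue gives the number of connected components, and the smallest nonzero eigenvalue is the spectral gap, connecting this computation directly to the global and local network properties mentioned above.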

Modeling mechanisms of school segregation and policy interventions: a complexity perspective

ABSTRACT. We revisit the literature on school choice and school segregation from the perspective of complexity theory. This paper argues that commonly found features of complex systems are all present in the mechanisms of school segregation. These features emerge from the interdependence between households, their interactions with school attributes, and the institutional contexts in which they reside. We propose that a social complexity perspective can provide new generative explanations of resilient patterns of school segregation and may help identify policies towards robust school integration. This requires a combination of theoretically informed computational modeling with empirical data about specific social and institutional contexts. We argue that this combination is missing in the methodologies currently employed in the field. Pathways and challenges for developing it are discussed, and examples are presented demonstrating how new insights and possible policies countering segregation can be obtained for the case of primary school segregation in the city of Amsterdam.

The complexities of policy, data and computational methods: Charting a new, case-based, social science grounded, AM-Smart methods approach

ABSTRACT. In the globalized worlds in which we live, governments cannot escape the uncertainty, interdependence, and complexity of current public policy. Nor can they escape the urgent need for sweeping policy reform – infrastructural, political, environmental, social – to address this complexity, including the coordination of local, national, and international policies and stakeholders. Governments are also presently confronted with a big-data flood of information and the promised ‘sales pitch’ of computational science – from modelling and simulation to data science and artificial intelligence – to correctly guide decision-making. While such complexities of policy, data and methods are not an entirely new problem, what is of critical importance is how ineffective the responses to them have largely been. The promise of complexity and computation has struggled to live up to expectation. A shift has emerged in the policy landscape, albeit a minor one, involving a ‘social science turn’ in complexity and computational modelling. This ‘turn’ involves using the theories, concepts, methods and empirical insights of social science to inform the complexity and computational sciences. In terms of specifics, leading areas of research include co-production for simulation; participatory design; rigorous stakeholder engagement; a resurgence in systems mapping; mixed-methods development, such as qualitative comparative analysis and agent-based modelling; addressing issues of power and inequalities in the policy landscape, including grounding policy in a complexities-of-place approach; adopting a case-based perspective; and co-designing more easily accessible computational modelling platforms, called AM-Smart methods.

This paper will seek to outline this ‘social science turn’ in complexity and computational modelling and its implications for improving public policy. This outline will include (1) a brief overview of the above advances; (2) a quick introduction to a methods platform we developed, COMPLEX-IT, which has incorporated many of the social science turn advances into its design; and (3) a critical reflection on the strengths and weaknesses of the social science turn, including barriers to and levers for advancing the utility of this approach across team members with distinct roles, perspectives, and intersections with public policy work, all with the goal of helping to advance the field of computational policy/diplomacy.