FRCCS 2023: THIRD FRENCH REGIONAL CONFERENCE ON COMPLEX SYSTEMS
PROGRAM FOR WEDNESDAY, MAY 31ST

09:00-09:45 Session Keynote Speaker S1
09:00
The Dynamics of Higher-Order Networks: the Effect of Topology and Triadic Interactions

ABSTRACT. Higher-order networks capture the interactions among two or more nodes in complex systems ranging from the brain to chemical reaction networks. Here we show that higher-order interactions are responsible for new dynamical processes that cannot be observed in pairwise networks. We will cover how topology is key to defining the synchronization of topological signals, i.e. dynamical signals defined not only on nodes but also on links, triangles and higher-dimensional simplices in simplicial complexes. Interestingly, topological synchronization dictated by the Dirac operator can lead to the spontaneous emergence of a rhythmic phase in which the synchronization order parameter displays low-frequency oscillations, which might shed light on possible topological mechanisms for the emergence of brain rhythms. We will also reveal how triadic interactions can turn percolation into a fully-fledged dynamical process in which nodes can turn on and off intermittently, in a periodic fashion or even chaotically, leading to period doubling and a route to chaos of the percolation order parameter.

09:45-10:45 Session Oral O1: Network Analysis
09:45
Diagnosing Network Attacks by a Machine-Learning Approach
PRESENTER: Davide Coppes

ABSTRACT. In this note we explore a simple machine-learning procedure to establish whether, and how, a given network has been attacked, without requiring knowledge of the network's structure before the attack.

We characterize a graph by a list of four normalized metrics: the ratio between the average and the maximum degree, the global clustering coefficient, the ratio between the average path length and the diameter, and the assortativity. Focusing on three basic random graphs, Erdős–Rényi (ER), Barabási–Albert (BA) and Watts–Strogatz (WS), we train two popular classification algorithms, k-Nearest-Neighbor and Random Forest, to recognize whether a given network has been attacked as well as the type of attack. We test our procedure on both artificial and real networks, first performing either targeted attacks or random failures, and then applying our classification scheme to the resulting network.

Even though the training set used in this paper is quite limited, our procedure is surprisingly successful in identifying the network type and distinguishing between random failures and targeted attacks, and could therefore provide a basis for more sophisticated approaches to the diagnosis and detection of damaged networks.
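
As a rough illustration of the kind of procedure described above (not the authors' implementation), the four descriptors can be computed with networkx and fed to off-the-shelf classifiers from scikit-learn; all parameter values and labels below are placeholders.

# Illustrative sketch only: four normalized graph descriptors + two classifiers.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def graph_features(G):
    """Four normalized metrics; assumes G is connected."""
    degrees = [d for _, d in G.degree()]
    avg_over_max_degree = np.mean(degrees) / max(degrees)
    clustering = nx.transitivity(G)                      # global clustering coefficient
    path_over_diameter = nx.average_shortest_path_length(G) / nx.diameter(G)
    assortativity = nx.degree_assortativity_coefficient(G)
    return [avg_over_max_degree, clustering, path_over_diameter, assortativity]

# Hypothetical training set: intact and attacked BA graphs with labels.
graphs, labels = [], []
for seed in range(50):
    G = nx.barabasi_albert_graph(200, 3, seed=seed)
    graphs.append(G); labels.append("BA-intact")
    H = G.copy()
    hubs = sorted(H.degree, key=lambda x: x[1], reverse=True)[:10]
    H.remove_nodes_from([n for n, _ in hubs])            # targeted attack on hubs
    H = H.subgraph(max(nx.connected_components(H), key=len)).copy()
    graphs.append(H); labels.append("BA-targeted")

X = np.array([graph_features(G) for G in graphs])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)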

10:00
Characterisation of the Robustness of Weighted Networks, a First Step to Better Understand the Context of Humanitarian Operations
PRESENTER: Aurélie Charles

ABSTRACT. In situations where crises succeed one another, it is important to understand and measure the strengths and weaknesses of one's logistics network. This observation applies to many companies. It is even more relevant for humanitarian organizations, which are confronted with increased demand for humanitarian aid without a sufficient budget to cover all present and future needs. Our proposal makes it possible to visualize these strategic points using complex networks. We measure the robustness of local infrastructures (health and logistics) by simulating their response in the event of a crisis. Non-binary attacks, in which nodes and/or links are damaged but not entirely removed, are used in order to remain as close as possible to the real phenomenon, where the damage suffered by infrastructures may hinder their capacity without always destroying it completely. We also use weighted networks. This work is carried out in close collaboration with Handicap International, so as to validate the relevance of the approach and its applicability through real applications.
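
For orientation only, a non-binary attack on a weighted network can be sketched as a scaling-down of edge capacities rather than a removal of elements; the damage law and the robustness indicator below are placeholders, not the authors' model.

# Illustrative sketch of a non-binary attack: damaged edges keep a reduced weight.
import networkx as nx
import random

def damage(G, fraction=0.2, severity=0.5, seed=0):
    """Scale down the weight of a random fraction of edges by `severity`."""
    rng = random.Random(seed)
    H = G.copy()
    hit = rng.sample(list(H.edges()), int(fraction * H.number_of_edges()))
    for u, v in hit:
        H[u][v]["weight"] *= (1.0 - severity)   # capacity hindered, not destroyed
    return H

def weighted_capacity(G):
    """Crude robustness proxy: total remaining edge weight."""
    return sum(d["weight"] for _, _, d in G.edges(data=True))

G = nx.erdos_renyi_graph(100, 0.05, seed=1)
nx.set_edge_attributes(G, 1.0, "weight")
H = damage(G, fraction=0.3, severity=0.7)
print(weighted_capacity(H) / weighted_capacity(G))  # fraction of capacity retained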

10:15
Filtering Real World Networks: a Correlation Analysis of Statistical Backbone Techniques
PRESENTER: Ali Yassin

ABSTRACT. Networks are an invaluable tool for representing and understanding complex systems. They offer a wide range of applications, including identifying crucial nodes, uncovering communities, and exploring network formation. However, when dealing with large networks, the computational challenge can be overwhelming. Fortunately, researchers have developed several techniques to address this issue by reducing network size while preserving its fundamental properties. To achieve this goal, two main approaches have emerged: structural and statistical methods. Structural methods aim to keep a set of topological features of the network while reducing its size. In contrast, statistical methods eliminate noise by filtering out nodes or links that could obscure the network's structure, utilizing advanced statistical models.

In a previous work~\cite{survey:transportation:complex} we compared a set of seven statistical backbone filtering techniques on the World Air Transportation network. Results show that the Marginal Likelihood Filter, Disparity Filter, and LANS Filter give more importance to high-weight edges. The other techniques emphasize both low- and high-weight edges.

This study extends the previous research on seven statistical filtering techniques, namely the Disparity, Polya Urn, Noise Corrected, Marginal Likelihood, LANS, ECM, and GloSS filters, through the analysis of 39 real-world networks of diverse origins. These networks range in size from 18 to 13,000 nodes and include character, web, biological, economic, infrastructural, and offline/online social networks. In the first experiment, we evaluate and compare the similarities between the seven statistical filtering techniques. Each method assigns a probability value, called a p-value, to each edge. To compare the methods, we use these p-values to conduct a correlation analysis. Specifically, we compute the Pearson correlation between the edge p-values of each pair of techniques. Since Pearson correlation only captures linear relationships, we also use the Jaccard similarity, which compares two sets, to measure the fraction of edges shared by each pair of backbones. In a second experiment, we investigate the relationship between edge significance and edge properties. To do this, we compute the Pearson correlation between the p-values and edge properties, including weight, edge degree, and edge betweenness. Fig~\ref{fig:1} illustrates these results.
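
A minimal sketch of the two comparisons described above, assuming each filtering technique returns a p-value per edge (here represented as dictionaries keyed by edge); this is illustrative, not the study's code.

# Pearson correlation of edge p-values and Jaccard similarity of top-fraction backbones.
import numpy as np
from scipy.stats import pearsonr

def pearson_between(pvals_a, pvals_b):
    """Pearson correlation of two techniques' p-values over their common edges."""
    edges = sorted(set(pvals_a) & set(pvals_b))
    a = np.array([pvals_a[e] for e in edges])
    b = np.array([pvals_b[e] for e in edges])
    return pearsonr(a, b)[0]

def jaccard_top_fraction(pvals_a, pvals_b, fraction=0.2):
    """Jaccard similarity of backbones keeping the `fraction` most significant edges."""
    def top(pvals):
        k = max(1, int(fraction * len(pvals)))
        return set(sorted(pvals, key=pvals.get)[:k])   # smallest p-values first
    A, B = top(pvals_a), top(pvals_b)
    return len(A & B) / len(A | B)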

The heatmaps present the mean and standard deviation of the Pearson correlation between pairs of filtering techniques across all networks. The couples (LANS, Disparity filter) and (Noise Corrected, ECM) are well correlated (0.8). Conversely, the Polya Urn filter does not exhibit a noticeable correlation with any other filtering method. The standard-deviation heatmap shows low values, validating these findings.

The middle graphs illustrate the typical behaviors of the mean Jaccard score as a function of the top fraction of edges retained by the various backbone filtering techniques. The top left panel shows a low Jaccard score between the Polya Urn filter and the Noise Corrected filter; the other techniques also have a low Jaccard score with the Polya Urn filter. The top right panel shows that the GloSS filter shares at least 20\% of its edges with the Marginal Likelihood filter. The other techniques behave like the Marginal Likelihood filter with respect to the GloSS filter, except for the Polya Urn filter. The bottom right panel shows that the set of edges obtained by the Disparity filter shares on average at least $50\%$ of its edges with the LANS filter. The ECM and Marginal Likelihood filters (ECM-MLF) and the ECM and Noise Corrected filters (ECM-NC) behave similarly. Finally, in the bottom left panel the set of edges obtained by the Marginal Likelihood filter shares on average at least $70\%$ of its edges with the Noise Corrected filter. On the other hand, the couples DF-NC, DF-ECM, DF-MLF, LANS-ECM, LANS-NC, and LANS-MLF behave alike, sharing at least around $30\%$ of the edges, but with a high standard deviation.

The boxplots illustrate the Pearson correlation coefficient between edge p-values and edge weights, degrees, and betweenness across all networks. The results further demonstrate the distinct behavior of the Polya Urn filter: its edge p-values are uncorrelated with edge weight, degree, and betweenness, with a very low standard deviation. In contrast, the top panel shows that the edge p-values obtained through the Disparity filter and the Marginal Likelihood filter are correlated with weights, with an average correlation higher than $0.6$. This indicates that these techniques prioritize edges with high weights. In the middle panel, the Noise Corrected filter and the ECM filter have an average correlation higher than $0.6$. This means that these methods give importance to edges that connect hubs, as these edges have a high edge degree, which is used indirectly by these methods to determine edge significance. Finally, the bottom panel shows that the edge p-values from all techniques have no correlation with edge betweenness, indicating that none of the methods prioritize edges that play a significant role in communication between nodes through the shortest paths.

In conclusion, correlation analysis is crucial for highlighting similarities and differences between backbone edge filtering techniques, identifying areas for improvement, and advancing knowledge in this field. This study can help identify where improvements can be made, whether in the development of new techniques or in the refinement of existing ones.

10:30
Interaction Network and Graphlets to Identify Sport Teams’ Signature

ABSTRACT. The traditional way of analyzing team sport performance has limitations in terms of understanding the interactions between players and the performance context; for this reason, it has been proposed to analyze teams through the lens of the complexity science paradigm. Thus, graph theory can be used to analyze the interaction network between players in order to assess collective behavior. In this study, we aim at 1) investigating how defensive imbalance constrains the emerging team behavior and 2) identifying a “team’s signature”, defined as a team's preferences in this emergence. 24 rugby teams and 18 basketball teams of 3 young elite players played a small-sided game in 2 situations characterized by different levels of defensive imbalance (high/low). We established a list of all possible network structures (“graphlets”) and associated each possession with a graphlet to design a “profile” as the frequency of each graphlet. We evaluated the effect of the manipulated constraint on collective behavior by comparing the mean profiles of both situations, and we detected teams’ signatures by clustering teams’ profiles. Results suggest that defensive imbalance constrains basketball teams more than rugby teams, whereas team preferences are more pronounced in rugby than in basketball. By mobilizing the complexity science paradigm and graph theory to assess collective behavior, we are able to explore the effect of a given constraint on the interactions between players and to identify each team's preferred patterns of interaction. This provides a more performance-contextualized and interaction-driven analytical framework which could be extended to a more dynamical perspective.
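
A possible way to build such graphlet-frequency profiles (the canonical labelling via a Weisfeiler-Lehman hash is an implementation choice of this sketch, not necessarily the authors'):

# Illustrative sketch: turn possessions (small interaction graphs) into a graphlet profile.
from collections import Counter
import networkx as nx

def possession_profile(possessions):
    """possessions: iterable of networkx graphs, one per ball possession."""
    labels = [nx.weisfeiler_lehman_graph_hash(g) for g in possessions]
    counts = Counter(labels)
    total = sum(counts.values())
    return {graphlet: n / total for graphlet, n in counts.items()}  # frequencies

The resulting per-team profiles could then be compared between situations or clustered (e.g. with any standard clustering algorithm) to reveal a team's signature.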

10:45-11:15 Coffee Break
10:45-11:15 Session Poster P1: Morning Session
Knowledge Graph for NLG in the Context of Conversational Agents
PRESENTER: Hussam Ghanem

ABSTRACT. The use of knowledge graphs (KGs) enhances the accuracy and comprehensiveness of the responses provided by a conversational agent. While generating answers during conversations consists in generating text from these KGs, this is still regarded as a challenging task that has gained significant attention in recent years. In this document, we provide a review of the different architectures used for knowledge graph-to-text generation, including Graph Neural Networks, the Graph Transformer, and linearization with seq2seq models. We discuss the advantages and limitations of each architecture and conclude that the choice of architecture depends on the specific requirements of the task at hand. We also highlight the importance of considering constraints such as execution time and model validity, particularly in the context of conversational agents. Based on these constraints and the availability of labeled data for the domains of DAVI, we choose to use seq2seq Transformer-based models (PLMs) for the knowledge graph-to-text generation task. In future work, we aim to refine benchmark datasets for KG-to-text generation on PLMs and to explore emotional and multilingual dimensions. Overall, this review provides insights into the different approaches for knowledge graph-to-text generation and outlines future directions for research in this area.
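
To illustrate the linearization + seq2seq approach mentioned above, a KG triple can be flattened into a text prompt and passed to a pretrained encoder-decoder model; the generic "t5-small" checkpoint below is only a stand-in (a checkpoint fine-tuned on KG-to-text data would be substituted in practice), and the prompt format is illustrative.

# Sketch of KG-to-text via linearized triples and a seq2seq PLM (stand-in checkpoint).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Linearize KG triples (subject, relation, object) into a flat prompt.
triples = [("Paris", "capital_of", "France")]
prompt = "translate graph to text: " + " ".join(
    f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))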

Are Networks Really Useful? --- Interplay Between Structures and Vector Representations

ABSTRACT. When we analyze networks (or graphs), we often assume that the network structures are given in advance. However, in many cases, extracting network structures from data is not obvious, and it is still controversial whether network structures are really useful for analyzing data. Driven by the boom in deep learning, several attempts have been made at network embedding (or representation learning), which transforms the nodes of a network into vectors (latent representations) so that the proximity of the nodes in the network is preserved. Finding suitable network embeddings is a challenging research topic. Graph structure learning is another challenging research topic. In the case of social networks, networks are often created in an ad-hoc manner, for example by linking two people when the number of conversations between them exceeds a certain threshold. The generated networks often contain noise which degrades the performance of machine learning tasks. Graph structure learning aims at alleviating such noise, and it is an emerging research topic in machine learning. In this article, we discuss the interplay between structures and vector representations of networks. Structures and vector representations are both important information obtained from real-world structured data. Our ultimate research questions are: (1) when is network embedding useful, and (2) when is graph structure learning useful? As a first step, we show approaches for transforming structures into vectors, and vice versa.

DyHANE: Dynamic Heterogeneous Attributed Network Embedding

ABSTRACT. As real-world scenarios are inherently dynamic and heterogeneous, graph continual learning is a machine learning paradigm that has gained increasing popularity in recent years. However, the literature lacks continual learning approaches capable of harnessing the expressive power of Graph Neural Networks (GNNs) to enable the integration of new knowledge (e.g., changes in network topology) without requiring the model to be re-trained from scratch. In this work, we propose a novel framework, namely DyHANE (Dynamic Heterogeneous Attributed Network Embedding), which is designed to update GNN parameters and efficiently generate up-to-date node representations for networks whose topology evolves over time. We take into account node/edge addition and removal on feature-rich heterogeneous networks, i.e., networks with multiple types of nodes and/or edges and with external content associated with nodes.

Deep Reinforcement Learning for Selfish Nodes Detection in a Blockchain

ABSTRACT. Blockchain-based secure resource allocation offers a highly secure and transparent resource distribution method. Blockchain systems ensure that transactions are tamper-proof and can be validated by decentralized nodes, which removes the need for a centralized authority, reducing the risk of fraud or corruption. Additionally, smart contracts can automate resource allocation, ensuring the process is fair, transparent, and efficient. In a private blockchain, the number of nodes is often smaller than in public blockchains, making the network more vulnerable to attacks by selfish nodes. This can lead to a higher concentration of power among a smaller group of actors, making it easier for selfish nodes to manipulate the system for their benefit. Deep reinforcement learning helps to detect selfish nodes in a blockchain adaptively. We propose a deep reinforcement learning-based method for detecting selfish nodes in a blockchain. Simulation results show that our proposed method outperforms the reference solution.

Quantile Regression: an Approach Based on GEV Distribution and Machine Learning

ABSTRACT. Assessing the risk of a given extreme event requires a precise estimate of quantiles beyond the observations. Extreme quantile regression provides this estimate conditional on the characteristic variables. Extrapolation beyond the range of the data is possible using the asymptotic results of extreme value theory. Several classical methods have been proposed for quantile regression modelling, but these methods do not work well when the dimension of the space of characteristic variables is large or when the structure linking the variable of interest and the characteristic variables is complex. In this work, we propose a conditional extreme quantile regression model combining the block maxima approach of extreme value theory with machine learning methods. The conditional extreme distribution is approximated by the generalized extreme value (GEV) distribution, whose parameters depend on the features. Machine learning algorithms are used to estimate these conditional parameters, which are then used for extreme quantile prediction.
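
For reference (notation assumed here, not taken from the paper), the block-maxima approach yields the conditional extreme quantile at level $p$ from the fitted GEV parameters $\mu(x)$, $\sigma(x)$, $\xi(x)$ as

\[
q_p(x) = \mu(x) + \frac{\sigma(x)}{\xi(x)}\Big[(-\log p)^{-\xi(x)} - 1\Big] \quad (\xi(x) \neq 0), \qquad
q_p(x) = \mu(x) - \sigma(x)\log(-\log p) \quad (\xi(x) = 0),
\]

so that machine-learning estimates of the conditional parameters translate directly into conditional extreme quantile predictions.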

Blockchain for the Maritime: a Modular Proposition
PRESENTER: Rim Abdallah

ABSTRACT. This paper describes the concept of modular blockchain deployment in the maritime sector and its potential benefits. The approach involves implementing blockchain technology in a flexible and adaptable manner, allowing stakeholders to select the most relevant modules for their specific needs. The research aims to introduce blockchain technology to the shipping industry and includes the conceptualization and implementation of a modular approach. The challenges of implementing a modular blockchain deployment in the maritime industry are discussed, including integration with existing systems, data standardization, data privacy and security, regulatory compliance, collaboration, and governance. We also highlight the hardware and software requirements necessary for implementing a modular blockchain deployment, including servers, storage, networking infrastructure, security, blockchain platform, node software, and wallet software.

Measuring Movie Script Similarity Using Characters, Keywords, Locations, and Interactions
PRESENTER: Majda Lafhel

ABSTRACT. Measuring similarity between multilayer networks is difficult, as it involves various layers and relationships that are challenging to capture using distance measures. Existing techniques have focused on comparing layers with the same number of nodes while ignoring inter-layer relationships. In this research, we propose a new approach for measuring the similarity between multilayer networks that takes inter-layer relationships into account and handles networks of various sizes. We apply this approach to multilayer movie networks composed of layers of different entities (character, keyword, and location) and inter-relationships between them. The proposed method captures intra-layer and inter-layer relationships, providing a comprehensive overview of the multilayer network. It can be used in various applications, including analyzing movie story structures and social network analysis.

Identifying Influential Nodes: the Overlapping Modularity Vitality Framework
PRESENTER: Stephany Rajeh

ABSTRACT. This paper proposes an Overlapping Modularity Vitality framework for identifying influential nodes in networks with overlapping community structures. The framework uses a generalized modularity equation and the concept of vitality to calculate the centrality of a node. We investigate three definitions of overlapping modularity and three ranking strategies prioritizing hubs, bridges, or both types of nodes. Experimental investigations on real-world networks demonstrate the benefit of incorporating overlapping community structure information to identify critical nodes in a network.
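
For context (standard form of a vitality-based centrality; the paper's generalized overlapping variant may differ), the modularity vitality of a node $i$ is the change in modularity when that node is removed,

\[
\mathcal{V}(i) = Q(G) - Q(G \setminus \{i\}),
\]

where $Q$ denotes the (overlapping) modularity of the network with its community structure; ranking nodes by signed or absolute vitality is what allows bridges, hubs, or both to be prioritized.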

11:15-12:30 Session Oral O2: Social Complexity
11:15
Visualizing Mobile Phone Communication Data in Criminal Investigations: the Case of Media Multiplexity
PRESENTER: Martina Reif

ABSTRACT. Given the multiple channels offered by social media apps, the analysis of communication data in criminal investigations has become a challenging task. A multivariate graph, gathering information of different types, can be inferred from communication events (calls, group discussions, etc.) and contact information (e.g. phone directory or app "friends"). Astute transformations are however required to properly associate virtual entities used by a single physical person. This paper proposes a visual analytics approach to support this task relying on graph transformations and proper visual encodings.

11:30
Bounded Confidence Models Generate Additional Clusters When the Number of Agents is Growing

ABSTRACT. Opinion dynamics models express hypotheses about social interactions mathematically and provide means to investigate their effects in large populations. While in most models the population of interacting agents is fixed, in this contribution we consider a growing population of agents. Specifically, we consider the bounded confidence model on growing populations. Our intention is to study this model on different network types, and particularly on scale-free networks; however, starting from fully mixed agents seems a necessary first step in order to understand the model and compare its results to the fixed-population versions.

Our first results show that when new agents constantly appear on the opinion axis, they can appear in the regions that correspond to the standard minor clusters (identified in the model approximation using continuous opinion distributions) and progressively form new clusters once the major clusters have appeared. However, these clusters can maintain themselves only if they are much smaller than the primary clusters. In this oral contribution, we will explain the mechanism linking primary and secondary clusters.
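
A minimal sketch of a fully mixed bounded-confidence (Deffuant-type) model with a growing population, purely for illustration; the arrival rate, confidence bound and convergence parameter below are placeholders.

# Bounded-confidence dynamics with newcomers arriving at a constant rate.
import random

def deffuant_growing(steps=200_000, n0=50, arrival_every=500,
                     epsilon=0.2, mu=0.5, seed=0):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n0)]
    for t in range(steps):
        i, j = rng.sample(range(len(opinions)), 2)       # fully mixed interaction
        if abs(opinions[i] - opinions[j]) < epsilon:     # bounded confidence
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
        if t % arrival_every == 0:                       # population growth
            opinions.append(rng.random())                # newcomer with random opinion
    return opinions

final_opinions = deffuant_growing()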

11:45
Monetization in Online Streaming Platforms: an Exploration of Inequalities in Twitch.Tv
PRESENTER: Antoine Houssard

ABSTRACT. The live streaming platform Twitch has undergone in recent years an impressive growth in terms of viewership and content diversity. The platform has been the object of several studies showing how streamers monetize their content via a peculiar system centered on para-sociality and community dynamics, but the lack of data regarding streamers' revenues left open the question of the effectiveness of the strategies described. Using data from a recent leak, we were able to characterize activity and popularity dynamics and link them to actual revenue. Employing methods from social physics and econometrics, we analyzed audience building and retention dynamics and linked them to the observed inequalities. We found a high level of inequality across the platform. Our results demonstrate that, even if the platform design and affordances favor monetization for smaller creators, its non-algorithmic design leaves room for classical choice biases and allows a few streamers to emerge and to retain and renew a massive audience.

12:00
Political Participation and Voluntary Associations: A Hypergraph Case Study
PRESENTER: Amina Azaiez

ABSTRACT. Civic organizations, ranging from interest groups to voluntary associations, constantly influence policy formation in representative democracies. This work presents a local case study that examines the relationship between voluntary associations and local political institutions in a city of almost two thousand residents. Traditionally, sociological approaches focus on individual characteristics such as age, gender, or socio-professional status. Here, we model social interactions between members of organizations through a hypergraph in order to explain political involvement. Specifically, we model interactions as hyperedges that correspond to the activities proposed by organizations and involve the individuals who participate in those activities. Our analysis reveals a community-based structure, in which members of similar types of organizations tend to interact more frequently. To quantify 'political participation', we introduce an interaction-based measure that extends degree centrality. We also introduce the 'diversity coefficient', a further extension of degree centrality that captures an individual's ability to participate in activities composed of members from different communities. Among the centrality measures considered, we find that the diversity coefficient is the most significant factor in explaining political participation among members of associations.
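
One possible operationalisation of these two quantities, given only as an illustration (the study's exact definitions may differ): activities are hyperedges, i.e. sets of participants, and communities assign each participant to a group.

# Hypergraph degree and a simple community-diversity score (illustrative definitions).
from collections import defaultdict

def hyperdegree(hyperedges):
    """Number of activities each individual takes part in."""
    deg = defaultdict(int)
    for activity in hyperedges:
        for person in activity:
            deg[person] += 1
    return dict(deg)

def diversity(hyperedges, community):
    """Number of distinct other communities an individual meets across activities."""
    seen = defaultdict(set)
    for activity in hyperedges:
        groups = {community[p] for p in activity}
        for person in activity:
            seen[person] |= groups - {community[person]}
    return {person: len(groups) for person, groups in seen.items()}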

12:15
Opinion Dynamics Model Revealing yet Undetected Cognitive Biases

ABSTRACT. This paper synthesises recent research that includes opinion dynamics models and experiments suggested by the model results. The mathematical analysis establishes that the model's emergent behaviours derive from cognitive biases that should appear under quite general conditions. Moreover, it seems that psychologists have not yet detected these biases. The experimental side of the research therefore includes specifically designed experiments that detect these biases in human subjects. The paper discusses the role of the model in this case, which is to reveal phenomena that are almost impossible to observe without its simulations.

12:30-14:00 Lunch Break
14:00-15:30 Session Oral O3: Dynamics & Self-Organization
14:00
A Toy Model for Approaching Volcanic Plumbing Systems as Complex Systems
PRESENTER: Remy Cazabet

ABSTRACT. Magmas form at depth, move upwards and evolve chemically through a combination of processes. Power-law relationships describe eruption magnitudes, frequencies, and durations. Such relationships are typical of non-linear, self-organised systems with critical points. A growing body of evidence indicates that several magma chambers connect before and during eruptions. In this work, we investigate the potential of the network approach through a prototype of magma pool interaction and magma transfer across the crust. In network terms, it describes a diffusion process on a dynamic spatial network in which diffusion and network evolution are intertwined: the diffusion affects the network structure, and vice versa. This model is a proof of concept in which mechanisms are based on analogies: space and time are not realistic, the system is isothermal, etc. Nevertheless, it succeeds in showing that a system governed by the same mechanical rules and fed by a linear input of magma at the bottom can result in nonlinear behaviors at the surface (e.g., episodic volcanic eruptions).
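
A caricature of the idea, not the authors' prototype: a chain of magma pools fed at the bottom by a constant flux, where a link to the pool above opens only when a threshold is exceeded, so the network of open links changes with the diffusion itself; all rules and parameters are illustrative.

# Toy plumbing system: linear input at depth, episodic output at the surface.
import numpy as np

def toy_plumbing(n_pools=20, steps=5000, inflow=1.0, threshold=50.0, transfer=0.5):
    volume = np.zeros(n_pools)          # pool 0 is the deepest, pool n-1 the surface
    eruptions = []
    for t in range(steps):
        volume[0] += inflow             # linear input of magma at the bottom
        for k in range(n_pools - 1):
            if volume[k] > threshold:   # link k -> k+1 opens only above threshold
                moved = transfer * volume[k]
                volume[k] -= moved
                volume[k + 1] += moved
        if volume[-1] > threshold:      # episodic "eruption" at the surface
            eruptions.append((t, volume[-1]))
            volume[-1] = 0.0
    return eruptions

print(len(toy_plumbing()))              # number of eruption episodes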

14:15
Reconstruction of Variables of Interest in Nonlinear Complex Systems: Application to a C. Elegans Biological Neural Network

ABSTRACT. Reconstructing the variables of interest in complex networks is an important task, consisting in inferring the values of the relevant variables of some target nodes from the knowledge of other node states. However, few approaches deal with the case of networks of nonlinear systems with nonlinear couplings. Our purpose is to present a new approach, based on specific local relations obtained from the equations of each node, and to apply it to the study of a biological neural network of C. elegans.

14:30
Agent-Based Modelling to Simulate Realistic Self-Organizing Development of the Mammalian Cerebral Cortex
PRESENTER: Roman Bauer

ABSTRACT. Poster session and presentation on:

The neocortex is a highly complex structure that plays a crucial role in the cognitive abilities of humans and other vertebrates. Here, we employed a computational agent-based model in the high-performance simulation platform BioDynaMo to simulate the development of the neocortex starting with a pool of neuroepithelial cells. Our study aimed to investigate how gene-type rules generate the highly complex structure of the neocortex in a self-organizing manner. Our model was designed to mimic the sequential development of the neocortex through a gene regulatory network that sequentially simulates the differentiation and migration of neuroepithelial cells into the distinct layers found in the mammalian cortex.

14:45
How to Grasp the Complexity of Self-Organised Robot Swarms?

ABSTRACT. Robot swarms consist of large numbers of autonomous robots whose behaviour has been greatly inspired by existing complex biological, physical or chemical systems. This is especially the case for behaviours that involve mechanisms leading to the spatial self-organisation of robots. The complex nature of these behaviours prevents a human operator from keeping a mental model of them, which makes it difficult to interact with them, even though this is necessary in certain cases: prediction of a loss of stability, detection of blocking situations, etc. How can an operator be enabled to grasp the complexity of self-organised robot swarms? This article aims at providing leads to answer this question by investigating what humans are capable of perceiving of a complex system, and what additional information could be needed to enable them to understand its dynamics and state and to predict the effects of their control. We first present what an operator is able to perceive from a large number of agents, self-organised or not, through a state of the art of existing work in cognitive science, vision and swarm robotics. Secondly, we identify in the literature the different types of information on robot swarms that are transmitted to the operator, with the aim of facilitating their perception and improving their understanding. Finally, we discuss the information needed to build the operator's mental model, the avenues being explored, and the possible challenges to be taken into account.

15:00
Analysis of a Network of Hodgkin-Huxley Excitatory and Inhibitory Neurons
PRESENTER: B. Ambrosio

ABSTRACT. In this talk, I will present results on the theoretical and numerical analysis of a network of excitatory (E) and inhibitory (I) neurons of Hodgkin-Huxley (HH) type. I will emphasize how emergent properties arise as the parameter reflecting the effect of E-to-E spikes increases.
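
For reference, each neuron $i$ in such a network obeys the classical HH membrane equation (the coupling term depends on the particular E/I architecture and on the E-to-E strength discussed in the talk):

\[
C\,\frac{dV_i}{dt} = -g_{\mathrm{Na}} m_i^3 h_i (V_i - E_{\mathrm{Na}}) - g_{\mathrm{K}} n_i^4 (V_i - E_{\mathrm{K}}) - g_{L} (V_i - E_{L}) + I_i + I_i^{\mathrm{syn}},
\]

with gating variables $x \in \{m, h, n\}$ following $\dot{x}_i = \alpha_x(V_i)(1 - x_i) - \beta_x(V_i)\,x_i$, and $I_i^{\mathrm{syn}}$ collecting the excitatory and inhibitory synaptic currents.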

15:15
Hopf Bifurcation in Oncolytic Therapeutic Model with Viral Lytic Cycle
PRESENTER: Radouane Yafia

ABSTRACT. In this paper, we propose a delayed mathematical model describing the oncolytic virotherapy treatment of a tumour that proliferates according to the logistic growth function, incorporating the viral lytic cycle. The tumour cell population is divided into uninfected and infected sub-populations, and the virus is assumed to spread in a direct mode (i.e. from cell to cell). Depending on the time delay, we analyze the positivity and boundedness of solutions, and the stability of the tumour uninfected-infected equilibrium (UIE) is established. We prove that the delay can lead to the "Jeff's phenomenon" observed in the laboratory, which causes oscillations in tumour size whose phase and period change over time. Some numerical simulations are carried out to illustrate our theoretical results.
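
A generic form of such a delayed uninfected/infected system, given here only for orientation (the paper's exact equations may differ), is

\[
\frac{dU}{dt} = r\,U\Big(1 - \frac{U + I}{K}\Big) - \beta U I, \qquad
\frac{dI}{dt} = \beta\, U(t-\tau)\, I(t-\tau) - \delta I,
\]

where $U$ and $I$ denote uninfected and infected tumour cells, $r$ and $K$ the logistic growth parameters, $\beta$ the cell-to-cell infection rate, $\delta$ the lysis rate, and $\tau$ the duration of the viral lytic cycle; the stability of the uninfected-infected equilibrium and the Hopf bifurcation are then studied with respect to $\tau$.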

15:30-16:00 Coffee Break
16:00-16:45 Session Keynote Speaker S2
16:00
Resilience and transitions from local Digital Twins to global complex urban systems

ABSTRACT. In the context of climate change and of energy and political crises, new tendencies in the long- and medium-term evolution of urban systems, together with new data and methods, require that existing theoretical assumptions and conceptualizations be challenged, as global urban hierarchies are reconfigured and citizens’ aspirations for their urban environments are undergoing strong transformations. The connection between urban systems at different levels of organization becomes more and more relevant for understanding urban systems and their transitions, but the current inter-urban perspective is not sufficient to encompass these dynamics. The evolution of power distributions inside and between cities reshapes the world organization of central/peripheral cities and the complexity of the global urban system. Actors such as multinational firms or high-level innovation centers participate actively in these reconfigurations, which concentrate wealth, control, innovation, and attractiveness in a few cities. At the local level, citizens wish for more health and well-being, implying better policy coordination between the local and national/international scales. In the complexity of this multi-level system, how is the regionalization of the world reshaping itself in a multipolar urban world? How does the multi-level perspective highlight resilience properties if one integrates environmental care into the urban system? The theories and methodologies derived from complex systems sciences bring new perspectives for urban transitions towards more sustainability and resilience.

16:45-18:30 Session Oral O4: Diffusion & Epidemics
16:45
Supply, Demand and Spreading of News During COVID-19 and Assessment of Questionable Sources Production
PRESENTER: Pietro Gravino

ABSTRACT. We exploit the burst of news production triggered by the COVID-19 outbreak through an Italian database partially annotated for questionable sources~\cite{supplydemand}. We compare news supply with news demand, as captured by Google Trends data. We identify the Granger-causal relationships between supply and demand for the most searched keywords, quantifying the inertial behaviour of the news supply. Focusing on COVID-19 news, we find that questionable sources are more sensitive than general news production to people’s interests, especially when news supply and demand are mismatched. We introduce an index assessing the level of questionable news production based solely on the available volumes of news and searches. Furthermore, we introduce an analysis of the spreading layer of the dynamics. Measured from social media data, it can represent the bridge of interplay between supply and demand. We contend that these results can be a powerful asset in informing campaigns against disinformation and in providing news outlets and institutions with potentially relevant strategies.
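
A minimal sketch of the Granger-causality step described above, using statsmodels on two aligned daily series; the synthetic series below are placeholders for the supply and search-volume data.

# Granger-causality check between a news-supply series and a demand (search) series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
demand = rng.normal(size=300).cumsum()                            # e.g. Google Trends volume
supply = np.roll(demand, 2) + rng.normal(scale=0.5, size=300)     # lagged, noisy response

# Convention: the test asks whether the SECOND column Granger-causes the FIRST one.
df = pd.DataFrame({"supply": supply, "demand": demand})
results = grangercausalitytests(df[["supply", "demand"]], maxlag=7)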

17:00
A Local Agent-Based Model of COVID-19 Spreading and Interventions

ABSTRACT. This research presents a simulation model that combines meta-population geospatial data with the SEIR epidemiological model to simulate a city of up to 250,000 residents while considering various factors such as virus transmission rate, disease severity, and prevention and control measures. This model can assist decision-makers in exploring different pandemic response strategies, including lockdowns, social distancing, mass testing, contact tracing, and vaccination. This simulation aims to provide decision-makers with a better understanding of the implications of their choices and enable them to make informed real-time decisions to manage a health crisis.
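
For orientation, the compartmental backbone of such a simulation is the SEIR model; the sketch below shows only this aggregate core (the actual model is agent-based, geospatial and meta-population), with placeholder parameter values.

# SEIR backbone only; the full model adds geography, agents and interventions.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N = 250_000                                   # city size mentioned in the abstract
y0 = (N - 10, 0, 10, 0)
t = np.linspace(0, 180, 181)                  # days
sol = odeint(seir, y0, t, args=(0.3, 1/4, 1/7, N))   # beta, 1/incubation, 1/infectious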

17:15
Towards a Generic Agent Based Vector-Host Model
PRESENTER: Cyrine Chenaoui

ABSTRACT. The aim of our work is to develop a conceptual, generic agent-based model to formalize the interaction of vector and host in view of climate change. The model consists in creating a hypothetical example of a vector-host system. It simulates the vector's life cycle while considering interactions with hosts and with temperature. It is presented following the ODD protocol and is based on a set of parameters and processes to conceptualize the vector-host complex; it could be accommodated to a wide spectrum of vector species and different biogeographic regions. The model's primary goal is to evaluate the overall effects of temperature variations and host dispersion patterns on tick population dynamics while considering temperature effects on development. We show that seasonality is a primary determinant of the synchronization of distinct physiological stages. Our model can be extended to more ecologically complex systems with multiple species and real-world landscape complexity to test different host- and/or vector-targeted control strategies and identify effective approaches for managing vector populations and dispersion patterns.

17:30
Exploring and Optimising Infectious Disease Policies with a Stylised Agent-Based Model
PRESENTER: Juste Raimbault

ABSTRACT. The quantitative study of the spread of infectious diseases is a crucial aspect of designing health policies and fostering responsiveness, as the recent COVID-19 pandemic showed at an unprecedented scale. In between abstract theoretical models and large-scale data-driven microsimulation models lies a broad set of modelling tools, which may suffer from various issues such as parameter uncertainties or the lack of data. We introduce in this paper a stylised ABM for infectious disease spreading, based on the SIRV compartmental model. We account for a certain level of geographical detail, including commuting modes and workplaces. We apply to it a set of model validation methods, including global sensitivity analysis, surrogates, and multi-objective optimisation. This shows how such methods could be a new tool for more robust design and optimisation of infectious disease policies.
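
A sketch of the kind of setup involved, combining an aggregate SIRV core with a Sobol global sensitivity analysis as one of the validation methods mentioned above; the ABM itself adds geography, commuting modes and workplaces, and everything below (parameters, bounds, output) is illustrative.

# SIRV core + Sobol sensitivity of the final epidemic size to three parameters.
import numpy as np
from scipy.integrate import odeint
from SALib.sample import saltelli
from SALib.analyze import sobol

N = 100_000

def sirv(y, t, beta, gamma, nu):
    S, I, R, V = y
    new_inf = beta * S * I / N
    return (-new_inf - nu * S, new_inf - gamma * I, gamma * I, nu * S)

def final_size(beta, gamma, nu):
    """Model output used for the sensitivity analysis: total recovered at day 365."""
    sol = odeint(sirv, (N - 50, 50, 0, 0), np.linspace(0, 365, 366),
                 args=(beta, gamma, nu))
    return sol[-1, 2]

problem = {"num_vars": 3,
           "names": ["beta", "gamma", "nu"],
           "bounds": [[0.1, 0.5], [0.05, 0.2], [0.0, 0.01]]}
X = saltelli.sample(problem, 256)                     # Saltelli design
Y = np.array([final_size(*x) for x in X])
Si = sobol.analyze(problem, Y)                        # first-order and total Sobol indices
print(Si["S1"], Si["ST"])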

17:45
Impact of Timing of the Booster Dose on SARS-CoV-2 Omicron Variant Epidemic in France (Nov 2021 - May 2022)

ABSTRACT. With evidence of COVID-19 vaccine efficacy waning over time, evaluating different booster strategies through mathematical modeling is key to improving pandemic control. In autumn 2021, France implemented a vaccination campaign to administer a third dose to boost individuals’ protection against COVID-19 in preparation for winter. The campaign was first targeted at the older population with a recommended 6-month delay from the receipt of the second dose, and then extended to adults while shortening the eligible inter-dose delay to 5 months. Here, we investigated the impact of the timing of booster vaccination on the epidemic dynamics in France in the following months, when the Omicron BA.1 and BA.2 sub-lineages emerged. We extended a previously developed COVID-19 transmission model [1] to reproduce SARS-CoV-2 spreading in France during the Delta-Omicron period (September 2021 – May 2022). The model is a stochastic age-structured multi-strain transmission model stratified by vaccination status, accounting for possible re-infections and for waning levels of protection against infection and hospitalization based on the circulating variant, vaccination status, past infection history, and time since vaccination and/or prior infections. The model integrates social contacts parametrized over time to account for behavioral changes during the epidemic, based on mobility data, the calendar of school closures, and surveys on the adoption of preventive measures, as done in previous works [2]. Assuming the observed booster vaccination rhythm, we fitted the model to French hospital admission data and genomic surveillance data (Figure 1a), estimating the Omicron BA.1 and BA.2 transmission advantages. We then tested counterfactual vaccination scenarios administering the booster dose with a fixed delay after the second dose, ranging from 4 to 9 months (Figure 1b). We found that anticipating or delaying the booster dose for the adult population with respect to the realized vaccination campaign would significantly change the epidemic trajectory. Adopting an inter-dose delay of 7 months or longer would lead to a large epidemic wave in January 2022 due to limited vaccination immunity when Omicron BA.1 arrives, and would require the implementation of a one-month social distancing measure to keep the peak in hospitalizations under a manageable level, reducing the reproductive number by at least 10%, 15% or 20% for vaccination scenarios using 7, 8, or 9 months of inter-dose delay (Figure 1c). Instead, shorter inter-dose delays, e.g. administering the booster 4 or 5 months after the second dose, would allow the BA.1 wave in January to be fully suppressed. However, vaccination immunity in the population would degrade fast due to waning and would not be sufficient to contain a large Omicron BA.2 wave in March (Figure 1c). We found that there can be a benefit in accelerating the booster campaign as it unfolds, e.g. by shortening the inter-dose delay from 8 to 6 months as soon as Omicron BA.1 emerges, to avoid the implementation of social distancing later (Figure 1d). The results show that a strong waning of vaccine immunity combined with a lack of recent natural immunity could still leave the population vulnerable to more transmissible variants in the short term, even after a large booster campaign.
Results also suggest the importance of building an agile system that can adapt the policies in place to the evolving epidemic situation, especially the emergence of variants of concern, and point out that genomic surveillance of circulating variants remains essential.

18:00
Contact Networks in Daily-Life Pedestrian Crowds and Risks of Viral Transmission

ABSTRACT. In order to assess the risks of short-range airborne transmission of a virus in pedestrian crowds, detailed information must be gathered about the network of complex interactions between people in the crowd. We have collected such detailed data in the field, in various situations, and developed a methodology to estimate the rate of new infections in the crowd. The method relies on coarse-graining microscale simulations of droplet trajectories, in numerous ambient flows, into spatio-temporal maps of viral concentration around a potential emitter. Through this coupling, we are able to rank the situations that we have investigated by the risks of viral transmission that they raise; our study also highlights that even the most modest air flows dramatically lower the quantitative rates of new infections.

18:15
Minimizing Epidemic Spread Through Mitigation of Anti-Vaccine Opinion Propagation
PRESENTER: Sarah Alahmadi

ABSTRACT. In this study we investigate the impact of vaccination attitudes, as a social contagion, on a disease dynamic. Vaccine-related negative information poses a significant concern since it is a major motivator of vaccination hesitancy. This information may cause social contagion of anti-vaccination opinions, resulting in clusters of unprotected people and potentially larger epidemic outbreaks. Thus, this research aims to minimize the spread of an epidemic by mitigating anti-vaccine social contagion with an effective counter-campaign that promotes vaccination. In a coupled-dynamic model describing processes of opinion spreading and disease diffusion, we propose a number of techniques to mitigate the propagation of anti-vaccine opinions and prevent the expansion of anti-vaccine communities. Whereas, counterintuitively, we find that the presence of pro-vaccine information may increase epidemic outbreaks, we also demonstrate that targeted positive campaigning can reduce outbreaks.