COMPLEX NETWORKS 2020: NINTH INTERNATIONAL CONFERENCE ON COMPLEX NETWORKS & THEIR APPLICATIONS
PROGRAM FOR TUESDAY, DECEMBER 1ST

11:00-12:15 Session Oral O1A: Human Behavior

Oral session

11:00
Evolution of spatial political community structures in Sweden 1985-2018

ABSTRACT. Understanding how the electoral behaviour of a population changes is key to understanding where and why social change is happening. In this paper, we apply methods from network science to study the medium- to long-term evolution of Swedish electoral geography. Sweden is an interesting case since its political landscape has changed significantly over the last three decades with the rise of the Sweden Democrats and the Green Party and the fall of the Social Democrats. By partitioning the Swedish municipalities according to their similarity in voting profiles, we show that Sweden can be divided into three or four main politico-cultural communities. More precisely, a transition from three to four main politico-cultural communities is observed. The fourth community emerged in the early 2000s, and it is characterized by a large vote-share for the Sweden Democrats, while almost all other parties underperform.

11:15
Self-Modeling Networks Using Adaptive Internal Mental Models for Cognitive Analysis and Support Processes

ABSTRACT. In this paper an adaptive network model is presented for cognitive analysis and support processes concerning the situation and states of a human, illustrated for a car driver. The adaptive network model makes use of first-order self-models for the internal mental models used in the cognitive analysis and support processes. To obtain adaptation of these first-order self-models, second-order self-models are included. The adaptive network model is illustrated for realistic scenarios for a car driver.

11:30
Revealing the complex comorbidity structure of internalizing disorders through hypergraph models

ABSTRACT. Internalizing disorders, such as anxiety and depression, frequently co-occur. In fact, major depressive disorder (MDD) and generalized anxiety disorder (GAD) overlap to such a degree that researchers have suggested that GAD is not independent of MDD because most people who develop GAD will eventually develop MDD (see Moffitt et al., 2007). The odds ratio of cross-sectional comorbidity of MDD and GAD is upwards of 8.2. While MDD and GAD are two of the most highly comorbid disorders, having any mood disorder substantially increases risk of another disorder with an adjusted odds ratio of 4.2 for any anxiety disorder, 2.0 for any drug use disorder, and 4.6 for any personality disorder. Comorbidity is strongly associated with disorder severity, chronicity, and impairment. Thus, a better understanding of the complex, time-evolving structure of comorbidity is crucial for prevention and treatment.

Here we use a hypergraph model to reveal the structure of comorbidity among internalizing disorders in a large naturalistic sample of online individuals by examining how groups of diagnoses and distinct disorders co-occur in reports of clinical diagnosis on social media. We specifically focus on two prevalent groups of internalizing disorders: one group centered around depression (Group 1, containing depression, dysthymia, seasonal affective disorder (SAD), and persistent depressive disorder (PDD)) and the other centered around anxiety and related disorders (Group 2, containing agoraphobia, anxiety, GAD, obsessive compulsive disorder (OCD), panic disorder, and specific phobia).

11:45
Paternal-maternal surname networks reveal the population structure of Santiago, Chile

ABSTRACT. Insofar as surnames are associated with ancestry, they contain social information. In countries where individuals hold both paternal and maternal surnames, last names are a source of network data. This paper exploits the social and relational properties of surnames to produce a synthetic view of the population of Santiago, Chile. From administrative records of more than four million names, it creates a network of paternal-maternal last names and associates them with an approximate socioeconomic index. It identifies the clusters that emerge and describes them. The analysis reveals three macro-clusters: those representing the Mapuche people, the descendants of Palestinians, and the traditional Chilean upper class. Individuals holding Mapuche surnames are economically deprived. The latter two clusters possess high socioeconomic status and live in the same parts of Santiago.

12:00
Spontaneous symmetry breaking of active phase in coevolving nonlinear voter model

ABSTRACT. We study an adaptive network model driven by a nonlinear voter dynamics. Each node in the network represents a voter and can be in one of two states that correspond to different opinions shared by the voters. A voter disagreeing with its neighbor's opinion may either adopt it or rewire its link to another randomly chosen voter with any opinion. The system is studied by means of the pair approximation in which the distinction between the average degrees of nodes in different states is made. This approach allows us to identify two dynamically active phases, a symmetric and an asymmetric one. The asymmetric active phase, in contrast to the symmetric, is characterized by different numbers of nodes in the opposite states that coexist in the network. The pair approximation predicts the possibility of spontaneous symmetry breaking, which leads to a continuous phase transition between the symmetric and the asymmetric active phases. In this case, the absorbing transition occurs between the asymmetric active and the absorbing phases after the spontaneous symmetry breaking. Discontinuous phase transitions and hysteresis loops between both active phases are also possible. Interestingly, the asymmetric active phase is not displayed by the model where the rewiring occurs only to voters sharing the same opinion, studied by other authors. Our results are backed up by Monte Carlo simulations.
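The rewire-or-adopt dynamics described above can be sketched as a minimal Monte Carlo simulation. This is an illustrative sketch only: the graph size, the rewiring probability, and the update rule are assumptions, and the nonlinearity of the paper's voter dynamics is omitted for brevity.

```python
import random

random.seed(1)

# Small random graph: n nodes, each node keeps a set of neighbours (assumed sizes).
n, m = 50, 150
nodes = list(range(n))
adj = {v: set() for v in nodes}
while sum(len(s) for s in adj.values()) // 2 < m:
    u, v = random.sample(nodes, 2)
    adj[u].add(v)
    adj[v].add(u)

state = {v: random.choice([0, 1]) for v in nodes}  # two opinions
p_rewire = 0.3  # probability of rewiring instead of adopting (assumed value)

def step():
    """One update: a voter in conflict with a neighbour rewires or adopts."""
    u = random.choice(nodes)
    if not adj[u]:
        return
    v = random.choice(list(adj[u]))
    if state[u] == state[v]:
        return  # no disagreement on this link
    if random.random() < p_rewire:
        # rewire the link to a randomly chosen voter holding any opinion
        adj[u].discard(v)
        adj[v].discard(u)
        w = random.choice([x for x in nodes if x != u and x not in adj[u]])
        adj[u].add(w)
        adj[w].add(u)
    else:
        state[u] = state[v]  # adopt the neighbour's opinion

for _ in range(20000):
    step()

# Active links are edges whose endpoints still disagree.
active = sum(1 for u in nodes for v in adj[u] if u < v and state[u] != state[v])
print("active links remaining:", active)
```

Note that rewiring conserves the total number of edges, so the fraction of active links is the natural order parameter for distinguishing the active and absorbing phases discussed in the abstract.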

11:00-12:15 Session Oral O1B: Network Models

Oral session

11:00
GrowHON: A Scalable Algorithm for Growing Higher-order Networks

ABSTRACT. Networks are powerful and flexible structures for expressing relationships between entities, but in traditional network models an edge can only represent a relationship between a single pair of entities. Higher-order networks (HONs) overcome this limitation by allowing each node to represent a sequence of entities, rather than a single entity, which allows edges to naturally express higher-order relationships. While HONs have proven effective in several domains, the process of building a HON can be computationally intensive, and previous works have been forced to choose between scalability and thorough detection of higher-order dependencies. To achieve both scalability and accurate encoding of higher-order dependencies, we introduce GrowHON, an algorithm that grows a HON by embedding the input in a tree structure, pruning the non-meaningful sequences, and converting the tree into a network. We demonstrate that GrowHON is scalable with respect to both the size of the input and order of the network, and that it preserves important higher-order dependencies that are not captured by prior approaches. These contributions lay an important foundation for higher-order networks to continue to evolve and represent larger and more complex data.

11:15
Local Degree Asymmetry for Preferential Attachment Model

ABSTRACT. One of the well-known phenomena of sociological experience is the friendship paradox, which states that your friends are more popular than you, on average. The friendship paradox has been widely detected empirically in various complex networks, including social, coauthorship, citation, collaboration, and online social media networks. A local network metric called "the friendship index" was recently introduced (doi:10.1007/s41109-019-0190-8) in order to evaluate some aspects of the friendship paradox for complex networks. The value of the friendship index for a node is defined as the ratio of the average degree of the neighbors of the node to the degree of this node. In this paper we examine the theoretical properties of the friendship index in growing networks generated by the Barabási-Albert model by deriving the equation that describes the evolution of the expected value of the friendship index of a single node over time. Results show that there is a clear presence of the friendship paradox for networks evolved in accordance with the Barabási-Albert model. Moreover, the distributions of the friendship index for such networks are heavy-tailed and comparable with the empirical distributions obtained for some real networks.
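The friendship index defined in this abstract can be computed directly from a graph's adjacency structure. A minimal sketch in plain Python, on a toy graph that is illustrative and not taken from the paper:

```python
from statistics import mean

# Toy undirected graph as an adjacency list (illustrative example).
adj = {
    "a": ["b", "c", "d"],  # "a" is a small hub
    "b": ["a"],
    "c": ["a", "d"],
    "d": ["a", "c"],
}

def friendship_index(graph, node):
    """Ratio of the average degree of a node's neighbours to its own degree."""
    degree = len(graph[node])
    avg_neighbour_degree = mean(len(graph[v]) for v in graph[node])
    return avg_neighbour_degree / degree

for v in sorted(adj):
    print(v, round(friendship_index(adj, v), 2))
```

On this toy graph the hub "a" gets an index below 1 while the leaf "b" gets an index above 1, which is the friendship paradox in miniature: low-degree nodes tend to have neighbours more popular than themselves.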

11:30
A Random Growth Model with any Real or Theoretical Degree Distribution

ABSTRACT. The degree distributions of complex networks are usually considered to be power laws. However, this is not the case for a large number of them. We thus propose a new model able to build random growing networks with (almost) any desired degree distribution. The degree distribution can either be theoretical or extracted from a real-world network. The main idea is to invert the recurrence equation commonly used to compute the degree distribution in order to find a convenient attachment function for node connections, which is commonly chosen to be linear. We compute this attachment function for some classical distributions, such as the power-law, broken power-law, geometric, and Poisson distributions. We also use the model on an undirected version of the Twitter network, for which the degree distribution has an unusual shape.

11:45
Fast multipole networks

ABSTRACT. Two prerequisites for robotic multiagent systems are mobility and communication. Fast multipole networks (FMNs) enable both ends within a unified framework. FMNs can be organized very efficiently in a distributed way from local information and are ideally suited for motion planning using artificial potentials. We compare FMNs to conventional communication topologies, and find that FMNs offer competitive communication performance (including higher network efficiency per edge at marginal energy cost) in addition to advantages for mobility.

12:00
Nondiagonal Mixture of Dirichlet Network Distributions for Analyzing a Stock Ownership Network

ABSTRACT. Block modeling is widely used in studies on complex networks. The cornerstone model is the stochastic block model (SBM), which has been applied extensively over the past decades. However, the SBM is limited in analyzing complex networks as the model is, in essence, a random graph model that cannot reproduce basic properties of many complex networks, such as sparsity and heavy-tailed degree distributions. In this paper, we provide an edge exchangeable block model that incorporates such basic features and simultaneously infers the latent block structure of a given complex network. Our model is a Bayesian nonparametric model that flexibly estimates the number of blocks and takes into account the possibility of unseen nodes. Using one synthetic dataset and one real-world stock ownership dataset, we show that our model outperforms state-of-the-art SBMs in held-out link prediction tasks.

11:00-12:15 Session Oral O1C: Biological Networks

Oral session

11:00
A Methodology for Evaluating the Extensibility of Boolean Networks' Structure and Function

ABSTRACT. Formal interaction networks are well suited for representing complex biological systems and have been used to model signalling pathways, gene regulatory networks, interactions within ecosystems, etc. In this paper, we introduce Sign Boolean Networks (SBNs), which are a uniform variant of Threshold Boolean Networks (TBNs). We continue the study of the complexity of SBNs and build a new framework for evaluating their ability to extend, i.e. the potential to gain new functions by the addition of nodes, while also maintaining the original functions. We describe our software implementation of this framework and show some first results. These results seem to confirm the conjecture that networks of moderate complexity are the most able to grow, because they are not too simple, but also not too constrained, like the highly complex ones.

11:15
Deep Reinforcement Learning for Control of Probabilistic Boolean Networks

ABSTRACT. Probabilistic Boolean Networks (PBNs) were introduced as a computational model for the study of complex dynamical systems, such as Gene Regulatory Networks (GRNs). Controllability in this context is the process of making strategic interventions to the state of a network in order to drive it towards some other state that exhibits favourable biological properties. In this paper we study the ability of a Double Deep Q-Network with Prioritized Experience Replay to learn control strategies, within a finite number of time steps, that drive a PBN towards a target state, typically an attractor. The control method is model-free and does not require knowledge of the network's underlying dynamics, making it suitable for applications where inference of such dynamics is intractable. We present extensive experimental results on two synthetic PBNs and the PBN model constructed directly from gene-expression data of a study on metastatic melanoma.

11:30
Viral Infection From a Complex Networks Perspective

ABSTRACT. Complex network theory has been extensively used to analyze a wide variety of real-world systems, including biological organisms. An example of special relevance is the study of the interaction between a virus and its cellular host during infection. Host-virus protein interaction networks have been very useful for detecting molecular pathways exploited by the virus and potential antiviral targets. Also, the dynamic transcriptional response of the infected cell has been extensively studied with host gene co-expression networks. However, the structure and dynamics of host-virus networks have so far been poorly characterized, and to our knowledge there has not been any previous attempt to construct and study in detail a host-virus protein co-expression network. In this work, the infection of a human cell by Epstein-Barr virus is described, focusing on two different network models: protein interaction networks and protein co-expression networks. The structural and dynamical features of both networks were compared, and our findings confirm that viral proteins are preferentially attached to high-centrality host nodes in host-virus protein interaction networks, while low-centrality host nodes are unexpectedly targeted in the host-virus co-expression network. This may suggest a different connecting strategy for viral nodes in host-virus co-expression networks, opening future research on their biological significance.

11:45
Statistics of Growing Chemical Network Originating from One Molecule Species and Activated by Low-Temperature Plasma

ABSTRACT. Chemistry in plasma is complicated, with many reactions in parallel and in series, and a complex network can be suitable for visualization and analysis of its complexity. A numerical calculation based on hundreds of rate equations is a typical tool for this, but such a computational process does not clarify the underlying physical and chemical properties that make many industrial plasma processes stable across a number of applications. In this study, focusing on low-temperature plasma in which high-energy electrons are activators for chemical reactions, we investigate the origin of this stability by examining statistical properties of networks in the case of silane (SiH4) plasma. The seed species in the reaction space is only one, SiH4, which is surrounded by high-energy electrons. SiH4 is decomposed into several fragments composed of Si and/or H atoms with possible charges, and such radical and ion species are decomposed or synthesized into other species, leading to the formation of temporal reaction networks in chemistry. With the effects of the rate constants that determine chemical reaction rates, we create temporal networks, and preferential attachments inducing new reactions are observed in a transient state. Centrality indices for participant species and degree distributions reveal what is going on in this complex system, and an exponential-tail degree distribution observed during the sequential process is one of the significant sources of its reaction stability.

12:00
Adaptive rewiring evolves brain-like structure in directed networks

ABSTRACT. Activity dependent plasticity is the brain’s mechanism for reshaping neural connections in response to development, learning, and recovery from injury. In previous studies we represented activity by graph diffusion while modeling plasticity by adaptive rewiring. Adaptive rewiring involves adding shortcut connections where diffusion on the graph is intensive while pruning underused ones. In undirected networks, this process robustly steers initially random networks to high levels of structural complexity, reflecting global characteristics of brain anatomy. Here we investigate adaptive rewiring in networks with directed connections. Generalizing the notion of diffusion, we use consensus and advection equations to describe network flow. The resulting networks develop hubs: in the case of consensus, information converges on the hubs, while for advection, information diverges from them. By combining advection and consensus in rewiring the networks, as well as by adding instances of random rewiring, networks evolve with short path lengths and increased numbers of connected nodes, while still developing hub structures. These hallmarks of brain networks thus emerge spontaneously in directed networks.

12:15-13:15 Lunch Break
13:15-14:30 Session Oral O2A: Information Spreading in Social Media

Oral session

13:15
Influence of Retweeting on the Behaviors of Social Networking Service Users

ABSTRACT. Retweeting is a featured mechanism of some social media platforms such as Twitter, Facebook, and Weibo. Users share articles with friends or followers by reposting a tweet. However, the ways in which retweeting affects the dominant behaviors of users is still unclear. Therefore, we investigate the influence of retweeting on the behaviors of social media users from a networked, game theoretic perspective; in other words, we attempt to clarify the ways in which the presence of a retweeting mechanism in social media promotes or diminishes the willingness of users toward posting articles and commenting. We propose a retweet reward game model that has been derived by adding a retweeting mechanism to a reward game, which is a simple social networking service model. Subsequently, we conduct some simulation-based experiments to understand the effects of retweeting on the behaviors of users. We observe that users are motivated to post new articles if there is a retweeting mechanism. Furthermore, agents in dense networks are motivated to comment on the articles posted by others because articles spread widely among users, and thus, users can be incentivized to post articles.

13:30
Impact of individual actions on the collective response of social systems

ABSTRACT. In a social system, individual actions have an impact on the system and can trigger a collective reaction. The way and extent to which the activity (number of actions) of an individual causes or is connected to the response (number of reactions) of the system is still an open question. In this work we propose three domain-independent mathematical models that provide a description of how a social system reacts to the actions of its components. With them, we are able to reproduce empirical activity-response data from Twitter, the scientific citations network, and Wikipedia. They can also serve as baselines or null models for more elaborate and domain-specific approaches.

13:45
Sockpuppet Detection: a Telegram case study

ABSTRACT. In Online Social Networks (OSNs) there are numerous cases in which users create multiple accounts that publicly seem to belong to different people but are actually fake identities of the same person. These fictitious characters can be exploited to carry out abusive behaviors such as manipulating opinions, spreading fake news, and disturbing other users. In the literature this problem is known as the "sockpuppet problem". In our work we focus on Telegram, a widespread instant messaging application, often known for its exploitation by members of organized crime and terrorism, and more generally for its high presence of users who exhibit offensive behaviors.

14:00
Structural Node Embedding in Signed Social Networks: Finding Online Misbehavior at Multiple Scales

ABSTRACT. Entities in networks may interact positively as well as negatively with each other, which may be modeled by a signed network containing both positive and negative edges between nodes. Understanding how entities behave, and not just with whom they interact positively or negatively, leads us to the new problem of structural role mining in signed networks. We solve this problem by developing structural node embedding methods that build on sociological theory and technical advances developed specifically for signed networks. With our methods, we can not only perform node-level role analysis, but also solve another new problem of characterizing entire signed networks to make network-level predictions. We motivate our work with an application to social media analysis, where we show that our methods are more insightful and effective at detecting user-level and session-level malicious online behavior from the network structure than previous approaches based on feature engineering.

14:15
Learning through the Grapevine: the Impact of Message Mutation, Transmission Failure, and Deliberate Bias

ABSTRACT. We examine how well people learn when information is noisily relayed from person to person; and we study how communication platforms can improve learning without censoring or fact-checking messages. We analyze learning as a function of social network depth (how many times information is relayed) and breadth (the number of relay chains accessed). Noise builds up as depth increases, so learning requires greater breadth. In the presence of mutations (deliberate or random) and transmission failures of messages, we characterize sharp thresholds for breadths above which receivers learn fully and below which they learn nothing. When there is uncertainty about mutation rates, optimizing learning requires either capping depth, or if that is not possible, limiting breadth by capping the number of people to whom someone can forward a message. Limiting breadth cuts the number of messages received but also decreases the fraction originating further from the receiver, and so can increase the signal to noise ratio. Finally, we extend our model to study learning from message survival, if people are more likely to pass messages with one conclusion than another. We find that as depth grows, all learning comes from either the total number of messages received or from the content of received messages, but the learner does not need to pay attention to both.

13:15-14:30 Session Oral O2B: Network Analysis

Oral session

13:15
Towards understanding complex interactions of Normalized Difference Vegetation Index and precipitation rain gauges networks of cereal growth system

ABSTRACT. Earth observation (EO) is nowadays a powerful tool to evaluate vegetation systems such as crops in order to reach the Sustainable Development Goals (SDGs) of the 2030 Agenda. The Normalized Difference Vegetation Index (NDVI) is a popular and widespread index in remote sensing for evaluating vegetation dynamics. However, analytical advances in the analysis of long-term NDVI series are moving towards understanding the complex relations of the atmosphere-plant-soil system through temporal and scaling behaviour. Hence, this research presents the generalized structure function (GSF) and the Hurst exponent as innovative analytical methods to explore NDVI and precipitation series for cereals in semi-arid regions. Results suggest that weather supports the anti-persistent structure of NDVI time series, since the weather regime in semi-arid regions is essential to understanding the complex processes of crop growth. A mathematical description of NDVI series coupled with the GSF and Hurst exponent can reinforce future crop modelling purposes.

13:30
Uncovering the image structure of Japanese TV commercials by a co-occurrence network representation

ABSTRACT. The effect of TV commercials on the purchase intention of viewers has been studied extensively. The literature suggests that some images in TV commercials positively affect viewers' purchase intention. However, the whole picture of images in TV commercials has not been revealed sufficiently. We studied the image structure of TV commercials in Japan by constructing a weighted co-occurrence network of keywords in TV commercials. We found the cores of the image structure that frequently co-occur with other keywords in TV commercials across various categories of products. We further performed community detection: a community can be regarded as a set of keywords that are particularly associated with each other through frequent co-occurrence in TV commercials. The core keywords belonged to different communities. We discuss the character of each community in the paper.

13:45
Unraveling the paradox of weak links in structural brain connectivity

ABSTRACT. Over the last three decades the study of structural brain connectivity has revealed several features which are found consistently across species: structural brain connectomes are modular and small-world, with cross-modular pathways centralised through a set of densely interconnected hubs, which form a rich-club. These conclusions have mainly been reached by considering the unweighted - binarised - connectivity matrices derived from tract-tracing or tractography. In practice, the connectivity maps derived from both tract-tracing and tractography assign individual weights to the identified connections. These link weights are largely heterogeneous, with values spanning orders of magnitude from the strongest to the weakest, and they decay rapidly with the length of the fiber.

Taken together, the network properties reported to govern the structural organization of the connectome and the prominent decay of link weights with fiber length appear paradoxical. On the one hand, graph analysis - of binary matrices - indicates that brain connectivity obeys the small-world property. For that, the long-distance fibers need to act as shortcuts so that information can travel quickly across distant regions of the brain. On the other hand, the rapid decay of link weights with fiber length implies that longer fibers are orders of magnitude weaker than shorter ones, thus rendering the shortcuts irrelevant in the eyes of dynamical processes propagating along the brain.

Here, we aim to resolve this paradox. To do so, we study whole-brain effective connectivity derived from brain activity at rest via functional MRI in healthy participants. We compare the properties of the structural, functional, and effective connectivity matrices and find that the sharp decay of structural link weight with the (Euclidean) distance between brain regions is not reproduced in functional or effective connectivity. Indeed, the strength of functional or effective connectivity turns out to be independent of the distance between brain regions. Further, we perform a comparative network analysis of the anatomical and effective connectivities using dynamic communicability, a recent method for studying complex networks which naturally incorporates the coupling strength of the connections into the analysis. We find that communication in weighted SC is inefficient compared to EC due to the loss of shortcuts in SC.

Our results show that interpreting the link weights returned by tractography as the coupling strength of those connections is misleading. In the absence of empirical techniques that could reliably quantify the strength of interactions between ROIs, for now, we advocate the use of effective connectivity as the source for both network analysis and constraining whole-brain models.

14:00
The Metric Backbone in the Human Connectome

ABSTRACT. Network backbones have been widely used to study the core structure and dynamics of different complex systems, including brain networks. Another key component of network studies is the concept of shortest path, which plays an important role in the optimization of communication. Here we present a preliminary study of the metric backbone – the invariant sub-graph under distance closure that contains all edges that contribute to any shortest path – in human brain structural connectivity of two different cohorts: the Human Connectome Project (HCP) and the Nathan Kline Institute study (NKI). The HCP is a high-quality dataset of healthy young adults and the NKI provides a community sample with a wide age range, which enables us to track data trends across the lifespan. Ours is the first study of the metric backbone of human connectome networks. Our preliminary results show that it comprises a surprisingly small subgraph of the original networks, that its size decreases in the earlier decades of human life, and that it accounts for a significantly outsized proportion of brain connection cost.
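The backbone membership test follows directly from the definition above: an edge survives the distance closure precisely when its direct weight already equals the shortest-path distance between its endpoints, so no indirect path undercuts it. A minimal sketch in Python on a toy weighted graph (illustrative data, not a connectome):

```python
from itertools import product

# Toy weighted undirected graph (illustrative, not connectome data).
INF = float("inf")
nodes = ["A", "B", "C", "D"]
weights = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5, ("C", "D"): 2}

# Distance matrix initialised from direct edge weights.
dist = {(u, v): 0 if u == v else INF for u, v in product(nodes, nodes)}
for (u, v), w in weights.items():
    dist[u, v] = dist[v, u] = w

# Floyd-Warshall all-pairs shortest paths (the distance closure).
for k, i, j in product(nodes, nodes, nodes):
    dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])

# An edge is in the metric backbone iff no indirect path is shorter.
backbone = [e for e, w in weights.items() if dist[e] == w]
print(backbone)
```

Here the direct edge ("A", "C") of weight 5 drops out because the two-hop path through B has length 2, while the remaining edges all lie on shortest paths and form the backbone.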

14:15
Connectivity-based Spectral Sampling for Big Complex Network Visualization

ABSTRACT. Graph sampling methods have been used to reduce the size and complexity of big complex networks for graph mining and visualization. However, existing graph sampling methods often fail to preserve the connectivity and important structures of the original graph.

This paper introduces a new divide and conquer approach to spectral graph sampling based on the graph connectivity (i.e., decomposition of a connected graph into biconnected components). Specifically, we present two methods, spectral vertex sampling and spectral edge sampling by computing effective resistance values of vertices and edges for each connected component. Experimental results demonstrate that our new connectivity-based spectral sampling approach is significantly faster than previous methods, while preserving the same sampling quality.

13:15-14:30 Session Oral O2C: Networks in Finance & Economics

Oral session

13:15
A network analysis of personnel exchange and companies’ relevant sector: the LinkedIn case study

ABSTRACT. In today's fast-moving world, people tend to change workplace very often, carrying along with them the old company's know-how and, possibly, some strategic information. Analysing how companies exchange personnel is interesting in both social and economic terms and, moreover, it could be a useful tool for every company that does not want to lose its best employees. In fact, over the last few years, one of the major problems managers have had to deal with is employee retention [3, 7]. To gain insight into this problem we built a professional network using data retrieved from the LinkedIn social network and analysed how employees from different sectors moved between companies. The final goal of the study was to understand whether different sectors present distinctive characteristics in terms of turnover, with a follow-up focus on those companies exchanging a significant number of employees. A further objective was to investigate whether it is possible to infer a company's prevalent sector from employee-exchange information.

13:30
Group Centrality Indices in the International Trade Networks

ABSTRACT. The classic centrality indices [1] account for individual relations between nodes in a network but disregard an equally important aspect of node influence: the group influence of several nodes acting together. The notion of group centrality of nodes can be consistently incorporated into a model evaluating group influence in networks along with the parameters of the nodes themselves. For example, in a banking network the values on arcs can be the volumes of credit flows among banks, and the parameters of the nodes can reflect the assets of the banks themselves. So, if several creditors of a bank act simultaneously and the total value of the credit exceeds some critical value, which, in general, depends on the total credit activity of the bank or the bank's assets, then the stability of the financial institution might differ drastically from the case in which the actors act asynchronously. In the international trade network, the situation in which the import to a country is reduced from one exporting country alone differs from the situation in which the import from several exporting countries is reduced. The effect induced on the importing country also depends on how big a share of the total import the set of countries constitutes, whether that amount exceeds some critical level, and whether there are countries in key positions in terms of the amount of trade. This kind of group influence in networks can be modelled by the proposed group centrality indices, such as the Bundle and Pivotal indices. The indices are illustrated by evaluating the centrality of countries in the international trade network for the year 2015 using the WITS Comtrade data [4,5].

The Bundle and Pivotal indices were proposed in a general form in [3]. Let us define them. Suppose a country in the international trade network imports goods from several other countries, and fix some critical level, say 10% of the country's total import, called the quota. If the total import from some subset of the exporting countries exceeds the quota, then that subset is called critical. The proposed Bundle index counts the number of critical sets for each country and then normalizes it over all countries. Note that when certain countries leave a critical set, the set may cease to be critical; such countries are called pivotal. The proposed Pivotal index counts the number of pivotal countries over all critical sets and then normalizes the values among the countries. For both indices, a restriction on the size of the coalition of exporting countries can be imposed; here, only coalitions of at most 5 exporting countries are considered.
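As an illustration of these definitions, the following sketch counts critical coalitions and pivotal members for a single importing country. The function name, the brute-force subset enumeration, and the omission of the final cross-country normalization step are our simplifications, not the authors' implementation.

```python
from itertools import combinations

def bundle_and_pivotal(import_shares, quota=0.10, max_size=5):
    """Count critical exporter coalitions (Bundle) and pivotal members
    (Pivotal) for one importing country.

    import_shares: dict mapping exporter -> share of the country's total
    import. A coalition is critical if its combined share exceeds the
    quota; a member is pivotal if the coalition drops below the quota
    when that member is removed.
    """
    exporters = list(import_shares)
    n_critical, n_pivotal = 0, 0
    for size in range(1, max_size + 1):
        for coalition in combinations(exporters, size):
            total = sum(import_shares[c] for c in coalition)
            if total > quota:
                n_critical += 1
                # pivotal: without this member the coalition is no
                # longer critical
                n_pivotal += sum(
                    1 for c in coalition if total - import_shares[c] <= quota
                )
    return n_critical, n_pivotal
```

For example, with one dominant exporter holding a 40% share and two minor ones, every critical coalition contains the dominant exporter, and only that exporter is ever pivotal.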

For the international trade network, the Bundle and Pivotal indices are evaluated using data on total imports to the countries of the world for 2015. The quota is set to 10% of a country's total import, and the maximum coalition size is set to 5. The Copeland in-degree index is also calculated. The results show that the most influential countries are France, Spain, South Africa, Poland, the Netherlands, Thailand, Germany, Korea, the United Kingdom, and the United States. According to the Bundle index, in that year the influence of France was around 2.1%, that of Spain about 1.9%, and within 1.7-1.8% for each of the remaining top ten countries. According to the Pivotal index, the most influential countries are France, Thailand, Slovakia, Korea, Poland, Singapore, Mexico, Czechia, New Zealand, and Canada, each with around 2% of Pivotal-index influence. By comparison, under the classic Copeland in-degree index the top 10 comprises the United States, China, Germany, the United Kingdom, France, Hong Kong, Japan, the Netherlands, Italy, and Korea; in contrast to the group influence indices, the USA holds around 14%, China 9%, Germany 7%, and the United Kingdom, France, and Hong Kong about 4% each. The difference between the group centrality indices and classic in-degree centrality stems both from the fact that the group indices use the share of a country's total import, whereas the in-degree index takes imports in absolute terms, and from the fact that the new group centrality indices reveal the structure of imports among the exporting countries, which the in-degree index does not take into account.

References
1. Newman M.E.J. The Structure and Function of Complex Networks. SIAM Review, 45(2), 2003, pp. 167-256.
2. Aleskerov F., Meshcheryakova N., Shvydun S. Power in Network Structures. In: Kalyagin V., Nikolaev A., Pardalos P., Prokopyev O. (eds) Models, Algorithms, and Technologies for Network Analysis. NET 2016. Springer Proceedings in Mathematics & Statistics, vol. 197. Springer, Cham, 2017.
3. Aleskerov F., Yakuba V. Matrix-vector approach to construct generalized centrality indices in networks. WP7/2020/01. Moscow: Higher School of Economics Publ. House, 2020, 21 p. SSRN: https://ssrn.com/abstract=3597948
4. UN Comtrade, International Trade Statistics database, https://comtrade.un.org/
5. WITS, World Integrated Trade Solution database, https://wits.worldbank.org/

13:45
Wealth distribution for agents with spending propensity, interacting over a network

ABSTRACT. The distribution of wealth in an economic system is studied by means of an agent model in which agents have a certain spending propensity and can only interact over a given network. When the network is random, or scale-free with exponent below approximately 1, the results are equivalent to allowing every agent to interact with any other agent. Larger exponents, however, affect both the wealth distribution and its behavior at the tail. These results hold both in the absence of a spending propensity and when the spending propensity follows a power law. Although these are preliminary results, and a larger diversity of network topologies should be investigated, they nevertheless suggest that Pareto's law is a very robust phenomenon with respect to the details of agent connectivity, and that the ubiquity of Pareto's law in actual systems may have implications for the topological properties of the underlying networks of interaction.
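The abstract does not specify the exchange rule, so the following sketch assumes the standard kinetic wealth-exchange dynamics with a per-agent propensity: in each step, two linked agents pool a propensity-determined fraction of their wealth and split the pot randomly. The function name and parameterization are illustrative.

```python
import random

def exchange_step(wealth, edges, spending, rng=random.Random(0)):
    """One kinetic wealth-exchange step restricted to a network.

    wealth: list of agent wealths (modified in place).
    edges: list of (i, j) pairs over which agents may trade.
    spending[i] in (0, 1]: fraction of wealth agent i puts on the table.
    Total wealth is conserved exactly in each exchange.
    """
    i, j = rng.choice(edges)
    pot = spending[i] * wealth[i] + spending[j] * wealth[j]
    eps = rng.random()  # random split of the pooled amount
    wealth[i] += eps * pot - spending[i] * wealth[i]
    wealth[j] += (1 - eps) * pot - spending[j] * wealth[j]
    return wealth
```

Restricting `edges` to a scale-free topology (rather than all pairs) is what lets one probe the dependence of the stationary wealth distribution on the degree exponent.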

14:00
The hidden cost of interdependencies: Collapse of complex economic systems and network structure

ABSTRACT. Economic interdependencies have become increasingly present in globalized production, financial and trade systems. While establishing interdependencies among economic agents is crucial for the production of complex products, it may also increase systemic risk due to failure propagation. It is therefore crucial to identify how network connectivity impacts both the emergent production and the risk of collapse of economic systems. In this paper we propose a model to study the effects of network structure on the behavior of economic systems by varying the density and centralization of connections among agents. The complexity of production increases with connectivity, given the combinatorial explosion of parts and products; emergent systemic risks arise when interconnections increase vulnerabilities. Our results suggest a universal description of economic collapse, reflected in the emergence of tipping points and phase transitions in the relationship between network structure and the risk of individual failure. This relationship appears to follow a sigmoidal form in the case of increasingly dense or centralized networks. The model sheds new light on the relevance of policies for the growth of economic complexity, and highlights the trade-off between increasing the potential production of the system and its robustness to collapse. We discuss the policy implications of intervening in the organization of interconnections and system features, and stress how different network structures and node characteristics suggest different directions for promoting complex and robust economic systems.

14:15
Shock propagation channels behind the global economic contagion network

ABSTRACT. In this study we further explore the relationship between trade connections and the contagion of economic shocks across countries. In contrast to previous studies, we use a causality approach to identify synchronization, allowing us to infer how trade affects the spread of economic shocks between countries. The results show a significant positive relationship between trade and shock contagion: trade channels thus seem to be conducive to spreading business cycle fluctuations across countries in general. The difference in level of development also has an effect on shock contagion: shocks are less likely to spread from more developed towards less developed countries.

14:30-14:45 Coffee Break
14:45-15:25 Session Speaker S1: Nataša Pržulj - Untangling biological complexity: From omics network data to new biomedical knowledge and Data-Integrated Medicine

Keynote speaker

14:45
Untangling biological complexity: From omics network data to new biomedical knowledge and Data-Integrated Medicine

ABSTRACT. We are faced with a flood of molecular and clinical data. We are measuring interactions between various bio-molecules in a cell that form large, complex systems. Patient omics datasets are also increasingly becoming available. These systems-level network data provide heterogeneous, but complementary information about cells, tissues and diseases. The challenge is how to mine them collectively to answer fundamental biological and medical questions. This is nontrivial, because of computational intractability of many underlying problems on networks (also called graphs), necessitating the development of approximate algorithms (heuristic methods) for finding approximate solutions. We develop methods for extracting new biomedical knowledge from the wiring patterns of systems-level, heterogeneous biomedical networks. Our methods uncover the patterns in molecular networks and in the multi-scale network organization indicative of biological function, translating the information hidden in the network topology into domain-specific knowledge. We also introduce a versatile data fusion (integration) framework to address key challenges in precision medicine from biomedical network data: better stratification of patients, prediction of driver genes in cancer, and re-purposing of approved drugs to particular patients and patient groups, including Covid-19 patients. Our new methods stem from novel network science algorithms coupled with graph-regularized non-negative matrix tri-factorization, a machine learning technique for dimensionality reduction and co-clustering of heterogeneous datasets. We utilize our new framework to develop methodologies for performing other related tasks, including disease re-classification from modern, heterogeneous molecular level data, inferring new Gene Ontology relationships, aligning multiple molecular networks, and uncovering new cancer mechanisms.
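At the core of the data fusion framework mentioned above is non-negative matrix tri-factorization. The following is a minimal, unregularized sketch of plain NMTF with multiplicative updates; the speaker's actual framework adds graph regularization and co-clustering constraints, and the update rules here are the standard textbook ones, not taken from her work.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, seed=0):
    """Approximate a non-negative matrix X as F @ S @ G.T with
    non-negative factors, via multiplicative updates.

    F (n x k1) and G (m x k2) act as soft cluster indicators for rows
    and columns; S (k1 x k2) links the row and column clusters.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k1)) + 1e-3
    S = rng.random((k1, k2)) + 1e-3
    G = rng.random((m, k2)) + 1e-3
    eps = 1e-12  # avoid division by zero
    for _ in range(n_iter):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G
```

Because all updates are multiplicative on non-negative initializations, the factors stay non-negative throughout, which is what makes the co-clusters interpretable.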

15:25-16:25 Session Lightning L1: Network Models & Analysis

Lightning session

15:25
Extraction of overlapping modules in networks via spectral methods and information theory

ABSTRACT. Networks are a common way to model multivariate interactions in a variety of complex systems found in nature and society. The interactions captured by networks can include a multitude of complex phenomena occurring at various levels of observation and intensity, which are difficult to disentangle automatically. For instance, functionally relevant gene modules in a network of gene interactions, or disease-related terminology in networks derived from drug and symptom mentions in social media [1], typically have overlapping clusters of widely varying size and strength. To address these and other similar questions, a variety of community structure algorithms have been proposed in the literature [2,3,4]. However, most modularity algorithms seek an optimal partition of the network where each node must belong to a module. This hard-boundary assumption does not match the fluid phenomena often found in biomedical complexity. Indeed, many genes are involved in different biochemical pathways depending on their expression levels and which other biochemical species are present. Hence, the same gene can participate in distinct modules. In these biomedical problems it is more reasonable to assume that network variables can map to multiple, partially overlapping functional communities. Thus, for biomedical applications of network science there has been much recent interest in spectral, overlapping clustering methods [5]. Here we propose a new spectral method to automatically extract overlapping clusters from networks. It is based on two steps: 1) the Singular Value Decomposition (SVD) of weighted graph adjacency matrices, in a process akin to Principal Component Analysis (PCA) of gene expression data [6], and 2) automatic extraction of overlapping modules using information theory and a polar-coordinate projection of data onto singular vector (or component) subspaces.
In the first step, when the original network is a bipartite graph relating two distinct sets of variables (e.g. genes vs assays in time, or disease codes vs social media user timelines), we compute the SVD of the bipartite adjacency matrix. If the network is a weighted graph over a single set of variables (e.g. genes), we perform PCA of the (covariance-normalized) adjacency matrix (see [6] for the difference between SVD and PCA). Fig. 1A depicts the eigenvector variance spectrum of a Drosophila gene interaction network obtained via PCA, where the first eigenvector (or component) explains 20% of the variance in the gene co-expression data and is associated with a large module involving most genes and their regular expression patterns (e.g. cell division, housekeeping and cell cycle). In functional analysis we are most interested in biochemical processes involving smaller modules which have specific regulatory functions beyond the regular cell operations captured by the first component [7]. Thus, in the second step of the method, we target subsets of lower components. Fig. 1B depicts a biplot of all genes in the network projected as points onto components 2 and 3 of the spectrum. The majority of points are (randomly) projected at the origin of the biplot, showing that they are not correlated with the phenomena captured by either component. Therefore, we want to identify those points (genes) that most protrude and cluster away from the origin, as those are most correlated with the target components. To do so, we transform the Cartesian coordinates of every point to polar coordinates (see Fig. 1D), apply a moving window over the range of radiuses, and compute the Shannon entropy of the distribution of points over angle bins in each radius window. We use overlapping bins (or fuzzy intervals [8]) for both radiuses and angles, meaning that a node at a particular polar coordinate can contribute to more than one angle and radius bin simultaneously (see Fig. 1C & E, respectively).
Computing the Shannon entropy (see red line in Fig. 1D) allows us to track when the distribution of points in polar angle bins transitions from a random to a more structured arrangement. Because points near the origin (radius close to zero) are uncorrelated with the components of interest, the distribution of polar angles there tends to be uniformly random, as seen in Fig. 1B,D. As the radius increases, points tend to cluster near specific angles, leading to lower Shannon entropy of the angle distribution. Thus, the goal of the second step of the algorithm is to identify the radius where important transitions in Shannon entropy occur, especially where the distribution of polar angles moves away from a uniform distribution (see blue lines in Fig. 1B,D). Naturally, several entropy transitions may occur, as some clusters are more correlated with components of interest than others, and thus have a higher radius. In other words, identifying the best clusters becomes a multi-objective optimization problem. Several measures can be optimized, but we exemplify the method with the rank-sum of radius and entropy, identifying the radiuses that maximize the number of points selected while simultaneously minimizing the entropy value. Once a radius is selected, we retrieve only the points that lie beyond the circle it defines. The distinct clusters are then formed by the circle segments that contain similar polar angles; see the red polygon in Fig. 1B, with radius ≥ 4, selected by rank-sum. In our example, the red module corresponds to genes involved in protein regulation via the proteasome complex, as characterized via gene ontology enrichment analysis (GOEA) [9]. Finally, it is important to stress that the clusters thus identified contain genes that may overlap with clusters found in other component subspaces. In other words, the same network nodes can contribute to overlapping modules associated with distinct phenomena.
In the talk, we will discuss variations of the entropy and multi-objective optimization measures, and apply the method to dense and noisy network data from four examples: (i) synthetic networks; (ii) a gene interaction network from transcriptomic data (RNAseq) from Drosophila intestinal cells (Fig. 1); (iii) a knowledge network of drug and symptom terms extracted from social media user timelines [1]; and (iv) a workspace social interaction network collected using radio-frequency identification (RFID) [10]. Our intuitive method to extract overlapping network clusters will be useful to a wide range of network practitioners from the biomedical to the social sciences.
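The core of step 2 above, the entropy-vs-radius profile, can be sketched as follows. This is a simplification: it uses plain (non-fuzzy) angle bins and a cumulative radius cutoff rather than the overlapping windows described in the abstract, and all bin counts are illustrative.

```python
import numpy as np

def angle_entropy_profile(points, n_angle_bins=36, n_radius_windows=20):
    """Shannon entropy (bits) of the polar-angle distribution of points
    lying beyond each radius cutoff, for a 2-D component projection.

    points: (n, 2) array of coordinates on two singular-vector
    components. Low entropy at large radius signals points clustering
    at specific angles, i.e. a candidate overlapping module.
    """
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)                 # radius of each point
    theta = np.arctan2(y, x)           # polar angle of each point
    edges = np.linspace(0, r.max(), n_radius_windows + 1)
    profile = []
    for lo in edges[:-1]:
        mask = r >= lo                 # points beyond this radius
        if mask.sum() < 2:
            break
        hist, _ = np.histogram(theta[mask], bins=n_angle_bins,
                               range=(-np.pi, np.pi))
        p = hist[hist > 0] / hist.sum()
        profile.append((float(lo), float(-(p * np.log2(p)).sum())))
    return profile
```

On data where points near the origin are isotropic and a distant cluster sits at one angle, the profile starts near the entropy of a uniform angle distribution and drops sharply once only the protruding cluster remains.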

15:30
Interest Clustering Coefficient: a New Metric for Directed Networks like Twitter

ABSTRACT. We study the clustering of directed social graphs. The clustering coefficient was introduced to capture the social phenomenon that a friend of a friend tends to be my friend. This metric has been widely studied and has proven of great interest for describing the characteristics of a social graph. However, the clustering coefficient is suited to graphs in which links are undirected, such as friendship links (Facebook) or professional links (LinkedIn); for a graph in which links are directed from a source of information to a consumer of information, it is no longer adequate. We show that former studies have missed much of the information contained in the directed part of such graphs. We thus introduce a new metric to measure the clustering of a directed social graph with interest links, namely the interest clustering coefficient. We compute it (exactly and using sampling methods) on a very large social graph: a Twitter snapshot with 505 million users and 23 billion links. We additionally provide the values of the previously introduced directed and undirected metrics, a first on a snapshot of this size. We find that the interest clustering coefficient is larger than the classic directed clustering coefficients introduced in the literature, which shows the relevance of the metric for capturing the informational aspects of directed graphs.

15:35
Hot-Get-Richer Network Growth Model

ABSTRACT. Under preferential attachment (PA) network growth models, late arrivals are at a disadvantage with regard to their final degrees. Previous extensions of PA have addressed this deficiency either by adding a notion of node fitness to PA, usually drawn from some fitness score distribution, or by using fitness alone to control attachment. Here we introduce a new dynamical approach that addresses late arrivals by adding a recent-degree-change bias to PA, so that nodes with higher relative degree change in temporal proximity to an arriving node get an attachment probability boost. In other words, if PA describes a rich-get-richer mechanism, and fitness-based approaches describe good-get-richer mechanisms, then our model can be characterized as a hot-get-richer mechanism, where hotness is determined by the rate of degree change over some recent past. The proposed model produces high-ranking nodes that arrive much later than under the PA model and, under specific parameter values, produces networks with structure similar to PA networks.
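A minimal sketch of a hot-get-richer growth process is given below. The exact mixing of PA and hotness in the paper is not specified in the abstract, so here attachment weight is simply the degree gained over a recent window plus a smoothing constant; the function name, window size and smoothing are our assumptions.

```python
import random

def grow_hot_network(n_nodes, window=5, m=1, rng=random.Random(42)):
    """Grow a network where attachment probability is biased by recent
    degree change ("hotness") rather than total degree alone.

    Each arriving node attaches m links, choosing targets with weight
    proportional to (degree gained over the last `window` arrivals + 1).
    Returns the edge list and the final degree of each node.
    """
    degree = {0: 1, 1: 1}
    edges = [(0, 1)]
    history = [dict(degree)]  # degree snapshot at each arrival step
    for new in range(2, n_nodes):
        past = history[max(0, len(history) - window)]
        # hotness: recent degree change, +1 so cold nodes stay reachable
        weights = [degree[v] - past.get(v, 0) + 1 for v in degree]
        targets = rng.choices(list(degree), weights=weights, k=m)
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = m
        history.append(dict(degree))
    return edges, degree
```

Because a node's weight decays once its burst of degree growth leaves the window, a late arrival that gets a few quick links can out-compete old high-degree nodes, which is the qualitative behavior the abstract describes.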

15:40
Data compression to choose a proper dynamic network representation

ABSTRACT. Dynamic network data are now available in a wide range of contexts and domains. Several formalisms exist to represent dynamic networks, but there is no well-known method for choosing one representation over another for a given dataset. In this article, we propose a method based on data compression to choose between three of the most important representations: snapshots, link streams and interval graphs. We apply the method to synthetic and real datasets to show its relevance and its possible applications, such as choosing an appropriate representation when confronted with a new dataset, and storing dynamic networks in an efficient manner.

15:45
Graph-based Topic Extraction from Vector Embeddings of Text Documents: Application to a Corpus of News Articles

ABSTRACT. The production of news content is growing at an astonishing rate. To help manage and monitor this sheer amount of text, there is an increasing need for efficient methods that can provide insights into emerging content areas and help stratify an unstructured corpus of text into 'topics' that stem intrinsically from content similarity. We present an unsupervised framework that brings together powerful vector embeddings from natural language processing with tools from multiscale graph partitioning that reveal natural partitions at different resolutions, without making a priori assumptions about the number of clusters in the corpus. We show the advantages of our graph-based clustering through end-to-end comparisons with other popular clustering and topic modelling methods, and we also evaluate several text vector embeddings, from classic Bag-of-Words to Doc2Vec to the recent transformer-based model BERT. To showcase the results of this comparative work, we present an analysis of news coverage in the US during the presidential election year of 2016.

15:50
Interaction of Structure and Information on Tor

ABSTRACT. Tor is the most popular dark network in the world. It provides anonymous communications using unique application-layer protocols and authorization schemes. Noble uses of Tor, including as a platform for censorship circumvention, free speech, and information dissemination, make it an important socio-technical system. Past studies on Tor have investigated either its information or its structure exclusively. However, activities in socio-technical systems, including Tor, are driven by both structure and information together. This work addresses this gap in our understanding of Tor by scrutinizing the interaction between the structural identity of Tor domains and the type of information they host. We conduct a micro-level investigation of the neighborhood structure of Tor domains using struc2vec and classify the extracted structural identities by hierarchical clustering. Our findings reveal that the structural identities of Tor services fall into eight distinct groups. One group consists solely of Dream Market services, whose neighborhood structure is almost fully connected and thus robust against node removal or targeted attack. Domains with other types of services form the remaining clusters based on whether they link to Dream Market or to domains with low/high out-degree centrality. The results indicate that the structural identity created by linking to services with significant out-degree centrality is the dominant structural identity for Tor services.

15:55
Mapping the Global Scientific Landscape of Virology Before the COVID-19 Pandemic: A Large-Scale Document Analysis with Representation Learning and Network Visual Representation

ABSTRACT. The COVID-19 pandemic prompted all sectors of society to recognize the importance of comprehensively establishing medical science and technology innovation systems. This study aims to explore the state of core virological subfields and compare various countries' development status in the era before the coronavirus disease (COVID-19) outbreak. Using a large-scale open academic dataset and recent text analysis tools from artificial intelligence, we examine the explosive growth of "virology" hotspots, identify 27 core research subfields, and compare various countries' key research directions and development trends in virology and its subfield "Coronavirus". Our findings show that (1) virology is a mature discipline with diverse and highly interdisciplinary research topics around different virus types; (2) development courses vary greatly among countries worldwide: the United States has the largest research output, the widest research scope, and the deepest exploration in this field, while China entered the field late but has rapidly increased its number of published papers over the past 10 years; and (3) China has an unbalanced disciplinary layout in virology and low scientific research impact. This data-driven, network visual representation research provides reference values for grasping the state of research in the core virology discipline and for shaping regional scientific management policies.

16:00
Graph Signal Processing on Complex Networks for Structural Health Monitoring

ABSTRACT. In this work, we demonstrate the application of a framework combining Complex Networks and Graph Signal Processing (GSP) to Structural Health Monitoring (SHM). By modeling and analyzing a large bridge equipped with strain and vibration sensors, we show that GSP is capable of selecting the most important sensors, and we investigate different optimization techniques for this selection. Furthermore, GSP enables the detection of mode shapes and, more generally, captures the physical function of the sensors in the network. Our results indicate the efficacy of GSP in handling complex sensor data modeled as complex networks.

16:05
Complex Network Analysis of North American Institutions of Higher Education on Twitter

ABSTRACT. North American institutions of higher education (IHEs), including universities, 4- and 2-year colleges, and trade schools, are heavily present and followed on Twitter. An IHE Twitter account has, on average, 20,000 subscribers. Many of these followers follow more than one IHE, making it possible to construct an IHE network based on the number of co-followers. In this paper, we explore the structure of a network of 1,435 IHEs on Twitter. We discovered significant correlations between network attributes (various centralities and clustering coefficients) and IHEs' attributes, such as enrollment, tuition, and religious/racial/gender affiliations. We uncovered a community structure in the network linked to homophily, such that similar followers follow similar colleges. Additionally, we analyzed the followers' self-descriptions and identified twelve overlapping topics that can be traced to the followers' group identities.

16:10
Are there Racial Disparities in Fatal Police Shootings? Exploration with Uniform Manifold Approximation and Projection

ABSTRACT. I explore the issue of fatal police shootings with a nonlinear, exploratory framework that allows the data to speak as freely as possible. To do so, I use a suite of unsupervised techniques, including manifold learning as well as neural-based embedding and projection, to explore whether non-random, latent structure emerges along a racial dimension in the context of police shootings in America. The main method in this analysis is uniform manifold approximation and projection (UMAP), with additional methods for validation including self-organizing maps and locally linear embedding. Using recent data on fatal police shootings in the United States from 2015 to 2020, I find that there does indeed appear to be separation in the projection space of fatal police shootings among suspects of different races.

16:25-16:40 Coffee Break
16:40-17:20 Session Speaker S2: Leman Akoglu - Graph-Based Anomaly Detection : Problems, Algorithms and Applications

Keynote speaker

16:40
Graph-Based Anomaly Detection : Problems, Algorithms and Applications

ABSTRACT. Graphs provide a powerful abstraction for representing non-iid data, capturing immediate as well as long-range dependencies between entities. The study of the structure and dynamics of real-world graphs has been a central theme of research across various communities. Graph-based anomaly detection focuses broadly on identifying those ‘constructs’ that do not ‘fit’ the expected relational patterns.

This talk involves vignettes from my decade-long research on anomaly detection using graph-based techniques. I will introduce various scenarios in which graphs can be used in a natural way — both to formalize concrete anomaly detection problems, and to develop algorithmic anomaly detection methods. These will be motivated by real-world applications of anomaly detection in the wild; including opinion fraud, accounting anomalies, and host-level intrusion.

17:20-18:35 Session Oral O3A: Diffusion & Epidemics

Oral session

17:20
A Systematic Framework of Modelling Epidemics on Temporal Networks

ABSTRACT. We present a modelling framework for the spreading of epidemics on temporal networks from which both the individual-based and pair-based models can be recovered. The proposed pair-based model that is systematically derived from this framework offers an improvement over existing pair-based models by moving away from edge centric descriptions while keeping the description concise and relatively simple. For the contagion process, we consider the Susceptible-Infected-Recovered (SIR) model, which is realized on a temporal network with time varying edges. We show that the shift in perspective from individual-based to pair-based quantities enables exact modelling of Markovian epidemic processes on temporal tree networks. On arbitrary networks, the proposed pair-based model provides a substantial increase in accuracy at a low computational and conceptual cost compared to the individual-based model. From the pair-based model we analytically find the condition necessary for an epidemic to occur, otherwise known as the epidemic threshold. Due to the fact that the SIR model has only one stable fixed point, which is the global non-infected state, we identify an epidemic by looking at the initial stability of the model. We also propose a new measure for the temporal-cyclicity of a network from which we can deduce whether or not the use of a pair-based model on a given network is justified. This measure is based upon the idea of temporal non-backtracking matrices with a memory parameter.

17:35
Suppressing Epidemic Spreading via Contact Blocking in Temporal Networks

ABSTRACT. In this paper, we aim to effectively suppress the spread of epidemics/information by blocking/removing a given fraction of the contacts in a temporal (time-evolving) network. We consider the SI (Susceptible-Infected) spreading process on a temporal contact network to illustrate our methodology: an infected node infects a susceptible node with a probability $\beta$ when a contact happens between the two nodes. We address the question: which contacts should be blocked in order to minimize the average prevalence over time? We first systematically propose a set of link properties (centrality metrics) based on the aggregated network of a temporal network, which records the number of contacts between each node pair. Furthermore, we define the probability that a contact $c(i,j,t)$ is removed as a function of the centrality of the corresponding link $l(i,j)$ in the aggregated network as well as the time $t$ of the contact. Each of the proposed centrality metrics can thus be regarded as a contact removal strategy. Empirical results on six temporal contact networks show that the epidemic can be better suppressed if contacts between node pairs that have fewer contacts are more likely to be removed, and if contacts that happen earlier are more likely to be removed. A strategy tends to perform better when the average number of contacts removed per node pair has a lower variance. Strategies that lead to a lower largest eigenvalue of the aggregated network after contact removal do not mitigate the spreading better. This contradicts the finding in static networks that a network with a small largest eigenvalue tends to be more robust against epidemic spreading, illustrating the complexity introduced by the underlying temporal network.
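The SI process with contact blocking described above can be sketched as follows. One of the paper's findings is that removing contacts on node pairs with few contacts works well, so this sketch scores contacts by inverse aggregated link weight; the exact removal-probability function and the function name are our assumptions, not the authors'.

```python
import random

def si_with_blocking(contacts, n_nodes, beta, block_frac,
                     rng=random.Random(1)):
    """SI spreading on a temporal contact list with contact blocking.

    contacts: time-ordered list of (i, j, t) contact events.
    A fraction block_frac of contacts is blocked, preferring contacts
    on node pairs with few total contacts (low aggregated weight).
    Returns the final number of infected nodes, averaged over all seeds.
    """
    # aggregated link weight: number of contacts per node pair
    weight = {}
    for i, j, _ in contacts:
        key = (min(i, j), max(i, j))
        weight[key] = weight.get(key, 0) + 1
    # removal score favours contacts on low-weight links
    scores = [1.0 / weight[(min(i, j), max(i, j))] for i, j, _ in contacts]
    n_block = int(block_frac * len(contacts))
    blocked = set(sorted(range(len(contacts)),
                         key=lambda k: -scores[k])[:n_block])
    sizes = []
    for seed in range(n_nodes):
        infected = {seed}
        for k, (i, j, _) in enumerate(contacts):
            if k in blocked:
                continue
            # contact transmits only if exactly one endpoint is infected
            if (i in infected) != (j in infected) and rng.random() < beta:
                infected |= {i, j}
        sizes.append(len(infected))
    return sum(sizes) / n_nodes
```

Swapping the `scores` definition for another link centrality gives the other removal strategies the abstract compares.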

17:50
On the Effect of Space on the Spread of Infections

ABSTRACT. Extended Abstract

18:05
Prediction of the effects of epidemic spreading with graph neural networks

ABSTRACT. Understanding how information propagates in real-life complex networks yields a better understanding of dynamical processes such as misinformation or epidemic spreading. With the recent resurgence of graph neural networks as a powerful predictive methodology, many network properties can be studied in terms of their predictability, offering a novel view of the studied process, with the direct benefit of fast predictions that complement resource-intensive simulations. We investigated whether graph neural networks can be used to predict the effect of an epidemic, should it start from a given individual (patient zero). We reformulate this problem as node regression and demonstrate the high utility of network-based machine learning for a better understanding of spreading effects. By predicting the effect of a given individual being patient zero, the proposed approach offers risk assessment that is potentially orders of magnitude faster, and aids the adopted epidemic spreading analysis techniques.

18:20
On an all-Ireland SIRX Network Model for the Spreading of SARS-CoV-2

ABSTRACT. The Republic of Ireland and Northern Ireland have been severely impacted by the spread of the Severe Acute Respiratory Syndrome CoronaVirus 2 (SARS-CoV-2), commonly known as the coronavirus, which causes the disease COVID-19. While many efforts in both parts of the island have helped to mitigate the impact of the pandemic, policy, regulations and case numbers on each side of the border have direct implications for the other.

We present results on an all-Ireland network modelling approach to simulate the emergence of COVID-19 across the island of Ireland. Our work contributes to the goal of an island with zero community transmissions and careful monitoring of routes of importation. In the model, nodes correspond to locations or communities that are connected by links indicating travel and commuting between different locations. While this proposed modelling framework can be applied at all levels of spatial granularity and to different countries, we consider Ireland as a case study. The network comprises 3440 electoral divisions (EDs) of the Republic of Ireland and 890 superoutput areas (SOAs) for Northern Ireland, which correspond to local administrative units below the NUTS 3 regions. The local dynamics within each node follows a phenomenological SIRX compartmental model including classes of Susceptibles, Infected, Recovered and Quarantined (X), inspired by Science 368, 742 (2020). For better comparison to empirical data, we extend that model with a class of Deaths.

We consider various scenarios, including the 5-phase roadmap for Ireland, where the parameters are chosen to match the current number of reported deaths. After relaxation of the control measures, the number of infected rises again in a second wave, until the pool of susceptibles is sufficiently depleted.

In addition, we investigate the effect of dynamic interventions that aim to keep the number of infected below a given threshold. This is achieved by dynamically adjusting containment measures on a national scale, which could also be implemented at a regional (county) or local (ED/SOA) level. As a simple measure, we define a threshold as the maximum number of infected that can be present before we go back into lock-down and go through earlier phases again. We find that, in principle, dynamic interventions are capable of limiting the impact of future waves of outbreaks, but on the downside such a strategy can last several years until herd immunity is reached.
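The per-node SIRX dynamics extended with a Deaths class can be sketched as a forward-Euler update (a hedged illustration only; the parameter names `kappa`/`kappa0` for the quarantine rates and `mu` for the death rate are assumptions, not the authors' notation):

```python
def sirx_step(state, beta, alpha, kappa, kappa0, mu, dt):
    """One forward-Euler step of an SIRX model extended with deaths (D).

    state = (S, I, R, X, D) as population fractions; beta is the infection
    rate, alpha the recovery rate, kappa/kappa0 the quarantine rates of
    infected/all individuals, mu the death rate of the infected.
    """
    S, I, R, X, D = state
    dS = -beta * S * I - kappa0 * S
    dI = beta * S * I - (alpha + kappa + kappa0 + mu) * I
    dR = alpha * I + kappa0 * S
    dX = (kappa + kappa0) * I
    dD = mu * I
    # the five derivatives sum to zero, so total population is conserved
    return (S + dS * dt, I + dI * dt, R + dR * dt, X + dX * dt, D + dD * dt)
```

In the network model, each ED/SOA node would run such local dynamics, coupled to its neighbours through the travel and commuting links.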

17:20-18:35 Session Oral O3B: Machine Learning & Networks

Oral session

17:20
Evaluating network embedding by community separability

ABSTRACT. This is an extended abstract

17:35
Susceptible-infected-spreading-based network embedding in static and temporal networks

ABSTRACT. Link prediction can be used to extract missing information, identify spurious interactions and forecast network evolution. Network embedding is a methodology to assign coordinates to nodes in a low-dimensional vector space. By embedding nodes into vectors, the link prediction problem can be converted into a similarity comparison task. Nodes with similar embedding vectors are more likely to be connected. Classic network embedding algorithms are random-walk-based. They sample trajectory paths via random walks and generate node pairs from the trajectory paths. The node pair set is further used as the input for a Skip-Gram model, a representative language model that embeds nodes (which are regarded as words) into vectors. In the present study, we propose to replace random walk processes by a spreading process, namely the susceptible-infected (SI) model, to sample paths. Specifically, we propose two susceptible-infected-spreading-based algorithms, i.e., Susceptible-Infected Network Embedding (SINE) on static networks and Temporal Susceptible-Infected Network Embedding (TSINE) on temporal networks. The performance of our algorithms is evaluated by the missing link prediction task in comparison with state-of-the-art static and temporal network embedding algorithms. Results show that SINE and TSINE outperform the baselines across all six empirical datasets. We further find that the performance of SINE is mostly better than that of TSINE, suggesting that temporal information does not necessarily improve the embedding for missing link prediction. Moreover, we study the effect of the sampling size, quantified as the total length of the trajectory paths, on the performance of the embedding algorithms. SINE and TSINE achieve their better performance with a smaller sampling size than the baseline algorithms. Hence, SI-spreading-based embedding tends to be more applicable to large-scale networks.
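Replacing random walks with SI sampling can be sketched as below (a minimal illustration of the idea, not the SINE implementation; the function name and interface are hypothetical):

```python
import random

def si_sample_pairs(adj, beta, n_starts, rng):
    """Sample node pairs with an SI process instead of random walks.

    adj: dict node -> list of neighbours (static, undirected network).
    Each run seeds one node; every successful transmission records the
    (infector, infected) pair, which would then feed a Skip-Gram-style
    embedding model in place of random-walk co-occurrence pairs.
    """
    pairs = []
    nodes = sorted(adj)
    for _ in range(n_starts):
        seed = rng.choice(nodes)
        infected = {seed}
        frontier = [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in infected and rng.random() < beta:
                        infected.add(v)
                        pairs.append((u, v))
                        nxt.append(v)
            frontier = nxt
    return pairs
```

The total number of sampled pairs plays the role of the sampling size discussed in the abstract.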

17:50
Latent Space Modelling of Hypergraph Data

ABSTRACT. The prevalence of relational data describing interactions among a population has motivated a broad literature on statistical network analysis. Whilst it is typical to assume that interactions occur between node pairs, there are many settings where more than two members of the populations interact simultaneously. As an example, consider a coauthorship network where nodes represent authors and interactions describe which authors have collaborated on an article. It is common for several authors to write an article jointly and this data is more appropriately represented as a hypergraph. In this work, we propose a latent space model for hypergraph data in which the hyperedges are modelled as a function of low-dimensional coordinates associated with the nodes. The latent space provides a convenient visualisation of the data, facilitates the exploration of predictive distributions and allows desirable properties to be imposed on the hyperedges. We obtain posterior samples from our model via MCMC and explore its application to real world data.

18:05
Graph Neural Network Models for Node Classification in Multilayer Networks

ABSTRACT. Graph Neural Networks (GNNs) are powerful tools that nowadays reach state-of-the-art performance in a plethora of different tasks such as node classification, link prediction and graph classification. The aim of this work is to extend some of these approaches to the multilayer network model. More specifically, we propose formulations that extend the GCN and the GAT approaches to the multilayer case. Both approaches are tested on four real-world multilayer networks, in the context of a node classification task.

18:20
The node2community prediction problem in complex networks

ABSTRACT. This is an extended abstract

17:20-18:35 Session Oral O3C: Resilience, Synchronization & Control

Oral session

17:20
Quick Sub-optimal Augmentation of Large Scale Multi-Modal Transport Networks

ABSTRACT. With the recent and continuous growth of large metropolises, the development, management and improvement of their urban multi-modal transport networks become a compelling need. Although the creation of a new transport mode often appears as a solution, it is usually impossible to construct a full networked public transport system at once. Therefore, efficient solutions are needed to prioritize the order of construction of the multiple lines or transport modes. Hence, the proposed work develops a simple and quick-to-compute methodology for prioritizing the order of construction of the lines of a newly designed transport mode by maximizing the network performance gain, as described by complex network metrics. In a resilience context, the proposed methodology could also help support a rapid response to disruptions by setting up or reinforcing an adapted emergency transport line (e.g., bus service) over a set of predefined itineraries.

17:35
Distribution of the sizes of blackouts on electrical grids and some theoretical models

ABSTRACT. Empirical data on the sizes of blackouts in real grids, and models of them by computer simulations using the direct current approximation, have found that the resulting blackout sizes are distributed as a power law, and it was suggested that this is because the grids are driven to a self-organized critical state. In contrast, more recent studies found that the distribution of cascades is bimodal, as in a first-order phase transition, resulting in either a very small blackout or a very large one engulfing a sizable fraction of the system. Here we reconcile the two approaches and investigate how the distribution of blackouts changes with model parameters, including the tolerance criteria and the dynamic rules of failure of the overloaded lines during the cascade. In addition, we study the same problem for the Motter and Lai model and find similar results, suggesting that the physical laws of the flow on the network are not as important as the network topology, the overload conditions, and the dynamic rules of failure.

17:50
Dependency Networks and Systemic Risk in Open Source Software Ecosystems

ABSTRACT. Trends in the creation and use of software are increasing the macroscopic complexity of open source software ecosystems. Software is increasingly modular: developers write smaller libraries that carry out specialized functions. These compact libraries interact and depend on one another. The resulting ecosystem of libraries is efficient: a developer with an idea for a new car doesn't have to reinvent the wheel. However, the widespread use of OSS also presents a systemic risk because errors, bugs, and viruses can spread through inter-library dependencies. In this work we examine the network of dependencies between libraries in the Rust programming language ecosystem. We simulate the spread of failures in this system to quantify systemic risk. Using data from Cargo, the Rust package manager, we create the dependency networks between Rust libraries at different points in time. In these networks, a library is connected by a directed edge to the packages it depends on. Using a simulation approach, we show that systemic risk is rising over time in the Rust ecosystem.
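The failure-spreading simulation on a dependency network can be sketched roughly as follows (an illustrative sketch, not the authors' model; the toy crate names are examples only, and a real analysis would run this over many seeds and time snapshots):

```python
def cascade(deps, failed_root):
    """Propagate a failure upward through a dependency network.

    deps: dict library -> set of libraries it depends on. A library fails
    as soon as any library it (transitively) depends on fails; the size of
    the returned set is one measure of the systemic impact of failed_root.
    """
    # invert the dependency map: who depends on each library
    rev = {}
    for lib, ds in deps.items():
        for d in ds:
            rev.setdefault(d, set()).add(lib)
    failed = {failed_root}
    stack = [failed_root]
    while stack:
        d = stack.pop()
        for lib in rev.get(d, ()):
            if lib not in failed:
                failed.add(lib)
                stack.append(lib)
    return failed
```

Averaging the cascade size over all possible roots, for dependency snapshots at different dates, gives a time series of systemic risk like the one described in the abstract.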

18:05
Distributed Algorithm for Link Removal in Directed Networks

ABSTRACT. This paper considers the problem of removing a fraction of links from a strongly connected directed network such that the largest (in modulus) eigenvalue of the adjacency matrix corresponding to the network structure is minimized. Due to the complexity of the problem, an effective and scalable algorithm based on eigenvalue sensitivity analysis has been proposed in the literature to compute a suboptimal solution. However, that algorithm requires knowledge of the global network structure and does not preserve strong connectivity of the resulting network. This paper proposes algorithms that allow a distributed implementation of the previously mentioned algorithm by relying solely on local information about the network topology while guaranteeing strong connectivity of the resulting network. A numerical example is provided to demonstrate the proposed distributed algorithm.
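The eigenvalue-sensitivity idea underlying the centralized baseline can be illustrated as follows (a sketch under the standard first-order approximation that removing link (i, j) decreases the largest eigenvalue by roughly u_i v_j / (u·v), with u and v the left and right dominant eigenvectors; the function names and dense power iteration are illustrative, not the paper's distributed algorithm):

```python
def dominant_pair(edges, n, iters=500):
    """Largest eigenvalue with left/right dominant eigenvectors of the
    adjacency matrix of a directed graph given as a list of edges (i, j)."""
    v = [1.0] * n  # right eigenvector: A v = lam v
    u = [1.0] * n  # left eigenvector:  u A = lam u
    lam = 0.0
    for _ in range(iters):
        nv = [0.0] * n
        nu = [0.0] * n
        for i, j in edges:
            nv[i] += v[j]   # (A v)_i sums v over out-neighbours of i
            nu[j] += u[i]   # (u A)_j sums u over in-neighbours of j
        lam = max(nv) or 1.0
        v = [x / lam for x in nv]
        nrm = max(nu) or 1.0
        u = [x / nrm for x in nu]
    return lam, u, v

def removal_scores(edges, n):
    """First-order sensitivity of lam to deleting edge (i, j)."""
    lam, u, v = dominant_pair(edges, n)
    dot = sum(a * b for a, b in zip(u, v))
    return {(i, j): u[i] * v[j] / dot for i, j in edges}
```

Removing the links with the highest scores (while separately checking that strong connectivity survives) is the greedy step such algorithms repeat.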

18:20
Hierarchical properties and control of ageing-related methylation networks

ABSTRACT. An ancient desire of humanity is to understand, slow, or even halt and reverse ageing. In related studies, it was soon realised that certain biomarkers can rather precisely predict the functional capability of tissues, organs and even patients. One of the well-known biomarkers of ageing is provided by DNA methylation, and a prominent example of methylation-based age estimators is the so-called Horvath’s clock, which is based on 353 CpG dinucleotides and shows high correlation with chronological age across multiple tissue types. We studied the network between these CpG positions, obtained by applying a regularised regression to methylation data, to locate the most important CpGs (and related genes) that may have a large influence on the rest of the system. According to our analysis based on the Global Reaching Centrality, the structure of this network is considerably more hierarchical than what we would expect based on the configuration model. We also studied the control properties of the network using the concept of control centrality, and the results show that top nodes according to the hierarchy also seem to have higher control centrality values.

Besides the analysis of the network structure, we also examined how perturbing the methylation levels would change the estimated biological age (also called the DNAm age). When propagation of the change over the network is taken into account, a unit change in the methylation level of the top CpGs according to the hierarchy tends to have a larger effect on the estimated age than the average. By adjusting the methylation of the most influential single CpG site and following the propagation of methylation level changes, we can reach up to 5.74 years of age reduction, which is significantly larger than the result obtained without taking the network effects into account.
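The hierarchy measure used above, the Global Reaching Centrality, can be computed from reachability alone (a generic sketch of the standard definition, not the authors' pipeline):

```python
def global_reaching_centrality(adj):
    """Global Reaching Centrality of a directed network.

    adj: dict node -> iterable of successors. The local reaching
    centrality C_R(i) is the share of other nodes reachable from i;
    the GRC averages the gap between the maximum C_R and each node's
    C_R, so a perfect out-hierarchy scores 1 and a cycle scores 0.
    """
    nodes = list(adj)
    n = len(nodes)
    cr = {}
    for s in nodes:
        seen = {s}
        stack = [s]
        while stack:              # depth-first reachability from s
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        cr[s] = (len(seen) - 1) / (n - 1)
    cmax = max(cr.values())
    return sum(cmax - c for c in cr.values()) / (n - 1)
```

Comparing this value against an ensemble of configuration-model randomizations is what supports the "more hierarchical than expected" claim in the abstract.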

18:35-18:50Coffee Break
18:50-19:30 Session Poster P1A: Community Structure

Poster Session 

18:50
A Method for Community Detection in Networks with Mixed Scale Features at its Nodes

ABSTRACT. The problem of community detection in a network with features at its nodes takes into account both the graph structure and the node features. The goal is to find relatively dense groups of interconnected entities sharing some features in common. Algorithms based on probabilistic community models require the node features to be categorical. We use a data-driven model combining the least-squares data recovery criteria for both the graph structure and the node features. This allows us to take into account both quantitative and categorical features. After deriving an equivalent complementary criterion to optimize, we apply a greedy-wise algorithm for detecting communities in sequence. We experimentally show that our proposed method is effective on both real-world and synthetic data. In cases where attributes are categorical, we compare our approach with state-of-the-art algorithms, against which it appears competitive.

18:50
Measuring Proximity in Attributed Networks for Community Detection

ABSTRACT. Proximity measures on graphs have a variety of applications in network analysis, including community detection. Previously they have been mainly studied in the context of networks without attributes. If node attributes are taken into account, however, this can provide more insight into the network structure. In this paper, we extend the definition of some well-studied proximity measures to attributed networks. To account for attributes, several attribute similarity measures are used. Finally, the obtained proximity measures are applied to detect the community structure in some real-world networks using the spectral clustering algorithm.

18:50
Core Method for Community Detection

ABSTRACT. Processing networks of interacting objects makes it possible to address topical issues in the modern world: identifying opinion leaders and channels for the dissemination and exchange of information. In this work, the structure of networks of interacting objects and their analysis with the help of weighted graphs based on the interaction of the elements of such networks are considered. First, a methodology for working with a weighted graph is proposed. The authors call it the "core method"; it provides an algorithm for the analyst's actions to identify communication groups, opinion leaders and the dissemination of information in the network. The key concepts of the γ-core of the graph, the interaction coefficients, and the densities of communities and of the core are introduced. The second part of the work presents the main capabilities of the software developed by the authors, which allows the operator to carry out the procedures required by the method, visualize the results and export the obtained data. The third part shows the application of the "core method" on a weighted graph built from Twitter data about the coverage of the Moscow city authorities' activities in the fight against the new coronavirus infection Covid-19. This example shows how opinion leaders on a weighted graph can be identified using the core method and the implemented application.

18:50
Effects of Community Structure in Social Networks on Speed of Information Diffusion

ABSTRACT. Social media users can widely disseminate information, potentially affecting societal trends. Therefore, understanding the factors affecting information diffusion in social media is important for successful viral marketing campaigns. In this paper, we focus on the community structure of social networks among Twitter users and investigate how that structure affects the speed of diffusion by retweets. Extracting communities among sampled Twitter users, we investigate differences in diffusion speed between tweets with many intra-community retweets and those with many inter-community retweets. Consequently, we show that tweets with many intra-community retweets tend to spread slowly. We also use community structures in Twitter social networks to tackle the task of predicting the time interval between a first tweet and its N-th retweet. We show the potential of community structure features in a social network for predicting information diffusion speed.

18:50
Query Oriented Temporal Active Community Search

ABSTRACT. A considerable amount of research has been devoted to the problem of community detection in online social networks (OSNs). More recently, a related but different problem is community search (aka local community), where the main objective is to ascertain the best potential meaningful community that contains the query node(s) and query attributes. We propose a temporal activity biased weight model that gives higher weight to users’ recent activities, and develop a framework of activity driven temporal active communities (ATAC) which, for a given input query Q, searches for a densely-connected community whose members are temporally similar in terms of their activities related to the query attributes.

18:50
Local Community Detection Algorithm with Self-defining Source Nodes

ABSTRACT. Surprising insights into the community structures of complex networks have raised tremendous interest in developing various kinds of community detection algorithms. Considering the growing size of existing networks, local community detection methods have gained attention in contrast to global methods, which impose a top-bottom view of global network information. Current local community detection algorithms are based on the availability of part of the network; however, their performance is influenced by the quality of the source node, and in case of any changes in the network they must be executed again at each change. To address the drawbacks of current local techniques, we propose a local community detection algorithm that benefits from a self-governing source node selection. We define a community influence degree to assure that high degree nodes are central to the community. We then, in an iterative process, adjust the community label of each node by measuring its membership degree based on local neighbourhood information. Experiments on both real and artificial networks show that our algorithm gains more on networks with weak community structures than on networks with strong community structures. We provide experiments demonstrating the self-defining source node ability of our algorithm by implementing various source node selection methods from the literature. Additionally, we analyze the computational complexity of the proposed algorithm, which is lower than O(n) with respect to the network size.

18:50
Investigating Centrality Measures in Social Networks with Community Structure

ABSTRACT. Centrality measures are crucial in quantifying the influence of the members of a social network. Although there has been a great deal of work dealing with this issue, the vast majority of classical centrality measures are agnostic of the community structure characterizing many social networks. Recent works have developed community-aware centrality measures that exploit features of the community structure encountered in most real-world complex networks. In this paper, we investigate the interactions between 5 popular classical centrality measures and 5 community-aware centrality measures using 8 real-world online networks. Correlation as well as similarity measures between both types of centrality measures are computed. Results show that community-aware centrality measures can be divided into two groups. The first group, which includes Bridging centrality, Community Hub-Bridge and Participation Coefficient, provides node information distinct from classical centrality. This behavior is consistent across the networks. The second group, which includes Community-based Mediator and Number of Neighboring Communities, is characterized by more mixed results that vary across networks.

18:50-19:30 Session Poster P1B: Network Models

Poster session

18:50
Edge based stochastic block model statistical inference

ABSTRACT. Community detection in graphs often relies on ad hoc algorithms with no clear specification of the node partition they define as "the best", which leads to uninterpretable communities. Stochastic block models (SBM) offer a framework to rigorously define communities and to detect them using statistical inference methods to distinguish structure from random fluctuations. In this paper, we introduce an alternative definition of the SBM based on edge sampling. We derive from this definition a quality function to statistically infer the node partition used to generate a given graph. We then test it on synthetic graphs and on the Zachary karate club network.
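An edge-sampling SBM generator can be sketched as below (one plausible reading of the definition above, not the authors' exact model; all names are hypothetical):

```python
import random

def sample_edges_sbm(groups, omega, m, rng):
    """Edge-based SBM sketch: draw m edges one at a time.

    Each edge first draws a block pair (r, s) with probability
    omega[(r, s)] (entries summing to 1), then picks endpoints uniformly
    inside blocks r and s.
    groups: dict block label -> list of member nodes.
    """
    pairs, weights = zip(*sorted(omega.items()))
    edges = []
    for _ in range(m):
        r, s = rng.choices(pairs, weights=weights)[0]
        edges.append((rng.choice(groups[r]), rng.choice(groups[s])))
    return edges
```

Inference then inverts this: given the observed edges, one seeks the partition and block-pair probabilities that make the sample most likely.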

18:50
Modeling and Evaluating Hierarchical Network: An Application to Personnel Flow Network

ABSTRACT. This study evaluates (1) the properties of a hierarchical network of personnel flow in a large and multilayered Chinese bureaucracy, in light of selected classical network models, and (2) the robustness of the hierarchical network with regard to the edge weights as strength of “weak ties” that hold different offices together. We compare the observed hierarchical network with the random graph model, the scale-free model, the small-world model, and the hierarchical random graph model. The empirical hierarchical network shows a higher level of local clustering (in both LCC and GCC) and a lower level of fluidity of flow (i.e., high APL) across offices in the network, as compared with the small-world model and the hierarchical random graph model. We also find that the personnel flow network is vulnerable to the removal of “weak ties” that hold together a large number of offices on an occasional rather than regular basis. The personnel flow network tends to dissolve into locally insulated components rather than to maintain an integrated hierarchy.

18:50
A Network Model of Selective Exposure and Audience Behavior Using Community Detection

ABSTRACT. The analysis of real-world audience overlap or co-exposure networks has received substantial scholarly attention in recent years, particularly in the domain of audience fragmentation and selective exposure to information. In this paper, I propose a formal mathematical model for the same by simulating audience behavior in an artificial media environment. I show how a variety of synthetic co-exposure networks can be generated by tuning specific parameters that control various aspects of the media environment and individual behavior. I then use a variety of community detection algorithms to characterize the level of audience fragmentation in these synthetic networks and compare their performances for different combinations of the model parameters. Finally, I validate these findings using a novel empirical dataset of actual large-scale browsing behavior and demonstrate the model's utility in informing future analytical choices.

18:50
Analysis of a Finite Mixture of Truncated Zeta Distributions for Degree Distribution

ABSTRACT. A power-law distribution has been widely used to describe the degree distribution of a network, especially when the range of degrees is large. However, deviations from this behavior appear when the range of degrees is small. Even worse, the conventional employment of the continuous power-law distribution usually causes inaccurate inference, as the degree should be discrete-valued. To remedy these obstacles, we propose a finite mixture model of truncated zeta distributions for a broad range of degrees, which disobeys a power-law nature in the small-degree range while maintaining the scale-free nature of the network. A maximum likelihood algorithm alongside a model selection method is presented to estimate the model parameters and the number of mixture components. We apply our method to scientific collaboration networks with remarkable interpretations.
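The building blocks of such a model, a truncated zeta pmf and its finite mixture, can be sketched as follows (an illustration of the distributions only; the maximum likelihood fitting and model selection steps are not shown, and the names are hypothetical):

```python
def truncated_zeta_pmf(gamma, kmin, kmax):
    """Discrete truncated zeta: P(k) = k^(-gamma) / Z for kmin <= k <= kmax."""
    z = sum(k ** -gamma for k in range(kmin, kmax + 1))
    return {k: k ** -gamma / z for k in range(kmin, kmax + 1)}

def mixture_pmf(weights, components):
    """Finite mixture of pmf dicts; weights must sum to 1."""
    support = set().union(*components)
    return {k: sum(w * c.get(k, 0.0) for w, c in zip(weights, components))
            for k in sorted(support)}
```

Components with different truncation ranges let the mixture bend away from a pure power law at small degrees while keeping the heavy tail.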

18:50
Fair Comparisons Among Network Sampling Strategies

ABSTRACT. In 2003, in a landmark paper, Cohen et al [2] introduced a new random vertex sampling method that gives a higher expected degree than naïve random sampling. Instead of using a randomly selected vertex, the random vertex's neighbors are collected and one of these is selected at random. Based on Feld's 'friendship paradox' [3], the neighbor's degree is presumed to be higher than the original vertex's, and it has been demonstrated that this is in fact the case [2, 4]. It seems to be ignored, however, that the method is computationally expensive compared to naïve random sampling: one is required to sample twice as many vertices in order to accumulate a collection of the same size. The method was originally conceived for a scenario in which there are a limited number of immunizations to administer and the goal is to give them to the highest-degree vertices possible. Under these circumstances, it is of course natural to ignore a constant-factor increase in computational cost. But if the method is generalized to scenarios where computational efficiency cannot be ignored, a fairer comparison should be sought.
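The degree gap between the two sampling schemes can be checked exactly rather than by simulation (a hedged sketch; `expected_degrees` is a hypothetical helper, and it assumes every vertex has at least one neighbour):

```python
def expected_degrees(adj):
    """Expected degree under the two sampling schemes, computed exactly.

    adj: dict vertex -> list of neighbours (undirected graph).
    e_vertex: naive uniform vertex sampling.
    e_neigh: Cohen et al.'s scheme -- pick a uniform vertex, then a
    uniform neighbour of it (two vertex draws per sample).
    """
    n = len(adj)
    e_vertex = sum(len(nb) for nb in adj.values()) / n
    e_neigh = sum(sum(len(adj[w]) for w in nb) / len(nb)
                  for nb in adj.values() if nb) / n
    return e_vertex, e_neigh
```

On a star graph the gap is dramatic, which is exactly the friendship-paradox effect the immunization argument exploits; the fairness question in the abstract is whether that gap survives once the doubled sampling cost is charged.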

18:50
Top-k Connected Overlapping Densest Subgraphs in Dual Networks

ABSTRACT. Networks are largely used for modelling and analysing data and the relations among them. Recently, it has been shown that the use of a single network may not be the optimal choice, since a single network may miss some aspects. Consequently, it has been proposed to use a pair of networks to better model all the aspects, and the main approach is referred to as dual networks (DNs). DNs are two related graphs (one weighted, the other unweighted) that share the same set of vertices and have two different edge sets. In DNs it is often interesting to extract common subgraphs that are maximally dense in the conceptual network and connected in the physical one. The simplest instance of this problem is finding a densest common connected subgraph (DCS), while we here focus on the detection of the Top-k Densest Connected subgraphs, i.e. a set of k subgraphs having the largest density in the conceptual network which are also connected in the physical network. We formalise the problem and then propose a heuristic to find a solution, since the problem is computationally hard. A set of experiments on synthetic and real networks is also presented to support our approach.
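A peeling heuristic for the densest-subgraph part of the problem can be sketched as follows (a Charikar-style sketch on the weighted conceptual network only; the connectivity constraint in the physical network and the Top-k extension are omitted, and all names are hypothetical):

```python
def peel_densest(weights, nodes):
    """Greedy peeling for a dense subgraph of a weighted graph.

    weights: dict frozenset({u, v}) -> edge weight. Repeatedly removes
    the node with the smallest weighted degree inside the current set
    and remembers the densest set (total weight / node count) seen.
    """
    def density(S):
        w = sum(x for e, x in weights.items() if e <= S)
        return w / len(S) if S else 0.0

    S = set(nodes)
    best, best_d = set(S), density(S)
    while len(S) > 1:
        deg = {u: sum(x for e, x in weights.items() if u in e and e <= S)
               for u in S}
        S.remove(min(S, key=deg.get))
        d = density(S)
        if d > best_d:
            best, best_d = set(S), d
    return best, best_d
```

In a DN heuristic, each candidate set would additionally be checked for connectivity in the unweighted physical graph before being accepted.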

18:50
Monotone Properties in Random Networks with Dependent Edges

ABSTRACT. We analyze a random graph model in which the edges are allowed to be dependent, such that any edge is included with probability at least p, regardless of the status of other edges (note that this does not imply independence). We call it the p-robust random graph. Our result is that, for any monotone graph property, the p-robust random graph has at least as high a probability of having the property as an Erdos-Renyi random graph with edge probability p. This is very useful, allowing the application of the many results known for Erdos-Renyi random graphs to a non-independent setting.

18:50
A tail of two distributions: why most real world networks are negative binomial rather than power law.

ABSTRACT. We have produced a generative model which produces scaled negative binomial degree distributions. We have shown that real-world networks tend to fit the proposed model more closely than a power-law model. Finally, we show that the newly proposed generative model better explains other properties of real-world networks than currently used models.

18:50-19:30 Session Poster P1C: Earth Sciences Applications & Urban Systems
18:50
Multifractal analyses on normalized difference vegetation index series on pastures

ABSTRACT. The Normalized Difference Vegetation Index (NDVI) is commonly used to monitor vegetation network dynamics. In this paper, we use multifractal analyses to study NDVI data from two different pasture plots in south-eastern Spain. Multifractal detrended fluctuation analysis, the Mann-Kendall test and the Hurst test are used to explore the multifractal character of the NDVI time series. Results suggest a strong multifractal character influenced by the different land uses found among our areas of study. An antipersistent character is found in all areas, but it seems to be positively linked to the more structured ecosystem provided by pastoral uses, as opposed to the agricultural landscape, where a less antipersistent and less multifractal character is shown. The research is therefore useful for policy makers managing rural areas where land uses such as grazing and agriculture are intertwined.

18:50
Modelling spreading process of a wildfire in heterogeneous orography, fuel distribution and environmental conditions – a multi-scale analysis using complex networks

ABSTRACT. Forest fires are phenomena that represent a great danger to the population and bring severe environmental consequences. Every year, the greatest efforts of direct confrontation with the flames require heavy costs in terms of money and human resources. This is partly due to the unpredictability of fire behaviour, whether in the development of the flames at a local scale or in the spreading process at a wider scale. This unpredictability is sustained by heterogeneous orography and forest fuel distribution. Factors such as wind speed and direction, terrain slope, soil humidity and vegetation type are preponderant in the development of a fire along the forest landscape.

To deal more efficiently with this unpredictability, the main goal of our work is to establish an optimal fire break structure in a region that is recurrently affected by ignitions, which develop into wildfires at a rate that is frequently uncontrollable and along paths that are difficult to monitor.

The function of a fire break structure is to block fire propagation through the natural landscape. Selective thinning of vegetation creates strips of bare land that interrupt the density of vegetation typical of the region. Forests, scrubs, but also grasslands are examples of areas susceptible to burning and, because of that, they should be divided by such a structure as a preventative measure against the occurrence of a fire event. Fire breaks must be efficient in the sense that their maintenance cost must allow their permanence and their success in stopping the propagation of any forest fire.

Firebreaks so extensive that their maintenance is too expensive to sustain easily disappear due to the natural growth of the surrounding vegetation, hence reopening a connection between two susceptible forest areas previously separated by a space devoid of fuel. This preventative approach acts as a complement to direct confrontation forces and contributes to reducing material, economic and human losses.

Establishing an efficient firebreak structure requires modelling fire behaviour over heterogeneous orography, using environmental parameters such as wind direction and speed, fuel type (forests, scrubs, woods, dead fuel matter, etc.) and soil humidity, among others. This task must be addressed at different scales. To that end, we resort to graph theory and complex networks, in particular the multilayer model. Within this model, we aim to construct a network of networks that allows us to simulate fire spreading both at the local scale and at the wider scale of the landscape.

In this model, a network is a graph in which each node represents an area susceptible to burning and each edge represents a connection between adjacent areas. Each node is associated with a polygon (an area defined by a closed line, as illustrated in fig. \ref{cm2020}), which represents a fuel type and, inherently, a fire propagation velocity -- these velocities are tabulated for different fuel models. Parameters such as terrain slope, soil humidity, and wind direction and speed contribute to this spreading rate. Thus, depending on environmental conditions, orography and fuel distribution, each node burns at a different time instant. This time differentiation may allow direct fire-combat forces to prioritize certain areas over others whose risk to population or environment is not as high.
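One way such burn times can be computed is to treat fire spread as a shortest-path problem, where each edge carries the time the fire needs to cross between two adjacent areas. The sketch below is an illustrative assumption, not the authors' implementation: node names and spread times are hypothetical, and the edge weights stand in for the tabulated velocities combined with slope, humidity and wind.

```python
import heapq

def burn_times(graph, ignition):
    """Earliest burn time of each area, via Dijkstra's algorithm.
    `graph` maps node -> list of (neighbour, spread_time) pairs, where
    spread_time would be derived from fuel type, slope and wind."""
    times = {ignition: 0.0}
    pq = [(0.0, ignition)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > times.get(u, float("inf")):
            continue  # stale queue entry
        for v, dt in graph[u]:
            if t + dt < times.get(v, float("inf")):
                times[v] = t + dt
                heapq.heappush(pq, (t + dt, v))
    return times

# hypothetical landscape: four adjacent polygons, spread times in hours
landscape = {
    "A": [("B", 1.0), ("C", 2.5)],
    "B": [("A", 1.0), ("C", 0.5), ("D", 3.0)],
    "C": [("A", 2.5), ("B", 0.5), ("D", 1.0)],
    "D": [("B", 3.0), ("C", 1.0)],
}
reached = burn_times(landscape, "A")
```

Sorting the nodes by their burn time then gives the prioritization order mentioned above: areas reached later leave more time for intervention.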

The method of this study consists of computational simulations that test fire spreading at the national scale. Each simulation starts from an ignition point, and the spreading process develops according to the input parameters. We use several fires that occurred in Portugal in previous years, for which the area and perimeter of the burnt region are known, to calibrate our simulations. Once that calibration is accomplished, we intend to study properties of this network of networks, such as connectivity and robustness, among others. Finally, the goal is to test the establishment of the firebreak structure by sequentially eliminating combinations of edges and evaluating the effect of each elimination on neutralizing the fire-spreading process.

Our computational tools are ArcGIS, a Geographic Information System (GIS) software package used to construct and visualize maps of the areas of interest, and Python 3, the scripting language of ArcGIS, which we use for data processing, simulations and graph construction.

This is one of the subprojects of the CILIFO project (Forest Fires Fight and Investigation Center), supported by the European Interreg fund and acting in three POCTEP euroregions: Alentejo, Algarve and Andalusia.

18:50
Complexity of the vegetation-climate system through data analysis

ABSTRACT. Grasslands in the Iberian Peninsula are valuable and susceptible ecosystems due to their location in arid and semiarid regions. Remote sensing techniques have potential for monitoring them through vegetation indices (VIs). The Modified Soil Adjusted Vegetation Index (MSAVI) is an improved version of classical VIs for arid and semiarid regions. This work analyses the relations among MSAVI, temperature (TMP) and precipitation (PCP) to understand the complexity of the vegetation-climate system. First, several phases of the yearly cycle are defined based on the MSAVI pattern. Second, cross-correlations between the MSAVI and climatic-variable series are computed for each phase at different lags to detect the strongest correlation. Then, recurrence plots (RPs) and recurrence quantification analysis (RQA) are computed to characterize and quantify the underlying non-linear dynamics of the MSAVI series. Our results suggest that, in this case study, five different phases can be defined, in which TMP is the main driving factor. The correlation with TMP changes sign depending on the phase, whereas PCP plays a key role with a positive correlation regardless of the phase. For TMP, the correlations are higher and the lags shorter than for PCP. This reflects the complexity of vegetation-climate dynamics. RPs and RQA proved to be suitable tools to quantify this complexity: we detected high dimensionality and short-term predictability in the MSAVI series, characteristic of ecological systems.
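A lagged cross-correlation of the kind described can be sketched as follows. This is a minimal illustration on synthetic series, not the authors' pipeline: the phase definitions, real MSAVI/TMP/PCP data and the chosen maximum lag are all outside this toy example.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(vi, climate, max_lag):
    """Correlate a vegetation-index series with a climate series shifted
    by 0..max_lag steps; return (lag, r) with the strongest |correlation|.
    Climate at time t-lag is paired with the vegetation index at t."""
    scores = []
    for lag in range(max_lag + 1):
        if lag == 0:
            r = pearson(vi, climate)
        else:
            r = pearson(vi[lag:], climate[:-lag])
        scores.append((lag, r))
    return max(scores, key=lambda lr: abs(lr[1]))

# toy series: 'vegetation' responds to 'temperature' with a 2-step delay
tmp = [math.sin(2 * math.pi * t / 25) for t in range(100)]
vi = tmp[-2:] + tmp[:-2]          # circular shift: vi[t] = tmp[t-2]
lag, r = best_lag(vi, tmp, max_lag=5)
```

Running the same scan per phase, as the abstract does, would reveal phase-dependent signs and lags.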

18:50
Earthquake complex network analysis for the Mw 8.2 earthquake in Iquique, Chile

ABSTRACT. We present an earthquake complex network analysis for the large Mw 8.2 earthquake in the north of Chile (near the city of Iquique). A directed complex network was built using a moving window in order to study the time evolution of the critical exponent γ of the probability distribution function and to recognize patterns associated with the occurrence of the large earthquake. We found a change in the value of γ around the time of the main earthquake; this variation may be due to a change in the stress of the region, but further analysis will be carried out in the future.
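Tracking a power-law exponent over moving windows, as the abstract describes, can be sketched with the standard maximum-likelihood (Hill-type) estimator. This is a generic illustration under stated assumptions: the window size, cutoff `x_min` and synthetic data below are hypothetical, and the authors' network construction and choice of distribution are not reproduced here.

```python
import math
import random

def powerlaw_exponent(samples, x_min):
    """Maximum-likelihood (Hill-type) estimate of the exponent gamma of a
    power-law tail P(x) ~ x**(-gamma) for x >= x_min."""
    tail = [x for x in samples if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def moving_window_exponents(series, window, step, x_min):
    """Estimate gamma in sliding windows to track its time evolution."""
    return [powerlaw_exponent(series[i:i + window], x_min)
            for i in range(0, len(series) - window + 1, step)]

# sanity check on synthetic data drawn from a known power law (gamma = 2.5)
random.seed(0)
draws = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(10000)]
est = powerlaw_exponent(draws, 1.0)
```

A sudden shift of the windowed estimates around a main shock is the kind of pattern the abstract reports.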

18:50
Vulnerability indexes in complex networks as a vulnerability component in disaster science

ABSTRACT. In a scenario of global change, extreme weather events are expected to increase in frequency and intensity and to cause more social and economic impact in several sectors, including critical infrastructures such as transportation systems. The Sendai Framework for Disaster Risk Reduction 2015-2030 is one of the most important documents in disaster science. Among its global targets, one refers precisely to ``Substantially reduce disaster damage to critical infrastructure and disruption of basic services''. For the Disaster Risk Reduction (DRR) scientific community, vulnerability is a key concept \cite{Wisner1994}. The measurement and mapping of vulnerability constitute a subject of global interest. The complex networks approach may offer a valuable perspective on one type of vulnerability especially relevant to DRR on critical infrastructures: topological vulnerability \cite{Santos2019}.

Geographical networks provide a better viewpoint on how physical-world phenomena influence network properties. The complex network approach is apt for their dynamical analysis but, despite the observable interconnection between network vulnerability and complex networks, there remains scope for further study of the intricacies involved. Representing street networks and other public utility networks with line and point features gives an efficient way to study a large number of fundamental properties, including shortest paths, connectivity and efficiency, which in turn support significant conclusions.

In this work, we analyze two vulnerability indexes: the topological vulnerability index and the isolation index (a new one). In both cases, we consider a geographical representation for a street network, under a Disaster Risk Reduction point of view. Therefore, our work is related to two sustainable development goals: climate action, and industry, innovation and infrastructure.

Road transportation systems, such as highways and streets, are networks of routes and locations: hierarchical mesh structures subject to physical spatial constraints, delimited and reserved for exclusive use. Roads, bridges, tunnels and hill roads are highly vulnerable to disruption due to their inherent complexity and interdependent nature. Natural disasters impair physical infrastructure and cause shutdowns and discontinuity in the operation of road networks.

In the complex networks literature, the vulnerability index was first introduced as a way to spot critical components of a critical infrastructure network. The index is based on the efficiency of the network and on how removing nodes changes that efficiency.

The calculation of the vulnerability index $V$ is as follows: one first determines the network's efficiency $E$, then removes all the edges connected to node $k$ and recalculates the efficiency $E_k^*$ without them. Next, one compares the old efficiency $E$ with the new one $E_k^*$ as $V_k = (E - E_k^*)/E$, in which $V_k$ is the vulnerability related to node $k$. The network vulnerability is $\max(V_k)$, with $k=1,\cdots,|N|$, where $|N|$ is the total number of nodes. There are several ways to compute the network's efficiency, each depending on the underlying problem. A classical definition is $E = (\sum_{i \neq j} e_{ij}) / (|N|(|N|-1))$, in which $e_{ij}$ is the efficiency of communication between nodes $i$ and $j$, defined as the inverse of the shortest path length between them.
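The procedure above can be sketched directly, using the classical efficiency definition with unweighted shortest paths. This is a minimal illustrative implementation, not the authors' code; the toy three-node graph is hypothetical.

```python
def bfs_distances(adj, src):
    # unweighted shortest-path lengths from src (missing nodes unreachable)
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def efficiency(nodes, adj):
    """E = sum of inverse shortest-path lengths over ordered pairs,
    normalized by |N|(|N|-1); unreachable pairs contribute 0."""
    n = len(nodes)
    total = 0.0
    for i in nodes:
        d = bfs_distances(adj, i)
        total += sum(1.0 / d[j] for j in nodes if j != i and j in d)
    return total / (n * (n - 1))

def vulnerability(nodes, edges):
    """Per-node V_k = (E - E_k*)/E and the network vulnerability max(V_k)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    E = efficiency(nodes, adj)
    V = {}
    for k in nodes:
        # drop every edge incident to k (k stays, isolated) and recompute
        adj_k = {u: (set() if u == k else nbrs - {k}) for u, nbrs in adj.items()}
        V[k] = (E - efficiency(nodes, adj_k)) / E
    return V, max(V.values())

# toy path graph A-B-C: removing B's edges disconnects the network entirely
V, V_net = vulnerability(["A", "B", "C"], [("A", "B"), ("B", "C")])
```

On the path graph, cutting the middle node drives the efficiency to zero, so $V_B = 1$ and the network vulnerability is 1.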

The isolation index serves as a useful measure to scrutinize the impact of isolated regions of a network. Given a graph $G = (N,L)$, let $L_k$ be the set of edges connected to a node $k \in N$. The isolation index of node $k$ is $Q_k$, the number of ``infinite-length paths'' between pairs of nodes in the graph $H = (N, L-L_k)$; this is equivalent to the number of disconnected pairs of nodes (pairs for which no path joins one node to the other). The intuition behind the isolation index is that it quantifies the inaccessibility introduced in the network when a specific element (node/edge) is disconnected, as during a flood or construction work.
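Counting disconnected pairs reduces to finding the connected components of $H$ and subtracting the within-component pairs from all pairs. A minimal sketch, with a hypothetical four-node street segment as the example:

```python
def isolation_index(nodes, edges, k):
    """Q_k: number of node pairs with no connecting path once every edge
    incident to node k is removed (node k itself remains in the graph)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        if k not in (u, v):      # keep only edges not touching k
            adj[u].add(v)
            adj[v].add(u)
    # collect connected-component sizes by depth-first search
    seen, comp_sizes = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            for v in adj[stack.pop()]:
                if v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        comp_sizes.append(len(comp))
    n = len(nodes)
    connected_pairs = sum(c * (c - 1) // 2 for c in comp_sizes)
    return n * (n - 1) // 2 - connected_pairs

# toy street segment: a path A-B-C-D; cutting B leaves A, B and C-D apart
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D")]
```

Here cutting B leaves components $\{A\}$, $\{B\}$ and $\{C, D\}$, so 5 of the 6 node pairs become mutually unreachable.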

While $V_k$ has efficiency as its main ingredient, $Q_k$ is concerned with inaccessibility. Whenever the edges incident to a node are removed, the former index quantifies the impact on existing routes, while the latter counts the isolated areas, making it easier to find bottlenecks in the network. Despite their differences, both capture structural vulnerabilities in complex networks, with applications in diverse critical infrastructures ranging from mobility networks to power grids.

18:50
Deforestation network fractal analysis in Sumaco Biosphere Reserve

ABSTRACT. Landscapes are complex and heterogeneous mosaics constantly transformed by biotic and abiotic factors, disturbances, and human activities such as agriculture, urbanization and forestry. These transformations are reflected in land use and land cover (LULC) maps at different spatial and temporal scales. Such maps present interesting structures and connections that are difficult to measure with a Euclidean approach. This study analyzes the connections between deforested patches identified from LULC maps in biosphere reserve zones, applying a multifractal approach based on the frequency distribution of the Local Connected Fractal Dimension (LCFD). This approach has been used in diverse studies related to the dynamics of forest fragmentation, urban connectivity, fish-gill analysis, and more. This work complements previous research in which deforestation patterns were linked to soil types. Both approaches have proven useful for understanding the impact of deforestation expansion in Ecuador.

18:50
Spatio-Temporal Clustering of Earthquakes Based on Average Magnitudes

ABSTRACT. In this paper, we address the problem of automatically extracting clusters of spatio-temporally similar earthquakes whose average magnitudes differ substantially from the overall average. For this purpose, we propose a new method consisting of two phases: tree construction and tree separation. In the former phase, we employ one of two declustering algorithms developed in the field of seismology, single-link and correlation-metric, while in the latter phase we employ a variant of a change-point detection algorithm developed in the field of data mining. In an empirical evaluation using earthquake catalog data covering the whole of Japan, we show that the proposed method, when employing the single-link algorithm, produces more desirable results for our purpose in terms of improved weighted sums of variances and visualization results.

18:50
On the breakup patterns of urban networks under load

ABSTRACT. The urban networks of London and New York City are investigated as directed graphs within the paradigm of graph percolation. It has been recently observed that urban networks show a critical percolation transition when a fraction of edges are removed. The resulting strongly connected components form a cluster structure whose size distribution follows a power law with critical exponent $\tau$. We start by analyzing how the networks react when subjected to increasingly widespread random edge removal, and the effect of spatial correlations with power-law decay. We observe a progressive decrease of $\tau$ as spatial correlations grow. A similar phenomenon happens when real traffic data (UBER) is used to delete congested graph edges: during low-congestion periods $\tau$ is close to that obtained for uncorrelated percolation, while during rush hours the exponent becomes closer to the one obtained from spatially correlated edge removal. We show that spatial correlations seem to play an important role in modelling different traffic regimes. We finally present some preliminary evidence that, at criticality, the largest clusters display spatial predictability and that cluster configurations form a taxonomy with few main clades: only a finite number of breakup patterns, specific to each city, exist. This holds both for random percolation and real traffic, where the three largest clusters are well localized and spatially distinct. This consistent cluster organization is strongly influenced by local topographical structures such as rivers and bridges.

18:50
Passenger Delay and Topological Indicators in Public Transport Networks

ABSTRACT. See extended abstract in the file appended.

18:50
Characterising road networks through subgraph graphlet analysis

ABSTRACT. We construct a novel method to characterise the plannedness of a city based on walking regions, graphlet counts and entropy. To explore this measure we construct a one-parameter synthetic model, and we further evaluate the measure on a real-world dataset of highly planned new towns and older, less planned towns and cities.