COMPLEX NETWORKS 2021: TENTH INTERNATIONAL CONFERENCE ON COMPLEX NETWORKS & THEIR APPLICATIONS
PROGRAM FOR WEDNESDAY, DECEMBER 1ST


09:00-09:40 Session Speaker S3: João Gama - Mining Evolving Large-Scale Networks
09:00
Mining Evolving Large-Scale Networks

ABSTRACT. Social networks are live entities that exhibit dynamic behavior over time. Phone calls, emails, SMS, etc., define implicit, huge, and high-speed networks. In this talk, we describe our work on analyzing evolving networks through three case studies. The first presents a method to identify community trajectories and events on communities (growing, shrinking, merges, and splits) and to explain those events in terms of the topology of the evolving network. The second studies sampling strategies for high-speed evolving networks. In the third, we monitor the evolution of communities in a network of hashtag co-occurrences in social media. To conclude, we present open issues and challenges that emerge when analyzing high-speed and huge networks.

09:40-10:40 Session Lightning L2: Networks in Finance & Economics - Dynamics on/of Networks
09:40
Mean-field approach to a glassy slow dynamics in an indirect reciprocity model on networks

ABSTRACT. The dynamics of the social relationship through indirect reciprocity is studied on non-complete networks. Under this mechanism, the attitude of individuals toward other individuals (either cooperative or antagonistic) changes over time by their actions and mutual monitoring.

Among several variations of the indirect reciprocity dynamics, we focus on the Kandori assessment rule. We observe a phase transition between an absorbing state and a "glassy" state as we change the network's link density. When the edge density is either small or large enough, opinions quickly reach an absorbing state, from which they never change. In contrast, if the edge density is in the middle range, the absorbing state is not reached and the state keeps changing, remaining active.

In this talk, I will focus on a mean-field approach to understanding this phase transition. It will be shown that a closed form of the mean-field dynamics can be derived by introducing order parameters defined by the attitude relations among 2-, 3-, and 4-body agents. The obtained mean-field dynamics describe well the observed phase boundaries for random non-complete graphs. An application of this approach to more realistic networks will also be discussed.

09:45
Empirical social triad statistics can be explained with dyadic homophylic interactions
PRESENTER: Tuan Pham

ABSTRACT. The remarkable robustness of many human social systems has been associated with their peculiar triangular structure. Empirically, so-called balanced states with either three or one positive links are strongly overrepresented. For almost a century, the mechanism that leads to this very specific ("balanced") statistics of triads has remained elusive. Here we attempt to explain this phenomenon with a simple yet realistic adaptive network model in which every agent tends to minimize her individual social tension arising from dyadic interactions. The novelty of the model lies in the fact that agents only need local information about their neighbors in the network, and do not need (often unrealistic) higher-order information for their relation and opinion updates. The model exhibits a transition from an unbalanced to a balanced society at a critical temperature. As the relative strength of positive interactions to that of negative ones exceeds 1/2, a transition between steady states with different fractions of balanced triads occurs.

09:50
Community Evolution with Ensemble Louvain
PRESENTER: Bojan Evkoski

ABSTRACT. A key factor in tracing communities over time is the ability to consistently detect the communities within a single time period. If one used Louvain [1] for tracing communities over time, one would not be able to disentangle differences arising from changes in the community structure from the volatility of individual Louvain runs. In this paper, we present a continuation of our work, aiming to answer why Ensemble Louvain [2] is more suitable for tracking community evolution than the standard, widely-used Louvain.

Ensemble Louvain [2] is a simple approach that begins by running Louvain several times. It then builds a meta-network in which a pair of nodes is connected if their total co-membership across all runs is above a given threshold. This process forms disjoint sets of connected components in the meta-network, which constitute the final communities. Compared to Louvain, Ensemble Louvain drastically improves the stability and reproducibility of results.
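
The procedure described above can be sketched in a few lines of NetworkX. This is a minimal illustration, not the authors' implementation: the number of runs, the co-membership threshold, and the karate-club example graph are all assumed values.

```python
import networkx as nx
from itertools import combinations

def ensemble_louvain(G, n_runs=10, threshold=0.9, seed=0):
    # Step 1: run Louvain several times with different seeds.
    memberships = []
    for r in range(n_runs):
        comms = nx.community.louvain_communities(G, seed=seed + r)
        label = {v: i for i, c in enumerate(comms) for v in c}
        memberships.append(label)
    # Step 2: meta-network connecting pairs co-assigned in enough runs.
    meta = nx.Graph()
    meta.add_nodes_from(G)
    for u, v in combinations(G.nodes, 2):
        co = sum(m[u] == m[v] for m in memberships)
        if co / n_runs >= threshold:
            meta.add_edge(u, v)
    # Step 3: final communities = connected components of the meta-network.
    return [set(c) for c in nx.connected_components(meta)]

comms = ensemble_louvain(nx.karate_club_graph())
```

The threshold trades off stability against resolution: a higher threshold only merges nodes that nearly all runs agree on, yielding smaller but more reproducible communities.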

09:55
Ergodic sets in directed networks: a structural approach
PRESENTER: Erik Hörmann

ABSTRACT. Directed graphs exhibit a number of behaviours that are not present in undirected networks. In this work, we focus on ergodic sets, strongly connected components that are connected in either direction to the rest of the graph, but not in both directions. We connect ergodic sets with the theory of random walks on networks, and we show that they are structures that are more general than closed communicating classes, but more restrictive than strongly connected components. We study numerically the presence of such structures in a number of synthetic network models.

10:00
Anomalous transport on weighted complex networks
PRESENTER: Juan Valdivia

ABSTRACT. In the past few years, a number of groups around the world have been studying the topological structure of static and evolving complex networks in a variety of settings, such as electric or spin circuits, stress dissipation in active geophysical regions, roads in cities, or communication networks. In the same spirit, there has also been increasing interest in studying networks as dynamical systems on which certain quantities vary in time or are transported across the network. In some of these networks, called weighted networks, the connectivity can be described by probabilities, giving rise to nontrivial random walks.

Therefore, we propose a transport model over a weighted complex network that has an analytical solution, allowing us to understand how the nontrivial, and sometimes anomalous, transport occurs in these systems. In these weighted networks we have discrete packages that move randomly through the network, generating an ensemble of paths of different lengths (each with a different travel time or cost) from node i to node j. Hence, we have a weighted adjacency matrix W that determines the probability of package transport to each of the neighbors of a node, and a cost matrix C that provides the cost of such a transport. Concepts of interest include the cost (or travel-time) distributions between nodes i and j (similar to the propagator concept in quantum mechanics). It is possible to write a master equation that allows one to solve for all the moments of this distribution directly from the matrices W and C. These results can be compared with simulations. We have found that, in many different situations, only two moments are necessary to describe the distribution in a reasonable manner. The transport is in general anomalous, with long-tailed fluctuations. The master equation also allows us to solve the inverse problem of finding any one of the matrices W, C, or the mean cost of travel between nodes, given the other two.

Therefore, these results provide intuition about the transport and nontrivial dynamics that regularly occur in systems that can be analyzed as transport over weighted complex networks, such as the propagation of infections, city traffic, pedestrian flows, etc.
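
The W/C setup above can be illustrated with a small Monte Carlo sketch. This is not the authors' analytical master-equation solution, only a simulation of the same quantities; the matrices and walk counts below are toy assumptions.

```python
import numpy as np

def cost_moments(W, C, source, target, n_walks=2000, seed=0):
    """Estimate the first two moments of the travel cost from source
    to target. W is a row-stochastic transition (weight) matrix and
    C the per-edge cost matrix, as in the abstract."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    costs = np.empty(n_walks)
    for k in range(n_walks):
        node, cost = source, 0.0
        while node != target:
            nxt = rng.choice(n, p=W[node])  # pick a neighbor by weight
            cost += C[node, nxt]            # accumulate the edge cost
            node = nxt
        costs[k] = cost
    return costs.mean(), costs.var()

# Tiny deterministic chain 0 -> 1 -> 2 with edge costs 1 and 2.
W = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])  # last row unused: the walk stops at the target
C = np.array([[0., 1., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])
mean, var = cost_moments(W, C, source=0, target=2, n_walks=100)
```

On this deterministic chain every walk costs exactly 3, so the variance vanishes; on genuinely random W the two moments summarize the cost distribution as described in the abstract.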

10:05
Reinforcement Learning for credit strategies in the interbank network
PRESENTER: Alessio Brini

ABSTRACT. The study of evolutionary economic systems is an active field of research. Recently, attention has focused on the effects that interactions have on systemic stability. In interbank markets, where risk sharing decreases with network connectivity and systemic risk increases with linkages, agents adopt specific strategies to obtain liquidity and avoid default. Beyond the hype surrounding the application of machine learning techniques to financial trading, reinforcement learning (RL) represents a framework for optimizing sequential decisions over time. In an RL setting, an agent discovers how to map situations to actions while trying to maximize long-term utility. The agent is not told which actions to take; instead, it must learn by trial and error on the basis of the rewards it receives. It is of interest to analyze the effect produced in the interbank system if banks behave according to a learned RL strategy. We consider the system of banks as a single learning agent and let them interact according to an RL policy. In the original model, financial connections may change over time via a preferential-attachment evolving procedure, such that each financial institution can enter into a lending relationship with others with a probability proportional to a fitness measure. RL allows us to change the parameters of the fitness measure and modify the way the preferential attachments are created over time, while maximizing the efficiency of the financial system. In doing so, we adopt Proximal Policy Optimization (PPO) as the learning algorithm, which makes use of powerful nonlinear approximators such as neural networks. It allows a richer representation of the state space, which is crucial for dealing with the large number of variables that compose the interbank credit model.

10:10
Economic Interplay Forecasting Business Success

ABSTRACT. A startup ecosystem is a dynamic environment in which several actors, such as investors, venture capitalists, angels, and facilitators, are the protagonists of a complex interplay. Most of these interactions involve the flow of capital, whose size and direction help to map the intricate system of relationships. This quantity is also considered a good proxy for economic success. Given the complexity of such systems, it is desirable to supplement this information with other informative features, and a natural choice is to adopt mathematical measures. In this work, we specifically consider network centrality measures, borrowed from network theory. In particular, using the largest publicly available dataset for startups, the Crunchbase dataset, we show how centrality measures highlight the importance of particular players, such as angels and accelerators, whose role would be underestimated by focusing on collected funds only. We also provide a quantitative criterion to establish which firms should be considered strategic, and rank them. Finally, as funding is a widespread measure of success in economic settings, we investigate to what extent this measure agrees with network metrics; the model accurately forecasts which firms will receive the highest funding in future years.
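
The kind of centrality ranking the abstract describes can be sketched on a toy funding network. The graph, the actor names, and the choice of PageRank are illustrative assumptions; the paper evaluates several centrality measures on the Crunchbase data.

```python
import networkx as nx

# Toy directed funding network (hypothetical names): an edge u -> v
# means that actor u put capital into firm v.
G = nx.DiGraph()
G.add_edges_from([
    ("angel_1", "startup_a"), ("angel_1", "startup_b"),
    ("vc_1", "startup_a"), ("accelerator_1", "startup_c"),
    ("startup_a", "startup_c"),
])

# Rank players by a standard centrality; funds-based rankings would
# ignore the structural position such measures capture.
pr = nx.pagerank(G)
ranking = sorted(pr, key=pr.get, reverse=True)
```

Here firms that receive capital flows accumulate PageRank mass, so a pure source such as `angel_1` ranks below the firms it funds, even though a funds-only view would score them on the same axis.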

10:15
Digital complexity of occupations - developing an indicator based on workplace skills

ABSTRACT. Digitisation is transforming every realm of our society, and for workplaces in particular this process has accelerated due to the ongoing pandemic. In light of the Covid-19 crisis and previous research, which points to a re-allocation of labour and to rising wage inequalities due to information and communication technologies, measuring the extent of digital permeation of occupations seems more topical than ever for designing policies aimed at mitigating potential negative effects on workers. We therefore propose an indicator based on the digital skill requirements of occupations, using a European classification of occupations and network analysis methods. To identify the digital skills that are comparatively more important for performing an occupation, we use the method of revealed comparative advantage and derive an indicator of how much an occupation depends on digital skills compared to all skills required to perform it. Our preliminary results show that the indicator covers a broader range of occupations than measures based on ICT task intensity. This finding suggests that the indicator captures digitisation of jobs that might not be identified otherwise.
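
The revealed-comparative-advantage step can be sketched as follows. The toy skill-importance matrix, the digital-skill flags, and the RCA > 1 cutoff are illustrative assumptions; the paper's indicator is built on a real occupation-skill classification.

```python
import numpy as np

# Rows = occupations, columns = skills; entries are the (hypothetical)
# importance of skill s for occupation o.
X = np.array([[4.0, 1.0],
              [1.0, 4.0]])
digital = np.array([True, False])  # assumed flag: which skills are digital

def rca(X):
    # Revealed comparative advantage: an occupation's within-occupation
    # skill share relative to that skill's share across all occupations.
    share_in_occ = X / X.sum(axis=1, keepdims=True)
    share_overall = X.sum(axis=0) / X.sum()
    return share_in_occ / share_overall

def digital_dependence(X, digital, cutoff=1.0):
    # One plausible indicator: among the skills an occupation is
    # comparatively specialised in (RCA above the cutoff), the
    # fraction that are digital.
    r = rca(X) > cutoff
    return r[:, digital].sum(axis=1) / r.sum(axis=1)
```

An RCA above 1 flags skills that matter more for this occupation than for the average one, which is the comparative notion the abstract relies on.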

10:20
Cryptoasset Networks: Flows and Regular Players in Bitcoin and XRP
PRESENTER: Hideaki Aoyama

ABSTRACT. Cryptoassets such as Bitcoin and XRP comprise a complex network whose nodes are users or addresses and whose directed edges are transactions among them. The network is giant, with billions to trillions of nodes and edges, and it changes over time. On the other hand, many cryptoassets are exchanged in markets with fiat currencies and also with each other. Today the market capitalization is huge and highly volatile, with a considerable impact on global asset allocation. It is interesting and important to understand how cryptoassets flow in the network, in particular among "active players", and how the structure and dynamics of the network relate to the exchange markets' volatile behavior.

In this study, we first define and identify active players (users for Bitcoin and nodes for XRP) based on their activity in terms of frequency and amount of transactions, using an index, "Flow-weighted Frequency", that we propose here for the first time. We perform the following analyses on the two cryptoassets, Bitcoin and XRP, and discuss common properties and significant differences in the structure and dynamics of cryptoasset networks: (1) stock and flow of the active users/nodes, in particular for groups with different flow-weighted frequency; (2) basic network analyses, including community structure, bow-tie structure, and Hodge decomposition, in order to locate the users in the upstream, downstream, and core of the entire flow; and (3) revealing "principal components" hidden in the flow by using non-negative matrix factorization.

10:25
Inequality in the menu: How a network of restaurants characterizes social disparities in Boston

ABSTRACT. extended abstract

10:40-11:20 Coffee Break & Online Session for Onsite Presenters of Poster Session P4 A-B-C
10:40-11:20 Session Poster P3A: [1-12] Diffusion & Epidemics
Modeling the Spread of COVID-19 Over Varied Contact Networks
PRESENTER: Theresa Migler

ABSTRACT. When attempting to mitigate the spread of an epidemic without the use of a vaccine, implementing an effective testing and quarantining strategy on a population has the potential to make a large impact on the spread of the disease. Yet testing and quarantining become difficult when a portion of the population are asymptomatic spreaders of the disease. This research simulates the transmission of COVID-19 using five different contact networks. In these simulations, several testing and quarantining strategies are implemented with a varying number of tests available to the population per day. These strategies include a random testing strategy and several testing strategies that use underlying properties of the graph. We found that many of the strategies performed similarly to randomly testing the population, save for testing by degree and testing the cliques of the graph, which were found to consistently outperform the other strategies, especially on denser networks.

Influence maximization in complex networks through supervised machine learning
PRESENTER: Owais Hussain

ABSTRACT. Identifying influential nodes in complex networks is one of the most studied problems in network science. Finding an optimal set of influential nodes is an NP-hard problem and thus requires the use of heuristics to find the minimal set capable of maximizing influence in a network. Once identified, these influencers have been applied in many different applications, such as controlling disease outbreaks in social networks, identifying the Achilles' heel of computer networks, and finding super-spreaders for viral marketing. In this paper, we propose a novel approach that models this task as a supervised machine learning problem. Several synthetic and real-world networks with nodal and network-level attributes are used to train supervised learning models. Once the models are trained, their performance is tested against real-world networks from a variety of domains. Results show that the trained models are highly accurate in identifying influential nodes in networks not used for training and outperform many commonly used techniques in the literature.

A Framework for Simulating Multiple Contagions Over Multiple Networks
PRESENTER: Chris Kuhlman

ABSTRACT. Many contagion processes evolving on populations do so simultaneously, interacting over time. Examples are co-evolution of human social processes and diseases, such as the uptake of mask wearing and disease spreading. Commensurately, multi-contagion agent-based simulations (ABSs) that represent populations as networks in order to capture interactions between pairs of nodes are becoming more popular. In this work, we present a new ABS system that simulates any number of contagions co-evolving on any number of networked populations. Individual (interacting) contagion models and individual networks are specified, and the system computes multi-contagion dynamics over time. This is a significant improvement over simulation frameworks that require union graphs to handle multiple networks, and/or additional code to orchestrate the computations of multiple contagions. We provide a formal model for the simulation system, an overview of the software, and multiple case studies that illustrate applications of interacting contagions.

Finding Influential Nodes in Complex Networks Using Nearest Neighbourhood Trust Value

ABSTRACT. Information spreading in complex networks is an emerging topic in many applications, such as identifying social leaders, rumour control, viral marketing, and opinion monitoring. Finding influential nodes plays a pivotal role in information spreading in complex networks, because influential nodes are capable of spreading more information than other nodes. Many centrality measures have been proposed to identify influential nodes, such as degree, betweenness, closeness, semi-local centralities, and PageRank. These centrality measures are defined based on local and/or global information about the nodes in the network. Sheng et al. (2020) proposed a centrality measure based on the information between nodes and the structure of the network. Inspired by this measure, we propose the nearest neighbourhood trust PageRank (NTPR), based on structural information about neighbours and nearest neighbours. The measure builds on the similarity between nodes, the degree ratio, and the trust values of neighbours and nearest neighbours. We evaluate the proposed centrality measure on various real-world networks to find influential nodes, and we compare the results with existing basic centrality measures.

Hypergraph Laplacians in Diffusion Framework
PRESENTER: Mehmet Aktas

ABSTRACT. Modeling diffusion on networks is an important concept in network science. It helps to understand how an idea, information, or infection diffuses within the network. The graph Laplacian has been used to model diffusion on graphs for years. Extending graph Laplacians to hypergraphs is not an intuitive task, since hyperedges can include more than two vertices, and edge incidence and vertex adjacency are set-valued in hypergraphs. To handle this issue, researchers either limit their attention to specific classes of hypergraphs, which real-world hypergraphs often do not fit, or reduce hypergraphs to graphs, where these reductions result in information loss. In this paper, we present two new hypergraph Laplacians that can be defined on any hypergraph. Our Laplacians take the relations between hyperedges into consideration, and hence can be used to model diffusion on hypergraphs not only between vertices but also between hyperedges. As an application, we study the Enron network and show the effectiveness of the proposed Laplacians in the influential-node detection problem. These Laplacians can be further employed in different hypergraph mining problems, such as social contagion models on hypergraphs, influence studies on hypergraphs, hypergraph classification, and hypergraph representation learning.
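
For background, the graph-reduction baseline that such work improves on can be sketched as a clique-expansion Laplacian. This is the standard lossy reduction the abstract argues against, shown only as a reference point; it is not one of the paper's two new Laplacians.

```python
import numpy as np

def clique_expansion_laplacian(H):
    """Unnormalized Laplacian of the clique expansion of a hypergraph.

    H is the vertex-by-hyperedge 0/1 incidence matrix. Each hyperedge
    is replaced by a clique on its vertices, which is exactly where
    set-valued incidence information is lost."""
    A = H @ H.T                  # co-membership counts between vertices
    np.fill_diagonal(A, 0)       # drop self-loops
    D = np.diag(A.sum(axis=1))   # degree matrix of the expansion
    return D - A

# Two hyperedges {0, 1, 2} and {2, 3} over four vertices.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]])
L = clique_expansion_laplacian(H)
```

Note that after the expansion, the hyperedge {0, 1, 2} is indistinguishable from three pairwise edges, which is precisely the information loss motivating hypergraph-native Laplacians.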

Covid-19 superspreading: lessons from simulations on an empirical contact network
PRESENTER: Lorenz Hilfiker

ABSTRACT. Infectious individuals who cause an extraordinarily large number of secondary infections are colloquially referred to as superspreaders. Empirical evidence suggests that Covid-19 exhibits high levels of superspreading. We present simulation results based on a realistic social contact network, which provide further evidence that the key to reproducing empirically observed levels of superspreading is to allow for variation in individual infectiousness.

The effects of the mood of a social network in an epidemic spread

ABSTRACT. Individual responsibility and adherence to public health recommendations are among the main factors in controlling the propagation of the Covid-19 virus, so being able to measure the predisposition of a given society to follow these recommendations can be of high interest to local and regional administrations. Using the bulk of information provided by social networks, we present here a mathematical approach that could help achieve this goal, incorporating empirical data into classic epidemic models.

Measuring the topological impact on networked epidemic diffusion
PRESENTER: Cyrille Bertelle

ABSTRACT. In this study, we develop a model capable of addressing a key issue in the management of the Covid-19 crisis in France, namely preventing hospital saturation. The epidemiological model is based on classical SEIR-type models but incorporates a hospitalized patient population that may be saturated depending on the availability of hospital beds. We are interested here in a qualitative rather than a quantitative approach, allowing us to provide elements for understanding the topological impact of diffusion networks on propagation.

Impacts of detection and contact tracing on epidemic spread on temporal networks
PRESENTER: Bing Wang

ABSTRACT. Constructing epidemic models based on detection and contact-tracing interventions in time-varying networks is fundamentally important for avoiding the large-scale spread of emergent epidemics. In this work, we study the intervention strategy of detection and contact tracing in time-varying networks by constructing mathematical models. To achieve this, we first construct a Susceptible-Exposed-Infected-Removed-Dead (SEIRD) model based on detection and contact tracing, using mean-field theory in time-varying networks. The theoretical critical thresholds of the epidemics under different interventions are derived. Numerical simulations and experiments further verify and enrich our conclusions. The proposed method is expected to provide guidance for coping with COVID-19 or similar future outbreaks of other emergent epidemics.
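
A homogeneous SEIRD baseline of the kind extended above can be sketched with a forward-Euler integration. This is only the plain compartmental skeleton: the paper adds detection and contact-tracing terms and time-varying contacts, and all parameter values below are assumptions.

```python
import numpy as np

def seird_trajectory(y0, beta, sigma, gamma, mu, dt=0.05, steps=2000):
    """Forward-Euler integration of a homogeneous mean-field SEIRD model.

    y0 = (S, E, I, R, D); beta: transmission, sigma: incubation rate,
    gamma: recovery rate, mu: disease-induced death rate."""
    S, E, I, R, D = map(float, y0)
    traj = [np.array([S, E, I, R, D])]
    for _ in range(steps):
        N = S + E + I + R          # living population that mixes
        new_inf = beta * S * I / N
        dS = -new_inf
        dE = new_inf - sigma * E
        dI = sigma * E - (gamma + mu) * I
        dR = gamma * I
        dD = mu * I
        S, E, I, R, D = (S + dt * dS, E + dt * dE, I + dt * dI,
                         R + dt * dR, D + dt * dD)
        traj.append(np.array([S, E, I, R, D]))
    return np.array(traj)

traj = seird_trajectory((990, 0, 10, 0, 0),
                        beta=0.3, sigma=0.2, gamma=0.1, mu=0.01)
```

The five derivatives sum to zero by construction, so the total population (including deaths) is conserved along the trajectory, a useful sanity check for any SEIRD variant.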

Social Distancing Behavior and Oscillatory Prevalence in an Epidemic Model on Evolving Random Geometric Graphs
PRESENTER: Akhil Panicker

ABSTRACT. Mathematical modeling of infectious diseases on networks has a significant role in understanding epidemics and controlling them. The spatial nature of such contact networks only gained attention recently, in the wake of the COVID-19 pandemic. In this work, using SIR dynamics on a class of evolving random geometric graphs, we explore the effect of social distancing on the dynamics of an epidemic. We show that a simple population-density-based formalism can be used to make informed decisions about when to promote and when to relax social distancing norms so that minimal disruption is caused to the normal functioning of society. We show that oscillations in the prevalence of a disease can arise from the adaptive behavior of agents or authorities during an ongoing epidemic. A mean-field analysis of the model is carried out and compared with results from numerical simulations.
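
The basic ingredients (SIR dynamics on a random geometric graph) can be sketched as follows. This is an illustrative static-graph version under assumed parameters; the paper's model additionally evolves the graph, e.g. shrinking the connection radius when distancing is promoted.

```python
import random
import networkx as nx

def sir_step(G, status, beta, gamma, rng):
    # One discrete-time SIR step on the current contact graph:
    # each infected node infects susceptible neighbors w.p. beta,
    # then recovers w.p. gamma.
    new_inf, new_rec = [], []
    for v in G.nodes:
        if status[v] == "I":
            for u in G.neighbors(v):
                if status[u] == "S" and rng.random() < beta:
                    new_inf.append(u)
            if rng.random() < gamma:
                new_rec.append(v)
    for u in new_inf:
        status[u] = "I"
    for v in new_rec:
        status[v] = "R"
    return status

rng = random.Random(0)
# The connection radius plays the role of the distancing level.
G = nx.random_geometric_graph(200, 0.12, seed=0)
status = {v: "S" for v in G.nodes}
status[0] = "I"
for _ in range(50):
    status = sir_step(G, status, beta=0.2, gamma=0.1, rng=rng)
```

In the evolving version, regenerating the geometric graph with a smaller radius between steps models stricter distancing, and radius changes conditioned on prevalence produce the adaptive feedback discussed in the abstract.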

Assessing the effectiveness and robustness of vaccination strategies for disease control
PRESENTER: Tijani Sulaimon

ABSTRACT. Disease control strategies guided by network information offer an intuitive way of interrupting the transmission chains of an infectious disease through the network connecting the population. Vaccination is an efficient way of controlling infectious disease outbreaks. However, it is crucial to understand the network governing the population in order to design an efficient vaccination strategy. In Tanzania, animals live in discrete herds and are linked by movements, many of which go through markets, forming a complex network of nodes and links. Such a network can be modelled in a multi-scale framework, where the nodes represent sub-populations of livestock and the links represent trading interactions. This framework allows us to test the impact of vaccination algorithms derived from network measures. Vaccination strategies chosen on the basis of network topological properties are highly effective when the complete structure of the network is known [2]. However, our knowledge of the network structure governing most populations is imperfect due to incomplete data. In reality, network studies rely on observed networks that differ from the true network connecting the population under investigation. Such error can substantially impact network measures [5], which might limit our understanding of node importance in transmitting disease. It therefore becomes important to understand how different measures are affected by imperfect information. This work investigates the effectiveness and robustness of various vaccination strategies under imperfect knowledge of the network structure.

A Measure of Temporal Cyclicity in Networks
PRESENTER: Rory Humphries

ABSTRACT. This talk extends our work published in the Applied Network Science journal in March 2021 [3]. In that paper we present a general modelling framework for the spreading of epidemics on temporal networks, from which both individual-based [7] and pair-based [5][1] models can be obtained. The suggested pair-based model, which is systematically derived from this framework, improves on current pair-based models by moving away from edge-centric descriptions while maintaining a brief and straightforward description. Our pair-based model continues the work of [1] by extending it to the temporal setting and significantly simplifying the description, without sacrificing accuracy, by using a particular choice of moment closure. This closure is often referred to as the Kirkwood closure and allows us to write higher-order moments in terms of lower orders under an assumption of conditional independence.
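
The Kirkwood (superposition) closure mentioned above takes the standard form below; the generic moment notation is assumed here rather than copied from the paper.

```latex
% Approximate a triple moment by pair and singleton moments,
% under an assumption of conditional independence:
\langle A B C \rangle \;\approx\;
  \frac{\langle A B \rangle \, \langle B C \rangle \, \langle A C \rangle}
       {\langle A \rangle \, \langle B \rangle \, \langle C \rangle}
```

In the epidemic setting, A, B, C stand for node-state indicators (e.g. node i susceptible, node j infected), so triple-node probabilities are expressed through pair-level quantities, which is what keeps the pair-based model closed.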

For the contagion process, we consider the Susceptible-Infected-Recovered (SIR) model, realized on a temporal network with time-varying edges. We demonstrate that shifting the focus from individual-based to pair-based quantities permits precise modelling of Markovian epidemic processes on temporal networks that contain no time-respecting non-backtracking cycles. This is equivalent to a tree network when viewed in a static embedding of a supra-adjacency representation. On arbitrary networks, the proposed pair-based model provides a substantial increase in accuracy at a low computational and conceptual cost compared to the individual-based model. Additionally, we demonstrate how our pair-based model compares to current models such as the edge-centric contact-based model [4]. In order to investigate the conditions under which the pair-based model is justified on arbitrary networks, and by how much it fails, we look at how and when echo chambers [6] appear and their effect on the predictions of our pair-based model. This is done by measuring the density of non-backtracking [2] cycles that appear in all time-respecting paths.

This provides the fundamental concept of the temporal-cyclicity of a temporal network and enables us to simply quantify the pair-based model's predictive performance. In our extension of this work, we derive a succinct matrix-based formulation for temporal-cyclicity, based on the static non-backtracking matrices of each time step considered in the temporal network. This formulation is derived in such a way that only one matrix multiplication is required per time step, making it far more efficient than naïve algorithmic implementations. One problem with such a measure is that it can be difficult to compare and make sense of results across time when the upper bound of the temporal-cyclicity changes with time. From here, we prove that the maximum temporal-cyclicity at any given time step n is equal to (n-1)/(1+2(n-1)). This result allows us to normalise findings across time to a 0-1 measure for easy comparison. Another interesting point is that, in the process of finding the maximum temporal-cyclicity, we find the temporal topological structure that gives rise to the greatest density of temporal cycles, and thus would provide the most feedback in many processes on networks; we dub this the "maximum cycle network".
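
The stated bound and the resulting 0-1 normalisation can be written down directly; the helper names here are illustrative, only the formula (n-1)/(1+2(n-1)) comes from the abstract.

```python
from fractions import Fraction

def max_temporal_cyclicity(n):
    # Upper bound on the temporal-cyclicity at time step n,
    # as stated in the abstract: (n - 1) / (1 + 2(n - 1)).
    return Fraction(n - 1, 1 + 2 * (n - 1))

def normalised_cyclicity(c, n):
    # Hypothetical normalisation: divide a measured cyclicity c by
    # the bound so results at different time steps are comparable.
    bound = max_temporal_cyclicity(n)
    return c / bound if bound else Fraction(0)
```

The bound is 0 at n = 1 and increases monotonically toward 1/2, which is why comparing raw values across time steps is misleading without the normalisation.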

In order to test our findings, we use several artificial and empirical networks that display varied topological and temporal properties. Two important artificial networks we test are (1) a static tree network and (2) the maximum cycle network, which have measures of 0 and 1, respectively. We then test a number of interesting examples, such as social face-to-face networks and cattle trade networks. As expected, social contact networks have a large density of temporal cycles, whereas trade networks have very few. Next, the temporal-cyclicity is compared with the accuracy of epidemic network models (relative to the average of a number of Monte Carlo simulations). We find correlations between temporal-cyclicity and the performance of the network models, allowing us to justify the use of particular models or of Monte Carlo simulations.

References
1. Frasca, M., Sharkey, K.J.: Discrete-time moment closure models for epidemic spreading in populations of interacting individuals. J. Theor. Biol. 399, 13-21 (Jun 2016), https://linkinghub.elsevier.com/retrieve/pii/S002251931600165X
2. Hashimoto, K.i.: Zeta functions of finite graphs and representations of p-adic groups. In: Automorphic Forms and Geometry of Arithmetic Varieties, pp. 211-280. Mathematical Society of Japan, Tokyo, Japan (1989), https://doi.org/10.2969/aspm/01510211
3. Humphries, R., Mulchrone, K., Tratalos, J., More, S.J., Hövel, P.: A systematic framework of modelling epidemics on temporal networks. Applied Network Science 6(1), 23 (Dec 2021), https://appliednetsci.springeropen.com/articles/10.1007/s41109-021-00363-w
4. Koher, A., Lentz, H.H., Gleeson, J.P., Hövel, P.: Contact-Based Model for Epidemic Spreading on Temporal Networks. Phys. Rev. X 9(3), 031017 (Aug 2019), https://link.aps.org/doi/10.1103/PhysRevX.9.031017
5. Sharkey, K.J., Kiss, I.Z., Wilkinson, R.R., Simon, P.L.: Exact Equations for SIR Epidemics on Tree Graphs. Bulletin of Mathematical Biology 77(4), 614-645 (Apr 2015), http://link.springer.com/10.1007/s11538-013-9923-5
6. Shrestha, M., Scarpino, S.V., Moore, C.: Message-passing approach for recurrent-state epidemic models on networks. Phys. Rev. E 92(2), 022821 (Aug 2015), https://link.aps.org/doi/10.1103/PhysRevE.92.022821
7. Valdano, E., Ferreri, L., Poletto, C., Colizza, V.: Analytical Computation of the Epidemic Threshold on Temporal Networks. Phys. Rev. X 5(2), 021005 (Apr 2015), https://link.aps.org/doi/10.1103/PhysRevX.5.021005

10:40-11:20 Session Poster P3B: [13-19] Biological Networks
Complex Networks reveal biological functions of START domains in rice: Insights from Computational Systems Biology

ABSTRACT. With the advancement of high-throughput technologies, there has been a massive surge in omics data generation, and there is a growing need to integrate these data and study the biological interactions that exist in plants. Networks are one of the most important methods for representing these interactions, as well as for visualising and understanding the big data at the systems level. In this work, we use a systems and computational biology approach to investigate functions of StAR-related lipid transfer (START) domains in rice. We analyse the data at three levels, namely the transcriptome, proteome, and regulome. Each of these diverse datasets was studied through network-based approaches to generate the respective co-expression, protein-protein interaction, and gene regulatory networks for rice genes. This work shows how computational systems-level studies can complement in-vivo experiments and advance the limited knowledge of START domain functions in plants.

Analysis of the SLO Bay Microbiome from a Network Perspective
PRESENTER: Viet Nguyen

ABSTRACT. Microorganisms are key players in ecosystem functioning. We developed a framework to preprocess raw microbiome data, build a correlation network, and analyze co-occurrence patterns between microbes. We then applied this framework to a marine microbiome dataset. The dataset used in this study comes from a year-long time series characterizing the microbial communities in the coastal waters off the Cal Poly Pier. In analyzing this dataset, we were able to observe and confirm previously discovered patterns of interactions and to generate hypotheses about new patterns. The analysis of co-occurrences between prokaryotic and eukaryotic taxa is relatively novel and can provide new insight into how marine microbial communities are structured and interact.

Can dynamic functional connectivity be used to distinguish between resting-state and motor imagery in EEG-BCIs?
PRESENTER: Paula Rodrigues

ABSTRACT. Graph theory has been widely and efficiently applied to characterize brain functioning, ranging from the diagnosis of pathologies (e.g. depression, Alzheimer's disease, schizophrenia) to investigations of cognitive processes in neuroscience. Recently, the use of graph-based strategies through functional connectivity (FC) analysis has been shown to be an interesting option for feature extraction in motor imagery brain-computer interfaces (MI-BCIs) - an alternative communication system that does not require the use of classical biological efferent pathways, mapping brain signals directly to control external assistive devices. Although FC has been used in this context, the dynamics of FC under motor imagery have rarely been taken into account, even though they constitute an essential requirement for online BCI operation. Therefore, this study aims to evaluate the applicability of dynamic functional connectivity (dFC) for differentiating resting-state and motor imagery in electroencephalography (EEG)-based BCIs. We evaluated the classification performance of classical markers, as defined by event-related desynchronization, of static FC, and of the dynamic FC scenario. The analysis includes two different similarity criteria for estimating the FC matrix and four different graph-based metrics applied to a representative EEG dataset with 35 subjects. The obtained results point to the potential and accompanying challenges of using dFC in the context of MI-BCIs, in addition to providing insights concerning the organization of motor network activity during motor imagery.

Fine scale prediction of ecological community composition using a two-step sequential machine learning ensemble
PRESENTER: Iciar Civantos

ABSTRACT. Prediction is one of the last frontiers in ecology. Indeed, predicting fine-scale species composition in natural systems is a complex challenge, as multiple abiotic and biotic processes operate simultaneously to determine local species abundances. On the one hand, species' intrinsic performance and their tolerance limits to different abiotic pressures modulate species abundances. On the other hand, there is growing recognition that species interactions play an equally important role in limiting or promoting such abundances within ecological communities. Here, we present a joint effort between ecologists and data scientists to use data-driven models to predict species abundances using reasonably easy-to-obtain data. We propose a sequential data-driven modeling approach that, in a first step, predicts the potential species abundances based on abiotic variables, and in a second step uses these predictions to model the realized abundances after accounting for species competition. We use this framework to predict species abundances over five years in a highly diverse annual plant community. Our models show a surprisingly high spatial predictive accuracy using only easy-to-measure variables in the field, yet such predictive power is lost when temporal dynamics are taken into account. This result suggests that predicting the temporal dimension of our system requires longer time-series data to capture enough variability. In addition, we show that these data-driven models can also suggest how to improve mechanistic models by adding missing variables that affect species performance, such as particular soil conditions (e.g. carbonate availability in our case). Being able to build models that better predict fine-scale species composition, while maintaining a mechanistic understanding of the underlying processes, can be a pivotal tool for conservation, especially given the human-induced rapid environmental changes we are experiencing. This objective can be achieved by combining the knowledge gained with classic modelling approaches in ecology and recently developed data-driven models.

Hybridising organic chemistry and synthetic biology reaction networks for retrosynthetic optimisation
PRESENTER: Chonghuan Zhang

ABSTRACT. Computer-assisted synthesis planning (CASP) accelerates the discovery of economical organic synthesis routes for pharmaceuticals and industrial chemicals, and synthetic biology, which has the potential to engineer key synthetic steps, is drawing considerable attention from chemical engineers. A hybrid system of synthetic biology and conventional chemical synthesis opens up opportunities for efficient (bio)chemical pathway optimization. In this work, 21 million organic reactions from the Reaxys database and all available (60 thousand) metabolic reactions from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database were digitized and merged to generate a hybrid reaction network. The objective was to quantify the value added by the sparse metabolic dataset to the organic dataset for guiding retrosynthetic planning from target molecules to small precursor molecules. To do this, we assigned metrics for early-stage assessment of synthetic routes, including carbon efficiency, capital costs of building blocks, and the number of synthetic steps. A reinforcement learning approach was employed to learn from the bulk simulated experience of synthetic routes generated from the organic, metabolic and hybrid networks. A reinforcement learning policy model predicted the expected values of all reactions in the chemical space; given a target molecule, the model suggests (near-)optimal multi-step reaction choices for retrosynthetic planning. The comparison among the three networks demonstrates the added value of metabolic reactions in improving redox efficiency, easing reaction pressure/temperature requirements, and enabling synthetic shortcuts. We also propose a (near-)optimal synthetic route for a pharmaceutical, taxol, suggested by the policy model from the hybrid network; the organic synthesis of taxol has been considered a challenge for previous CASP tools. Here, we conducted a later-stage assessment of the biochemical synthetic routes of taxol, evaluating their ease of separation, exergetic efficiencies, sustainability, etc., to demonstrate the feasibility of the proposed synthetic route.

A systematic evaluation of the impact of environmental exposures on the human interactome network

ABSTRACT. Historically, the health risk of environmental exposures was assessed mostly by epidemiological studies. Only recently has there been an increased effort to integrate the large amount of newly generated -omics data to uncover the relationship between chemical compounds and human health. Here, we propose a network-based approach for investigating the landscape of genetic perturbations induced by the known chemical compounds present in the environment. The integration of these relationships with the protein-protein interaction (PPI) network allows us to distinguish two major types of environmental perturbations: hub and module exposures. This finding suggests a new way to classify exposures based on their genetic perturbation from a network point of view, helping in the detection of harmful contaminants or of molecules with drug-like properties.

Emergence of Stable Functional Cliques in Developing Neural Networks
PRESENTER: Myles Akin

ABSTRACT. Complex networks constructed from neural correlations exhibit nonrandom structures hypothesized to be related to neural information processing. These networks can be characterized by topological features such as hubs, communities and small-world connectivity. A particular network structure of interest is a clique (a fully connected subgraph), which may indicate the existence of underlying neural ensembles. We introduce a multilayer network method to observe the persistence of functional cliques over population bursts, as well as their development as the underlying neural network forms connections. Using data from developing cultures on multielectrode arrays (MEAs), we show that cliques become more numerous and persistent over population bursts as neural cultures age. These results provide evidence for the formation of neural ensembles in cultured neural networks.

10:40-11:20 Session Poster P3C: [20-22] Motifs
Analysing Ego-Networks via Typed-Edge Graphlets: A case study of Chronic Pain Patients
PRESENTER: Mingshan Jia

ABSTRACT. Graphlets, as fundamental building blocks, are essential for understanding and analysing complex networks. The original notion of graphlets, however, is unable to encode edge attributes in many types of networks, especially in egocentric social networks. In this paper, we introduce a framework to embed edge-type information in graphlets and generate a Typed-Edge Graphlets Degree Vector (TyE-GDV). Applying the proposed method to a case study of chronic pain patients, we find not only that a patient's social network structure can inform his/her perceived pain grade, but also that particular types of social relationships, such as friends, colleagues and healthcare workers, are more important for understanding the effect of chronic pain. Further, we demonstrate that including TyE-GDV as additional features leads to significant improvement in a typical machine learning task.

A logical approach for temporal and multiplex networks analysis
PRESENTER: Esteban Bautista

ABSTRACT. Numerous systems generate data as sets of triplets, which are normally analyzed via the temporal or multiplex network frameworks. However, such frameworks require detaching one variable (layer/time) from the others (network), complicating the capture of patterns across all dimensions. In this work, we set aside the network-centric view and show that a meaningful analysis can be done by processing the data triplets directly. In particular, we show that it suffices to partition the dataset to construct categorical propositions that encode informative patterns. We show that several concepts from graph theory can be framed under this formalism, and we leverage our insights to extend such concepts to data triplets. Lastly, we propose an algorithm to list categorical propositions satisfying specific constraints and apply it to a real-world dataset.

ETMM: Egocentric temporal motifs miner
PRESENTER: Antonio Longa

ABSTRACT. In this extended abstract, we present a novel strategy to extract statistically significant sub-graphs in temporal networks by concentrating on the egocentricity of a node. We argue that by aggregating the temporal graphs, temporally dependent information, such as the duration of interactions, their frequency, and their periodicity, is lost. Accordingly, for each node in three contiguous temporal graphs, a structure is created in which relations within the same temporal gap maintain their structure, and relations across different contiguous temporal gaps exist if the node connected to the ego is the same. We report the five most frequent egocentric temporal motifs of a network representing face-to-face interactions among high-school students. We also compare the discovered motifs with those discovered by a competitor.

11:20-12:50 Session Oral O4A: Information Spreading in Social Media
11:20
An Ecological Approach to Structural Flexibility in Online Communication Systems

ABSTRACT. Human cognitive abilities are limited resources. Today, in the age of cheap information --cheap to produce, to manipulate, to disseminate--, this cognitive bottleneck translates into hypercompetition for rewarding outcomes among actors. These incentives push actors to mutualistically interact with specific memes, seeking the virality of their messages. In turn, memes' chances to persist and spread are subject to changes in the communication environment. In spite of all this complexity, here we show that the underlying architecture of empirical actor-meme information ecosystems evolves into recurring emergent patterns. We then propose an ecology-inspired modelling framework, bringing to light the precise mechanisms causing the observed flexible structural reorganisation. The model predicts --and the data confirm-- that users' struggle for visibility induces a re-equilibration of the network's mesoscale towards self-similar nested arrangements. Our final microscale insights suggest that flexibility at the structural level is not mirrored at the dynamical one.

11:35
Maximum entropy networks applied on Twitter disinformation datasets
PRESENTER: Bart de Clerck

ABSTRACT. Identifying and detecting disinformation is currently a major challenge. Twitter provides some ground truth datasets of disinformation campaigns through their information operations report. We compare the results of community detection using a classical network representation with a maximum entropy network model. We conclude that the latter method is useful in the context of disinformation identification over multiple datasets. In addition, we also apply the method to a different disinformation dataset related to COVID-19, which allows us to formulate some concerns about the repeatability of studies on disinformation datasets.

11:50
Indirect causal influence of a single bot on opinion dynamics through a simple recommendation algorithm

ABSTRACT. The ability of social and political bots to influence public opinion is often difficult to estimate. Recent studies found that hyper-partisan accounts often interact directly with already highly polarised users on Twitter and are unlikely to influence the general population's average opinion. In this study, we suggest that social bots, trolls and zealots may affect people's views not just via direct interactions (e.g. retweets, at-mentions and likes) but also via indirect causal pathways, by infiltrating platforms' content recommendation systems. Using a simple agent-based opinion-dynamics simulation, we isolate the effect of a single bot – representing only 1% of the population – on the average opinion of Bayesian agents when we remove all direct connections between the bot and human agents. We compare this experimental condition with an identical baseline condition in which the bot is absent, using the same random seed in both simulations so that all other conditions remain identical. Results show that, even in the absence of direct connections, the mere presence of the bot is sufficient to shift the average population opinion. Furthermore, we observe that the presence of the bot significantly affects the opinion of almost all agents in the population. Overall, these findings offer a proof of concept that bots and hyper-partisan accounts can influence average population opinions not only by directly interacting with human accounts but also by shifting platforms' recommendation engines' internal representations.

12:05
Modelling the effects of self-learning and social influence on the diversity of knowledge

ABSTRACT. In this paper, I present a computational model of acquiring new knowledge through self-learning (e.g. a Wikipedia "rabbit hole") or social influence (e.g. recommendations through friends). This is set up as a bipartite network between a static social network (agents) and a static knowledge network (topics). For simplicity, the learning process is parameterized by a single parameter $\alpha$, the probability of self-learning, leaving $1-\alpha$ as the probability of socially influenced discovery. Numerical simulations show the tradeoff of $\alpha$ on the diversity of knowledge when examined at the population level (e.g. the number of distinct topics) and at the individual level (e.g. the average distance between topics for an agent), consistent across different intralayer configurations. In particular, higher $\alpha$, favouring self-learning, leads to higher population-level diversity and robustness. However, lower $\alpha$, favouring social influence, expands individual knowledge diversity more easily on average. These numerical results might provide some basic insights into how social influence could affect the diversity of human knowledge, especially in the age of information and social media.
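The two discovery channels described above can be sketched with a toy simulation. The network sizes, edge probabilities, and variable names below are illustrative assumptions, not the paper's setup:

```python
import random

random.seed(7)

N_AGENTS, N_TOPICS, STEPS = 30, 50, 2000
ALPHA = 0.7  # probability of self-learning (illustrative value)

# Static social network (random graph) as symmetric adjacency lists.
social = {i: [] for i in range(N_AGENTS)}
for i in range(N_AGENTS):
    for j in range(i + 1, N_AGENTS):
        if random.random() < 0.15:
            social[i].append(j)
            social[j].append(i)

# Static knowledge network: a ring of topics plus random chords.
topics = {t: [(t - 1) % N_TOPICS, (t + 1) % N_TOPICS] for t in range(N_TOPICS)}
for _ in range(N_TOPICS):
    a, b = random.sample(range(N_TOPICS), 2)
    topics[a].append(b)
    topics[b].append(a)

# Each agent starts out knowing one random topic.
known = {i: {random.randrange(N_TOPICS)} for i in range(N_AGENTS)}

for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    if random.random() < ALPHA:
        # Self-learning: follow a knowledge-network link from a known topic.
        t = random.choice(tuple(known[i]))
        known[i].add(random.choice(topics[t]))
    elif social[i]:
        # Social influence: adopt a topic known by a random neighbour.
        j = random.choice(social[i])
        known[i].add(random.choice(tuple(known[j])))

# Population-level diversity: number of distinct topics known by anyone.
population_diversity = len(set().union(*known.values()))
print(population_diversity)
```

Sweeping `ALPHA` in such a sketch makes the population-vs-individual diversity tradeoff directly measurable.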

12:20
On Minimum Spanning Trees and the Inference of Message Cascades

ABSTRACT. In this extended abstract we consider the problem of inferring how a message spreads in a given directed, feed-based social network. Under assumptions similar to the independent cascade model, and given time stamps of message interactions with network users, the most likely message cascade coincides with a minimum spanning tree of the directed network, where the weight of an edge is given by the log-probability of the event that a message is forwarded along this edge. We show that if the probability that a user forwards a message is independent of its sender, then the minimum spanning tree problem can be solved even without knowledge of the respective probabilities.
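Under the sender-independence assumption stated above, every incoming edge of a user carries the same weight, so cascade inference reduces to picking, for each user, any in-neighbour who acted earlier. A minimal sketch, with a hypothetical toy network, timestamps, and tie-breaking rule:

```python
# Hypothetical feed network: followers[u] lists the users who see u's posts,
# i.e. edge u -> v. Time stamps record when each user interacted.
followers = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
times = {"a": 0, "b": 1, "c": 2, "d": 3}

def infer_cascade(followers, times):
    """When the forwarding probability p_v depends only on the receiver v,
    every incoming edge of v has identical weight -log(p_v), so any earlier
    in-neighbour yields a minimum spanning arborescence."""
    in_neighbors = {v: [] for v in times}
    for u, vs in followers.items():
        for v in vs:
            in_neighbors[v].append(u)
    parent = {}
    for v in times:
        earlier = [u for u in in_neighbors[v] if times[u] < times[v]]
        if earlier:
            # Pick the latest earlier sender (a natural tie-break; any choice
            # gives the same total weight under the independence assumption).
            parent[v] = max(earlier, key=times.get)
    return parent

print(infer_cascade(followers, times))  # {'b': 'a', 'c': 'b', 'd': 'c'}
```

In the general case, where forwarding probabilities do depend on the sender, one would instead need a directed minimum spanning tree (arborescence) algorithm such as Edmonds'.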

12:35
Bubble effect induced by recommendation systems in a simple social media model
PRESENTER: Franco Bagnoli

ABSTRACT. We investigate the influence of a recommendation system on the formation of communities in a simulated social media system.

When the recommendation system is initialized with a limited number of random posts and there is strong selection of correlated users, communities arise that are not present when examining the internal factors of the participating people.

This happens even when people are immutable but are exposed only to messages coming from people who are correlated with them, according to past messages. When people are allowed to evolve, reducing their cognitive dissonance, truly isolated communities appear, causing the filter bubble effect.

11:20-12:50 Session Oral O4B: Networks in Finance & Economics
11:20
Marginalisation and Misperception: Perceiving Gender and Racial Wage Gaps in Ego Networks

ABSTRACT. We introduce an agent-based model of localised perceptions of the gender and racial wage gap in Random Geometric Graph type networks that result from economic homophily independent of gender/race. In the model, agents estimate inequality using a composite signal consisting of local information from their personal neighbourhood and the actual global wage gap. This can replicate the underestimation of the gender or racial wage gap that empirical studies find, as well as the well-documented fact that the underprivileged perceive the wage gap to be higher on average and with less bias. Calibration on a recent Israeli sample suggests that women place much more weight on the (correct) global signal than men, in line with the hypothesis that people who are adversely affected by a wage gap listen more carefully to global information about the issue. Hence, (educational) interventions about the global state of gender and racial inequality are promising, especially if they target the privileged.
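The composite-signal mechanism can be sketched as follows. The synthetic wages, the wage-sorted line used as a crude proxy for an economically homophilous random geometric graph, and all parameter values are illustrative assumptions, not the paper's calibration:

```python
import random
import statistics

random.seed(3)
N = 200

# Hypothetical population: a true mean wage gap between two groups.
people = [{"g": "f" if i < N // 2 else "m"} for i in range(N)]
for p in people:
    p["wage"] = random.gauss(80 if p["g"] == "f" else 100, 10)

men = [p["wage"] for p in people if p["g"] == "m"]
women = [p["wage"] for p in people if p["g"] == "f"]
global_gap = statistics.mean(men) - statistics.mean(women)

# Economic homophily proxy: neighbourhoods are windows of similar wages.
people.sort(key=lambda p: p["wage"])

def perceived_gap(i, weight_on_global, k=10):
    """Composite signal: weight * global gap + (1 - weight) * local gap."""
    nbrs = people[max(0, i - k): i + k + 1]
    m = [p["wage"] for p in nbrs if p["g"] == "m"]
    f = [p["wage"] for p in nbrs if p["g"] == "f"]
    local = (statistics.mean(m) - statistics.mean(f)) if m and f else 0.0
    return weight_on_global * global_gap + (1 - weight_on_global) * local

# Homophilous neighbourhoods compress local wage differences, so a low
# weight on the global signal underestimates the true gap on average.
low_w = statistics.mean(perceived_gap(i, 0.2) for i in range(N))
high_w = statistics.mean(perceived_gap(i, 0.8) for i in range(N))
print(round(global_gap, 1), round(low_w, 1), round(high_w, 1))
```

The sketch reproduces the qualitative point: agents relying mostly on local information perceive a smaller gap than the true global one.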

11:35
The COVID-19 Pandemic and Export Disruptions in the United States
PRESENTER: John Schoeneman

ABSTRACT. We use social network analysis to model the trade networks that connect each U.S. state to the rest of the world, in an effort to capture trade shocks and supply chain disruption resulting from the COVID-19 pandemic and, more specifically, to capture how such disruptions propagate through those networks. The results show that disruptions noticeably move along industry connections, spreading in specific patterns. The results are also consistent with past work showing that non-pharmaceutical policy interventions have had a limited impact on trade flows.

11:50
Default Prediction using Network based Features

ABSTRACT. Small and medium enterprises (SMEs) are crucial for the economy and have a higher exposure to default than large corporates. In this work, we address the problem of predicting the default of an SME. Default prediction models typically consider only the previous financial situation of each analysed company. Thus, they do not take into account the interactions between companies, which could be insightful, as SMEs live in a supply chain ecosystem in which they constantly do business with each other. We therefore present a novel method to improve traditional default prediction models by incorporating information about the insolvency situation of customers and suppliers of a given SME, using a graph-based representation of SME supply chains. We analyze its performance and illustrate how the proposed solution outperforms traditional default prediction approaches.

12:05
Asymmetric diffusion in a complex network: The presence of women on boards
PRESENTER: Ricardo Gimeno

ABSTRACT. Diffusion processes are well-known linear dynamical systems. However, a different kind of dynamical solution emerges when the speed of diffusion depends on the sign of the gradient of the variable diffused through the graph. In this case, we move into a nonlinear dynamical system whose solutions depend on the differential speed of diffusion, the topological structure of the graph, and the initial values of the gradient variable. We show an example in a real complex network. To do so, we construct a network of US boards of directors, where nodes are boards and the links are interlocking directorates. We show that the proportion of women on each board follows a diffusion process, where changes in the proportion result from the gradient of this proportion relative to adjacent boards. Furthermore, we show that the diffusion is asymmetric, with diffusion slower (faster) when the board has a lower (higher) proportion of women than neighboring boards.
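A toy version of such sign-dependent diffusion can be sketched as follows. The ring topology, speeds, and initial condition are illustrative assumptions, not the paper's board network:

```python
# Hypothetical ring network of boards; x[i] = proportion of women on board i.
N, T, DT = 20, 400, 0.05
S_UP, S_DOWN = 0.8, 0.2   # faster diffusion when neighbours have MORE women

x = [0.1] * N
x[0] = 0.5  # one board starts with a higher proportion of women

for _ in range(T):
    new = x[:]
    for i in range(N):
        for j in ((i - 1) % N, (i + 1) % N):
            grad = x[j] - x[i]
            # Sign-dependent speed makes the dynamics nonlinear.
            speed = S_UP if grad > 0 else S_DOWN
            new[i] += DT * speed * grad
    x = new

# A plain (symmetric) diffusion would preserve the average exactly.
plain_average = (0.5 + 0.1 * (N - 1)) / N
print(round(sum(x) / N, 3), round(plain_average, 3))
```

Because upward gradients act faster than downward ones, each link transfers a net surplus per step, so the network-wide average drifts above the level a symmetric diffusion would conserve.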

12:20
A networked global economy: the role of social capital in economic growth
PRESENTER: Jaime Oliver

ABSTRACT. Understanding the drivers of economic growth is one of the fundamental questions in Economics. While the role of the factors of production--capital and labor--is well understood, the mechanisms that underpin Total Factor Productivity (TFP) are not fully determined. A number of heterogeneous studies point to the creation and transmission of knowledge, factor supply, and economic integration as key aspects; yet a need for a systematic and unifying framework still exists. Both capital and labor are embedded into a complex network structure through global supply chains and international migration, and it has been shown that the structure of global trade plays an important role in economic growth. Additionally, recent research has established a link between types of social capital and different network centralities. In this paper we explore the role of these measures of social capital as drivers of the TFP. By leveraging the EORA Multi Regional Input-Output and the UN International Migration databases we build the complex network representation for capital and labor respectively. We compile a panel data set covering 155 economies and 21 years. Our results indicate that social capital in the factors of production network significantly drives economic output through TFP.

12:35
Can you always reap what you sow? Network and functional data analysis of VC investments in health-tech companies

ABSTRACT. "Success" of firms in venture capital markets is hard to define, and its determinants are still poorly understood. We build a bipartite network of investors and firms in the healthcare sector, describing its structure and its communities. Then, we characterize "success" introducing progressively more refined definitions, and we find a positive association between such definitions and the centrality of a company. In particular, we are able to cluster funding trajectories of firms into two groups capturing different "success" regimes and to link the probability of belonging to one or the other to their network features (in particular their centrality and the one of their investors). We further investigate this positive association by introducing scalar as well as functional "success" outcomes, confirming our findings and their robustness.

11:20-12:50 Session Oral O4C: Temporal Networks
11:20
Non-Markovian temporal networks with auto- and cross-correlated link dynamics
PRESENTER: Piero Mazzarisi

ABSTRACT. Many of the biological, social and man-made networks around us are inherently dynamic, with their links switching on and off over time. The evolution of these networks is often observed to be non-Markovian, and the dynamics of their links are often correlated [1]. Hence, it is key to accurately model these networks, including both memory and dynamical dependencies, to predict their evolution, and to understand how information and other relevant quantities propagate over them. Thus far, a large number of studies have been devoted to empirical data, in order to clarify what the most important features of temporal networks are and whether there is any universality in such features [2]. The disadvantage is that no general conclusions can be drawn on fundamental aspects of temporal network structure. In order to arrive at such an understanding, we need generative models that can output temporal networks with tunable structure. To this end, several models of temporal networks have been proposed, each capturing some particular aspect: (i) temporal exponential random graphs [3] describe null models of temporal networks with specific temporal features, such as link stability or shifted reciprocity; (ii) dynamic stochastic block models [4, 5] capture the evolution of the community structure of networks; (iii) activity-driven network models [6] characterize the memory features of link dynamics in relation to the activity of nodes; and (iv) state-space models of dynamic networks [7, 8] capture the evolution of networks depending on some latent dynamics, typically of node features. Here, we introduce a novel class of models of temporal networks that fully characterizes the auto- and cross-correlation structure of link dynamics, including any non-Markovian memory effect. It is based on the Discrete Auto-Regressive DAR(p) model first introduced in [9].
The DAR(p) model generalizes the concept of linear regression to binary time series by introducing a discrete autoregressive copying mechanism for the evolution of the binary state X_t: (i) X_t is copied from one of its own past realizations X_{t-1}, ..., X_{t-p} with probability q (the copying parameter), or (ii) it is sampled with probability 1 − q from some marginal distribution. The non-Markovian memory is then described by a random variable that picks an integer in the range {1, ..., p} to select which specific past realization is copied. It is straightforward to apply the DAR(p) model to a single time series of link realizations in order to study its autocorrelation properties. Here, we extend the discrete autoregressive mechanism to the general case of binary panel data, i.e. time series of adjacency matrices {X_1, ..., X_T}, where X_t ≡ {X_t^{ij}}_{i,j=1,...,N} with N the number of nodes. That is, for each pair of nodes {i,j}, the probability of observing a link X_t^{ij} = 1 at time t accounts for: (1) the probability q of copying one of the p past realizations of any link (including itself) within the set of all possible pairs of interacting nodes, namely the backbone of the network; similarly to the DAR(p) model, the past realization from t − 1 to t − p is selected randomly with a suitably defined memory kernel; (2) a coupling matrix C characterizing the correlations between pairs of links by determining the probability of copying itself or a past realization of another link on the backbone; (3) a marginal probability distribution of the adjacency matrix, in the simplest case given by a homogeneous Bernoulli distribution with success probability y (Erdös-Rényi).
In the most general setting, the parameters q and y can be link-specific or time-varying; the kernel determining the non-Markovian memory of the copying mechanism may take any shape, as long as it is normalized to one; and any admissible correlation structure can be described, provided that the coupling matrix C is row-stochastic. This framework defines a new class of models of temporal networks that we name discrete autoregressive network models [12]. As a concrete and useful case study, we concentrate on a specific model within this class, namely CDARN(p), with homogeneous and constant parameters: it allows us to generate temporal networks with a specified underlying structural backbone and with precise control over the dynamical dependencies between links and the strength and length of their memories. In this network model the presence of each link is influenced not only by its own past, but also by the past of other links, as specified by the coupling matrix, which directly controls the causal relations, and hence the correlations, among links. In addition to the correlation structure of the link dynamics, which is explicitly controlled by the tuning parameters, the simplicity of the model permits the characterization of other network metrics. First, the inter-event time distribution predicted by the model is exponential (in the tail), with a rate whose upper bound can be characterized analytically. Second, diffusion over these temporal networks can be studied both numerically and analytically, showing not only that the speed of diffusion depends non-monotonically on the memory length, but also that cross-correlations between links help to speed up the diffusion process, while autocorrelations slow it down. Interestingly, the model parameters can be rigorously estimated from data by means of maximum likelihood methods, showing that the model allows a realistic description of real-world temporal networks and prediction of their evolution.
In particular, we show the correlation properties of several temporal network datasets, ranging from transportation and social networks to football games, and we corroborate our findings by using the estimated parameters in a link prediction study. Finally, thanks to the flexibility of maximum likelihood inference, we illustrate how to deal with heterogeneity and time-varying patterns, possibly including non-stationary network dynamics. We show how the estimation method can be generalized to the case of link-specific or time-varying parameters, thus allowing for the inference (and forecasting) of a large variety of temporal network structures.
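As a minimal illustration of the copying mechanism described above, the sketch below simulates a single link's DAR(p) time series and compares its lag-1 autocorrelation with a memoryless Bernoulli series. Parameter values, the uniform memory kernel, and function names are illustrative choices, not the authors' implementation; the full CDARN(p) model additionally couples links through the matrix C:

```python
import random

random.seed(11)

def dar_p(q, y, p, T):
    """DAR(p) for one binary link: with probability q, copy a value from one
    of the last p steps (uniform kernel); otherwise draw Bernoulli(y)."""
    x = [1 if random.random() < y else 0 for _ in range(p)]
    for t in range(p, T):
        if random.random() < q:
            lag = random.randint(1, p)   # uniform memory kernel over {1..p}
            x.append(x[t - lag])
        else:
            x.append(1 if random.random() < y else 0)
    return x

def autocorr(x, lag):
    """Sample autocorrelation of a binary time series at the given lag."""
    mean = sum(x) / len(x)
    num = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(len(x) - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

memoryless = dar_p(q=0.0, y=0.3, p=3, T=20000)   # plain Bernoulli series
persistent = dar_p(q=0.7, y=0.3, p=3, T=20000)   # strong copying memory
print(round(autocorr(memoryless, 1), 2), round(autocorr(persistent, 1), 2))
```

The copying parameter q directly tunes the strength of the link's memory, which is the single-link core of the model class described in the abstract.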

Ref.
[1] Holme, P. and Saramäki, J., 2012. Temporal networks. Physics Reports, 519(3), pp.97-125.
[2] Holme, P. and Saramäki, J., 2013. Temporal networks as a modeling framework. In Temporal Networks (pp. 1-14). Springer, Berlin, Heidelberg.
[3] Hanneke, S., Fu, W. and Xing, E.P., 2010. Discrete temporal models of social networks. Electronic Journal of Statistics, 4, pp.585-605.
[4] Peixoto, T.P. and Rosvall, M., 2017. Modelling sequences and temporal networks with dynamic community structures. Nature Communications, 8(1), pp.1-12.
[5] Peixoto, T.P., 2019. Network reconstruction and community detection from dynamics. Physical Review Letters, 123(12), p.128301.
[6] Karsai, M., Perra, N. and Vespignani, A., 2014. Time varying networks and the weakness of strong ties. Scientific Reports, 4(1), pp.1-7.
[7] Hoff, P.D., Raftery, A.E. and Handcock, M.S., 2002. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460), pp.1090-1098.
[8] Sarkar, P. and Moore, A.W., 2005. Dynamic social network analysis using latent space models. ACM SIGKDD Explorations Newsletter, 7(2), pp.31-40.
[9] Jacobs, P.A. and Lewis, P.A., 1978. Discrete Time Series Generated by Mixtures. III. Autoregressive Processes (DAR(p)). Naval Postgraduate School, Monterey, Calif.
[10] Williams, O.E., Lillo, F. and Latora, V., 2019. Effects of memory on spreading processes in non-Markovian temporal networks. New Journal of Physics, 21(4), p.043028.
[11] Mazzarisi, P., Barucca, P., Lillo, F. and Tantari, D., 2020. A dynamic network model with persistent links and node-specific latent variables, with an application to the interbank market. European Journal of Operational Research, 281(1), pp.50-65.
[12] Williams, O.E., Mazzarisi, P., Lillo, F. and Latora, V., 2019. Non-Markovian temporal networks with auto- and cross-correlated link dynamics. arXiv preprint arXiv:1909.08134.

11:35
Consensus dynamics on temporal hypergraphs

ABSTRACT. We investigate consensus dynamics on temporal hypergraphs that encode network systems with time-dependent, multi-way interactions. We compare these dynamics with those on projections of this higher-order network representation that flatten the temporal component, the multi-way component, or both. For linear average consensus dynamics, we find that a randomly switching time-varying system with multi-way interactions converges more slowly than the corresponding system with pairwise interactions, which in turn converges more slowly than consensus dynamics on the corresponding static network. We then consider a nonlinear consensus dynamics model in the temporal setting. Here we find that, in addition to the effect on convergence speed, the final consensus value of the temporal system can differ strongly from the consensus on the aggregated, static hypergraph. In particular, we observe a first-mover advantage in the consensus formation process: if there is a local majority opinion in the hyperedges that are active early on, the majority in these first-mover groups has a greater influence on the final consensus value, a behaviour that is not observable in this form in projections of the temporal hypergraph.
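The first-mover effect can be reproduced in a toy simulation. The sketch below is an illustrative assumption, not the authors' model: each active hyperedge, taken in temporal order, pulls the states of its members toward the hyperedge mean with step size `eps` (both the function names and this particular multi-way averaging rule are made up for illustration).

```python
def hyperedge_consensus_step(x, hyperedge, eps=0.5):
    """Move the states of nodes in one active hyperedge toward the
    hyperedge mean (a minimal sketch of linear multi-way averaging)."""
    m = sum(x[v] for v in hyperedge) / len(hyperedge)
    for v in hyperedge:
        x[v] += eps * (m - x[v])
    return x

def run(x, temporal_hyperedges, eps=0.5):
    """Apply the active hyperedges in temporal order."""
    for e in temporal_hyperedges:
        x = hyperedge_consensus_step(x, e, eps)
    return x
```

Even in this minimal setting, activating the same hyperedges in a different temporal order generally leads to a different final state, which is the mechanism behind the order-dependence of the consensus value described above.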

11:50
Maximum-entropy temporal networks: disentangling the role of static and dynamic node heterogeneity

ABSTRACT. Two of the main challenges in the study of temporal networks are understanding the minimal ingredients that explain the extent of the autocorrelation between different snapshots of the same network over time, and whether and how these temporal patterns are heterogeneously distributed across nodes. Among the streams of literature developed to approach temporal networks, one adapts the formalism of Exponential Random Graphs to the temporal dimension, leading to the so-called Temporal Exponential Random Graphs. Here we use this framework to establish time-extended generalizations of popular approaches to static networks. Instead of a representation depending only on a particular state of the network, the model also depends on the time correlation between different states. This results in a Markovian model in which the structure of the temporal dependency among different snapshots is specified by the persistent degree: the part of the degree that persists in time. Our model generalizes the memoryless configuration model (which takes as input only the time-averaged degrees of all nodes in the network) to a time-extended variant where the one-lagged persistent degrees of all nodes are also included. This ingredient ensures statistical dependency between snapshots while imposing a minimal, yet non-trivial, constraint on the autocorrelation at the level of individual nodes. In this way, a specific Hamiltonian can be defined with a term that captures the temporal correlation. This Hamiltonian is analogous to the one-dimensional Ising model, with the coupling strength controlling the autocorrelation of links. We exploit this analogy to obtain a rigorous solution and an exact analytic expression for the probability of links. The resulting model is at the same time explicitly solvable and unbiased, as ensured by the maximum-entropy construction.
The model is then applied to the World Trade Web dataset. We find that the one-lagged persistent degree explains all the autocorrelation properties, implying that this Markovian approach is suitable for the analysis of this class of networks. Finally, we consider a relaxed model in which the local constraints on the persistent degrees of all nodes are replaced by a weaker global constraint on the persistent degree averaged over all nodes. This relaxed model serves as a comparison to disentangle the topological effects of the heterogeneity of the degree sequence from the temporal effects of the heterogeneity of the persistent degree sequence. We therefore apply the Akaike Information Criterion to compare the three models (memoryless, global persistent degree, and idiosyncratic persistent degree). The results show that the dataset exhibits local persistence properties, suggesting that memory is a node-specific property rather than a global characteristic of the system.

12:05
Estimating the temporal dynamics of densification in human contact networks

ABSTRACT. In social networks, two basic mechanisms drive network densification and sparsification: changes in the population size of the system and changes in the probability of connection between individuals. These mechanisms manifest quite differently, with the aggregate number of edges growing either superlinearly in the number of nodes or in an accelerated pattern over time. However, most real-world social networks exhibit a mixture of both. Here, we develop a numerical maximum likelihood approach to identify, simultaneously, the extent to which changing population and changing connection probability influence densification and sparsification in a face-to-face contact network. The proposed method can detect temporal patterns in networks with a strict schedule of events merely from shifts in network activity level and/or population size.

12:20
Convergence properties of optimal transport-based temporal networks
PRESENTER: Diego Baptista

ABSTRACT. We study the properties of networks that evolve in time according to optimal transport principles. These networks evolve from a structure uniformly covering a continuous space towards an optimal design in the sense of optimal transport theory: at convergence, the network should optimize the way resources are transported through it. As the network structure is shaped in time towards optimality, its topological properties change as well, and the question is how they change as optimality is approached. We study the behavior of various network properties on a number of network sequences evolving towards optimal design and find that the transport cost function converges earlier than the network properties, which decrease monotonically. This suggests a mechanism for designing optimal networks by compressing dense structures. We find a similar behavior in networks extracted from real images of the structures formed by the body of a slime mold evolving in time.

12:35
A Hybrid Adjacency and Time-Based Data Structure for Analysis of Temporal Networks
PRESENTER: Tanner Hilsabeck

ABSTRACT. Dynamic or temporal networks enable representation of time-varying edges between nodes. Conventional adjacency-based data structures used for storing networks, such as adjacency lists, were designed without incorporating time. When used to store temporal networks, such structures can quickly retrieve all edges between two sets of nodes, which we call a node-based slice, but cannot quickly retrieve all edges that occur within a given time interval, which we call a time-based slice. We propose a hybrid data structure for storing temporal networks with timestamped edges, including instantaneous edges such as messages on social media and edges with duration such as phone calls. Our hybrid structure stores edges in both an adjacency dictionary, enabling rapid node-based slices, and an interval tree, enabling rapid time-based slices. We evaluate our hybrid data structure on many real temporal network data sets and find that it achieves much faster slice times than adjacency-based baseline structures, with only a modest increase in creation time and memory usage.
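The dual-index idea can be sketched compactly. The toy class below is not the authors' implementation: a sorted event list stands in for the interval tree (so it handles instantaneous edges only, not edges with duration), and all names are illustrative.

```python
import bisect
from collections import defaultdict

class HybridTemporalGraph:
    """Toy sketch of the hybrid idea: every timestamped edge is indexed
    both by its endpoints (fast node-based slices) and by its time
    (fast time-based slices).  A sorted event list stands in for the
    paper's interval tree."""

    def __init__(self):
        self.adj = defaultdict(list)   # node -> [(neighbor, t), ...]
        self.times = []                # sorted timestamps
        self.events = []               # (t, u, v), kept parallel to times

    def add_edge(self, u, v, t):
        self.adj[u].append((v, t))
        self.adj[v].append((u, t))
        idx = bisect.bisect(self.times, t)
        self.times.insert(idx, t)
        self.events.insert(idx, (t, u, v))

    def node_slice(self, u):
        """All timestamped edges incident to node u."""
        return list(self.adj[u])

    def time_slice(self, t0, t1):
        """All edges with timestamp in [t0, t1], via binary search."""
        lo = bisect.bisect_left(self.times, t0)
        hi = bisect.bisect_right(self.times, t1)
        return self.events[lo:hi]
```

Both slice types avoid scanning the full edge list: node-based slices are a dictionary lookup, and time-based slices cost a binary search plus the size of the output, which is the trade-off the abstract describes.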

12:50-14:15 Lunch Break
14:15-16:15 Session Oral O5A: Structural Network Measures
14:15
PageRank computation for Higher-Order Networks

ABSTRACT. Higher-order networks are efficient representations of sequential data. Unlike the classic first-order network approach, they capture indirect dependencies between the items composing the input sequences through the use of memory nodes. In this study we focus on the recent variable-order network model. Previous works suggested that random-walk-based mining tools can be applied directly to these networks. We discuss the case of the PageRank measure and show the existence of a bias due to the distribution of the number of representations of the items. We propose an adaptation of the PageRank model to correct it. Applications to real-world data show important differences in the resulting rankings.

14:30
Fellow Travelers Phenomenon present in Real-World Networks

ABSTRACT. We investigate a metric parameter "Leanness" of graphs which is a formalization of a well-known Fellow Travelers Property present in some metric spaces. Given a graph G = (V,E), the leanness of G is the smallest λ such that, for every pair of vertices x, y ∈ V, all shortest (x, y)-paths stay within distance λ from each other. We show that this parameter is bounded for many structured graph classes and is small for many real-world networks. We present efficient algorithms to compute or estimate this parameter and evaluate the performance of our algorithms on a number of real-world networks.
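For small connected, unweighted graphs, leanness can be computed by brute force. The sketch below is one common reading of the definition, not the paper's efficient algorithms: vertices lying on shortest (x, y)-paths are sliced by their distance from x, and λ is the largest distance between two vertices in the same slice.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Unweighted single-source distances by BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def leanness(adj):
    """Brute-force leanness of a connected unweighted graph given as
    {vertex: [neighbors]}.  For each pair (x, y), group the vertices on
    shortest (x, y)-paths by their distance from x and take the largest
    distance between two vertices in the same group."""
    D = {v: bfs_dist(adj, v) for v in adj}
    lam = 0
    for x, y in combinations(adj, 2):
        d = D[x][y]
        slices = {}
        for v in adj:
            if D[x][v] + D[v][y] == d:          # v lies on a shortest path
                slices.setdefault(D[x][v], []).append(v)
        for group in slices.values():
            for u, w in combinations(group, 2):
                lam = max(lam, D[u][w])
    return lam
```

This runs in roughly cubic time or worse and is only meant to make the definition concrete; the talk's contribution is precisely the efficient exact and approximate algorithms.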

14:45
The Frechet Mean of Inhomogeneous Random Graphs

ABSTRACT. To characterize the "average" of a set of graphs, one can compute the sample Frechet mean. We prove the following result: if we use the Hamming distance to compute distances between graphs, then the Frechet mean of an ensemble of inhomogeneous random graphs is obtained by thresholding the expected adjacency matrix: an edge exists between the vertices i and j in the Frechet mean graph if and only if the corresponding entry of the expected adjacency matrix is greater than 1/2. We prove that the result also holds for the sample Frechet mean when the expected adjacency matrix is replaced with the sample mean adjacency matrix. This novel theoretical result has some significant practical consequences; for instance, the Frechet mean of an ensemble of sparse inhomogeneous random graphs is the empty graph.
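The thresholding rule in the theorem translates directly into code. The minimal sketch below uses plain nested lists for adjacency matrices and follows the strict "greater than 1/2" condition stated above; it computes the sample version, with the entrywise mean standing in for the expected adjacency matrix.

```python
def frechet_mean_graph(adjacency_matrices):
    """Sample Frechet mean under the Hamming distance: threshold the
    entrywise mean of the adjacency matrices at 1/2."""
    n = len(adjacency_matrices[0])
    m = len(adjacency_matrices)
    mean = [[sum(A[i][j] for A in adjacency_matrices) / m
             for j in range(n)] for i in range(n)]
    # edge (i, j) survives iff it appears in more than half of the graphs
    return [[1 if mean[i][j] > 0.5 else 0 for j in range(n)]
            for i in range(n)]
```

The sparse-graph consequence mentioned in the abstract is visible immediately: if every edge appears in at most half of the sampled graphs, no entry of the mean exceeds 1/2 and the returned graph is empty.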

15:00
An extension of the bag-of-paths model introducing a Poisson distribution on path lengths
PRESENTER: Sylvain Courtain

ABSTRACT. This work extends the bag-of-paths (BoP) model by introducing a weighting of the length of the paths in the network, provided by a Poisson probability distribution. The main advantage of this approach is that it allows one to choose the mean path length that is most relevant for the application at hand. Various quantities of interest, such as the probability of drawing a path from the bag of paths, or the joint probability of sampling any path connecting two nodes of interest, can easily be computed from this model. In this context, a new distance measure between the nodes of a network, considering prior probabilities on the length of the paths, is defined. Experiments on semi-supervised classification tasks show that the introduced distance measure provides competitive results compared to other state-of-the-art methods. Moreover, a new interpretation of the logarithmic communicability similarity measure is proposed in terms of our new model.

15:15
Learning Centrality by Learning to Route
PRESENTER: Liav Bachar

ABSTRACT. The development of a tailor-made centrality measure for a given task requires domain and network analysis expertise, as well as time and effort. Automatically learning arbitrary centrality measures, given ground-truth node scores, is an important research direction. In this article, we propose a generic deep learning architecture for centrality learning that relies on the insight that arbitrary centrality measures can be computed using Routing Betweenness Centrality (RBC), and on our new differentiable implementation of RBC. The proposed Learned Routing Centrality (LRC) architecture optimizes the routing function of RBC to fit the ground-truth scores. Results show that LRC can learn multiple types of centrality indices more accurately than state-of-the-art methods.

15:30
On the exponential ranking and its linear counterpart

ABSTRACT. This paper deals with ranking algorithms for signed graphs. We analyze the algebraic properties of the exponential ranking algorithm and suggest an alternative ranking scheme that is close to the exponential ranking in several respects but also enjoys the property of being linear. We discuss the properties of the introduced scheme and present numerical evidence that it is indeed very close to the exponential ranking.

15:45
FPPR: Fast Pessimistic PageRank for Dynamic Directed Graphs
PRESENTER: Rohith Parjanya

ABSTRACT. This paper presents a new algorithm for updating PageRank on dynamic directed graphs after the addition of a node. The algorithm uses the expected value of the random surfer to calculate the score of the newly added node and of the nodes of the existing chain to which the new node is added. The complexity of the algorithm for $k$ updates is $\mathcal{O}(k\times d_{avg})$. Extensive experiments have been performed on different synthetic and real-world networks. The experimental results show that the ranking generated by the proposed method is highly correlated with that of the benchmark Power Iteration algorithm.
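For context, the Power Iteration benchmark that the ranking is compared against can be sketched as follows. This is a generic textbook implementation, not the paper's FPPR algorithm; the handling of dangling nodes (spreading their mass uniformly) is one common convention and an assumption here.

```python
def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=200):
    """Plain power-iteration PageRank on a directed graph given as
    {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        nxt = {v: (1.0 - alpha) / n for v in nodes}   # teleportation term
        for u in nodes:
            out = adj[u]
            if out:
                share = alpha * r[u] / len(out)
                for w in out:
                    nxt[w] += share
            else:
                # dangling node: distribute its mass uniformly
                for w in nodes:
                    nxt[w] += alpha * r[u] / n
        if sum(abs(nxt[v] - r[v]) for v in nodes) < tol:
            return nxt
        r = nxt
    return r
```

Recomputing this from scratch after every node addition is what incremental schemes like the one in the talk aim to avoid, since each full power iteration touches every edge of the graph.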

16:00
Analyzing Community-aware Centrality Measures Using The Linear Threshold Model
PRESENTER: Ali Yassin

ABSTRACT. Targeting influential nodes in complex networks allows one to accelerate or hinder rumors, epidemics, and electric blackouts. Since communities are prevalent in real-world networks, community-aware centrality measures exploit this information to target influential nodes. Research shows that they compare favorably with classical measures that are agnostic about the community structure. Although the diffusion process is of prime importance, previous studies mainly consider the famous Susceptible-Infected-Recovered (SIR) epidemic propagation model. This work investigates the consistency of previous analyses using the popular Linear Threshold (LT) propagation model, which characterizes many real-life spreading processes. We perform a comparative analysis of seven influential community-aware centrality measures on thirteen real-world networks. Overall, results show that Community-based Mediator, Comm Centrality, and Modularity Vitality outperform the other measures. Moreover, Community-based Mediator is more effective on a tight budget (i.e., a small fraction of initially activated nodes), while Comm Centrality and Modularity Vitality perform better with a medium to high fraction of initially activated nodes.
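A minimal version of the LT diffusion used in such comparisons can be sketched as follows. This uses the common uniform-weight variant, where each neighbor contributes 1/degree of influence; the function names and the default random thresholds are illustrative assumptions, not the study's exact setup.

```python
import random

def linear_threshold(adj, seeds, thresholds=None, rng=None):
    """Linear Threshold diffusion on an undirected graph {node: [neighbors]}:
    a node activates once the fraction of its active neighbors reaches its
    threshold.  Returns the final set of active nodes."""
    rng = rng or random.Random(0)
    if thresholds is None:
        # standard choice: thresholds drawn uniformly at random in [0, 1]
        thresholds = {v: rng.random() for v in adj}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in active or not adj[v]:
                continue
            frac = sum(1 for u in adj[v] if u in active) / len(adj[v])
            if frac >= thresholds[v]:
                active.add(v)
                changed = True
    return active
```

Ranking nodes by a centrality measure, seeding the top fraction, and comparing the sizes of the final active sets is the evaluation pattern the abstract describes.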

14:15-16:15 Session Oral O5B: Diffusion & Epidemics
14:15
Digital proximity tracing on empirical contact networks for pandemic control
PRESENTER: Giulia Cencetti

ABSTRACT. Digital contact tracing is a relevant tool to control infectious disease outbreaks, including the COVID-19 epidemic. Early work evaluating digital contact tracing omitted important features and heterogeneities of real-world contact patterns that influence contagion dynamics. We fill this gap with a modeling framework informed by empirical high-resolution contact data to analyze the impact of digital contact tracing in the COVID-19 pandemic. We investigate how well contact tracing apps, coupled with the quarantine of identified contacts, can mitigate the spread in real environments. We find that restrictive policies are more effective in containing the epidemic but come at the cost of unnecessary large-scale quarantines. Evaluating policies by their efficiency and cost yields optimized solutions that only consider contacts longer than 15-20 minutes and closer than 2-3 meters to be at risk. Our results show that isolation and tracing can help control re-emerging outbreaks when some conditions are met: (i) a reduction of the reproductive number through masks and physical distancing; (ii) low-delay isolation of infected individuals; (iii) high compliance. Finally, we observe the inefficacy of a less privacy-preserving form of tracing that involves second-order contacts. Our results may inform digital contact tracing efforts currently being implemented across several countries worldwide.

14:30
Modelling virus spreading in ride-pooling networks
PRESENTER: Rafał Kucharski

ABSTRACT. Urban mobility needs alternative sustainable travel modes to keep our pandemic cities in motion. Ride-pooling, where a single vehicle is shared by more than one traveller, is appealing not only to mobility platforms and their travellers but also for promoting the sustainability of urban mobility systems. Yet the potential of ride-pooling to serve as a safe and effective alternative, given the personal and public health risks associated with the COVID-19 pandemic, is hitherto unknown. To answer this, we combine epidemiological and behavioural shareability models to examine virus spreading among ride-pooling travellers, with an application to Amsterdam. Findings are at first sight devastating: only a few initially infected travellers are needed to spread the virus to hundreds of ride-pooling users. Without intervention, a ride-pooling system may substantially contribute to virus spreading. Notwithstanding, we identify an effective control measure that allows halting the spreading before an outbreak (at 50 instead of 800 infections) without sacrificing the efficiency achieved by pooling. Fixed matches among co-travellers disconnect the otherwise dense contact network, encapsulating the virus in small communities and preventing outbreaks.

14:45
Protest-driven epidemics during the COVID-19 pandemic
PRESENTER: Inho Hong

ABSTRACT. 1. Introduction

While Non-Pharmaceutical Interventions (NPIs) have effectively mitigated the COVID-19 pandemic, they have created a massive economic downturn and tremendous psychological distress. Accordingly, as an unintended side effect of the NPIs, anti-intervention protests emerged in many countries. Although these epidemic-driven protests can potentially become epicenters that produce a surge in the epidemic trend, we still lack a model for estimating protest-driven epidemics; in particular, the lack of protest data and the uncertainty of outdoor transmission in crowds have obstructed such modeling. A model for protest-driven epidemics may prepare us for the unintended emergence of protests under NPIs and their impact on pandemic control. Here we develop a simulation model with real data to estimate protest-driven epidemics under different scenarios.

2. Results

Using US state-level protest data obtained from Count Love, we model the protest-driven epidemics of 2020. The protest data are used to divide each state's population into protester and non-protester groups (Fig. 1a). For the non-protester groups, the epidemic dynamics are simulated with the SEIR compartmental model on county-level mobility networks of commuting and aviation. For the protester groups, the dynamics are simulated with the ordinary SEIR model for each state. In this simulation, we assume the reproduction number at protests to be 2.85, with limits of 0.6-14.5, combining the effects of low outdoor transmission and high crowd density. The simulation results for the protester and non-protester groups are then merged at each simulation step. We first characterize anti-intervention protests in the US. Fig. 1b shows that the anti-intervention protests were most active from April to June of 2020, peaking in May after intervention stringency had increased over March and April. Our state-level analysis shows that population size predicts the size of protests, both in protest counts and in the number of attendees, while intervention stringency has little explanatory power. We estimate excess infected cases in the US from April to June of 2020, when protests were most active, for different levels of the reproduction number at protests. Fig. 1c shows the cumulative number of excess infected cases driven by protests. While the high transmission rate due to high crowd density created tens to hundreds of excess cases, the impact on the overall epidemic trend was not significant because the anti-intervention protests were not large enough. This observation is consistent with the absence of a significant surge of infected cases after the series of anti-intervention protests in 2020. At the state level, states with earlier protest occurrences and more protesters, such as California and Michigan, have more excess cases (Fig. 1d).
Our observation of the apparently small impact of protests on epidemics leads to a question: how can protests become critical to pandemic control? To answer this, we performed a scaling analysis for different protest sizes by multiplying the number of anti-intervention protesters. The number of daily excess cases becomes significant when the multiplication factor exceeds 10^2, corresponding to ~1 million daily protesters (Fig. 2a). This critical size is comparable to the scale of the Black Lives Matter protests, which attracted ~300,000 daily protesters at their maximum and, by our estimation, led to ~80 daily excess infected cases (Fig. 2b). This result suggests the possibility of another epidemic wave triggered by massive protests. The reproduction number is, due to the many factors involved, difficult to estimate, and small variations can lead to very different scenarios. Protests could therefore be catastrophic if we face a new pandemic with a reproduction number slightly larger than that observed for COVID-19. Our findings highlight the risks of protest-driven epidemics in the near future. Massive protests larger than the critical size of 1 million protesters have occurred many times in history, and the heavy-tailed distribution of protest sizes suggests that even larger protests may appear. Additionally, extreme pandemics such as COVID-19 may become more frequent in the near future, and our model can be applied to them. More attention should therefore be paid to the unintended risks driven by the interplay between epidemics, interventions and protests.

Summary. We propose a data-driven epidemic model for estimating the excess infected cases caused by protests. Although the US protests in 2020 apparently had a limited impact on the epidemic trend due to their small sizes, larger protests with more than 1 million protesters can be critical to pandemic control. These findings call attention to the risks posed by the interplay between epidemics, interventions and protests in the near future.
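The ordinary SEIR dynamics used for the protester groups can be sketched with forward-Euler integration. Parameter names and the values in the usage below are illustrative, not the study's calibrated ones; compartments are fractions of the group population.

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the standard SEIR model.

    beta:  transmission rate, sigma: incubation rate (E -> I),
    gamma: recovery rate (I -> R).  Compartments are fractions,
    so s + e + i + r is conserved at every step.
    """
    s, e, i, r = s0, e0, i0, r0
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i * dt
        new_infectious = sigma * e * dt
        new_recovered = gamma * i * dt
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r
```

In the study's setup, one such process per state (with a protest-specific reproduction number beta/gamma) is merged at each step with the network-based simulation of the non-protester groups.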

15:00
Norm violation versus punishment risk in a social model of corruption
PRESENTER: Francisco Bauza

ABSTRACT. We analyze the onset of social-norm-violating behaviors when social punishment is present. To this aim, a compartmental model is introduced to illustrate the flows between the three possible states: Honest, Corrupt and Ostracism. With this simple model we attempt to capture some essential ingredients, such as the contagion of corrupt behaviors to honest agents, the denunciation of corrupt individuals by honest ones, and the warning to wrongdoers (a fear-like mechanism that triggers the conversion of corrupt people to honesty). In non-equilibrium statistical physics terms, this dynamics can be viewed as a non-Hamiltonian kinetic spin-1 Ising model.

15:15
Inference of historical influence networks

ABSTRACT. In recent years many people have studied the modeling of epidemics on networks. The latest global outbreaks, such as SARS in 2003, H7N9 in 2013, and the coronavirus in 2019, show how important it is to understand propagation in large-scale networks in order to prevent future outbreaks. This abstract is about a different type of spreading, namely the cultural spreading of ancient societies on historical (spatial) networks. When studying historical processes, we are no longer interested in predictions, because the process has already happened, but instead in the reconstruction of the process. From the observations of past phenomena, the goal is to infer the underlying spreading network. We study the Romanization process in ancient Tunisia from 50 BC until 300 AD, a project in close cooperation with the German Archaeological Institute. One important indicator of the diffusion of Roman cultural innovations in northern Africa is the introduction of Roman administrative structure, whereby cities were assigned a specific Roman status (civitas, municipium or colonia). We model the Romanization process with an SI epidemic model, where S means that a city has no Roman status and I means that a city has some Roman status. State-of-the-art methods in network research infer networks from epidemic outbreak data. Those data are much richer in time and space than our archaeological data, which are additionally often incomplete. The sparsity of the data is a big challenge for every network inference model. We present a mathematical framework that infers a historical network with the general inverse infection model from sparse data. Specifically, we infer an inter-city network to better understand the communication strength between different subpopulations in modern-day Tunisia, based on the evolution of the status of cities.

15:30
Benchmarking Optimal Control for Network Dynamic Systems with Plausible Epidemic Models

ABSTRACT. The sheer dimension of network dynamic systems adds a challenge of scale to synthesizing optimal control, which techniques such as mean-field approximation, reinforcement learning, and graphon mean-field games attempt to overcome. We propose to use compartmental metapopulation epidemic models derived from open data to benchmark these advanced approaches on an important problem with intuitive visualization options such as choropleth maps. To this end, we formalize a procedure for generating plausible instances of such models with 1–64,735 nodes based on open census data for the contiguous U.S., each with a network of daily commute and airplane travel, coupled with a formal aggregation routine enabling a view of the same geography at different resolutions, illustrated by merging the 2,072 census tracts in Oregon and Washington states, together with their travel networks, into 75 county-level nodes, 23 ``airport service area'' nodes, and 2 nodes for the states themselves. These four cases, and 10 others, are then put through 180-day ``patient zero'' scenarios in a Metapopulation SIR Model with per-node ``lockdown level'' control, with the objective of minimizing the cumulative number of infections and the lockdown level. The optimal control is derived through the Pontryagin Maximum Principle and numerically computed with the forward-backward sweep method. To ensure reproducibility, the instance generator, solver, and visualization routines are available at https://github.com/yvs314/epi-net-m.
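One forward step of a metapopulation SIR with per-node lockdown control, in the spirit of the benchmark described above, might look like the following. This is a simplified sketch under stated assumptions: the mixing rule via a row-stochastic travel matrix, the way lockdown scales the force of infection, and all names are illustrative, not the repository's actual solver.

```python
def metapop_sir_step(S, I, R, beta, gamma, travel, lockdown, dt=1.0):
    """One forward-Euler step of a toy metapopulation SIR.

    S, I, R:  per-node compartment fractions (lists of equal length).
    travel:   row-stochastic matrix; travel[j][k] is the share of node j's
              infectious contacts felt at node k.
    lockdown: per-node control in [0, 1]; 1 suppresses all transmission.
    """
    n = len(S)
    # infection pressure at node k: infectious mass imported via travel
    pressure = [sum(travel[j][k] * I[j] for j in range(n)) for k in range(n)]
    newS, newI, newR = [], [], []
    for k in range(n):
        force = beta * (1.0 - lockdown[k]) * pressure[k]
        inf = force * S[k] * dt
        rec = gamma * I[k] * dt
        newS.append(S[k] - inf)
        newI.append(I[k] + inf - rec)
        newR.append(R[k] + rec)
    return newS, newI, newR
```

In an optimal-control setting, the forward-backward sweep alternates between integrating such state equations forward in time and integrating the adjoint equations backward, updating the lockdown trajectory from the Pontryagin optimality condition at each pass.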

15:45
Scientific Collaboration Networks and Knowledge Diffusion under the COVID19

ABSTRACT. Do collaboration networks change under the pandemic? Findings show that the collaboration networks become smaller with denser connections between scientists, and geographic location and personal connection remain significant factors. Most biomedical knowledge is stable, but central and novel concepts expand semantic meanings through scientific collaboration and social learning.

16:00
Reducing temporal density reduces total infection rate
PRESENTER: Osnat Mokryn

ABSTRACT. The SARS-CoV-2 coronavirus disease 2019 (COVID-19) has created a global crisis. In response, governments and local authorities seek to employ Non-Pharmaceutical Interventions (NPIs) to slow the spread of the virus. One social distancing solution is the creation of spatial or temporal distancing pods. Here, we quantify the different effects of temporal vs. spatial division on the spread of infection in the community, using both temporal random networks and real-life contact networks. We find that reducing temporal density reduces contagion more robustly than spatial pods. Lowering meeting rates enables the removal of symptomatic infected individuals, slows contagion, and reduces the burden on the healthcare system and other scarce resources. Further, large changes in the initial parameters (the number and location of patient zero) do not translate into large changes in the outcome when temporal pods are used, whereas the effect of spatial density depends heavily on the number and location of patient zero.

14:15-16:15 Session Oral O5C: Resilience, Synchronization & Control
14:15
Dynamics of two-layer networks of FitzHugh-Nagumo oscillators with bidirectional and unidirectional couplings
PRESENTER: Elena Rybalova

ABSTRACT. We study numerically the spatiotemporal dynamics and synchronization of a heterogeneous two-layer multiplex network in which each layer is a ring of nonlocally coupled FitzHugh-Nagumo neurons in the oscillatory regime. When uncoupled, the layers can show chimera states, solitary states and combined structures (the coexistence of chimera and solitary states), depending on the values of the intralayer coupling parameters and the initial conditions. We choose different spatiotemporal patterns in the coupled layers and systematically study synchronization between them when bidirectional or unidirectional interlayer coupling is introduced through either the fast (activator) or the slow (inhibitor) variable of the FitzHugh-Nagumo oscillators. Our results uncover the competitive behavior between solitary states and chimeras in the transition to the synchronous regime in the considered network.

14:30
Anti-phase inter-layer synchronization of wave structures in a multiplex network of oscillators
PRESENTER: Igor Shepelev

ABSTRACT. We present numerical results for the synchronization phenomena in a bilayer network of repulsively coupled 2D lattices of van der Pol oscillators. We consider the cases when the network layers have either different or the same types of intra-layer coupling topology. When the layers are uncoupled, the lattice of van der Pol oscillators with repulsive interaction typically demonstrates a labyrinth-like pattern, while the lattice with attractively coupled van der Pol oscillators shows a regular spiral wave structure. We reveal for the first time that repulsive inter-layer coupling leads to anti-phase synchronization of spatiotemporal structures for all considered combinations of intra-layer coupling. As a synchronization measure, we use the correlation coefficient between the symmetrical pairs of network nodes, which is always close to -1 in the case of anti-phase synchronization. We also study how the form of synchronous structures depends on the intra-layer coupling strengths when the repulsive inter-layer coupling is varied.

14:45
Topological synchronization: abrupt transition and rhythmic phase
PRESENTER: Lucille Calmon

ABSTRACT. Topological signals are signals defined not only on the nodes but also on the links and higher-order simplices of a simplicial complex. While topological signals defined on the nodes of a network are traditionally studied in network science, the dynamics of topological signals defined on links have been much less explored. In this work, we introduce topological synchronization, a model that uses topology (i.e. the study of shapes) to couple topological signals defined on nodes and links together. The dynamics of the signals on the nodes follows a Kuramoto-like dynamics including a phase lag that depends on the dynamics of the nearby links. On the other hand, the dynamics of topological signals defined on links follows a higher-order Kuramoto dynamics including a phase lag determined by the dynamical states of nearby nodes. Through in-depth theoretical analysis and extensive large-scale numerical simulations, we find that topological synchronization on fully connected networks is explosive and the model presents a thermodynamically stable hysteresis loop between a discontinuous forward transition and a continuous backward transition. We provide an analytical expression for the threshold of the discontinuous forward transition. We show that topological synchronization also leads to a coherent exotic phase, called the rhythmic phase, characterised by non-stationary complex order parameters due to an instability of the steady-state solutions. This complete analysis provides a novel pathway to explosive synchronization.

15:00
How complex is to be a hub? Dynamical complexity as a proxy for the network degree distribution
PRESENTER: I. Leyva

ABSTRACT. The deep relationship between the topology and dynamics of complex networks has been thoroughly explored, focusing on the synchronization between the nodes, and the knowledge gathered so far has driven advances in crucial applications. However, there are cases in which the system performs its activity in a partial or weak synchronization regime to preserve the balance between functional integration and segregation, whereas full synchronization is pathological. But even in this far-from-synchronized state, each coupled unit encodes in its own dynamics the signature of its structural role. We explore how this prominent feature could be used to extract information about the network, without needing to make any reference to pairwise correlations, even in those situations where the structure is unknown. The effect of the coupling strength on the dynamical complexity of the nodes is found to be a function of their topological roles, with nodes of higher degree displaying lower levels of complexity. Importantly, our results imply that it is possible to infer the degree distribution of a network only from individual dynamical measurements in a large variety of systems, which could be exploited in fields as diverse as neuroscience, econophysics or power grids.

15:15
Inter-layer adaptation: A new route for Explosive synchronization in multilayer networks

ABSTRACT. Adaptation in the connection strength between interacting units is key to the evolution and dynamical performance of many real-world complex systems made up of nonlinear dynamical units. Furthermore, the first-order synchronization transition, popularly known as explosive synchronization, has recently drawn tremendous attention from the community, both out of theoretical curiosity and because of the harmful effects of such phenomena in many real-world systems. It has been shown that adaptation in intra-layer coupling can lead to explosive synchronization in systems which depict a smooth second-order transition in the absence of adaptation. Taking a very different approach, we propose adaptation in the inter-layer connections, i.e., multiplexing adaptation, and show that such adaptation can facilitate explosive synchronization. The scheme is robust against changes in the intra-layer network connections as well as variations in the adaptation terms. Through rigorous mean-field analysis, we show how the presence of adaptation in the inter-layer term prohibits a nonzero solution of the order parameter below the critical intra-layer coupling strength. The results and framework are applicable to systems which have an intrinsic multilayer structure and in which the exact impact of the dynamical activities of one layer on those of the other layers is not known, which is the case for many real-world complex systems.

15:30
Recovery of oscillations from amplitude death state in a network of coupled slow-fast systems
PRESENTER: Sneha Kachhara

ABSTRACT. Complex systems modeled as networks, such as social networks, corporate relationships, transport networks, neuronal networks, etc., evolve in time, both in terms of collective dynamics and network structure. This interplay of network topology and node dynamics can steer the system to a stable state which may not always be desirable. In the case of coupled slow-fast systems (i.e. otherwise identical systems evolving with different timescales), the emergent dynamics is very sensitive to the timescales involved and can suffer from amplitude suppression or a death state. The case of chaotic oscillators is especially intriguing. In this work, we investigate a network of coupled chaotic Roessler systems with two different dynamical timescales. We first identify the amplitude death state and a chimera-like state, and then try various rewiring strategies to boost oscillations and recover network dynamics. We find that it is possible to boost oscillations with intelligent rewiring based on the amplitude of the nodes, checked periodically. We discover that this strategic rewiring is significantly more successful than random rewiring. Our results do not depend on initial conditions and hence have broader applicability.

15:45
Sandpile cascades on oscillator networks

ABSTRACT. The BTW sandpile model of cascading dynamics forms a cornerstone for our understanding of failures in systems ranging from avalanches and forest fires to electric power grids and brain networks. The latter two are examples of oscillator networks, yet the BTW model does not account for this. Here we seek to understand the interplay between the oscillatory and sandpile dynamics by considering the BTW sandpile model on a network of Kuramoto oscillators. Inspired by electric power grids we aim to leverage this interaction to maximize synchronization, maximize load, and minimize large cascades. We assume that the more out-of-sync a node is with its neighbors, the more vulnerable it is to failure so that a node's capacity is a function of its local level of synchronization and that when a node topples, its phase is reset at random. This leads to a novel long-time oscillatory behavior at an emergent time-scale. The bulk of an oscillatory cycle is spent in a build-up phase where oscillators are fully synchronized, and cascades are largely avoided while the overall load in the system increases. Then the system reaches a tipping point where, in contrast to the BTW model, a large cascade triggers an even larger cascade, leading to a ``cascade of cascades'' dragon king event, after which the system has a short transient dynamic that restores full synchrony. The coupling between capacity and synchronization introduces endogenous cascade seeds in addition to exogenous ones and we show their respective roles in the dragon king events. We establish the phenomena from numerical studies and develop the accompanying mean-field theory to locate the tipping point, calculate the amount of load in the system, explain the frequency of the emergent long-time oscillations and find the distribution of cascade sizes during the build-up phase.
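The BTW toppling rule itself is simple to state; below is a minimal illustrative sketch in plain Python, with fixed capacities rather than the synchronization-dependent capacities studied in the talk, and with all names and values our own:

```python
def cascade(load, capacity, neighbors):
    """Run one BTW-style cascade to completion: any node at or above its
    capacity topples, shedding one grain to each neighbour (grains shed
    past an open boundary are dissipated); returns the number of topplings."""
    topplings = 0
    unstable = [n for n in load if load[n] >= capacity[n]]
    while unstable:
        n = unstable.pop()
        if load[n] < capacity[n]:  # node may have been queued twice
            continue
        load[n] -= capacity[n]
        topplings += 1
        for m in neighbors[n]:
            load[m] += 1
            if load[m] >= capacity[m]:
                unstable.append(m)
    return topplings

# open chain 0-1-2-3 with uniform capacity 2; node 1 is pushed past threshold
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
size = cascade({0: 0, 1: 2, 2: 1, 3: 0}, {i: 2 for i in chain}, chain)
print(size)  # → 2
```

In the model of the talk, `capacity` would additionally shrink as a node falls out of sync with its neighbours; here it is held constant for brevity.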

16:00
Inferring the connectivity of coupled chaotic oscillators using data assimilation

ABSTRACT. Inferring the interactions between coupled oscillators is a significant open problem in complexity science. Even though the Kalman filter (KF) technique is a well-known tool, it has not yet been used to infer the connectivity of coupled chaotic oscillators. Here we demonstrate that KF allows reconstructing the interaction topology and the coupling strength of a network of mutually coupled chaotic oscillators. We show that the connectivity can be inferred by considering only the observed dynamics of a single variable of the three that define the phase space of each oscillator. We also show that both the coupling strength and the network architecture can be inferred even when the oscillators are close to synchronization. Simulation results are provided to show the effectiveness and applicability of the proposed method.
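The abstract gives no algorithmic detail; as a hedged toy analogue of KF-based coupling inference (a scalar linear observation model rather than the chaotic three-variable oscillators of the talk, with our own parameter names and noise levels), an unknown coupling coefficient can be estimated like this:

```python
import random

def kalman_coupling(us, ys, q=1e-6, r=0.0025):
    """Scalar Kalman filter for y_t = k*u_t + noise: the unknown coupling k
    is treated as a near-constant state (random walk with tiny variance q);
    r is the measurement-noise variance."""
    k_hat, p = 0.0, 1.0                  # initial estimate and its variance
    for u, y in zip(us, ys):
        p += q                           # predict (state is nearly constant)
        gain = p * u / (u * p * u + r)   # Kalman gain
        k_hat += gain * (y - k_hat * u)  # correct with the innovation
        p *= 1.0 - gain * u              # posterior variance
    return k_hat

random.seed(0)
true_k = 0.7
us = [random.uniform(-1, 1) for _ in range(500)]
ys = [true_k * u + random.gauss(0, 0.05) for u in us]
print(round(kalman_coupling(us, ys), 2))  # ≈ 0.7
```

The paper's setting replaces this scalar model with the full oscillator dynamics, but the predict/correct cycle is the same machinery.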

16:15-16:55 Coffee Break & Online Session for Onsite Presenters of Poster Session P3 A-B-C
16:15-16:55 Session Poster P4A: [1-9] Information Spreading in Social Media
Capturing the Spread of Hate on Twitter using Spreading Activation Models
PRESENTER: Seema Nagar

ABSTRACT. Hate speech is a prevalent and pervasive phenomenon on social media platforms. Detecting hate speech and modelling its spread is a significant research problem with practical implications. Though hate speech detection has been an active area of research, hate spread modelling is still in its nascent stage. Prior works have analyzed hateful users' social network embeddings, hateful user detection using belief propagation models, and the spread velocity of hateful content. However, these prior works fail to factor in the multiple hateful forms (such as hate against gender, race and ethnicity) and the temporal evolution of hate spread, limiting their applicability. We take a holistic approach wherein we model both the overall spread of hate and the fine-granular spread of individual hateful forms. We extend the traditional spreading activation (SPA) model to capture the spread of hate and its forms. We use SPA to model the spread of hate as one single form, while \textit{TopSPA} captures the spread of multiple hate forms. We also propose ways to detect hateful forms by using latent topics present in hateful content. We empirically demonstrate our approach on a dataset from Twitter that contains ample hate speech instances along with users labelled as hateful or not.
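The baseline spreading activation (SPA) mechanism that the authors extend can be sketched generically; this is a textbook variant with invented decay and threshold values, not the paper's TopSPA model:

```python
def spreading_activation(graph, seeds, decay=0.5, threshold=0.01, steps=10):
    """Basic spreading activation on a directed graph: active nodes push
    a decayed share of their activation along out-edges; nodes whose
    activation is below the firing threshold do not spread."""
    activation = {n: 0.0 for n in graph}
    activation.update(seeds)
    for _ in range(steps):
        new = dict(activation)
        for node, out in graph.items():
            a = activation[node]
            if a < threshold or not out:
                continue
            share = a * decay / len(out)  # split decayed activation equally
            for nbr in out:
                new[nbr] += share
        activation = new
    return activation

# tiny retweet-like graph: u0 spreads content to u1 and u2; u1 to u2
g = {"u0": ["u1", "u2"], "u1": ["u2"], "u2": []}
act = spreading_activation(g, {"u0": 1.0}, steps=1)
print(act)  # → {'u0': 1.0, 'u1': 0.25, 'u2': 0.25}
```

TopSPA, per the abstract, would run such a process per hateful form, with topic-specific seeds; that refinement is not reproduced here.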

Activator-Inhibitor Model for Describing Interactions between Fake News and Their Corrections
PRESENTER: Masaki Aida

ABSTRACT. The problem of fake news is becoming increasingly serious on today's online social networks. Intuitively, sending corrected information seems an effective countermeasure. However, there are real cases in which corrected information ironically attracted attention to the fake news, worsening the situation. In this paper, we model the interaction between fake news and its corrected information as a reaction-diffusion system and propose a framework that describes the mechanism by which corrected information attracts attention to fake news. In this framework, the emergence of clusters of users who believe in fake news is understood as the Turing pattern that appears in the activator-inhibitor model. Numerical calculations show that even if the network structure has no spatial bias, the interaction between fake news and its corrected information creates a group that strongly discusses the fake news.
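The reaction-diffusion setting can be illustrated with a generic activator-inhibitor system diffusing over a network; the cubic local kinetics, parameters, and ring topology below are our own illustrative choices, not the authors' model:

```python
import math
import random

def laplacian_term(state, graph):
    """Graph diffusion: neighbour sum minus degree times the node's value."""
    return {n: sum(state[m] for m in graph[n]) - len(graph[n]) * state[n]
            for n in graph}

def simulate(graph, steps=200, dt=0.01, du=0.05, dv=1.0):
    """Euler integration of a generic activator (u) / inhibitor (v) pair
    diffusing on a network, with the inhibitor diffusing much faster
    (dv >> du), the classic precondition for Turing patterns."""
    random.seed(1)
    u = {n: random.uniform(-0.1, 0.1) for n in graph}  # small random kick
    v = {n: 0.0 for n in graph}
    for _ in range(steps):
        lu, lv = laplacian_term(u, graph), laplacian_term(v, graph)
        u = {n: u[n] + dt * (u[n] - u[n] ** 3 - v[n] + du * lu[n]) for n in graph}
        v = {n: v[n] + dt * (0.5 * (u[n] - v[n]) + dv * lv[n]) for n in graph}
    return u

ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
u = simulate(ring)
print(all(math.isfinite(x) for x in u.values()))  # → True
```

In the paper's framing, u and v would correspond to attention to the fake news and to its correction; clusters of high-u nodes play the role of the believer groups.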

Love and hate during political campaigns in social networks

ABSTRACT. In this paper we present a general methodology for analyzing the polarization of the opinions posted by users in social networks during competitive events such as a political election. In addition to studying the opinions about each contender separately, we propose to consider the relationship between the user opinions about each contender, which allows a better understanding of the impact of the process on the social network. Thus, we propose a methodology that describes the opinion of one contender's supporters about the rest of the contenders. As a case study we apply these ideas to the 2016 US presidential election between the candidates Hillary Clinton and Donald Trump. The methodology combines sentiment analysis techniques, new diagrams that graphically describe the tension between positive and negative opinions, and numerical measures obtained using techniques borrowed from physics, such as centers of gravity, employed here to establish the polarity of the discussion.

Homophily - A Driving Factor for Hate Speech on Twitter
PRESENTER: Seema Nagar

ABSTRACT. In this paper, we hypothesize that homophily is a powerful driver in the generation of hate speech on Twitter. Prior works have shown the role of homophily in information diffusion, sustenance of online guilds and contagion in product adoption. We observe that the familiarity computation techniques used in the literature, such as the presence of an edge and the number of mutual friends between a pair of users, leave ample scope for improvement. We address the limitations of existing familiarity computation methods by employing a variational graph auto-encoder to capture a user's position in the network and utilizing this representation to compute familiarity. We empirically demonstrate the presence of homophily on a dataset from Twitter. Further, we investigate how homophily varies with different hateful forms, such as hate manifesting in topics of gender, race, colour etc. Our results indicate higher homophily in users associating with topics of racism, sexism and nationalism.

How the Far-Right Polarises Twitter: 'Hashjacking' as a Disinformation Strategy in Times of COVID-19
PRESENTER: Fabian Stephany

ABSTRACT. Twitter influences political debates. Phenomena like fake news and hate speech show that political discourses on social platforms can become strongly polarised by algorithmic enforcement of selective perception. Some political actors actively employ strategies to facilitate polarisation on Twitter, as past contributions show, via strategies of 'hashjacking'. For the example of COVID-19 related hashtags and their retweet networks, we examine the case of partisan accounts of the German far-right party Alternative für Deutschland (AfD) and their potential use of 'hashjacking' in May 2020. Our findings indicate that polarisation of political party hashtags has not changed significantly in the last two years. We see that right-wing partisans are actively and effectively polarising the discourse by 'hashjacking' COVID-19 related hashtags, like #CoronaVirusDE or #FlattenTheCurve. This polarisation strategy is dominated by the activity of a limited set of heavy users. The results underline the necessity to understand the dynamics of discourse polarisation, as an active political communication strategy of the far-right, by only a handful of very active accounts.

Models of Influence Spreading on Social Networks
PRESENTER: Vesa Kuikka

ABSTRACT. Complex contagion (CC) and Simple contagion (SC) models have been used to describe influence and information spreading on social networks. We propose different models for these categories and discuss the interplay between the characteristics of social interactions and different variants of the models. Different properties of social networks are discovered by calculating centrality measures with the spreading models. We demonstrate the results with the Zachary’s Karate Club social network and a larger Facebook social media network.

A network perspective on QAnon movement during COVID-19 infodemic
PRESENTER: Wentao Xu

ABSTRACT. QAnon is a popular conspiracy theory that has been evolving during COVID-19. We study the behavioral dynamics of pro- and anti-QAnon users and swing users, as well as the topics they discuss, using a hashtag co-occurrence network. Results reveal that it is necessary to better inform swing users about the potential harm of QAnon while avoiding a backfire effect that would push them further toward the pro-QAnon cluster. The network perspective developed in this study is important for forecasting the evolution of the QAnon movement.

Hawkes process marked with topic and its application to Twitter data analysis
PRESENTER: Masatoshi Goda

ABSTRACT. We modeled the timing of tweets by the United States, Chinese, and British embassies' accounts from 1st February to 27th September 2020 with a marked Hawkes process. In our model, we set the proportion of each topic in a tweet, obtained using Latent Dirichlet Allocation, as a mark. With this model, we quantify the impact of each topic of past tweets on the probability of occurrence of the next tweet. We see, for example, that the topic related to Covid-19 in a tweet excites future tweeting in each account.
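The central quantity of such a model, the conditional intensity of a marked Hawkes process with an exponential kernel, can be written in a few lines; parameter values and the scalar topic mark below are illustrative, not fitted values from the paper:

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.5):
    """Conditional intensity of a marked Hawkes process:
    lambda(t) = mu + sum over past events of mark * alpha * exp(-beta*(t - t_i)).
    Each event is (time, mark); here the mark stands in for a topic proportion,
    so topic-heavy past tweets excite the next tweet more strongly."""
    return mu + sum(m * alpha * math.exp(-beta * (t - ti))
                    for ti, m in events if ti < t)

# two past tweets; the second is heavily about one topic (mark 0.9)
events = [(0.0, 0.3), (1.0, 0.9)]
lam = hawkes_intensity(2.0, events)
print(round(lam, 4))  # → 0.3726
```

A full fit would estimate (mu, alpha, beta) per topic by maximum likelihood; only the intensity evaluation is sketched here.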

The complexity of the Twitter conversation on climate change
PRESENTER: Blas Kolic

ABSTRACT. Identifying intervention points in socioeconomic systems is key to advancing climate change mitigation. To that end, Twitter has been extensively used to analyse climate change’s public discourse and has proven to be a successful tool for monitoring public perception in other scenarios. Here we study the Twitter conversation on climate change during 2019, when important climate-related movements occurred, such as “Extinction Rebellion” or “Fridays For Future”. Our goal is to characterise the complexity of the overall conversation on Twitter by studying the impact of its leading users on their audiences' networks and analysing how these audiences interact over time.

We find that a few leading users dominate the Twitter conversation on climate change, with an average Gini index for the users' impact of 0.89. The dynamics of the most persistent leading users show that 1) there are groups of users with contrasting ideologies and 2) the presence of such groups varies over time. Finally, we construct similarity networks of the leading users based on the overlap of who their audiences retweet (i.e., the chamber of the audience), where we find a clear community structure that matches the users' empirical ideologies. This work shows that by studying a small set of users, we can tell a great deal about the dynamics of the Twitter conversation. We will further analyse the network structure of the chambers and focus on the dynamic flow of information between them.
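The Gini index reported for user impact (0.89) is a standard inequality measure; a minimal sketch of how it is computed from a vector of per-user impact scores:

```python
def gini(values):
    """Gini index of a list of non-negative impact scores:
    0 = perfectly equal, approaching 1 = dominated by a few users."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # standard formula on sorted values via rank-weighted cumulative sum
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))              # → 0.0  (equal impact)
print(round(gini([0, 0, 0, 100]), 2))  # → 0.75 (one dominant user)
```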

16:15-16:55 Session Poster P4B: [10-15] Dynamics on/of Networks
Towards control of opinion diversity by introducing zealots into a polarised social group

ABSTRACT. We explore a method to influence or even control the diversity of opinions within a polarised social group. We leverage the voter model in which users hold binary opinions and repeatedly update their beliefs based on others they connect with. Stubborn agents who never change their minds ("zealots") are also disseminated through the network, which is modelled by a connected graph. Building on earlier results, we provide a closed-form expression for the average opinion of the group at equilibrium. This leads us to a strategy to inject zealots into a polarised network in order to shift the average opinion towards any target value. We account for the possible presence of a backfire effect, which may lead the group to react negatively and reinforce its level of polarisation in response. Our results are supported by numerical experiments on synthetic data.
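The closed-form expression itself is not reproduced in the abstract, but its underlying fixed point (each free agent's expected opinion equals the average of its neighbours' values, with zealot opinions held fixed) can be computed by simple iteration; a sketch with our own naming:

```python
def equilibrium_opinions(graph, zealots, iters=10000):
    """Expected equilibrium opinions in the voter model with zealots:
    each free node's value is the harmonic average of its neighbours',
    with zealot values clamped; solved by fixed-point iteration."""
    x = {n: zealots.get(n, 0.5) for n in graph}
    for _ in range(iters):
        for n in graph:
            if n not in zealots:
                x[n] = sum(x[m] for m in graph[n]) / len(graph[n])
    return x

# path 0-1-2-3-4 with a pro-zealot at node 0 and an anti-zealot at node 4
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
x = equilibrium_opinions(path, {0: 1.0, 4: 0.0})
print([round(x[i], 3) for i in range(5)])  # → [1.0, 0.75, 0.5, 0.25, 0.0]
```

Shifting the group's average opinion toward a target then amounts to choosing where (and how many) clamped nodes to inject, which is the control question the paper studies.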

Bosonic Random Walk Neural Networks for Graph Learning
PRESENTER: Shivshankar

ABSTRACT. The development of Graph Neural Networks (GNNs) has led to great progress in machine learning on graph-structured data. These networks operate by diffusing information across the graph nodes while capturing the structure of the graph. Recently there has also been tremendous progress in quantum computing techniques. In this work, we explore applications of multi-particle quantum walks to diffusing information across graphs. Our model is based on learning the operators that govern the dynamics of quantum random walkers on graphs. We demonstrate the effectiveness of our method on classification and regression tasks.

Directed hypergraphs and dynamical systems
PRESENTER: Mauro Faccin

ABSTRACT. Studying complex systems often requires deconstructing those systems into simpler parts that interact with each other. Network science names these basic constituents nodes and the pairwise interactions between them edges. In some cases, unfortunately, pairwise interactions fall short in describing the initial system and a different route needs to be followed. Simplicial complexes introduce the notion of nodes that interact through simplices: a segment, a triangle, a tetrahedron, etc. Hypergraphs, on the other hand, define interactions beyond pairwise through hyperedges, edges that can join multiple nodes at once. In this work we present a unified view that encompasses results on directed hypergraphs together with the recent findings characterizing dynamical systems on undirected hypergraphs. A notable result of the present work is the definition of an effective graph that can be used to easily extend and compute dynamics-based network measures in the domain of hypergraphs. Finally, we introduce some examples to illustrate these findings.

Finding Colorful Paths in Temporal Graphs
PRESENTER: Riccardo Dondi

ABSTRACT. The problem of finding paths in temporal graphs has been recently considered due to its many applications. In this paper we consider a variant of the problem that, given a vertex-colored temporal graph, asks for a path whose vertices have distinct colors and include the maximum number of colors. We study the approximation complexity of the problem and we provide an inapproximability lower bound. Then we present a heuristic for the problem and an experimental evaluation of our heuristic, both on synthetic and real-world graphs.

Quantitative Evaluation of Snapshot Graphs for the Analysis of Temporal Networks

ABSTRACT. One of the most common approaches to the analysis of dynamic networks is time-window aggregation. The resulting representation is a sequence of static networks, i.e. the snapshot graph. Despite this representation being widely used in the literature, a general framework to evaluate the soundness of snapshot graphs is still missing. In this article, we propose two scores to quantify conflicting objectives: Stability measures how stable the sequence of snapshots is, while Fidelity measures the loss of information compared to the original data. We also develop a technique of targeted filtering of the links to simplify the original temporal network. Our framework is tested on datasets of proximity and face-to-face interactions.
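The abstract does not define the two scores precisely; one natural reading of Stability, used here purely as an assumed stand-in, is the average Jaccard similarity between consecutive snapshots' edge sets:

```python
def stability(snapshots):
    """Average Jaccard similarity between consecutive snapshots' edge sets:
    1.0 = the aggregated network never changes, 0.0 = complete turnover."""
    sims = []
    for a, b in zip(snapshots, snapshots[1:]):
        union = a | b
        sims.append(len(a & b) / len(union) if union else 1.0)
    return sum(sims) / len(sims)

snaps = [{("a", "b"), ("b", "c")},
         {("a", "b"), ("b", "c")},
         {("a", "b"), ("c", "d")}]
print(round(stability(snaps), 3))  # → 0.667
```

Fidelity would instead compare each snapshot against the underlying event stream in its window; the trade-off between the two scores drives the choice of window size.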

VLT-LUT: modeling the very long term evolution of the city in 300 years

ABSTRACT. The evolution of cities is modelled by developing software to simulate the effects of agglomeration economies and transportation planning in the very long term. The cities' complexity is modelled with VLT-LUT, a tool that simulates the evolution of land use and transportation and their interaction over a time span of 300 years, or from 1 to 29 million inhabitants, based on urban microeconomic theory and market equilibrium. Preliminary simulation results on an artificial city include the impact of exogenous scenarios of road network evolution and of the agents' perception of agglomeration economies, observing the evolution of land-use forms and the city size. Another result is that land rents evolve super-linearly with population, in line with previous empirical and theoretical research, but their strength is differentiated by scenario.

16:15-16:55 Session Poster P4C: [16-20] Structural Network Measures
Difficulty Metrics for Subgraph Isomorphism
PRESENTER: Rahul Deshmukh

ABSTRACT. Subgraph matching is a fundamental problem in complex network analysis where two graphs, a query and a target, are given as input and one must determine whether the target contains a subgraph that is isomorphic to the query. Subgraph isomorphism is an extremely challenging problem due to combinatorial explosion and the lack of discriminative node and edge features. Various domains, such as weapons of mass destruction and cyber-security, require detecting and locating the occurrence of a pattern graph within a large target graph. A large target graph constructed from multiple sources often contains a lot of redundant, incorrect, and incomplete information, known as noise. Subgraph isomorphism becomes even more challenging when an algorithm needs to identify patterns in the presence of this background noise. Backtracking-based solutions to this problem explore a large search space, which results in high computational cost. Due to the large size of target graphs, while finding a match of the pattern graph in the target graph, a subgraph isomorphism algorithm may get caught up in wrong regions of the target graph and may never be able to trace back to the region where the solution lies. This further increases the difficulty of the subgraph isomorphism problem. To enable subgraph matching under limited compute and time resources, it is important to design the pattern graphs and the target graph with quantifiable discernability of the pattern graphs. While designing these pattern graphs, a subject matter expert may benefit from an estimate of the underlying search difficulty in their respective domains. We propose a method that estimates the difficulty of the subgraph isomorphism problem for a given target graph and a library of query graphs without solving the problem. With the help of this estimate, subgraph isomorphism algorithms can be adapted to solve the problem under limited resources for large target graphs.
We present three novel metrics to measure the difficulty of performing subgraph matching for a library of query graphs. We start with structural and semantic variations on a library of query graphs. The structural variations are achieved by adding or removing nodes and edges in a query graph, while the semantic variations change the types of nodes in the query graph. When performing subgraph isomorphism, the majority of applications require that the resulting subgraph satisfy a given set of constraints. We also make use of constraint variations in our input query graphs. These constraints are of two types: time constraints and geo-constraints. To avoid initializing subgraph matching from a wrong region in the world graph, we propose to extend previous research work that leverages the graph structure and identifies domain-specific patterns of interest in the world graph [1]. We make use of SME-generated query graphs to train a Graph Convolutional Network and obtain a set of interesting nodes for a given domain. These interesting nodes are used to generate samples from big target graphs. We propose sampling strategies that generate samples that resemble the query graphs in size. The main objective of the sampling is to capture the structures and semantics and compare them with the structures and semantics of the query graphs. To make these comparisons, we make use of structural and semantic motifs [2]. A motif is a small subgraph that is an interesting indicator for a given complex network. A structural motif can be detected by analyzing the graph structure only. A semantic motif encodes not only the structure but also the types (or labels) of the nodes involved in the sub-structure before classifying them into the library of motifs. We use a library of motifs as shown in Figure 1. Since the query graphs contain structural, semantic as well as constraint variations, it is essential to have a difficulty metric for all three parts.
Therefore, we propose a method that compares these samples with the query graphs using these three metrics and provides a quantitative measure of how discriminable one query graph is from the other graphs in the library. Depending on the noise present in the query graphs, the user may rely on one of these metrics to obtain an estimate of the difficulty of the subgraph isomorphism problem.

Detection of Dynamic and Kinetic Changes in a Musical Audio Using Visibility Graphs and Modularity

ABSTRACT. The properties of complex networks have been used to study several types of systems involving musical audio signals (\cite{tse2008analyzing}, \cite{buldu2007complex}, \cite{jacobson2008using}). In \cite{melo2020PlosOneGraphBased}, the authors establish the relationship between the peak homogeneity of audio signals and the modularity of visibility graphs associated with these signals to study the percussive influence in musical styles. In this work, we use the same methodology as \cite{melo2020PlosOneGraphBased} to show that modularity can detect changes in the kinetics and dynamics of a musical audio recording. In this experiment we used audio segments from the recording of ``Berimbau'', composed by Baden Powell and Vinícius de Morais and played by Jurandir Santana (guitar) and Gabriel Grossi (harmonica)\footnote{Berimbau was recorded in Temps Records - Barcelona and mixed by Josep Roight and Erick - 2016}. The original audio, of 5 min 22 sec length, 44,100 Hz and 16 bit, was subdivided into 28 excerpts at 44,100 Hz, 16 bit. Each subdivision separates a different musical phrase, each corresponding to a part, which sometimes repeats itself with some variation; at the same time, the criterion of homogeneity of onsets was used.
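For readers unfamiliar with the construction, the natural visibility graph links two samples of a series whenever the straight line between them passes above every intermediate sample; a minimal sketch (the input series is invented, not onset data from the paper):

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: samples a and b are
    connected if every intermediate sample c lies strictly below the
    straight line joining (a, y_a) and (b, y_b)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

# a small onset-strength-like series; the tall sample at index 1 blocks
# visibility between index 0 and everything beyond it
edges = visibility_graph([1, 3, 2, 4])
print(sorted(edges))  # → [(0, 1), (1, 2), (1, 3), (2, 3)]
```

Community detection (modularity) would then be run on the resulting graph, which is the quantity the abstract tracks across the 28 excerpts.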

Statistically Validated Networks for evaluating coherence in topic models
PRESENTER: Andrea Simonetti

ABSTRACT. Probabilistic topic models have become one of the most popular and widespread machine learning techniques for textual analysis. Unfortunately, topic models do not guarantee the interpretability of their outputs. The topics learned by the model may be characterized by a set of irrelevant or unchained words, being useless for interpretation. Although several existing measures already consider the co-occurrence between words as a measure of association, none has undertaken a statistical approach based on hypothesis testing to assess whether the co-occurrence observed between two words can be attributed to chance, or whether these links carry relevant information about the structure of topics. Thus, we propose a new coherence measure based on Statistically Validated Networks to evaluate topic coherence. We derive a global measure of coherence for a topic by considering the number of statistically validated links, the strength of the association between word pairs, and the relative relevance of each word in the topic. We claim that these links carry relevant information about the structure of topics, i.e., the more connected the network, the more semantically coherent the corresponding topic. We designed a survey to obtain human judgment, which we use as ground truth, to compare our method with state-of-the-art coherence measures. The results show that the proposed coherence measure reproduces human judgment more closely than the other measures considered in the literature.
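The validation step in a Statistically Validated Network typically rests on a hypergeometric test of co-occurrence against the chance expectation; a self-contained sketch (document counts and the threshold are illustrative):

```python
from math import comb

def cooccurrence_pvalue(n_docs, n_a, n_b, n_ab):
    """Hypergeometric p-value that words a and b co-occur in at least
    n_ab of n_docs documents by chance, given their marginal document
    counts n_a and n_b (upper tail of the hypergeometric distribution)."""
    return sum(comb(n_a, k) * comb(n_docs - n_a, n_b - k)
               for k in range(n_ab, min(n_a, n_b) + 1)) / comb(n_docs, n_b)

# two words, each in 10 of 100 docs, co-occurring in 5: far above the
# chance expectation of 1, so the link would be statistically validated
p = cooccurrence_pvalue(100, 10, 10, 5)
print(p < 0.01)  # → True
```

In the full method, p-values across all word pairs would be compared against a multiple-testing-corrected threshold (e.g. Bonferroni) before a link is kept.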

Adding semantic to level-up graph-based Android malware detection
PRESENTER: Roxane Cohen

ABSTRACT. As the number of Android malware samples constantly increases, users face numerous threats such as privacy leakage, data theft and hardware failure. Although Android malware is considered polymorphic, function call graphs are robust features that properly describe an app. In this extended abstract, we use MalNet, a huge dataset that references function call graphs extracted from Android malware. We show that graph structure alone is not sufficient to detect whether a function call graph comes from a malicious app. In this case, external calls to API methods bring both structural and semantic information and better describe the behaviour of an Android application. Hence, adding such features to machine learning models helps to increase performance. In this work, we also study the impact of the orientation of the FCG and recommend the use of this information if structural features are used. Finally, we show that simpler shallow models can achieve worthwhile performance and we advocate for their systematic use as baselines in graph classification tasks.

On resilience in complex networks

ABSTRACT. In this work, we approach the concept of resilience in complex networks modelled by finite graphs. We understand resilience as a measure of the recovery capacity of the network after a disruption. We propose to study the behavior of the network in such a recovery process by using graph topological indices, observing that some efficient networks can be not very resilient.

16:55-17:35 Session Speaker S4: Alessandro Vespignani - Computational Epidemiology at the time of COVID-19
16:55
Computational Epidemiology at the time of COVID-19

ABSTRACT. The data science revolution is finally enabling the development of large-scale data-driven models that provide real- or near-real-time forecasts and risk analysis for infectious disease threats. These models also provide rationales and quantitative analysis to support policy-making decisions and intervention plans. At the same time, the non-incremental advance of the field presents a broad range of challenges: algorithmic (multiscale constitutive equations, scalability, parallelization) and the real-time integration of novel digital data streams (social networks, participatory platforms, human mobility, etc.). I will review and discuss recent results and challenges in the area, and focus on ongoing work aimed at responding to the COVID-19 pandemic.

17:35-19:50 Session Oral O6A: Network Models
Chair:
17:35
Towards Building a Digital Twin of Complex System using Causal Modelling
PRESENTER: Luka Jakovljevic

ABSTRACT. Complex systems, such as communication networks, generate thousands of new data points about the system state every minute. Even if faults are rare events, they can easily propagate, which makes it challenging to distinguish root causes of errors from effects among the thousands of highly correlated alerts appearing simultaneously in high volumes of data. In this context, the need for automated Root Cause Analysis (RCA) tools emerges, along with the creation of a causal model of the real system, which can be regarded as a digital twin. The advantage of such a model is twofold: (i) it assists in reasoning on the system state, given partial system observations; and (ii) it allows generating labelled synthetic data, in order to benchmark causal discovery techniques or create previously unseen faulty scenarios (counterfactual reasoning). The problem addressed in this paper is the creation of a causal model which can mimic the behavior of the real system by encoding the appearance, propagation and persistence of faults through time. The model extends Structural Causal Models (SCMs) with the use of logical noisy-OR gates to incorporate the time dimension and represent propagation behaviors. Finally, the soundness of the approach is experimentally verified by generating synthetic alert logs and discovering both the structure and parameters of the underlying causal model.

17:50
Morphogenesis of spatial networks through way analysis
PRESENTER: Paul Jeammet

ABSTRACT. One may find in many physical and biological systems a spatial shape creating a network, from the streets in a city to a blood network, through clay desiccation patterns, slime mold and corals. These networks share as a main characteristic their creation process: a growth mechanism through propagation, bifurcation and reticulation, mostly in a 2D plane. They are the consequence of the iteration of a process through time. As such, they share characteristics that we can highlight through the right methodology and indicators of network analysis.

In this abstract, we propose, through the notion of ways (the continuity of edges in a spatial network), a method to numerically extract the network from images, and an ensemble of relevant indicators to understand the historicity and structure of the complex system. We also discuss how such analysis leads us to new approaches for a deeper understanding of growing complex networks in general.

18:05
Dynamical universality in contracting networks
PRESENTER: Eytan Katzav

ABSTRACT. Complex networks encountered in biology, ecology, sociology and technology often contract due to node failures, infections or attacks. The ultimate failure, taking place when the network fragments into disconnected components, was studied extensively using percolation theory. We show that long before reaching fragmentation, contracting networks lose their distinctive features. In particular, we identify that a very large class of network structures, experiencing a broad class of node deletion processes, exhibits a stable flow towards universal fixed points, representing a maximum-entropy ensemble, which in the pure contraction scenario is the Erdős-Rényi ensemble. Under a more general combination of growth and deletion processes, the resulting degree distribution is Poisson-like. This is in sharp contrast to network growth processes, which often lead to scale-free networks. It also implies that contracting networks in the late stages of node failure cascades, attacks and epidemics reach a common structure, providing a unifying framework for their analysis.

18:20
'Stealing fire' or 'stacking knowledge' to model link prediction in complex networks

ABSTRACT. From nests to nets, intricate wiring diagrams surround the birth and death of life, and we, as humans, wonder what the secret rules behind such elegant network architectures are. If nature adopts connectivity to shape complexity, humans might adopt their intelligence, or an artificial one, to make sense of it. To predict structural complexity in a network: should we act as small Prometheans, exploiting creativity to 'steal the fire from Gods' and search, as physicists aim, for that accurate but simple, elegant unique rule that makes sense of 'every-thing'? Or should we ask an artificial intelligence (AI) to stack for us hundreds of inaccurate human-made rules to make a new one that optimally summarizes them together? Perhaps neither of these two solutions is what the future holds for us. Nevertheless, Ghasemian et al. [1] embrace the second option and propose a stacking model: a machine learning method for link prediction based on ensemble meta-learning, using artificial intelligence to create a single meta-model from hundreds of other models. Analysing the results on 550 real-world networks, Ghasemian et al. conclude that they obtain a state-of-the-art algorithm achieving nearly optimal prediction [1]. Here, we suggest that the scientific community might benefit from interpreting the results of Ghasemian et al. when their algorithm is compared with other state-of-the-art algorithms which differ largely from ensemble meta-learning. One is the structural predictability method (SPM) [2], which relies on a theory derived from quantum mechanics that exploits the first-order perturbation of the graph adjacency matrix. SPM takes a way opposite to that of stacking: it simply does not assume any model. Interestingly, brain and life science theories also contribute to inspire revolutionary algorithms for link prediction [3]. After Daminelli et al. [4] and Kovács et al. [5] elucidated the importance of building models that rely on paths of length 3 inducing quadrangular closure [4], Muscoloni et al. [6] proposed a general theory of adaptive network automata that includes a simple modelling rule named Cannistraci-Hebb (CH). CH is inspired by neuro-plasticity concepts and scales adaptively on paths of different length, in relation to the intrinsic network structure [6]. It might be enlightening to compare a method that stacks multiple models versus SPM, which is model-free, and CH-adaptive (CHA), which relies on one model adapting to the intrinsic network structure. Using the 550 networks of Ghasemian et al. [1], we report the results according to mean performance (Figure 1) for Precision, AUC-PR and AUC-ROC, following evaluation strategies previously employed [2,7]. Brute-force stacking of algorithms by AI does not perform better than (and is often significantly outperformed by) SPM and one simple brain-bioinspired rule such as CH. This agrees with Gödel incompleteness: stacking is optimal but incomplete; you cannot squeeze out more than what is already in your features. Hence, we should also pursue AI that resembles human-like physical 'understanding' of simple generalized rules associated with complexity [8]. The future might be populated by AI that 'steals for us the fire from Gods', towards machine intelligence that creates new rules rather than stacking the ones already known.

18:35
On the use of Null Models to Assess Statistical Significance in Bipartite Networks
PRESENTER: María Palazzi

ABSTRACT. The use of null models has been a cornerstone to assess the emergence of many network properties at different levels of organization (micro-, meso- or macroscales). Notwithstanding, the debate around which is the most appropriate metric-null model combination for a given problem is far from over. Within the ecological community, for example, the discussion around whether nestedness is, or is not, ubiquitous in natural systems [1], and under which assumptions, remains open [2]. The debate can also be extended, albeit to a lesser degree, to other patterns, e.g., modularity [3, 4]. Yet, efforts have been devoted to exploring to what extent current models are vulnerable to statistical errors [5], or to the introduction of new models that employ different randomization procedures [6]. Here, we show that assessing several descriptors under a single model may produce ambiguous results, which complicates comparisons regarding the joint emergence of different arrangements within a single network. We introduce an alternative model (corrected probabilistic model) that helps to reduce the observed under- and overestimation of the structural patterns, and highlight the need for the development of new frameworks that take into account those biases.

18:50
A Leading Author Model for the Popularity Effect on Scientific Collaboration

ABSTRACT. In this paper, we focus on the popularity effect in the scientific collaboration process, whereby popular authors have an advantage in making more publications. Standard network analysis is used to analyze the scientific collaboration network, but it is limited in explaining scientific outputs through binary co-authorship relationships, since papers have various numbers of authors. We propose a leading author model to understand the popularity effect mechanism while avoiding the use of the standard network structure. An estimation algorithm is presented to analyze the size of the popularity effect. Moreover, we find influential authors through the estimated genius levels of authors by considering the popularity effect. We apply the proposed model to real scientific collaboration data, and the results show positive popularity effects in all the collaborative systems.

19:05
Limitations of Chung Lu Random Graph Generation

ABSTRACT. Random graph models play a central role in network analysis. The Chung-Lu model, which connects nodes based on their expected degrees, is of particular interest. It is widely used to generate null-graph models with expected degree sequences. In addition, these attachment probabilities implicitly define network measures such as modularity. Despite its popularity, practical methods for generating instances of Chung-Lu model-based graphs do a relatively poor job of accurately realizing many degree sequences. We perform a theoretical analysis of the Chung-Lu random graph model in order to understand this discrepancy. We approximate the expected output of a generated Chung-Lu random graph model with a linear system and use properties of this system to predict distribution errors. We provide bounds on the maximum proportion of nodes with a given degree that can be reliably produced by the model, for both general and non-increasing distributions. We additionally provide an explicit inverse of our linear system and, in cases where the inverse can provide a valid solution, we introduce a simple method for improving the accuracy of Chung-Lu graph generation. Our analysis serves as an analytic tool for determining the accuracy of Chung-Lu random graph generation, as well as for correcting errors under certain conditions.
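The standard Chung-Lu sampler the abstract analyzes can be sketched as follows (an editor's illustration, not the paper's code): each pair is connected independently with probability min(1, w_i·w_j/Σw). The min(1, ·) clipping is one plausible source of the degree-realization errors discussed above, since high-weight nodes cannot reach their target expected degrees.

```python
import random

def chung_lu(weights, seed=None):
    """Generate one Chung-Lu graph: nodes i and j are connected
    independently with probability min(1, w_i * w_j / sum(w))."""
    rng = random.Random(seed)
    n, S = len(weights), sum(weights)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / S):
                edges.add((i, j))
    return edges

def mean_degrees(weights, samples=200):
    """Average realized degree per node over many samples, to compare
    against the target expected degrees (the weights themselves)."""
    n = len(weights)
    deg = [0.0] * n
    for s in range(samples):
        for i, j in chung_lu(weights, seed=s):
            deg[i] += 1
            deg[j] += 1
    return [d / samples for d in deg]
```

Running `mean_degrees([1, 1, 1, 1, 20])` shows the mismatch: the node with weight 20 can have degree at most 4, so its realized mean degree falls far short of its target.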

19:20
Surprising Behavior of the Average Degree for a Node's Neighbors in Growth Networks
PRESENTER: Sergei Sidorov

ABSTRACT. We study the variation of the stochastic process that describes the temporal behavior of the average degree of the neighbors of a fixed node in Barabási-Albert networks. It was previously known that the expected value of this random quantity grows logarithmically with the number of iterations. The noteworthy fact proved in this paper is that the variation of this process is bounded by a constant.
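The quantity in question can be simulated with a minimal sketch (an editor's illustration with m = 1 attachments per new node; the paper's result is theoretical): grow a Barabási-Albert network by preferential attachment and track the average degree of the neighbors of one fixed node over time.

```python
import random

def ba_avg_neighbor_degree(n_steps, node=0, seed=0):
    """Grow a Barabási-Albert network (m = 1 edge per new node, preferential
    attachment) and record, after each step, the average degree of the
    neighbors of one fixed node."""
    rng = random.Random(seed)
    # start from a single edge 0-1; `targets` lists each node once per unit of degree
    targets = [0, 1]
    neighbors = {0: {1}, 1: {0}}
    degree = {0: 1, 1: 1}
    trajectory = []
    for new in range(2, n_steps):
        old = rng.choice(targets)            # preferential attachment
        neighbors.setdefault(new, set()).add(old)
        neighbors[old].add(new)
        degree[new] = 1
        degree[old] += 1
        targets += [new, old]
        trajectory.append(sum(degree[v] for v in neighbors[node]) / degree[node])
    return trajectory
```

Plotting the trajectory against log(t) illustrates the known logarithmic growth of the expectation; the paper's contribution is that the fluctuation around it stays bounded.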

19:35
Score-Driven Exponential Random Graphs: A New Class of Time-Varying Parameter Models for Dynamical Networks

ABSTRACT. Motivated by the increasing abundance of data describing real-world networks that exhibit dynamical features, we propose an extension of Exponential Random Graph Models (ERGMs) that accommodates the time variation of their parameters. Inspired by the fast-growing literature on Dynamic Conditional Score-driven models, each parameter evolves according to an updating rule driven by the score of the ERGM distribution. We demonstrate the flexibility of score-driven ERGMs (SD-ERGMs), both as data-generating processes and as filters, and we show the advantages of the dynamic version with respect to the static one. We discuss two applications to time-varying networks from financial and political systems. First, we consider the prediction of future links in the Italian interbank credit network. Second, we show that the SD-ERGM allows one to discriminate between static and time-varying parameters when used to model the dynamics of the US Congress co-voting network.

17:35-19:35 Session Oral O6B: Human Behavior
17:35
Complex networks to detect discriminations in a historical process: the case of post-WW2 purges

ABSTRACT. At the Liberation of the Europe occupied by the Nazis during WW2, there came first the so-called "wild purges", driven by a thirst for immediate revenge. Then, special governmental bodies were set up to try those considered guilty of collaboration with the Nazis. In France, Courts of Justice (Cours de Justice) dealt with the most serious crimes, punishable by death, forced labour, prison, etc. However, there were also individuals who could not be convicted of any crime in a legal sense, but who had behaved "unpatriotically", e.g. by being involved with, or supportive of, the Nazis. That is why purge committees (Comités d'épuration) were established. Their role was to inspect how members of a number of trades and professions had behaved during the Occupation. These committees had to design specific charges, to measure degrees of involvement and to decide on individual convictions and sentences. Our research focuses on the purge committees in the performing arts, and more specifically on the long-standing question of whether certain social groups – women artists, popular artists, stars, etc. – were discriminated against or not. Such discriminations have often been hypothesized, in particular against women artists, but they have never been supported or disproved through data analysis. In this paper, we analyze the first thorough dataset on purges in the performing arts, created from material collected by one of the authors from all available archival sources. While standard statistical tools like factor analysis or multiple tests yield only a few results, we show that new techniques from complex network analysis may be fruitfully brought to bear on the data. In particular, building upon methods developed to detect discrimination in machine learning algorithms [3], we use a Suppes-Bayes network representation to obtain the first rigorous, data-supported results on the question of discrimination against women artists during the post-WW2 purges.

17:50
When Does it Pay-Off to Learn Something New? Revealing the Complementary Value of Digital Skills

ABSTRACT. This paper asks: when does it pay off to learn something new? It examines the economic benefits of learning new digital skills. With data on 14,790 online freelance workers and the example of popular programming languages, this work shows that learning one single skill can add more than 50 percent to the average worker's wage. The results suggest that acquiring skills from different domains could be of even higher economic value. However, the economic value of learning a new skill largely depends on the composition of a worker's existing skill bundle. The value of learning something new is complementary to what we already know.

18:05
Markov Modulated Process to model human mobility
PRESENTER: Piet Van Mieghem

ABSTRACT. We introduce a Markov Modulated Process (MMP) to describe human mobility. We represent the mobility process as a time-varying graph, where a link specifies a connection between two nodes (humans) at any discrete time step. Each state of the Markov chain encodes a certain modification to the original graph. We show that our MMP model successfully captures the main features of a human mobility simulator, in which nodes move in a square region. We apply our MMP model to human mobility in a library.
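The general mechanism, a hidden Markov chain whose state selects which modification is applied to the current edge set, can be sketched as follows (an editor's illustration with toy modification states; the paper's states encode mobility-specific changes):

```python
import random

def simulate_mmp(P, modifiers, edges, steps, seed=0):
    """Markov Modulated Process over a time-varying graph: a hidden Markov
    chain with row-stochastic transition matrix P selects, at each discrete
    time step, which modification to apply to the current edge set."""
    rng = random.Random(seed)
    state = 0
    snapshots = []
    for _ in range(steps):
        r, acc = rng.random(), 0.0            # draw the next hidden state
        for s, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = s
                break
        edges = modifiers[state](edges, rng)  # state-specific graph change
        snapshots.append((state, frozenset(edges)))
    return snapshots

# Two toy modification states over five nodes: one adds a random contact,
# one drops a random existing contact (if any).
NODES = range(5)

def add_contact(edges, rng):
    i, j = rng.sample(NODES, 2)
    return edges | {frozenset((i, j))}

def drop_contact(edges, rng):
    if not edges:
        return edges
    victim = rng.choice(sorted(edges, key=sorted))
    return edges - {victim}
```

Each snapshot is one frame of the time-varying graph; fitting such a model amounts to estimating P and the per-state modification parameters from observed contact sequences.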

18:20
Impact of Monetary Rewards on Users’ Behavior in Social Media
PRESENTER: Yutaro Usui

ABSTRACT. This paper investigates the impact of monetary rewards on behavioral strategies and the quality of posts in consumer generated media (CGM). In recent years, some CGM platforms have introduced monetary rewards as an incentive to encourage users to post articles. However, the impact of monetary rewards on users has not been sufficiently clarified. Therefore, to investigate the impact of monetary rewards, we extend the SNS-norms game, which models SNSs based on evolutionary game theory, by incorporating a model of monetary rewards, users' preferences for them, and their efforts towards article quality. The results of experiments on several types of networks indicate that monetary rewards promote the posting of articles but significantly reduce article quality. In particular, when the value of the monetary reward is small, it significantly reduces the utilities of all users owing to the decrease in quality. We also found that individual user preferences for monetary rewards lead to clear differences in behavior.

18:35
Quoting is not Citing: Disentangling Affiliation and Interaction on Twitter
PRESENTER: Camille Roth

ABSTRACT. Interaction networks are generally much less homophilic than affiliation networks, accommodating for many more cross-cutting links. By statistically assigning a political valence to users from their network-level affiliation patterns, and by further contrasting interaction and affiliation (quotes and retweets) within specific discursive events, namely quote trees, we describe a variety of cross-cutting patterns which significantly nuance the traditional "echo chamber" narrative.

18:50
Communication Now and Then: Analyzing the Republic of Letters as a Communication Network

ABSTRACT. Huge advances in understanding patterns of human communication, and the underlying social networks where it takes place, have been made recently using massive automatically recorded data sets from digital communication, such as emails and phone calls. However, it is not clear to what extent these results on human behaviour are artefacts of contemporary communication technology and culture and if the fundamental patterns in communication have changed over history. Our work presents an analysis of historical epistolary metadata with the aim of comparing the underlying historical communication patterns with those of contemporary communication. We use a new epistolary dataset containing metadata on hundreds of thousands of letters sent between the 16th and 19th centuries. The analyses indicate striking resemblances between contemporary and epistolary communication network patterns, including dyadic interactions and ego-level behaviour. Despite these positive findings, certain aspects of the letter datasets are insufficient to corroborate other similarities or differences for these communication networks.

19:05
Two Learning Approaches for Abstraction and Reasoning
PRESENTER: Simon Alford

ABSTRACT. One of the challenges facing artificial intelligence research today is designing systems capable of utilizing systematic reasoning to generalize to new tasks. The Abstraction and Reasoning Corpus (ARC) measures such a capability through a set of visual reasoning tasks. In this paper we describe two learning approaches to enable higher performance on ARC. First, we give an approach for learning abstractions on ARC. We apply an existing program synthesis system called DreamCoder to create symbolic abstractions out of solutions of tasks solved so far. These abstractions enable the solving of progressively more difficult ARC tasks. Second, we design a reasoning algorithm motivated by the way humans approach ARC. Our algorithm combines execution-guided program synthesis with deductive reasoning based on function inverse semantics to enable a bidirectional, execution-guided program synthesis algorithm. We demonstrate the effectiveness of our reasoning approach on three domains: ARC, 24-Game tasks, and a 'double-and-add' arithmetic puzzle. These two complementary approaches mark the first known progress on ARC not based on brute-force search and pave the way for further progress on ARC and the broader challenges of systematic reasoning and generalization.

19:20
Data-Driven Modeling of Evacuation Decision-Making in Extreme Weather Events
PRESENTER: Chris Kuhlman

ABSTRACT. Data from surveys administered after Hurricane Sandy provide a wealth of information that can be used to develop models of evacuation decision-making. We use a model based on survey data for predicting whether or not a family will evacuate. The model uses 26 features for each household including its neighborhood characteristics. We augment a 1.7 million node household-level social network of Miami, Florida with public data for the requisite model features so that our population is consistent with the survey-based model. Results show that household features that drive hurricane evacuations dominate the effects of specifying large numbers of families as “early evacuators” in a contagion process, and also dominate effects of peer influence to evacuate. There is a strong network-based evacuation suppression effect from the fear of looting. We also study spatial factors affecting evacuation rates as well as policy interventions to encourage evacuation.

17:35-19:35 Session Oral O6C: Biological Networks
17:35
The effective graph: a weighted graph that captures nonlinear logical redundancy in biochemical systems
PRESENTER: Luis M. Rocha

ABSTRACT. The ability to map causal interactions underlying genetic control and cellular signaling has led to increasingly accurate models of the complex biochemical networks that regulate cellular function. These network models provide deep insights into the organization, dynamics, and function of biochemical systems, for example by revealing genetic control pathways involved in disease. However, the traditional representation of biochemical networks as binary interaction graphs fails to accurately represent an important dynamical feature of these multivariate systems: some pathways propagate control signals much more effectively than do others. Such heterogeneity of interactions reflects canalization---the system is robust to dynamical interventions in redundant pathways, but responsive to interventions in effective pathways. Here, we introduce the effective graph, a weighted graph that captures the nonlinear logical redundancy present in biochemical network regulation, signaling, and control. Using 78 experimentally-validated models derived from systems biology, we demonstrate that: (a) redundant pathways are prevalent in biological models of biochemical regulation, (b) the effective graph provides a probabilistic but precise characterization of multivariate dynamics in a causal graph form, and (c) the effective graph provides an accurate explanation of how dynamical perturbation and control signals, such as those induced by cancer drug therapies, propagate in biochemical pathways. Overall, our results indicate that the effective graph provides an enriched description of the structure and dynamics of networked multivariate causal interactions. We demonstrate that it improves explainability, prediction, and control of complex dynamical systems in general, and biochemical regulation in particular.

17:50
The orthoBackbone: an evolutionarily-conserved backbone of genetic networks

ABSTRACT. Extracting knowledge from large gene expression datasets is a complex task, particularly if the analysis requires comparison between different species. We present the orthoBackbone (orthoBb), an evolutionarily-conserved subgraph of genetic networks across multiple species. We cast genetic networks as weighted graphs where nodes are genes of interest and edges correspond to strength of observed functional interactions. The latter can be derived from large-scale integrated databases of protein interactions, such as StringDB, that compile high-throughput experimental data, literature data mining, and genomic context predictions. We built a network for each species of interest---i.e. human, mouse and insect---where nodes and edges denote the interaction knowledge of the biological system being studied. Comparison between the networks of different species allows the identification of key biological processes that have been conserved throughout evolution. For this, matching nodes in the networks from different species have to be identified via orthologous relationships, i.e., genes that have kept the same functional role across evolution and speciation. This is cast as a multilayer network where each layer represents a particular species, and genes that are orthologous are connected across layers---a gene in one layer may connect to multiple orthologous genes in another. However, these networks are often large and dense and thus methods to reduce their complexity, integrate cross-species knowledge, and steer biological interpretation are much desired by domain experts. For these reasons, we developed the orthoBb, building upon our previous single-layer metric-backbone of complex networks.

For each species network (i.e., each network layer) we compute its metric backbone. The metric backbone is the subgraph that is sufficient to compute all shortest paths; it is obtained by removing edges that break the triangle inequality. This means that only metric edges are kept, and edges that are redundant in the original species network with regard to shortest paths are removed. This drastically lowers network density, as 80-90% of all edges are deemed redundant and thus removed. In practice, we use a modified version of the Dijkstra algorithm to compute the metric backbone. A mathematical description and a Python implementation of the metric backbone can be found in Simas et al. (2021). Our biological interpretation is that the backbone represents the most important gene relationships in our species of interest. It follows that a fraction of these important gene interactions must have been evolutionarily preserved, as they encode functional relationships that are crucial for the survival of different species. Therefore, the orthoBb is the subgraph of the metric backbone where each edge has an analogous edge that connects orthologous genes in all other metric-backbone layers. Importantly, an assumption is made for cases where there is a $1 \to n$ gene orthology relation, as at least one backbone edge must be analogous in another species' backbone.
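The backbone definition above can be sketched directly (an editor's illustration; Simas et al. (2021) give the full algorithm and the authors use a modified Dijkstra): over a distance-weighted graph, an edge is kept if and only if its weight equals the shortest-path distance between its endpoints, i.e., no indirect path undercuts it.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted adjacency dict
    mapping node -> {neighbor: distance}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def metric_backbone(adj):
    """Keep only edges whose weight equals the shortest-path distance
    between their endpoints; semi-metric (redundant) edges are dropped."""
    backbone = {u: {} for u in adj}
    for u in adj:
        dist = dijkstra(adj, u)
        for v, w in adj[u].items():
            if w <= dist[v]:              # the edge is itself a shortest path
                backbone[u][v] = w
    return backbone
```

On a triangle with sides 1, 1 and 3, the side of length 3 is semi-metric (the two-hop path has length 2) and is removed, while both unit edges survive.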

Preliminary results to be presented at the conference show that the orthoBb helps to identify fundamental biological programs underlying male fertility across species (publication in preparation). As an example, in Fig. 1 we plot the genetic circuit responsible for the control of the Wee1 cell cycle regulator across different vertebrate and invertebrate species. Despite a distance of more than 700 million years of evolution, in all three species the ubiquitin molecule regulates Wee1 via the SCF complex. The orthoBb was fundamental for the identification of this circuit, as the ubiquitin molecule is connected to hundreds of different target molecules. Both the complexity reduction and the focus on fundamental evolutionarily-conserved processes ensured by the orthoBb uncovered the specific disruption of this circuit in a male infertility background.

18:05
Percolation on the gene regulatory network
PRESENTER: Giuseppe Torrisi

ABSTRACT. We consider a simplified model for gene regulation, where gene expression is regulated by transcription factors (TFs), which are single proteins or protein complexes. Proteins are in turn synthesised from expressed genes, creating a feedback loop of regulation. This leads to a directed bipartite network in which a link from a gene to a TF exists if the gene codes for a protein contributing to the TF, and a link from a TF to a gene exists if the TF regulates the expression of the gene. Both genes and TFs are modelled as binary variables, which indicate, respectively, whether a gene is expressed or not, and a TF is synthesised or not. We consider the scenario where for a TF to be synthesised, all of its contributing genes must be expressed. This results in an ``AND'' gate logic for the dynamics of TFs. By adapting percolation theory to directed bipartite graphs, evolving according to the AND logic dynamics, we are able to determine the necessary conditions, in the network parameter space, under which bipartite networks can support a multiplicity of stable gene expression patterns, under noisy conditions, as required in stable cell types.

In particular, the analysis reveals the possibility of a bi-stability region, where the extensive percolating cluster is or is not resilient to perturbations. This is remarkably different from the transition observed in standard percolation theory. Finally, we consider perturbations involving single node removal that mimic gene knockout experiments. Results reveal the strong dependence of the gene knockout cascade on the logic implemented in the underlying network dynamics, highlighting in particular that avalanche sizes cannot be easily related to gene-gene interaction networks.
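The AND-gate update and the knockout experiments can be sketched as follows (an editor's illustration; the gene-side OR rule and the synchronous update scheme are assumptions not fixed by the abstract):

```python
def run_dynamics(tf_inputs, gene_regulators, expressed, knockout=frozenset(), steps=100):
    """Synchronously update a bipartite gene-TF network until a fixed point.
    AND rule (from the abstract): a TF is synthesised iff all of its
    contributing genes are expressed.
    Assumed OR rule: a gene is expressed iff at least one of its regulating
    TFs is synthesised; genes with no regulators keep their state.
    Knocked-out genes are forced off, mimicking gene knockout experiments."""
    expressed = set(expressed) - set(knockout)
    for _ in range(steps):
        tfs = {t for t, genes in tf_inputs.items() if genes <= expressed}
        new = {g for g, regs in gene_regulators.items()
               if (regs & tfs) or (not regs and g in expressed)}
        new -= set(knockout)
        if new == expressed:
            break                         # fixed point reached
        expressed = new
    return expressed
```

A two-gene feedback loop through one AND-gated TF already shows the cascade effect: knocking out either gene switches the TF off and drags the other gene down with it, so the whole pattern collapses.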

18:20
CUBCO: Prediction of protein complexes based on min-cut network partitioning into biclique spanned subgraphs
PRESENTER: Sara Omranian

ABSTRACT. High-throughput approaches have generated large-scale protein-protein interaction (PPI) networks that are used in the prediction of protein complexes. However, these approaches fail to detect some interactions, which can affect the prediction of protein complexes. Here, we introduce CUBCO—an algorithm that predicts protein complexes as biclique spanned subgraphs and relies on link prediction approaches to score and incorporate missing interactions. Our comprehensive analyses with PPIs from Escherichia coli, Saccharomyces cerevisiae, and Homo sapiens show that CUBCO performs on par with the best-performing approaches that model protein complexes as biclique spanned subgraphs, and outperforms the remaining contenders. We demonstrate that the usage of link prediction approaches in CUBCO improves the prediction of protein complexes. Finally, CUBCO recovers ~40% and ~11% of known protein complexes from the Pan-Plant and Metazoan networks, respectively, and predicts 11 and 15 protein complexes in the PPI networks of Arabidopsis thaliana and H. sapiens that are validated by domain-domain interaction and GO semantic similarity. Therefore, CUBCO represents an efficient, parameter-free approach for accurate prediction of protein complexes from PPI networks.

18:35
From Quantitative SBML Models to Boolean Networks

ABSTRACT. Modelling complex biological systems is necessary for their study and understanding. SBML is the standard format for representing models of biological systems. Most of the curated models available in the BioModels repository are quantitative, but in some cases qualitative models, such as Boolean networks, would be better suited. This paper is the first to focus on the automatic transformation of quantitative SBML models into Boolean networks. We propose SBML2BN, a pipeline dedicated to this task. By running SBML2BN on more than 200 quantitative SBML models, we provide evidence that we can automatically construct Boolean networks that are compatible with both the structure and the dynamics of a given quantitative SBML model.
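As background, a Boolean network represents each species as an on/off variable updated by a logical rule over the current state. A minimal sketch of the target formalism, using a hypothetical three-gene network rather than actual SBML2BN output:

```python
# Minimal illustration of the Boolean-network formalism that SBML2BN targets
# (a hypothetical 3-gene regulatory network, not output of the pipeline).

def step(state, rules):
    """Synchronous update: every gene is recomputed from the current state."""
    return {gene: rule(state) for gene, rule in rules.items()}

rules = {
    "A": lambda s: s["C"],                 # A is activated by C
    "B": lambda s: s["A"] and not s["C"],  # B needs A, is repressed by C
    "C": lambda s: s["A"] or s["B"],       # C is activated by A or B
}

state = {"A": True, "B": False, "C": False}
trajectory = [state]
for _ in range(4):
    state = step(state, rules)
    trajectory.append(state)
# The dynamics settle into a fixed point with A and C on and B off.
```

A transformation such as the one the paper proposes must produce rules like these whose attractors and wiring are consistent with the quantitative model's behaviour.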

18:50
Griffiths phases below hybrid phase transitions
PRESENTER: Geza Odor

ABSTRACT. Hybrid or mixed-order phase transitions (HPT) occur when an order parameter exhibits a discontinuity while dynamical quantities, such as the correlation time, diverge and power-law behavior emerges. The approach of the steady-state order parameter to the gap can also exhibit a power-law singularity. These kinds of transitions are quite common in nature, arising in excitable/threshold models in higher dimensions or when a hub or the weight distribution of the underlying graph forces them. On the other hand, Griffiths phases (GP) can occur in systems containing large rare regions that evolve very slowly, causing power-law dynamical behavior below the phase transition point in an extended control-parameter space.

By studying heterogeneous models, we have recently found several examples where HPTs and GPs coexist. For example, in threshold models on hierarchical modular graphs, which can model the brain, we found numerical evidence for this coexistence [1]. Furthermore, in oscillatory systems exhibiting a discontinuous transition, such as the second-order Kuramoto model, a phase with non-universal power-law dynamics also seems to exist [2, 3]. This can describe power grids with blackout cascades. We demonstrate by numerical examples further cases which can be relevant for neuroscience.

References
1. Ódor, G., de Simoni, B.: Heterogeneous excitable systems exhibit Griffiths phases below hybrid phase transitions. Phys. Rev. Res. 3, 013106 (2021). https://doi.org/10.1103/PhysRevResearch.3.013106
2. Ódor, G., Hartmann, B.: Heterogeneity effects in power grid network models. Phys. Rev. E 98, 022305 (2018). https://doi.org/10.1103/PhysRevE.98.022305
3. Ódor, G., Hartmann, B.: Power-Law Distributions of Dynamic Cascade Failures in Power-Grid Models. Entropy 22(6), 666 (2020). https://doi.org/10.3390/e22060666

19:05
Quantifying Cellular Pluripotency and Pathway Robustness through Forman-Ricci Curvature
PRESENTER: Kevin Murgas

ABSTRACT. In stem cell biology, cellular pluripotency describes the capacity of a given cell to differentiate into multiple cell types. From a statistical physics perspective, entropy provides a statistical measure of randomness and has been demonstrated as a way to quantitate pluripotency when considering biological gene networks. Furthermore, recent theoretical work has established a relationship between Ricci curvature (a geometric measure of ``flatness'') and entropy (also related to robustness), which one can exploit to link the deterministic geometric quantity of curvature to the statistical quantity of entropy. Therefore, this study seeks to explore Ricci curvature in biological gene networks as a descriptor of pluripotency and robustness among gene pathways. Here, we investigate Forman-Ricci curvature, a combinatorial discretization of Ricci curvature, along with network entropy, to explore the relationship of the two quantities as they occur in gene networks. First, we demonstrate our approach on an experiment of stem cell gene expression data. As expected, we find Ricci curvature directly correlates with network entropy, suggesting Ricci curvature could serve as an indicator for cellular pluripotency much like entropy. Second, we measure Forman-Ricci curvature in a dataset of cancer and non-cancer cells from melanoma patients. We again find Ricci curvature is increased in the cancer state, reflecting increased pluripotency or ``stemness''. Further, we locally examine curvature on the gene level to identify several genes and gene pathways with known relevance to melanoma. In turn, we conclude Forman-Ricci curvature provides valuable biological information related to pluripotency and pathway functionality. In particular, the advantages of this geometric approach are promising for extension to higher-order topological structures in order to represent more complex features of biological systems.
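For an unweighted graph, the combinatorial Forman-Ricci curvature of an edge reduces to a simple degree formula, F(u, v) = 4 - deg(u) - deg(v), which makes the intuition that hubs drive curvature negative easy to see. A minimal sketch on a toy graph (the study's gene networks may additionally use weighted variants):

```python
# Combinatorial Forman-Ricci curvature for an unweighted, undirected graph:
# F(u, v) = 4 - deg(u) - deg(v). High-degree hubs push edge curvature negative.

def forman_curvature(edges):
    """edges: list of undirected edges (u, v); returns {edge: curvature}."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# Toy star graph: hub "h" wired to four leaves.
star = [("h", "a"), ("h", "b"), ("h", "c"), ("h", "d")]
curv = forman_curvature(star)
# Every edge gets 4 - deg(h) - deg(leaf) = 4 - 4 - 1 = -1.
```

Summing or averaging such edge curvatures around a gene gives the node-level quantity that can then be compared with network entropy.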

19:20
Functional and spatial rewiring jointly generate convergent-divergent units in self-organizing networks
PRESENTER: Jia Li

ABSTRACT. Self-organization through adaptive rewiring of random neural networks generates brain-like topologies comprising modular small-world structures with rich-club effects, merely as the product of optimizing the network topology. In the nervous system, spatial organization is likewise optimized through rewiring, by minimizing wiring distance and maximizing spatially aligned wiring layouts. We show that such spatial organization principles interact constructively with adaptive rewiring, helping to establish the networks’ connectedness and modular structures. We use an evolving neural network model with weighted and directed connections, in which neural traffic flow is based on consensus and advection dynamics, to show that wiring cost minimization supports adaptive rewiring in creating convergent-divergent unit structures. Convergent-divergent units consist of a convergent input hub connected to a divergent output hub via a subnetwork of intermediate nodes, which may function as the computational core of the unit. The prominence of wiring distance minimization in the dynamic evolution of the network determines the extent to which the core is encapsulated from the rest of the network, i.e., the context-sensitivity of its computations. This corresponds to the central role convergent-divergent units play in establishing context-sensitivity in neuronal information processing.
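To illustrate the wiring-cost-minimization ingredient in isolation (a hypothetical greedy rule, not the paper's consensus/advection model), one can repeatedly reconnect the longest edge of a spatially embedded graph to a nearer available node, which monotonically shortens the total wiring length:

```python
import math
import random

# Hypothetical sketch of wiring-cost minimization alone: place nodes in the
# unit square, then greedily replace the longest edge with a shorter one.

random.seed(0)
n = 20
pos = {i: (random.random(), random.random()) for i in range(n)}
edges = set()
while len(edges) < 40:
    u, v = random.sample(range(n), 2)
    edges.add((min(u, v), max(u, v)))

def dist(u, v):
    (x1, y1), (x2, y2) = pos[u], pos[v]
    return math.hypot(x1 - x2, y1 - y2)

def total_cost(edge_set):
    return sum(dist(u, v) for u, v in edge_set)

before = total_cost(edges)
for _ in range(100):
    u, v = max(edges, key=lambda e: dist(*e))   # current longest wire
    # candidate partners for u: closer than v and not yet linked to u
    cands = [w for w in range(n) if w not in (u, v)
             and (min(u, w), max(u, w)) not in edges
             and dist(u, w) < dist(u, v)]
    if not cands:
        break
    w = min(cands, key=lambda c: dist(u, c))    # nearest available node
    edges.remove((u, v))
    edges.add((min(u, w), max(u, w)))
after = total_cost(edges)
# Each accepted swap strictly shortens the total wiring length.
```

In the paper's model this spatial pressure acts alongside activity-dependent (adaptive) rewiring rather than on its own.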