
09:00-09:40 Session Speaker S1: Marc Barthélémy - Challenges in Spatial Networks
Challenges in Spatial Networks

ABSTRACT. Graphs embedded in physical space are relevant for many processes and applications in various fields, ranging from physics, epidemiology and geography to urbanism, transport planning and biology. These networks can also serve as tools in spatial statistics and represent a new direction in that field. Characterizing and understanding the structure and the evolution of these spatial networks is thus crucial and has been the subject of many studies over the last decade. In this talk, I will first review some of the main results and then discuss a selection of important challenges and research directions involving these objects.

09:40-10:40 Session Lightning L1: Biological Networks - Information Spreading in Social Media
Complex network analysis of fruit fly connectome: fractal and entropy/complexity approach
PRESENTER: Michael Olesik

ABSTRACT. The animal brain is a well-known example of a complex network in which individual parts play different functional roles and connect to others in a highly non-trivial way. The system of synaptic connections in the brain, representing the "wiring diagram" of neurons and known as the connectome, is an example of such a complex structure. Traditional analyses of such networks have been based on topological, spatial or information-theoretic properties. In our study, we consider the recently released fruit fly structural connectome. We construct a network where nodes represent neurons and edges correspond to synapses; the resulting network has around 20,000 neurons. We propose a new way of differentiating complexity in the network by using fractal graph analysis. We compare our results with the graph's complexity measured using information-theoretic measures based on random walks on the network.

Network science environment predicts transition towards complexity in prebiotic astrochemistry
PRESENTER: Jacobo Aguirre

ABSTRACT. We present a conceptual and computational framework to model the evolution towards complexity of simple networked structures. Complex networks represent simplified chemical compounds and interact by optimizing the dynamical importance of their nodes. We show the existence of a transition towards chemical complexity, and while the rules that govern our model are radically different from those of real chemistry, our results are compatible with the evolution of four real interstellar-medium molecular abundances. In summary, our work connects network science with astrochemistry in a novel way, and supports the idea that some of the properties governing the extremely long and complex path from the chemistry of space to prebiotic chemistry and finally to life could show relatively simple and general patterns.

Supervised Gene Function Prediction using Spectral Clustering on Gene Co-expression Networks
PRESENTER: Miguel Romero

ABSTRACT. Gene annotation addresses the problem of predicting unknown functions (e.g., biological processes) that are associated with the genes of a specific organism. Despite recent advances, the cost and time demand of annotation procedures that rely largely on in vivo biological experiments remain prohibitively high. This paper presents an in silico approach to the annotation of genes that follows a network-based representation and combines techniques from multivariate statistics (spectral clustering) and machine learning (gradient boosting). Spectral clustering is used to enrich the gene co-expression network (GCN) with currently known gene annotations. Gradient boosting is trained on features of the GCN to build an estimator of the probability that a gene is involved in a given biological process. The proposed approach is applied to a case study on Zea mays, one of the world's most dominant and productive crops. Broadly speaking, the main results illustrate how computational experimentation is a promising tool for narrowing down the time and costs of efforts to annotate the functions of genes. More specifically, the results highlight the importance of multivariate statistics and machine learning techniques in reducing type I and type II prediction errors.
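
As a rough illustration of the pipeline this abstract describes (spectral features from a co-expression network feeding a gradient-boosting classifier), here is a minimal sketch; the graph, features and labels are synthetic, not the authors' Zea mays data:

```python
# Illustrative sketch of a spectral-clustering + gradient-boosting pipeline.
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy "co-expression network": two loosely connected gene modules.
G = nx.planted_partition_graph(2, 20, p_in=0.6, p_out=0.05, seed=0)

# Spectral features: low-order eigenvectors of the normalized Laplacian
# encode cluster membership (the spectral-clustering step of the pipeline).
L = nx.normalized_laplacian_matrix(G).toarray()
_, eigvecs = np.linalg.eigh(L)
X = eigvecs[:, 1:4]  # skip the trivial first eigenvector

# Hypothetical annotations: genes in the first module carry the process label.
y = np.array([1 if n < 20 else 0 for n in G.nodes()])

# Gradient boosting estimates P(gene participates in the biological process).
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
proba = clf.predict_proba(X)[:, 1]
print(proba.shape)
```

The same two-stage structure (unsupervised graph features, then a supervised probability estimator) scales to real GCNs with richer feature sets.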

Vaccine-critic discourse on French-speaking Twitter and its outreach
PRESENTER: Mauro Faccin

ABSTRACT. During the ongoing Covid-19 pandemic, the discussion of related topics has taken place at several levels, from official channels to word of mouth to online social networks. We consider the discussion related to vaccines within the wider framework of Covid-19 on French-speaking Twitter.

We have collected ~2M tweets with ~6M retweets from ~800k users related to the discussion on vaccines and Covid-19, and extracted and manually confirmed 380 URLs of vaccine-critic websites.

Due to the directed nature of Twitter discussions, we extend the available hypergraph-based tools to encompass directionality. From the hypergraph emerging from the online discussion, we detect a number of communities and characterize their role in spreading vaccine-critic information across the Twitter user base.

Dynamics of Online Hate and Misinformation
PRESENTER: Matteo Cinelli

ABSTRACT. Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making the development of appropriate countermeasures necessary. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos through a machine learning model fine-tuned on a large set of hand-annotated data. Our analysis shows no evidence of the presence of "pure haters", meaning active users who post exclusively hateful comments. Moreover, consistently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents' community. Interestingly, users loyal to reliable sources use, on average, a more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of number of comments and time. Our results show that, consistently with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.

Structure and evolution of the UK far-right Telegram network
PRESENTER: Alexandre Bovet

ABSTRACT. 1 Introduction. Telegram is a free and open-source instant messaging platform launched in 2013 by the Russian brothers Nikolai and Pavel Durov. Its usage has steadily increased over the years, to a reported 400 million monthly active users in April 2020 [1]. Telegram offers end-to-end encryption of messages, voice and video calls, and "secret" chats. The platform is popular among fringe groups and for illegal activities that often take refuge on Telegram after having been banned from less permissive platforms such as Twitter or Facebook [3]. For example, until a joint operation by Europol and Telegram in November 2019 removed 43,000 terrorist-related bots and channels, the terrorist group Islamic State of Iraq and the Levant (ISIS) recommended Telegram to its supporters and members and used it to promote radicalism and to coordinate operations [2, 4]. In particular, Telegram has gained popularity among far-right movements in the US, UK and Europe [5, 6]. These groups make use of public Telegram channels and group chats to disseminate hate speech, disinformation and conspiracy theories [6]. Several works have investigated Telegram in relation to the effect of deplatforming extreme Internet celebrities [3], the growth of channels supporting hateful ideologies [6], the dissemination of misinformation [7], and the toxicity of the language used in groups/channels related to the far-right QAnon conspiracy theory [8]. However, little work has been done to understand the structure and evolution of the far-right network of channels and group chats. Urman and Katz [5] investigated the structure and evolution of the network of far-right channels using standard community detection (modularity optimization) and found communities divided mostly along ideological and national lines, with the communities related to the 4chan imageboard and Donald Trump's supporters being the most influential.
2 Results. Here, we use flow stability, a novel community detection method for temporal networks based on clustering a flow of random walkers diffusing along the network's edges [9]. We adapted this method to directed networks so that it allows us to cluster together channels/groups that influence the same set of channels (i.e., that have similar outgoing flows) or that are influenced by the same set of channels/groups (i.e., that have similar incoming flows). In this way, we can classify communities in a directed core-periphery model as either periphery communities (upstream or downstream) or core communities. Upstream communities contain channels that mostly have outgoing links (pointing to the same channels), i.e., that are mostly commentators of channels from other communities. Channels/groups inside core communities have strong links with each other, actively participate in discussions across channels, and can be seen as forming echo chambers. Downstream communities contain popular channels/groups that are highly referenced by other (similar) channels. This framework allows us to uncover the organisation and the influence relations of the different Telegram communities. (Fig. 1: Flow stability clustering of the directed network of interactions between Telegram public channels/groups related to the far-right movement. The colors indicate whether a community is an upstream, downstream or core community, and the numbers inside the nodes represent their size, i.e., the number of channels/groups.) We built a seed dataset of Telegram messages from public channels and group chats present in the Pushshift Telegram Dataset [10] that contained a URL containing the term britainfirst. Britain First is a central far-right organisation in the UK. This results in a seed set of 2,729 posts from April 6th 2019 to October 14th 2019 from 258 channels/groups.
We then expanded our dataset by collecting (using the Telegram API) all the messages from all the public channels/groups that were mentioned in our initial seed set. The resulting dataset contains 7 million messages from 12,564 channels posted between September 2015 and January 2021. We build a directed weighted network where nodes represent channels/groups and edges represent the potential flow of users between channels: an edge from node A to node B can represent a mention of B in A, a URL link pointing to B posted in A, or a message from B forwarded in A. Our clustering analysis, shown in Fig. 1, reveals the existence of two main core communities: one with mainly UK- and US-based far-right channels, and another with Russian-language channels that have links with Russian propaganda or are known to disseminate information with low factual content. Two main downstream clusters are found: one with mostly channels related to right-wing US and EU news and politics, and one with Russian news and politics. The influence of individual channels, measured by their directed Katz centrality, reveals that the most influential channels are in the core clusters. To understand the evolution of this network, we separate it into three periods: 1) an initial phase from September 2015 to June 2019, 2) a first growth phase from June 2019 to March 2020, and 3) a COVID-19 phase from March 2020 to January 2021. We characterize the growth of each cluster and notice that the downstream clusters see a lot of change over the different periods, while the core clusters mostly grow over time, with the largest growth occurring for the far-right cluster during the COVID-19 phase. Our work reveals the structure and evolution of the UK far-right Telegram networks in terms of the influence between public channels/groups.
We propose a methodology for analysing such directed and weighted interaction networks that makes it possible to classify communities as upstream, core or downstream in the flow of users navigating the network. We show that our method allows us to uncover the different actors, how they evolve over time, and their role in this network.

References
1.
2. Prucha, N. IS and the Jihadist Information Highway: Projecting Influence and Religious Identity via Telegram. Perspect. Terror. 10, 48–58 (2016).
3. Rogers, R. Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media. Eur. J. Commun. 35, 213–229 (2020).
4. propaganda-online
5. Urman, A. & Katz, S. What they do in the shadows: examining the far-right networks on Telegram. Information, Commun. Soc., 1–20 (2020).
6. Walther, S. & McCoy, A. US Extremism on Telegram: Fueling Disinformation, Conspiracy Theories, and Accelerationism. Perspect. Terror. 15, 100–124 (2021).
7. Knuutila, A., Herasimenka, A., Bright, J., Nielsen, R. & Howard, P. N. Junk News Distribution on Telegram: The Visibility of English-language News Sources on Public Telegram Channels. COMPROP Data Memo 5, 1–5 (2020).
8. Hoseini, M., Melo, P., Benevenuto, F., Feldmann, A. & Zannettou, S. On the Globalization of the QAnon Conspiracy Theory Through Telegram. arXiv:2105.13020v1 (2021).
9. Bovet, A., Delvenne, J.-C. & Lambiotte, R. Flow stability for community detection in temporal networks. arXiv:2101.06131v2 (2021).
10. Baumgartner, J., Zannettou, S., Squire, M. & Blackburn, J. The Pushshift Telegram Dataset. arXiv:2001.08438v1 (2020).
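
The directed Katz centrality used above as a channel-influence measure can be illustrated with networkx on a toy mention network; the channel names and the attenuation factor alpha are our own illustrative choices, not the authors' data:

```python
# Illustrative sketch: directed Katz centrality as a channel-influence measure.
import networkx as nx

# Edge A -> B records that channel A mentions, links to, or forwards channel B.
G = nx.DiGraph([
    ("commentator1", "core1"), ("commentator2", "core1"),
    ("core1", "core2"), ("core2", "core1"),
    ("core1", "news_hub"), ("core2", "news_hub"),
])

# In networkx's Katz centrality, incoming links raise a node's score, so
# heavily referenced channels come out as the most influential.
katz = nx.katz_centrality(G, alpha=0.1, beta=1.0)
top = max(katz, key=katz.get)
print(top)
```

Being referenced by other well-referenced channels compounds the score, which is why core channels dominate the ranking even when their raw in-degree is modest.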

Exploring Bias and Information Bubbles in YouTube’s Video Recommendation Networks
PRESENTER: Baris Kirdemir

ABSTRACT. This study audits the structural and emergent properties of YouTube's video recommendations, with an emphasis on black-box evaluation of recommender bias, confinement, and the formation of information bubbles in the form of content communities. Adopting complex-network and graphical probabilistic approaches, our analysis of 6,068,057 video recommendations made by the YouTube algorithm reveals strong indicators of recommendation bias leading to closely-knit and confined content communities. Our qualitative and quantitative exploration of the prominent content and discourse in each community further uncovered the formation of narrative-specific clusters produced by the recommender system we examined.

Characterising different communities of Twitter users: Migrants and natives

ABSTRACT. Today, many users actively use Twitter to express their opinions and share information. Thanks to the availability of the data, researchers have studied the behaviours and social networks of these users. International migration studies have also benefited from this social media platform to improve migration statistics. Although diverse types of social networks have been studied on Twitter, the social networks of migrants and natives have not been studied before. This paper aims to fill this gap by studying the characteristics and behaviours of migrants and natives on Twitter. To do so, we perform a general assessment of features including profiles and tweets, and an extensive network analysis. We find that migrants have more followers than friends. They have also tweeted more, even though both groups have similar account ages. More interestingly, the assortativity scores show that users tend to connect based on nationality more than on country of residence, and this is more the case for migrants than for natives. Furthermore, both natives and migrants tend to connect mostly with natives.
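
The nationality-based assortativity mentioned here is typically computed as an attribute assortativity coefficient; a minimal sketch with networkx, where the toy graph and the 'nat' labels are hypothetical:

```python
# Illustrative sketch: attribute assortativity by nationality on a toy
# follower network (nodes, edges and labels are invented for illustration).
import networkx as nx

G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4, 5, 6])
nx.set_node_attributes(
    G, {1: "IT", 2: "IT", 3: "IT", 4: "DE", 5: "DE", 6: "DE"}, "nat")
# Four within-nationality ties and a single cross-nationality tie.
G.add_edges_from([(1, 2), (2, 3), (4, 5), (5, 6), (3, 4)])

# r > 0 means users connect mostly within their own nationality group.
r = nx.attribute_assortativity_coefficient(G, "nat")
print(round(r, 3))
```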

Third party stabilization of unstable coordination in systems of coupled oscillators
PRESENTER: Joseph McKinley

ABSTRACT. The Haken-Kelso-Bunz (HKB) system of equations is a well-developed model for dyadic rhythmic coordination in networks of biological systems. It captures ubiquitous empirical observations of bistability, the coexistence of in-phase and antiphase oscillations, in neural, behavioral, and social coordination. Recent work by Zhang and colleagues has generalized HKB to networks of many oscillators to account for new empirical phenomena observed in multiagent interaction. Utilizing this generalization, the present work examines how the coordination dynamics of a pair of oscillators can be augmented by virtue of their coupling to a third oscillator. We show that stable antiphase coordination emerges in pairs of oscillators even when their coupling parameters would have prohibited such coordination in their dyadic relation. We envision two lines of application for this theoretical work. In the social sciences, our model points toward the development of intervention strategies to support coordination behavior in heterogeneous groups (for instance in gerontology, when younger and older individuals interact). In neuroscience, our model will advance our understanding of how the direct functional connection of mesoscale or microscale neural ensembles might be switched by their changing coupling to other neural ensembles. Our findings illuminate a crucial property of complex networks: how the whole is different from the network's parts.
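
For reference, the dyadic HKB relative-phase dynamics this abstract builds on are commonly written as (a standard form from the literature, not taken from the talk itself):

```latex
\dot{\phi} = \delta\omega - a\sin\phi - 2b\sin 2\phi
```

Here \phi is the relative phase between the two oscillators, \delta\omega their natural frequency difference, and a, b coupling parameters; for small \delta\omega and sufficiently large b/a, stable fixed points near \phi = 0 (in-phase) and \phi = \pi (antiphase) coexist, which is the bistability the abstract refers to.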

A network neuroscience approach to understanding Developmental Topographical Disorientation
PRESENTER: Mahsa Faryadras

ABSTRACT. Please see the attached PDF file.

10:40-11:20 Coffee Break & Online Session for Onsite Presenters of Poster Session P2 A-B-C
10:40-11:20 Session Poster P1A: [1-11] Machine Learning & Networks
Online Updates of Knowledge Graph Embedding
PRESENTER: Arijit Khan

ABSTRACT. Complex networks can be modeled as knowledge graphs (KGs) with nodes and edges denoting entities and relations among those entities, respectively. Knowledge graph embedding assigns to each node and edge in a KG a low-dimensional semantic vector such that the original structure and relations in the KG are preserved in these learned semantic vectors. KG embedding supports downstream applications including KG completion, classification, entity resolution, link prediction, question answering, and recommendation. In the real world, KGs are dynamic and evolve over time. State-of-the-art KG embedding models deal with static KGs: to support dynamic updates (even local ones), they must be retrained on the whole KG from scratch, which is inefficient. To this end, we propose a new context-aware method, Online Updates of Knowledge Graph Embedding (OUKE), which supports embedding updates in an online manner. OUKE learns two different vectors for each node and edge, i.e., a knowledge embedding and a context embedding. This strategy effectively limits the impact of a local update to a smaller region, so that OUKE is able to efficiently update the KG embedding. Experiments on link prediction in dynamic KGs demonstrate both the effectiveness and efficiency of our solution.
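
As background for readers unfamiliar with KG embeddings (OUKE itself is not reproduced here), a minimal TransE-style scoring sketch; the entities, relation, and embedding dimension are toy choices:

```python
# Background sketch: TransE, a standard KG embedding model. A triple
# (h, r, t) scores well when the vector h + r lies close to t.
import numpy as np

rng = np.random.default_rng(42)
entities = {"Paris": 0, "France": 1, "Berlin": 2}
relations = {"capital_of": 0}

E = rng.normal(size=(len(entities), 8))   # entity embeddings
R = rng.normal(size=(len(relations), 8))  # relation embeddings

def score(h, r, t):
    """Negative distance between h + r and t: higher means more plausible."""
    return -np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

s = score("Paris", "capital_of", "France")
print(s <= 0)
```

In a trained model the embeddings are fitted so that true triples score higher than corrupted ones; the online-update problem the abstract addresses is how to refresh such vectors when the KG changes without retraining everything.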

Anomaly Detection for CyberSecurity Using Inductive Node Embedding with Convolutional Graph Neural Networks
PRESENTER: Amani Abou Rida

ABSTRACT. In the face of continuous cyberattacks, many scientists have proposed machine learning-based network anomaly detection methods. While deep learning effectively captures unseen patterns of Euclidean data, there is a huge number of applications where data are described in the form of graphs. Graph analysis has improved the detection of anomalies in non-Euclidean domains, but it suffers from high computational cost. Graph embeddings address this problem by mapping the graph into low-dimensional embeddings, but they lack the ability to generalize to unseen nodes. Graph convolutional neural networks solve this problem with the inductive node embedding framework INE-ConvGNNs, showing better performance with less complexity than previous methods. In this paper, we show how INE-ConvGNNs improve the performance of anomaly detection over traditional graph analysis and graph embedding methods.

Link predictability classes in complex networks

ABSTRACT. In this paper, we study how the observed quality of a link prediction method applied to one part of a network can be used for the analysis of the whole network. Namely, we first show that, for a part of the network, one can determine which topological features of node pairs lead to a certain level of link prediction quality. Based on this, we construct a link predictability (prediction quality) classifier for the network links. The classifier is then used on the other part of the network to control the link prediction quality typical for the method, without running the link prediction method itself. Experiments with synthetic and real-world networks show the good performance of the proposed pipeline. The source code, datasets and results related to our study are publicly available on GitHub.

Vertex entropy based link prediction in unweighted and weighted complex networks

ABSTRACT. Network science is a domain that focuses on studying complex networks such as social, chemical, computer, telecommunication, cognitive, semantic, and biological networks. In recent years, link prediction in complex networks has become an active research field because of its various real-world applications. In this paper, we present a novel algorithm for link prediction inspired by the concepts of vertex entropy and ego networks. We use 12 real-world datasets to evaluate the performance of the novel algorithm. Results are compared with 12 baseline algorithms on 4 metrics: AUC, Precision, Prediction-Power, and Precision@K. Experimental results show the effectiveness of the novel algorithm.

Multiple Role Discovery in Complex Networks

ABSTRACT. The role of a node in a complex network is an aggregation of its structural features and functions. Role discovery is the field of mining such roles, and many methods have been proposed; those methods mainly focus on discovering a single role for each node. However, in real-world networks, a node may have multiple roles. Therefore, we propose a multiple-role discovery framework by extending the single-role discovery framework. Furthermore, we also suggest a way to assign sub-networks, divided by community extraction methods, to the source network and the validation network to select pre-labeling nodes, which is a significant challenge for multiple-role discovery in real-world networks. To evaluate the accuracy of the proposed method, we conduct computational experiments on multiple-role discovery in the real-world Wikipedia and Blogcatalog networks. We show that the proposed method achieves higher accuracy and more stable results than the conventional methods used for comparison.

Graph Summarization with Latent Variable Probabilistic Models

ABSTRACT. This study addresses the issue of summarizing a static graph, known as graph summarization, effectively and efficiently. The resulting compact graph is referred to as a summary graph. Based on the minimum description length principle (MDL), we propose a novel graph summarization algorithm called Graph Summarization with Latent variable probabilistic models (GSL) for a static graph. MDL asserts that the best statistical decision strategy is the one that best compresses the data. The key idea of GSL is encoding the original and summary graphs simultaneously using latent variable probabilistic models with two-part coding, that is, first encoding a summary graph, then encoding the original graph given the summary graph. When encoding these graphs, we can use various latent variable probabilistic models. Therefore, we can encode a more complex graph structure than the conventional graph summarization algorithms. We demonstrate the effectiveness of GSL on both synthetic and real datasets.

Image Keypoint Matching using Graph Neural Networks

ABSTRACT. Image matching is a key component of many tasks in computer vision and its main objective is to find correspondences between features extracted from different natural images. When images are represented as graphs, image matching boils down to the problem of graph matching which has been studied intensively in the past. In recent years, graph neural networks have shown great potential in the graph matching task, and have also been applied to image matching. In this paper, we propose a graph neural network for the problem of image matching. The proposed method first generates initial soft correspondences between keypoints using localized node embeddings and then iteratively refines the initial correspondences using a series of graph neural network layers. We evaluate our method on natural image datasets with keypoint annotations and show that, in comparison to a state-of-the-art model, our method speeds up inference times without sacrificing prediction accuracy.

Relations between Entropy and Accuracy trends in Complex Artificial Neural Networks
PRESENTER: Antonio Liotta

ABSTRACT. Training Artificial Neural Networks (ANNs) is a non-trivial task. In recent years, there has been growing interest in the academic community in understanding how these structures work and what strategies can be adopted to improve the efficiency of the trained models. The novel approach proposed in this paper is the inclusion of an entropy metric to analyse the training process. Specifically, we investigate the accuracy computation process in relation to the entropy of the intra-layer weights of multilayer perceptron (MLP) networks. From the analysis conducted on two well-known datasets with several configurations of the ANNs, we discovered that there is a connection between these two metrics (i.e., accuracy and entropy). These promising results can be helpful in defining, in the future, new criteria to evaluate the goodness of the training process in real time, optimising it and allowing faster detection of its trend.
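
The abstract does not specify how the entropy of intra-layer weights is estimated; one simple, hypothetical choice is the Shannon entropy of a weight histogram over a fixed range, sketched below:

```python
# Illustrative estimator (our assumption, not the authors' method): Shannon
# entropy of the empirical distribution of a layer's weights.
import numpy as np

def weight_entropy(w, bins=30, value_range=(-3.0, 3.0)):
    """Shannon entropy (bits) of a histogram of the weights."""
    hist, _ = np.histogram(w, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

gen = np.random.default_rng(0)
w_init = gen.normal(0.0, 1.0, size=10_000)    # spread-out initial weights
w_peaked = gen.normal(0.0, 0.1, size=10_000)  # weights concentrated near zero
print(weight_entropy(w_init) > weight_entropy(w_peaked))
```

Tracking such a quantity per layer over epochs gives the entropy trend that the paper relates to the accuracy trend.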

Extracting semantic information from dynamic graphs of geometric data
PRESENTER: Devavrat Dabke

ABSTRACT. In this paper, we demonstrate the utility of dynamic network sequences to provide insight into geometric data; moreover, we construct a natural syntactic and semantic understanding of these network sequences for useful downstream applications. As a proof-of-concept, we study the trajectory data of basketball players and construct "interaction networks" to express an essential game mechanic: the ability for the offensive team to pass the ball to each other. These networks give rise to a library of player configurations that can in turn be modeled by a jump Markov model. This model provides a highly compressed representation of a game, while capturing important latent structures. By leveraging this structure, we use a Transformer to predict trajectories with increased accuracy.

Graph-based Retrieval for Claim Verification over Cross-Document Evidence
PRESENTER: Misael Mongiovì

ABSTRACT. Verifying the veracity of claims requires reasoning over a large knowledge base, often in the form of corpora of trustworthy sources. A common approach consists in retrieving short portions of relevant text from the reference documents and giving them as input to a natural language inference module that determines whether the claim can be inferred or contradicted from them. This approach, however, struggles when multiple pieces of evidence need to be collected and combined from different documents, since the individual documents are often barely related to the target claim and hence are left out by the retrieval module. We conjecture that a graph-based approach can be beneficial for identifying fragmented evidence. We tested this hypothesis by building, over the whole corpus, a large graph that interconnects text portions by means of mentioned entities, and exploiting this graph to identify candidate sets of evidence from multiple sources. Our experiments show that leveraging a graph structure is beneficial in identifying a reasonably small set of passages related to a claim.

Supervised link weight prediction using node metadata
PRESENTER: Larissa Mori

ABSTRACT. Given that node metadata can provide key insights about the relationship between nodes, we investigate if incorporating it as a similarity feature (referred to as metadata similarity) between end nodes of a link can improve the accuracy of weight prediction when using common supervised learning methods. We compare the weight prediction accuracy when metadata similarity is added to a set of baseline topological similarity features to that of using only the topological features. The comparison is performed across four empirical datasets using regression-based and other supervised methods found in the literature. In this preliminary study, we find no significant evidence that metadata similarity improves prediction accuracy in the methods analyzed and within the experimental setup. We encourage further investigation in this research area.

10:40-11:20 Session Poster P1B: [12-17] Earth Science Applications
Local Hurst for agri-food time series analysis

ABSTRACT. This study presents a way to predict the probability of median regression of a time series based on values of the scaling Hurst exponent.
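
The Hurst exponent underlying this approach can be estimated in several ways; the classical rescaled-range (R/S) method sketched below is our illustrative choice, not necessarily the authors':

```python
# Illustrative rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate H as the slope of log(R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    rs_means = []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())  # cumulative deviations
            s = seg.std()
            if s > 0:
                rs_vals.append((dev.max() - dev.min()) / s)
        rs_means.append(np.mean(rs_vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

gen = np.random.default_rng(1)
h = hurst_rs(gen.normal(size=4096))  # white noise: H should be near 0.5
print(0.3 < h < 0.8)
```

H well above 0.5 indicates persistence and H below 0.5 anti-persistence, the regime in which reversion toward the median becomes more likely.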

Complex Networks Analysis of Solar Magnetograms along the 23rd Solar Cycle
PRESENTER: Víctor Muñoz

ABSTRACT. We build monthly complex networks based on solar magnetograms for the complete 23rd solar cycle. We use binarization, noise filtering, and then image recognition algorithms to determine the centroids of the sunspots. These coordinates are taken as nodes of the complex network. Connections between nodes are given by the temporal sequence of the magnetograms. In order to study the network topology, we consider the degree distribution, the clustering coefficient, and various centrality metrics. Some metrics correlate and others anticorrelate with solar activity, whereas others are invariant along the solar cycle. Our results suggest that the complex network for solar magnetograms, built as described here, contains information on solar activity. The usefulness of this method for predicting when the next solar maximum will occur, and how intense it will be, is currently being investigated.

The New Abnormal: Identifying and Ranking Anomalies in the Land Trade Market

ABSTRACT. Large-scale national and transnational commercial land transactions, or Large-Scale Land Acquisitions (LSLA), are a global and consolidated phenomenon that plays an important role in basically every land-based business. While the predominant dynamic of the land trade market is certainly the one that reflects the power asymmetry between the Global North and the Global South (e.g., G20 countries investing in southern ones), other important dynamics exist, such as a South-South one (e.g., Latin American countries investing in Africa), and the one involving emerging economies, such as the so-called BRICS countries (Brazil, Russia, India, China and South Africa). While these emerging economies tend to have an investor profile, they are also involved as target countries in several deals. The aim of this work is to perform a deeper investigation of these intermediate country profiles, which can be considered anomalies in the land trade market; notably, less than 25% of the countries included in the transnational land trade network are characterized by this double investor/target profile. To this end, we rely on open-access data about LSLAs included in the Land Matrix database, which is modeled as a network graph where nodes represent countries, and an edge between two countries is drawn if there exists at least one LSLA deal.

Analyzing the physical dependence of scale-free critical exponent in seismic complex networks
PRESENTER: Fernanda Martin

ABSTRACT. We analyze the behavior of the critical exponent $\gamma$ (of the probability distribution of connectivity) in complex networks built with seismic data measured along the coast of Chile. We find that this critical exponent does not depend on the number of data points in the set studied, but that it does depend on the physical characteristics of the zone studied. Further studies are needed to understand this dependence and its possible connection to the underlying seismic physical processes.

Assigning Degree of Irreversibility to Light Curves from Blazars through the Horizontal Visibility Graph Method

ABSTRACT. Blazars are the most violent active galactic nuclei (AGN), emitting predominantly non-thermal radiation with strong variability across the electromagnetic spectrum. We focus on characterizing the high-energy emission mechanisms of blazars by analyzing the variability in the radio band of the light curves of more than a thousand sources. We have started a study of the variability properties in the radio band of blazars observed with a large-scale and fast cadence 15 GHz radio monitoring program with the Owens Valley Radio Observatory 40 m Telescope. We are interested in characterizing these sources with their degree of irreversibility, modeling the time series of the light curves with the method of the Horizontal Visibility Graph, which allows us to measure the Kullback-Leibler Divergence (KLD), that is, the degree of irreversibility of real-valued time series, presenting a new perspective on the methods commonly used to study AGNs. KLD is a tool for estimating entropy production from stationary trajectories, providing a lower bound on the entropy production of the physical process that generates the series. We observe that there is a slight tendency in both cases to present a smaller degree of irreversibility for increasing median radio flux and increasing excess variance of the light curve. Our preliminary analysis shows no obvious correlation, so we plan to investigate additional blazar parameters. We hope complex networks may be a valuable alternative tool to study and characterize AGNs according to their variability and energy flux as remotely measured from Earth.
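For readers unfamiliar with the technique, the directed horizontal visibility graph and the KLD-based irreversibility bound are standard constructions; the following is a minimal pure-Python sketch, not the authors' pipeline. Function names are ours, and restricting the KLD sum to degrees present in both distributions is a common practical simplification:

```python
from collections import Counter
from math import log

def dhvg_degrees(x):
    """Directed horizontal visibility graph of a time series x:
    link i -> j (i < j) when every intermediate value lies strictly
    below min(x[i], x[j]). Returns out- and in-degree sequences."""
    n = len(x)
    out_deg, in_deg = [0] * n, [0] * n
    for i in range(n):
        top = -float("inf")           # running max of intermediate values
        for j in range(i + 1, n):
            if top < x[i] and top < x[j]:
                out_deg[i] += 1
                in_deg[j] += 1
            top = max(top, x[j])
            if top >= x[i]:           # no later point can see i any more
                break
    return out_deg, in_deg

def kld_irreversibility(x):
    """Lower bound on entropy production: Kullback-Leibler divergence
    between the out-degree and in-degree distributions of the directed
    HVG, summed over degrees occurring in both distributions."""
    out_deg, in_deg = dhvg_degrees(x)
    n = len(x)
    p_out, p_in = Counter(out_deg), Counter(in_deg)
    d = 0.0
    for k, c in p_out.items():
        if p_in.get(k, 0) > 0:
            d += (c / n) * log((c / n) / (p_in[k] / n))
    return d
```

A time-reversible series (e.g., a monotone or periodic one) yields a divergence of zero, while irreversible dynamics give a positive value.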

A complex network analysis of solar magnetic activity via visibility graph
PRESENTER: Tomás Zurita

ABSTRACT. In this work we study solar activity using the Visibility Graph (VG) technique. The Mean Magnetic Field and the Number of Sunspots obtained from the Wilcox Solar Observatory (WSO) project are used to build the complex networks, with both measurements covering May 16, 1975 to December 6, 2015. The procedure consists of building one network for the sunspots and another for the magnetic field, considering the totality of the data and applying the VG method to these time series. Our results show a sensitivity of the Betweenness Centrality metric to the solar cycle, while other metrics do not seem to show this correlation. We expect that this kind of analysis will offer a new way to analyze solar activity based on a complexity approach.

10:40-11:20 Session Poster P1C: [18-20] Controlling Networks
Topological implications of information routing in Caenorhabditis elegans

ABSTRACT. Caenorhabditis elegans (C. elegans) is a transparent nematode about 1 mm in length. It was the first organism whose connectome, i.e. the neuronal connectivity diagram, was fully mapped. It thus constitutes a well-known, realistic example of a biological neural network.

This research focuses on the topology-based effects on information propagation, grounded on the male C. elegans connectome, but does not aim to explain the process of information routing in the actual living organism. In particular, we find that the network structure may restrict the advantage of structural hubs for controlling the network, in favor of lower-degree neurons. Furthermore, in a heterogeneous frequency setting, the appearance of loops might be a key factor leading to the decline of consistency, i.e. the ability of a neuron to replicate the same dynamics when an input is applied under different initial conditions.

Synchronization of complex networks subject to impulses with average characteristics
PRESENTER: Bangxin Jiang

ABSTRACT. In this study, synchronization of complex networks subject to impulses with average characteristics is investigated. Specifically, the ideas of average impulse interval and average impulse delay are applied to analyze the effect of delayed impulses on the synchronization of complex networks. Further, the new concept of average impulse exponential gain is proposed to globally describe multiple impulses whose magnitude is allowed to be time-varying. Interestingly, it is shown that the delay in impulses can have a synchronizing effect on complex networks with such multiple delayed impulses. Finally, a numerical example is presented to illustrate the validity of the derived results.

Deep Reinforcement Learning for FlipIt Security Game
PRESENTER: Laura Greige

ABSTRACT. Reinforcement learning has shown much success in games such as chess, backgammon and Go. However, in most of these games, agents have full knowledge of the environment at all times. In this paper, we describe a deep learning model in which agents successfully adapt to different classes of opponents and learn the optimal counter-strategy using reinforcement learning in a game under partial observability. We apply our model to FlipIt, a two-player security game in which both players, the attacker and the defender, compete for ownership of a shared resource and only receive information on the current state of the game upon making a move. Our model is a deep neural network combined with Q-learning and is trained to maximize the defender's time of ownership of the resource. Despite the noisy information, our model successfully learns a cost-effective counter-strategy outperforming its opponent's strategies and shows the advantages of the use of deep reinforcement learning in game theoretic scenarios. We also extend FlipIt to a larger action-spaced game with the introduction of a new lower-cost move and generalize the model to $n$-player FlipIt.

11:20-12:50 Session Oral 1A: Diffusion & Epidemics
Prioritizing high-contact professions raises effectiveness of vaccination campaigns
PRESENTER: Hendrik Nunner

ABSTRACT. Recent studies have proposed network interventions for reducing the propagation of COVID-19. By restricting close-range contact to occur only within predetermined interaction structures, the speed and reach of COVID-19 spread can theoretically be reduced. However, even severe social distancing policies such as full-scale lockdowns can only temporarily reduce infections and hospitalizations, leaving large-scale vaccination as the primary vehicle for sustainable control over the SARS-CoV-2 virus. Nonetheless, global vaccine roll-out has logistical and financial limits. The challenge is how to effectively control the virus with limited supplies.

A twenty-year-old idea from network science is that vaccination campaigns would be much more effective if high-contact individuals were preferentially targeted. Implementation is impeded by the ethical and practical problem of differentiating vaccine access on the basis of a personal characteristic that is informal and private. Here, we develop a computational model on how to effectively vaccinate in times of a pandemic by prioritizing specific occupational groups. We draw on data from a survey conducted at the beginning of the COVID-19 pandemic in early 2020 that measures close-range contact for occupational groups. The data reveal substantial occupational differences, with teachers and cashiers being among the most connected and computer programmers among the least connected. To investigate whether this variability can produce significant gains when exploited in targeted vaccination programs, we first used a genetic algorithm to generate networks of 10,000 nodes that map the occupational contact data onto network degree. We then simulated epidemics and compared the effectiveness of vaccination campaigns that assign vaccines either randomly or by occupational group membership, prioritizing the groups with the highest reported average number of social contacts.

Our simulations suggest that random distribution of vaccines amounts to 35% of nodes getting infected on average, compared to 60% in the baseline/no-vaccination condition. Prioritizing high contact professions, however, results in a mean of 20% of nodes getting infected, while the vast majority of epidemics are prevented entirely (median number of infections close to 0%). Furthermore, we show that the positive effect of targeted vaccination is stronger if networks are more clustered and if there is lower occupational group homophily. A comparison between random vaccination of 40% and targeted vaccination of 20% of the population (everything else equal) shows that the latter achieves similar numbers of cumulative infections, with later and lower epidemic peaks.

Based on our findings, we propose that occupational groups can function as a reasonably effective proxy to increase effectiveness of vaccination campaigns.
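The underlying idea can be illustrated with a toy simulation (not the authors' 10,000-node genetic-algorithm model): run an SIR-type epidemic on a contact network and compare outcomes when vaccines go to the highest-degree nodes, a stand-in for prioritising high-contact occupational groups. Function names and parameters here are hypothetical:

```python
import random

def simulate_sir(adj, vaccinated, beta=0.3, seed=0):
    """One SIR epidemic on a contact network (node -> set of neighbours).
    Vaccinated nodes can neither be infected nor transmit.
    Returns the final epidemic size (number of recovered nodes)."""
    rng = random.Random(seed)
    susceptible = set(adj) - set(vaccinated)
    patient_zero = rng.choice(sorted(susceptible))
    infected, recovered = {patient_zero}, set()
    susceptible.discard(patient_zero)
    while infected:
        new_infected = set()
        for u in infected:
            for v in adj[u]:
                if v in susceptible and rng.random() < beta:
                    new_infected.add(v)
                    susceptible.discard(v)
        recovered |= infected          # nodes recover after one time step
        infected = new_infected
    return len(recovered)

def top_degree(adj, budget):
    """Targeted strategy: vaccinate the `budget` highest-degree nodes."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:budget]
```

Comparing `simulate_sir(adj, top_degree(adj, b))` against `simulate_sir(adj, random.sample(list(adj), b))` over many runs reproduces, in miniature, the random-versus-targeted comparison described above.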

Reactive control policies to epidemic outbreaks in complex metapopulations

ABSTRACT. In the absence of an effective treatment for COVID-19, mobility restrictions have been the most effective ally to safeguard human lives, but the stringency of these measures has jeopardized the global economy. The limited knowledge and understanding of any new virus and its spreading behavior highlight the urgency of creating evolving, strategic mobility-restriction measures that keep the balance between human health and socio-economic activities. Here, we propose an automatized model of closure constraints that allows the (local) social fabric to be restored to normal levels once the capacity of the health system is released, giving policymakers an alternative to keep their countries' economies up and running while not overwhelming the health system's capacity to respond.

Quantifying the interplay between urban mobility and epidemic spreading in real cities

ABSTRACT. The systematic study of the mobility network of 163 real cities reveals a universal result: there exists an inverse relationship between a city's vulnerability to epidemic outbreaks and the concentration of flows around the population density hotspots. Based on this result, different interventions acting on the architecture of the mobility flows are proposed. The theoretical findings presented here are validated with a case study, the spread of the SARS-CoV-2 virus across the United States, showing that arguments based on population density and the mobility patterns of the population can shed light on the differences observed at the country level.

A general diffusion model and its influence maximisation on networks

ABSTRACT. With the rapid growth of online social networks such as Facebook and Twitter, social influence is ubiquitous in everyday life. Understanding how information spreads in social networks is a central theme in computational social sciences, with theoretical and practical implications, such as the influence maximisation (IM) problem for viral marketing. There are two main classes of diffusion models adopted in this context, the independent cascade (IC) model and the linear threshold (LT) model. The theoretical guarantee of current approximation algorithms is obtained under the assumption that the influence spread is a monotone and submodular function, which, for the LT model, occurs for a uniform distribution of thresholds. However, the IM problem when the thresholds are given by other distributions, even deterministic, is relatively unknown. Hence, we adopt a different perspective of the problem, and propose a general model that connects existing approaches. Furthermore, we formulate the IM problem under the general diffusion model as a mixed integer nonlinear programming (MINLP) and solve it by derivative-free methods, where we also propose a customised direct search method, with local convergence.
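The independent cascade (IC) model mentioned above has a simple Monte-Carlo formulation, which is also how the influence spread sigma(S) is usually estimated in IM algorithms. A hedged sketch follows; the names and parameters are ours, and greedy seed selection is omitted:

```python
import random

def independent_cascade(adj, seeds, p=0.1, rng=None):
    """One run of the independent cascade model on a directed graph
    (node -> list of out-neighbours). Each newly activated node gets a
    single chance to activate each inactive out-neighbour with
    probability p. Returns the set of activated nodes."""
    rng = rng or random.Random()
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

def estimate_spread(adj, seeds, p=0.1, runs=1000, seed=0):
    """Monte-Carlo estimate of the expected influence spread of a seed set."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(adj, seeds, p, rng))
               for _ in range(runs)) / runs
```

The LT model differs only in the activation rule (a node activates once the weighted fraction of active in-neighbours exceeds its threshold), which is where the threshold distribution discussed in the abstract enters.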

Simplicial models for higher order interactions in complex networks

ABSTRACT. Complex networks have been used widely for the modeling of interactions within systems. However, they have the limitation of taking only pairwise interactions into account. We propose the use of simplicial complexes as higher-order networks to model local interactions beyond dyadic ones. In particular, we claim that contagion processes on a simplicial complex give rise to different outbreak regimes. We include the results of numerical simulations of these processes. The incorporation of simplicial complexes in the modeling of contagion processes can lead to a better and more realistic understanding of these phenomena, and help us capture in a rigorous and quantifiable manner the local ``social pressure'' on nodes.

Disease trajectories, chains of infection and closeness in temporal networks
PRESENTER: Ashleigh Myall

ABSTRACT. Understanding the dynamics of disease transmission is crucial to control and prevent infection. A key aspect of transmission chains is the temporal ordering of person-to-person contacts. Yet, standard protocols for the investigation of disease outbreaks (e.g., in hospitals) typically aggregate patient contacts across time, disregarding their intrinsic temporal ordering. Here, we investigate the epidemiological utility of including time-respective pathways, an analogue to a disease trajectory passing through a series of hosts, to understand disease outbreaks. Specifically, we introduce a structural closeness measure over temporal contact networks that sheds light on directional disease transmission. We demonstrate the proposed concept in a large real-world dataset of 19,418 patients from a London hospital group, which includes 1,672 patients infected with COVID-19.

11:20-12:50 Session Oral O1B: Networks Analysis
Spectral Pruning of Fully Connected Layers: Ranking the Nodes Based on the Eigenvalues

ABSTRACT. Training of neural networks can be reformulated in dual space, by allowing eigenvalues and eigenvectors to act as targets of the optimization. Working in this setting, we show that the eigenvalues can be used to rank the nodes' relevance within the collection. Sorting the nodes based on their associated eigenvalues enables us to devise effective pre- and post-processing trimming strategies that yield compact networks (in terms of the number of composing neurons) with virtually unchanged performance, as compared to their untrimmed homologues. The proposed methods are tested for different architectures, with just a single or multiple hidden layers, and against distinct classification datasets of valuable pedagogical interest.

The first-mover advantage explains gender disparities in Physics citations

ABSTRACT. Mounting evidence suggests that publications and citations of scholars in the STEM fields (Science, Technology, Engineering and Mathematics) suffer from gender biases. Such biases often cause an invisibility syndrome in women and other minorities, resulting in a higher dropout rate among women, a phenomenon known as the leaky pipeline. Thus, it is of societal importance to accurately identify those biases and devise bottom-up approaches to tackle them. However, simply comparing the number of publications and citations of men and women is misleading, as recent findings show that when differences in career length are controlled for, male and female scientists have similar rates of publication and citation on average. In this work, we analyse publication and citation patterns in the Physics community, as it is one of the core STEM areas where women are exceedingly underrepresented. Our approach goes beyond studying gender disparities at the population level: we compare pairs of papers whose similarity is validated through a statistical test, focusing on pairs in which one paper is written by a male primary author and the other by a female primary author. This way, we ensure that the two papers cover the same topics in a comparable way. Our results suggest that the overall disparity in the citation network is caused by the cumulative advantages and the first-mover effect that men have in Physics, and not by intentional discriminatory acts against women.

Characterization of the digital knowledge ecosystem of Wikipedia
PRESENTER: Fumiko Ogushi

ABSTRACT. We introduce a self-consistent metric for the Wikipedia editor-article network defined by the edit records and show that it captures well the character of editors' activity and the articles' level of complexity [1]. Our metric identifies the human-labeled high-quality articles, e.g., "featured" articles, and differentiates them from popular and controversial articles better than other measures that also use only the information of the editing network, i.e. the degree, strength, and eigenvector centrality. Moreover, the dynamics of the editor-article network is well captured by our metric.

Analyzing Escalations in Militarized Interstate Disputes using Motifs in Temporal Networks

ABSTRACT. We present a temporal network analysis of militarized interstate dispute (MID) data from 1992 to 2014. MIDs progress through a series of incidents, each ranging from threats to uses of military force by one state to another. We model these incidents as a temporal conflict network, where nodes denote states and directed edges denote incidents. We analyze temporal motifs or subgraphs in the conflict network to uncover the patterns by which different states engage in and escalate conflicts with each other. We find that different types of temporal motifs appear in the network depending on the time scale being considered (days to months) and the year of the conflict. The most frequent 3-edge temporal motifs at a 1-week time scale correspond to different variants of two states acting against a third state, potentially escalating the conflict. Temporal motifs with reciprocation, where a state acts in response to a previous incident, tend to occur only over longer time scales (e.g. months). We also find that both the network’s degree and temporal motif distributions are extremely heavy tailed, with a small number of states being involved in many conflicts.

Effect of Peer Influence and Looting Concerns on Evacuation Behavior During Natural Disasters
PRESENTER: Chris Kuhlman

ABSTRACT. We study evacuation dynamics in a major urban region (Miami, FL) using a combination of a realistic population and social contact network, and an agent-based model of evacuation behavior that takes into account peer influence and concerns about looting. These factors have been shown to be important in prior work, and have been modeled as a threshold-based network dynamical systems model (2mode-threshold) with two threshold parameters, one for a family's decision to evacuate and one for its decision to remain in place over looting and crime concerns, both based on the fraction of neighbors who have evacuated. The dynamics of such models are not well understood, and we observe that the threshold parameters have a significant impact on the evacuation dynamics. We also observe counterintuitive effects of increasing the evacuation threshold on the evacuated fraction in some regimes of the model parameter space, which suggests that the details of realistic networks matter in designing policies.

Your Tribe Decides Your Vibe: Analyzing Local Popularity in the US Patent Citation Network
PRESENTER: Amit A. Nanavati

ABSTRACT. In many networks, the indegree of a vertex is a measure of its popularity. Past research has studied indegree distributions treating the network as a whole. In the US Patent citation network (USPCN), patents are classified into categories and subcategories. A natural question arises: How do patents gather their popularity from various (sub)categories? We analyse local indegree distributions to answer this question. The citation (indegree) of a patent within the same category indicates its internal popularity, while a cross-category citation indicates its external popularity. We analyze the internal and external indegree distributions at each level of USPCN hierarchy to learn how the internal and external popularity of patents varies across (sub)categories. We find that all (sub)categories have local preferences that decide internal and external patents’ popularities. Different patents are popular in different groups: Groups C1, C2 and C3 may not agree on popular patents in C1. In general, patent popularity appears to be a highly local phenomenon with subcategories (not even categories) deciding their own popular patents independent of the other (sub)categories.

11:20-12:50 Session Oral O1C: Networks in Finance & Economics
Shaping and Predicting the Urban Labor Markets
PRESENTER: Xiangnan Feng

ABSTRACT. 1 Introduction

Labour markets are complex systems, influenced by various factors including not only technological change, but also education, policy and regulation and geography. Much work has been done to uncover the role of technology in changing labour markets including skill demands, automation involvement and human resource mobility. For policy makers at all levels, from urban economies to state, national and continental, the goal is to drive the labour market to an optimal (or close as possible) state. This means imposing regulations, policies, incentives and investment that maximise fulfilling and rewarding employment in the workforce as well as social mobility.

Recently, research in the future of work has exploited network science to understand the structure of labour markets. This includes skills, jobs and unemployment shocks with great success. However, despite this evidence that labour markets are strongly mediated by network structure, our understanding of how their structure constrains the efficacy of policy and regulation is limited.

In this work we build on these foundational network-based studies and apply advanced network science techniques to better understand the dynamics of these complex systems and how they vary between cities. More specifically, we propose a method to measure the occupation spreading ability in urban labor markets. This provides a foundational understanding of how labour markets function, which in turn dictates how many and which jobs should be targeted when applying regulation, when seeking to maximise social mobility, ameliorate the effects of skill-based technological automation, and incentivise structural economic change, e.g. green technologies. At the same time, link prediction methods are applied to predict the landscape of future occupation networks, which demonstrates that there exists a fundamental limit on how well the evolution of labour markets can be predicted. With new network-theoretic tools applied to occupation data, deeper findings about the future of work can be expected to guide policy makers.

2 Results

We begin by considering data on occupations in each US city from O*NET. We construct a network of occupations linked by skill similarity. We thus have a network for each urban economy in which nodes are occupations found in the city and edges are the similarities between them. The number of nodes varies between 64 and 586. As a first step to understand the diffusion of employment and technology, we investigate spreading patterns determined by this network structure. To guide and control spreading phenomena on the labor market optimally, identifying influential nodes is crucial for policy makers.

The maximum influence problem is regarded as NP-hard for general networks. Based on collective influence, which finds the “superspreaders” in networks, we define the structural spreading influence (SSI) of occupation i in city c.

Based on the O*NET data, the influential occupations with the highest SSI values are obtained, and the cases of Los Angeles and Dalton, Georgia are presented in Figure 1. A clear difference can be observed between these two cities: besides the 14 common occupations, in the Los Angeles area there are more influential occupations in the “Service” and “Sales and office” sectors, while in Dalton most of the differing occupations lie in the “Production, transportation, and material moving” sector. It is also clear that not all occupations with the highest employment shares are the most crucial occupations locally.

Predicting the landscape of the future of work is another challenge for policy makers, and link prediction theories on networks can help predict future similarities among occupations. Adopting the SPM method on occupation networks, we present our prediction of possible links in the future labor market, whose accuracy is high when examined on data from 2011 and 2020. In Figure 2 the degree distributions in 2011 and 2020 and the distributions predicted by SPM for the years 2030, 2040, 2050, 2060 and 2070 are presented. It can be observed that the degree distributions become more uneven and the degree variance keeps increasing, which may suggest that occupations are becoming more heterogeneous and polarized. Policies could be made aiming at building a more robust labor market as future occupation contents shift.

Economic recovery with unified price and quantity dynamics on input-output networks

ABSTRACT. In input–output economics, national economies are modelled as a network in which nodes correspond to industrial sectors and edges to the monetary or quantity flows between them. These input-output models have been used to analyse how an economy reacts to an economic shock, like the recovery from a disaster or a pandemic. In conventional input-output analyses, namely the Leontief model, the sectors (nodes of the network) react to shocks exclusively by adjusting the quantity of their produced goods, while prices remain fixed. In another variant, the Ghosh model, quantities stay fixed and sectors can only adjust to a shock through their prices. The challenge when unifying these two approaches is to prohibit the overshooting of the sectors in response to the applied shock and to maintain the self-consistency of the network. We propose an extension to these dynamic input-output recovery models which allows for both price and quantity adjustments simultaneously, does not overshoot the applied shock, and keeps the network self-consistent. To illustrate the robustness of the proposed model, we fit its parameters to the US economy and find that they are correlated across multiple years, which suggests that the flexibility of price and quantity adjustments is an intrinsic property of each sector.
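For context, the two classical limits that this abstract unifies can be written in a few lines: the Leontief quantity model solves x = Ax + d (gross outputs x for final demand d, prices fixed), while the price dual p = pA + v (value-added coefficients v, quantities fixed) corresponds to Ghosh-type adjustment. A minimal iterative sketch, under the assumption that the spectral radius of the technical-coefficient matrix A is below one; function names are ours:

```python
def leontief_output(A, d, tol=1e-10, max_iter=10000):
    """Gross output x solving x = A x + d, i.e. x = (I - A)^{-1} d,
    via the Neumann-series iteration x <- A x + d."""
    n = len(d)
    x = d[:]
    for _ in range(max_iter):
        x_new = [d[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; check the spectral radius of A")

def ghosh_prices(A, v, tol=1e-10, max_iter=10000):
    """Price dual: prices p solving p = p A + v, i.e. quantities stay
    fixed and sectors adjust through prices."""
    n = len(v)
    p = v[:]
    for _ in range(max_iter):
        p_new = [v[j] + sum(p[i] * A[i][j] for i in range(n)) for j in range(n)]
        if max(abs(p_new[j] - p[j]) for j in range(n)) < tol:
            return p_new
        p = p_new
    raise RuntimeError("no convergence; check the spectral radius of A")
```

The proposed model interpolates between these two fixed points by letting each sector move both x and p after a shock; the sketch above only reproduces the two pure limits.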

Towards a theory of out-of-equilibrium network economies
PRESENTER: Johannes Lumma

ABSTRACT. One of the central research questions in economics is to understand the origin of macroeconomic fluctuations. We argue that macroeconomic fluctuations can be caused endogenously through the intrinsic dynamics of the economy, by considering a toy model of the economy as a network of interconnected firms.

Inferring supply networks from mobile phone data to estimate economic systemic risk
PRESENTER: Tobias Reisch

ABSTRACT. The functioning of our economies critically depends on the network of customer-supplier relationships as a central structure. Mostly due to a lack of firm-level transaction data, economic shock propagation along corporate supply links is typically analyzed on sector-level aggregations of these networks. Only recently has data on firm-level trade relationships become accessible, restricted, however, to a small number of countries, such as Japan, Belgium, Brazil or Hungary. Such data are difficult to obtain, requiring extensive surveys or tax and payment system data. Here we show how telecommunication data can be used as an inexpensive, more accessible, real-time and widely available alternative to estimate firm-level supply networks and assess the prevailing systemic risk in a country. We reconstruct a national supply network from mobile phone data and use a recently developed method to estimate the economic systemic risk of every company in the dataset.

Networks resilience under shocks propagation conditions

ABSTRACT. The concept of resilience, i.e., the ability of a unified structure to absorb shocks, is highly relevant in the context of network modelling and analysis, particularly in finance. This paper starts from this premise and deals with the resilience of a financial inter-banking system. To this aim, we first introduce a new measure of the resilience of a network, taking into full consideration the influence of the topology of the network and the weights of its links in the propagation of shocks; then, we build several financial weighted networks related to the inter-banking sector, whose weights are calibrated on high-quality empirical data; lastly, we compute the resilience measure of the considered networks. A discussion of the results is provided, considering both finance and network theory perspectives.

How does systemic risk spread globally across supply chain network?

ABSTRACT. Uploaded as a pdf document.

12:50-14:15 Lunch Break
14:15-16:15 Session Oral O2A: Network Models & Motifs
Graph signal processing and wavelet packets

ABSTRACT. Classical wavelets, wavelet packets and time-frequency dictionaries have been generalized to the graph setting, the main goal being to obtain atoms which are jointly localized both in the vertex domain, the analogue of the time domain for signals on the real line, and in the graph spectral domain, the analogue of the frequency domain. Here we present a new method to generate frames of wavelet packets defined in the graph spectral domain to represent signals on weighted graphs.

Eigenvalues of random signed graphs with cycles: A graph-centered view of the method of moments with practical applications

ABSTRACT. We illustrate a simple connection between the cycles in a graph and the eigenvalues of its adjacency matrix. We then use this connection to derive properties of the eigenvalues of random graphs with short cyclic motifs and of circulant graphs with random signs. We find that the eigenvalue distributions that emerge from these structures are surprisingly beautiful. Finally, we illustrate their practical relevance in the field of Reservoir Computing.

Entropy metrics for graph signals

ABSTRACT. Entropy metrics (for example, permutation entropy) are nonlinear measures of irregularity in time series (1-dimensional data). These entropy metrics can be generalised to data on periodic structures (such as a grid or lattice pattern) using their symmetry, thus enabling their application to images. However, these metrics have not been developed for signals settled on irregular domains defined by a graph. In this work, we define for the first time an entropy metric to analyse signals measured over irregular graphs, by generalising permutation entropy, a well-established nonlinear metric based on the comparison of neighbouring values within patterns in a time series, to data on general graphs. Our algorithm is based on the idea of comparing signal values on neighbouring nodes (using the adjacency matrix). We show that this generalisation preserves the properties of classical permutation entropy for time series, and we demonstrate its applicability to synthetic and real graphs.
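One plausible way to realise this idea, sketched here as an assumption rather than the authors' exact definition, is to embed each node's signal value together with iterated neighbour means (powers of a normalised adjacency), and take the Shannon entropy of the resulting ordinal patterns. Nodes are assumed to be labelled 0..n-1:

```python
from collections import Counter
from math import log, factorial

def graph_permutation_entropy(adj, x, m=3):
    """Sketch of permutation entropy for a graph signal x: for each node i,
    build the vector (y^0_i, ..., y^{m-1}_i), where y^0 = x and y^{k+1}_i
    is the mean of y^k over the neighbours of i, then return the Shannon
    entropy of the ordinal patterns, normalised by log(m!)."""
    n = len(x)
    embed = [list(x)]
    for _ in range(m - 1):
        prev = embed[-1]
        embed.append([
            sum(prev[j] for j in adj[i]) / len(adj[i]) if adj[i] else prev[i]
            for i in range(n)
        ])
    patterns = Counter()
    for i in range(n):
        v = [embed[k][i] for k in range(m)]
        # ordinal pattern: index permutation that sorts the embedding vector
        patterns[tuple(sorted(range(m), key=lambda k: v[k]))] += 1
    h = -sum((c / n) * log(c / n) for c in patterns.values())
    return h / log(factorial(m))
```

For a time series viewed as a path graph this reduces to a permutation-entropy-like quantity; a constant signal yields zero entropy, as expected of any irregularity measure.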

Efficient enumeration of multilayer subnetworks
PRESENTER: Tarmo Nurmi

ABSTRACT. Efficiently finding connected, induced subgraphs is vital to many network analysis methods. There are several methods for enumerating subgraphs of a given size, such as ESU and Kavosh. As multilayered data have become more and more common, multilayer subnetwork enumeration has become an important task. To preserve and leverage the multilayer properties, subnetwork enumeration algorithms have been developed for different types of networks, such as temporal networks and multiplex networks. However, the existing approaches are adapted only to specific types and applications of multilayer networks, and are not applicable to generic multilayer networks. We present here two algorithms for enumeration and unbiased sampling of general multilayer connected induced subnetworks, NL-MESU and A-MESU, for node-layer-based multilayer ESU and aspect-based multilayer ESU, respectively. Both are generalizations of ESU, but with different approaches to the generalization. We compare their running times in random multilayer networks and in real-world networks, and show that their relative performance is dependent on network density, subnetwork size, and data-specific properties.
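Both proposed algorithms generalize ESU; for context, the classic single-layer ESU (Wernicke, 2006) enumerates each connected induced k-node subgraph exactly once by growing subgraphs only through "exclusive" neighbours with labels larger than the root. A compact sketch for undirected graphs with comparable node labels:

```python
def esu(adj, k):
    """Enumerate all connected induced subgraphs of size k in an
    undirected graph given as node -> set of neighbours (ESU algorithm).
    Returns a list of frozensets, each subgraph exactly once."""
    subgraphs = []

    def extend(sub, extension, v):
        if len(sub) == k:
            subgraphs.append(frozenset(sub))
            return
        ext = set(extension)
        while ext:
            w = ext.pop()
            # exclusive neighbourhood of w: neighbours with label > root v
            # that are neither in the subgraph nor adjacent to it
            new_ext = ext | {u for u in adj[w]
                             if u > v and u not in sub
                             and all(u not in adj[s] for s in sub)}
            extend(sub | {w}, new_ext, v)

    for v in adj:
        extend({v}, {u for u in adj[v] if u > v}, v)
    return subgraphs
```

NL-MESU and A-MESU extend this growth-and-exclusion scheme to node-layer tuples and to whole aspects, respectively; the sketch only shows the single-layer base case.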

wsGAT: Weighted and Signed Graph Attention Networks for Link Prediction
PRESENTER: Marco Grassia

ABSTRACT. Graph Neural Networks (GNNs) have been widely used to learn representations on graphs and tackle many real-world problems from a wide range of domains. In this paper we propose wsGAT, an extension of the Graph Attention Network (GAT) [24] layers, meant to address the lack of GNNs that can handle graphs with signed and weighted links, which are ubiquitous, for instance, in trust and correlation networks. We first evaluate the performance of our proposal by comparing against GCNII [6] in the weighted link prediction task, and against SGCN [8] in the link sign prediction task. We then combine the two tasks and show the models' performance on predicting the signed weight of links, and their existence. Our results on real-world networks show that models with wsGAT layers outperform the ones with GCNII and SGCN layers, and that there is no loss in performance when signed weights are predicted.

Propagation on Multi-relational Graphs for Node Regression

ABSTRACT. Recent years have witnessed a rise in real-world data captured with rich structural information that can be conveniently depicted by multi-relational graphs. While the inference of continuous node features across a simple graph remains rather under-studied in current relational learning research, we go one step further and focus on the node regression problem on multi-relational graphs. We take inspiration from the well-known label propagation algorithm, which aims to complete categorical features across a simple graph, and propose a novel propagation framework for completing missing continuous features at the nodes of a multi-relational, directed graph. Our multi-relational propagation algorithm is composed of iterative neighborhood aggregations that originate from a relational local generative model. Our findings show the benefit of exploiting the multi-relational structure of the data in the node regression task.
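As background, the propagation idea the authors generalize can be sketched for a single-relation graph with scalar node features (a simplified baseline, not the multi-relational algorithm of the paper):

```python
def propagate_features(adj, values, iters=50):
    # Iteratively replace each missing (None) node value with the mean of
    # its neighbours' current values; known values stay fixed throughout.
    x = {v: (values[v] if values[v] is not None else 0.0) for v in adj}
    for _ in range(iters):
        x = {v: values[v] if values[v] is not None
             else sum(x[u] for u in adj[v]) / len(adj[v])
             for v in adj}
    return x

# On a path 0-1-2 with the middle value missing, the imputed value
# converges to the mean of the two known endpoints.
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

The multi-relational version in the paper replaces the plain neighbourhood mean with relation-specific aggregations derived from a local generative model.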

Motif-Role Extraction in Uncertain Graph Based on Efficient Ensembles
PRESENTER: Soshi Naito

ABSTRACT. In this paper, we formulate a new problem of extracting node roles, an extension of the notion of network motifs, from a graph with edge uncertainty. In our framework, we first count roles for each node in each sampled graph, then calculate similarities between nodes in terms of role frequency, and finally divide all nodes into clusters according to the similarity of roles. Achieving good accuracy of role extraction for uncertain graphs requires a huge amount of graph sampling, role counting, similarity calculation, and clustering. To extract node roles efficiently from a large-scale uncertain graph, we propose four ensemble methods, the graph-ensemble, vector-ensemble, similarity-ensemble, and cluster-ensemble methods, and compare the similarity of their results and their efficiency. In the experiments, we used four different types of real networks with probabilities assigned to their edges, compared the similarity and efficiency of each ensemble method, and discussed the clustering results. We confirmed that the graph-ensemble method is much more efficient in terms of execution time than the other ensemble methods when equipped with a state-of-the-art motif counting algorithm.

Experiments on F-restricted bi-pattern mining

ABSTRACT. Bi-pattern mining has previously been introduced to mine attributed networks in which nodes may have two roles, such as bipartite or directed networks. A bi-pattern is made of a pair of attribute patterns and has a pair of support sets capturing the occurrences of the bi-pattern. Restricted bi-pattern mining was then introduced to enforce that both components of a bi-pattern share part of their attribute content, leading to a reduced bi-pattern space. The definition of a new closure operator then makes it possible to enumerate such closed restricted bi-patterns. That presentation was theoretical and no experiments were provided. Our contribution is practical: it consists of experiments comparing the efficiency and results of restricted bi-pattern mining of an attributed directed network to their unrestricted bi-pattern mining counterparts.

14:15-16:15 Session Oral O2B: Human Behavior
The impact of link recommendation algorithms in polarization and consensus dynamics

ABSTRACT. Online social networks are increasingly central in shaping political opinions. Link recommendation or social matching algorithms are implemented to recommend new online connections to social network users, based on supposed offline familiarity, similar interests, or the potential to serve as a source of useful information. Here we investigate (1) how algorithmic link recommendations interplay with opinion formation, and (2) what the long-term impacts of such algorithms on opinion polarization are. We study a model where individuals interact in a dynamic social network. We focus on rewiring based on structural similarity, defined as similarity based on common features that depend exclusively on the network structure. Importantly, rewiring based on structural similarity is a backbone of link recommendation algorithms, which rely on link prediction methods to suggest connections to users. Our model combines three key ingredients: (1) links are formed according to structural similarity, based on common neighbors; (2) opinions evolve through social influence; (3) nodes can react differently to out-group links, either converging in their opinions or polarizing further. We find that establishing links based on structural similarity alone (a process that is likely to be reinforced by link recommendation algorithms) contributes to opinion polarization.
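The structural-similarity rewiring in ingredient (1) can be illustrated with the classic common-neighbours link predictor (a minimal sketch; the function name is ours):

```python
def recommend_by_common_neighbours(adj, u, k=1):
    # Rank the non-neighbours of u by the number of shared neighbours,
    # the simplest structural-similarity link predictor, and return the
    # top k candidates (ties broken by node id).
    scores = {v: len(adj[u] & adj[v])
              for v in adj if v != u and v not in adj[u]}
    return sorted(scores, key=lambda v: (-scores[v], v))[:k]

# On a 4-cycle, nodes 0 and 2 are not linked but share neighbours 1 and 3,
# so 2 is the natural recommendation for 0.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```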

Emergence of complex socioeconomic networks driven by individual and collective interests
PRESENTER: Jaime Iranzo

ABSTRACT. A combination of network science and game theory can help explain the basic mechanisms that lead to the creation of complex socioeconomic structures, by examining how simple networks interact to build complex entities, both when connections among individuals are exclusively guided by self-interest and when they result from a mixture of individual and collective motivations. Here we present a theoretical framework where individuals or human groups from different communities connect to each other only if doing so increases their own eigenvector centrality, a topological measure of wide application in many different contexts that quantifies the importance of a node within the network. Our analytical and numerical results show that the emergence of interconnected networks is catalyzed by the self-interest of peripheral agents, who are penalized in the long run but transiently benefit from establishing links with nodes from other communities. Moreover, the interconnection process leads to a hierarchical, assortative, and very efficient structure where links across networks involve nodes of the same importance. These findings are robust to the introduction of moderate levels of collective-oriented behavior and compatible with the interconnection dynamics observed in real-world socioeconomic networks.
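A minimal version of the self-interest criterion can be sketched with plain power iteration (illustrative only; the paper's analytical treatment is richer):

```python
def eigenvector_centrality(adj, iters=300):
    # Power iteration on A + I (the shift avoids oscillation on bipartite
    # graphs); normalised so the maximum entry equals 1.
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        y = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        m = max(y.values())
        x = {v: y[v] / m for v in adj}
    return x

def link_is_self_interested(adj, a, b):
    # Would agent a's relative eigenvector centrality rise if the edge
    # (a, b) were added?  Compare centralities with and without it.
    before = eigenvector_centrality(adj)[a]
    trial = {v: set(nbrs) for v, nbrs in adj.items()}
    trial[a].add(b)
    trial[b].add(a)
    return eigenvector_centrality(trial)[a] > before

# On a path 0-1-2-3, the peripheral end node 3 gains centrality by
# linking to the more central node 1.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```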

The impact of social contact networks on innovation and creation of new ideas
PRESENTER: Fatemeh Zarei

ABSTRACT. Innovation is fundamental for development and creates a competitive advantage for societies and organizations. Innovation generates novel complex protocols, technologies, or ideas, and is rarely the result of individual capabilities. Most often, people combine complementary expertise and perspectives to innovate. Social networks play a key role because the social structure defines how information flows and is exchanged among people in a group. Although the study of innovation encompasses both the creation and the diffusion of new ideas, most recent studies address only the diffusion of ideas in a social network. In this work, we study how the social network structure affects the creation of ideas from a dynamic perspective by designing an agent-based network model. The model studies mechanisms driving innovation through the simulation of human characteristics, social interactions, and innovation processes.

We model the population as a network of N individuals. Each individual has a set of technologies. A technology is defined as a string of symbols A and B, and its complexity level is given by the number of letters in the string. The complexity of an individual is the highest complexity among the technologies in its skill set, and its diversity is the number of technologies in that set. Initially, all individuals hold only the simplest technologies, with complexity level 1. At each time step, a randomly chosen individual can extend its skill set through one of two mechanisms: (1) it creates a new technology of higher complexity based on its own knowledge of simpler technologies, or (2) it learns the most complex technology of its neighbors (social learning). The probability that an individual innovates depends on its complexity level and its diversity of skills. We apply this innovation algorithm to three different network structures and study how network heterogeneity affects innovation. We first examine the temporal evolution of the average complexity across all individuals for the different structures, then the effect of individuals' local behavior on innovation, and finally the effect of local structure on how similar individuals' technologies are to those of their neighbors.
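Our reading of one update step can be sketched as follows (a simplification: here the innovation probability is a fixed parameter `p_innovate`, whereas in the model it depends on the individual's complexity and diversity):

```python
import random

def innovation_step(skills, adj, node, p_innovate, rng):
    # With probability p_innovate the node extends its most complex
    # technology by one symbol (innovation); otherwise it copies the most
    # complex technology found among its neighbours (social learning).
    if rng.random() < p_innovate:
        base = max(skills[node], key=len)
        skills[node].add(base + rng.choice("AB"))
    else:
        best = max((t for v in adj[node] for t in skills[v]), key=len)
        skills[node].add(best)

rng = random.Random(1)
adj = {0: {1}, 1: {0}}
skills = {0: {"A"}, 1: {"AB"}}
innovation_step(skills, adj, 0, p_innovate=0.0, rng=rng)  # social learning
innovation_step(skills, adj, 1, p_innovate=1.0, rng=rng)  # innovation
```

After these two steps, node 0 has learned "AB" from its neighbour, and node 1 has created a length-3 technology from its own "AB".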

Equilibrium selection in coordination games
PRESENTER: Tomasz Raducha

ABSTRACT. \section{Introduction} %

Cooperation among individuals is often crucial for the survival of the whole, be it an economic system, a small social group, or an animal cohort. In many man-made systems, proper regulations ensure that we end up in the optimal configuration. In many complex systems, however, coordination emerges without any external supervision or regulation. It is not always clear how the system evolves into a cooperative state, nor which state will be selected if more than one exists. Understanding this phenomenon is relevant in a range of fields, from economics and sociology to biology. The conventional approach to modeling such phenomena is evolutionary game theory. In this framework, individuals play games using different strategies and receive payoffs. Based on the payoffs and the strategies of their neighbors, they update their own strategy in order to maximize profit.

% \section{Results} %

We study equilibrium selection under different update rules by investigating a spectrum of coordination games played in a population of agents. Firstly, we look at their behavior in a single-layer network of $N$ nodes, each with degree $k$ (random regular graph). We run numerical simulations with agents initially using a random strategy for two games: the pure coordination game (PCG) with the payoff matrix: \begin{equation} \begin{blockarray}{ccc} & $A$ & $B$ \\ \begin{block}{c(cc)} $A$~~ & ~1~ & 0~ \\ $B$~~ & ~0~ & 1~ \\ \end{block} \end{blockarray}~~~, \label{eqn:matrix_pure} \end{equation} with two equivalent equilibria, and the general coordination game (GCG) with the payoff matrix: \begin{equation} \begin{blockarray}{ccc} & $A$ & $B$ \\ \begin{block}{c(cc)} $A$~~ & ~1~ & S~ \\ $B$~~ & ~T~ & 0~ \\ \end{block} \end{blockarray}~~~, \label{eqn:matrix_general} \end{equation} for $T<1$ and $S<0$ with payoff- and risk-dominant equilibria \cite{buskens2016effects}. To analyze the time evolution of the system, we update the state of nodes, i.e. the last payoff and the used strategy, in each time step until a stationary state or a frozen configuration is reached. In each time step a node, called a focal or active node, is randomly selected to play the game with its neighbors and update its strategy based on its payoff and the applied update rule.

\vspace{-0.3cm} \begin{figure}[h!] \label{fig1} \centering \includegraphics[scale=0.6]{simple_ui_scaling.png} \includegraphics[scale=0.55]{UI_coop_mean_k32_new.png} \includegraphics[scale=0.58]{ccg_bc_ui_scaling.png} \includegraphics[scale=0.58]{nov_repl_k8.png} \caption{a) Coordination level $\alpha$ in the PCG vs. average degree $k$. b) Coordination level $\alpha$ in the T-S plane for the GCG with UI (black dashed line shows the theoretical transition). c) Critical value $S_c$ in the GCG vs. average degree $k$ for UI. d) Coordination level $\alpha_i$ in the GCG for two layers of a multiplex vs. node overlap $q$. } \end{figure} There are several update rules used in the field of evolutionary game theory \cite{roca2012individual}. Three of the most common ones are: the replicator dynamics (RD), the best response (BR), and the unconditional imitation (UI). The replicator dynamics appears mostly in biological applications, describing the replication of genes, species, or individuals. The best response update rule is usually considered in the economic literature, where the assumption of rational agents is typical. The unconditional imitation is popular in complexity science, as it resembles social imitation. Surprisingly, the three update rules can lead to substantially different outcomes, so the evolutionary game environment is defined by the update rule as much as by the payoff matrix. Consequently, in order to describe experimental data one must properly identify the appropriate update rule.

To measure the level of coordination we use a parameter $\alpha \in [0,1]$ -- the fraction of nodes using strategy A. For the PCG we observe a transition from a frozen disordered configuration to full coordination with increasing $k$ when using UI (Fig.~\ref{fig1}a). Under the two other update rules the system always coordinates as long as there is a large connected component. In the GCG the results again depend on the degree for UI -- the theoretical transition line $T=S+1$, at which the risk-dominant strategy changes, is reached only for a complete graph, while for smaller values of $k$ it is shifted towards lower $S$ (Fig.~\ref{fig1}b). For $T=-1$ the critical value $S_c$ approaches $-2$ as a power law with increasing $k$ for any size of the network (Fig.~\ref{fig1}c).

Finally, we study coordination between two layers in a multiplex network. On each layer a GCG with a different payoff matrix is played, with payoff values chosen so that each layer separately would coordinate on a different strategy. Both games, however, are equally distant from the transition, so the setup is symmetrical in principle. We vary the node overlap $q$ (also known as the degree of multiplexity) between the layers and analyze equilibrium selection. Overlapping nodes must be in the same state on both layers at all times~\cite{diakonova2016irreducibility}. Surprisingly, we observe symmetry breaking -- in most cases coordination on the Pareto-optimal strategy is preferred for larger values of the overlap (Fig.~\ref{fig1}d).

All three studied update rules assume that a rational agent aims at increasing its payoff: either by directly computing the payoffs and choosing the larger one, as in BR (which therefore requires players' knowledge of the payoff matrix), or by imitating a more successful neighbour with a bigger payoff, as in RD and UI. Nevertheless, the outcomes under these update rules can differ substantially in coordination games, even for the simplest PCG. A one-round two-player game is fully described by its payoff matrix; the conclusion from our simulations is that an evolutionary game is defined by the update rule as much as by the payoff matrix.
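The unconditional imitation rule on the PCG can be sketched as follows (a minimal illustration with deterministic tie-breaking by node order, not the simulation code of the study):

```python
def pcg_payoff(s, t):
    # Pure coordination game: payoff 1 when strategies match, else 0.
    return 1 if s == t else 0

def ui_update(strategies, adj, focal):
    # Unconditional imitation: the focal node adopts the strategy of the
    # highest-payoff player in its closed neighbourhood (ties broken by
    # node order, a simplification).
    def payoff(v):
        return sum(pcg_payoff(strategies[v], strategies[u]) for u in adj[v])
    best = max(sorted(adj[focal]) + [focal], key=payoff)
    strategies[focal] = strategies[best]

# On a 4-cycle with a single dissenter, UI flips it to the majority
# strategy, since the dissenter's payoff (0) is below its neighbours'.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
strategies = {0: "A", 1: "A", 2: "A", 3: "B"}
ui_update(strategies, adj, 3)
```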

Success at high peaks: a multiscale approach combining individual and expedition-wide factors

ABSTRACT. This work presents a network-based data-driven study of the combination of factors that contribute to success at high peaks. It simultaneously examines the effects of individual factors such as age, gender, experience etc., as well as expedition-wide factors such as number of camps, ratio of sherpas to paying climbers etc. Specifically, it combines the two perspectives into a multiscale network, i.e., a network of individual climber features within each expedition at the finer scale, and an expedition similarity network on the coarser scale. The latter is represented as a multiplex network where layers encode different factors. The analysis reveals that chances of failure to summit due to fatigue, altitude or logistical problems, drastically reduce when climbing with repeat partners, especially for experienced climbers. Additionally, node-centrality indicates that individual traits of youth and oxygen use are the strongest drivers of success. Further, the learning of network projections enables computation of correlations between intra-expedition networks and corresponding expedition success rates. Of expedition-wide factors, the expedition size and length layers are found to be strongly correlated with success rate. Lastly, community detection on the expedition-similarity network reveals distinct communities where a difference in success rates naturally emerges amongst the communities.

Effects of population structure on the evolution of linguistic convention
PRESENTER: Kaloyan Danovski

ABSTRACT. We define a model for the evolution of linguistic convention in a population of agents embedded on a network, and consider the effects of topology on the population-level language dynamics. Individuals are subject to evolutionary forces that over time result in the adoption of a shared language throughout the population. The differences in convergence time to a common language and that language's communicative efficiency under different underlying social structures and population sizes are examined. We find that shorter average path lengths contribute to a faster convergence and that the final payoff of languages is unaffected by the underlying topology. Compared to models for the emergence of linguistic convention based on self-organization, we find similarities in the effects of average path lengths, but differences in the role of degree heterogeneity.

Multiscale opinion dynamics on real networks

ABSTRACT. Although group-level information is known to affect behavioural responses in social networks, very few models account for it. In this work we introduce the Multiscale Voter Model (MVM), which relies on network geometry to incorporate homophilic influences at different scales. We use numerical simulations to monitor the evolution of MVM dynamics in synthetic geometric networks and several real data sets. We identify a transition between a final state of mixed binary opinions and one of full consensus, depending on the composition and size of the groups as well as their strength of influence. Strikingly, groups made of more diverse and less affine nodes promote eventual consensus. Conversely, groups reflecting similarities between nodes, as captured by the latent geometry of the networks, yield metastable clusters of the same opinion which hinder the transition to general agreement. Finally, we study in detail the phenomenon of opinion segregation and discover that opinion clusters admit a visual interpretation as spatial patterns in the underlying hyperbolic space of the networks.

An Adaptive Mental Network Model for Reactions to Social Pain
PRESENTER: Katarina Miletic

ABSTRACT. Reactions to social pain, and the behavioral strategies adopted as a consequence, form a complex and adaptive phenomenon with major consequences for a person's social functioning and overall well-being. Because such behavioral strategies change over time and are difficult to explain by simple linear correlations, a model such as this one helps provide a more nuanced understanding of the phenomenon and of the way it changes dynamically over time, potentially leading to richer theoretical predictions and new directions for research.

14:15-16:15 Session Oral O2C: Network Analysis & Measures
Recommender systems increase exposure diversity. Or do they? A complex networks approach.
PRESENTER: Augustin Godinot

ABSTRACT. While the ever-increasing emergence of online services has led to a growing interest in the development of recommender systems, the algorithms underpinning such systems have begun to be criticized for their role in limiting the variety of content exposed to users. In this context, the notion of diversity has been proposed as a way of mitigating the side effects resulting from the specialization of recommender systems. However, we still know little about how classic recommendation paradigms affect users’ behavior in terms of diversity. In this work, using a recent development on measuring diversity in Heterogeneous Networks, we show the relevance of a complex network approach to recommender systems evaluation: by conducting random walks on a tripartite graph modeling how users are exposed to different musical categories via recommendations, we were able to analyze the effect of a well-known recommendation model on the diversity of users' attention. The results show that the musical selections of a large proportion of users are diversified as a result of the recommendations. However, our study also reveals that this observation only holds for diversity in the sense of variety and that recommendations usually fail to provide a balanced exposure to the different categories.
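One simple way to quantify exposure diversity in this spirit is the true diversity (the exponential of the Shannon entropy) of the categories reached by a uniform user-to-item-to-category walk. The names below are illustrative; the paper's tripartite random-walk measure is more general:

```python
import math

def exposure_diversity(user, user_items, item_cats):
    # Probability of reaching each category via a uniform two-step walk
    # user -> item -> category, then the exponential of its Shannon
    # entropy: the "effective number" of categories the user is exposed to.
    items = user_items[user]
    probs = {}
    for it in items:
        for c in item_cats[it]:
            probs[c] = probs.get(c, 0.0) + 1.0 / (len(items) * len(item_cats[it]))
    return math.exp(-sum(p * math.log(p) for p in probs.values()))

# A user split evenly between two single-category tracks is effectively
# exposed to exactly 2 categories.
user_items = {"u": ["t1", "t2"]}
item_cats = {"t1": ["rock"], "t2": ["jazz"]}
```

An unbalanced category distribution yields a value between 1 and the raw number of categories, which is one way to separate variety from balance.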

Attributed Graphettes-based Preterm Infants Motion Analysis
PRESENTER: Davide Garbarino

ABSTRACT. The neuro-motor status of preterm infants can be assessed by analyzing their spontaneous movements. Currently available automatic methods for assessing infant motion patterns are still limited. We present a novel pipeline for the characterization of infants' spontaneous movements which, given RGB videos, leverages network analysis and NLP. First, we describe a body configuration for each frame, taking landmark points on the infant's body as nodes of a network and connecting them depending on their proximity. Each configuration can be described by means of attributed graphettes. We identify each attributed graphette by a string, thus allowing us to study videos as texts, i.e. sequences of strings. This lets us exploit NLP methods such as topic modelling to obtain interpretable representations. We analyze topics to describe both global and local differences in infants with normal and abnormal motion patterns. We find encouraging correspondences between our results and evaluations performed by expert physicians.

A Fair-Cost Analysis of Random Neighbor Sampling Methods
PRESENTER: Yitzchak Novick

ABSTRACT. Random neighbor sampling, or RN, is a method for sampling vertices with an average degree greater than the mean degree of the graph. It samples a vertex, then exchanges it for one of its neighbors, which is expected to be of higher degree. While considerable research has analyzed various aspects of RN, the extra cost of sampling a second vertex is typically not addressed. In this paper, we offer an analysis of RN from the perspective of cost. We define three separate costs: the cost of sampling a vertex, the cost of sampling a neighbor, and the cost of selecting a vertex (as opposed to discarding it in exchange for another). We introduce variations of RN: RVN, which retains the first sampled vertex, and RN-RV and RVN-RV, which are 'hybrid' methods that sample differently in two separate phases. We study all of these methods in terms of the costs we define. The cost-benefit analysis highlights the methods' strengths and weaknesses, with a specific focus on their respective performance in finding high-degree vs. low-degree vertices. The analyses and results provide a far richer understanding of RN and open a new area of exploration for researching additional sampling methods best suited for cost-effectiveness in light of particular goals.
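The basic RN and RVN procedures can be sketched as follows (our own minimal illustration of the definitions above):

```python
import random

def rn_sample(adj, rng):
    # RN: sample a vertex uniformly, discard it, keep a random neighbour.
    # Costs incurred: one vertex sample, one neighbour sample, one selection.
    v = rng.choice(sorted(adj))
    return rng.choice(sorted(adj[v]))

def rvn_sample(adj, rng):
    # RVN: retain the first vertex as well, so the neighbour-sampling
    # cost is shared between two selected vertices.
    v = rng.choice(sorted(adj))
    return [v, rng.choice(sorted(adj[v]))]

# On a star, RN lands on the hub 3 times out of 4 on average, so its
# expected sampled degree (2.5) exceeds the mean degree (1.5).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
rng = random.Random(0)
avg = sum(len(star[rn_sample(star, rng)]) for _ in range(1000)) / 1000
```

The star example makes the friendship-paradox effect behind RN concrete: a random neighbour of a random vertex is usually the hub.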

Classification of Dispersed Patterns of Radio-Graphic Images with COVID-19 by Core-Periphery Network Modeling

ABSTRACT. In real-world data classification tasks, we often face situations where the normal data samples present a well-defined pattern while the features of abnormal samples vary from one to another, i.e., do not show a regular pattern. The general data classification hypothesis requires the data features within each class to present a certain level of similarity; such real situations therefore violate the classic classification condition and make classification a hard task. In this paper, we present a novel solution to this kind of problem through a network approach. Specifically, we construct a core-periphery network from the training data set in such a way that the core node set is formed by the normal data samples and the peripheral node set contains the abnormal samples. Classification is performed by checking the coreness of the testing data samples. The proposed method is applied to classifying radiographic images for COVID-19 diagnosis. Computer simulations show promising results. The main contribution is to introduce a general scheme to characterize pattern formation of data "without pattern".

Small number of communities in Twitter keyword networks
PRESENTER: Anthony Bonato

ABSTRACT. We investigate networks formed by keywords in tweets and study their community structure. Based on datasets of tweets mined from over seven hundred political figures in the U.S.\ and Canada, we hypothesize that such Twitter keyword networks exhibit a small number of communities. Our results are further reinforced by considering so-called pseudo-tweets, generated both randomly and with AI-based language generation software. We speculate on the possible origins of the small community hypothesis and on further attempts at validating it.

Finding Cross-Border Collaborative Centres in Biopharma Patent Networks: A Clustering Comparison Approach Based on Adjusted Mutual Information

ABSTRACT. The recent rapid development of COVID-19 mRNA vaccines has underlined the importance of cross-border patent collaboration. This paper uses the latest edition of the REGPAT database from the OECD and constructs the co-applicant patent networks for the fields of biotechnology and pharmaceuticals. We identify the cross-border collaborative regional centres in these patent networks at the NUTS3 level using a clustering comparison approach based on adjusted mutual information (AMI). In particular, we measure and compare the AMI scores of the clustering before and after arbitrarily removing cross-border links of a focal node against the default clustering defined by national borders. The region with the largest difference in AMI scores is identified as the most cross-border collaborative centre, hence the name of our measure, AMI gain. We find that our measure both correlates with and has advantages over the traditional measure of betweenness centrality and a simple measure of foreign share.

Realistic Commodity Flow Networks to Assess Vulnerability of Food Systems
PRESENTER: Abhijin Adiga

ABSTRACT. As the complexity of our food systems increases, they also become susceptible to unanticipated natural and human-initiated events. Commodity trade networks are a critical component of our food systems in ensuring food availability. We develop a generic data-driven framework to construct realistic agricultural commodity trade networks. Our work is motivated by the need to study food flows in the context of biological invasions. These networks are derived by fusing gridded, administrative-level, survey datasets on production, trade, and consumption. Further, they are periodic temporal networks reflecting seasonal variations in production and trade of the crop. We apply this approach to create networks of tomato flow for two regions -- Senegal and Nepal. Using statistical methods and network analysis, we gain insights into spatiotemporal dynamics of production and trade. Our results suggest that agricultural systems are increasingly vulnerable to attacks through trade of commodities due to their vicinity to regions of high demand and seasonal variations in production and flows.

Directed Random Graph Models and their Associated Laplacians

ABSTRACT. We consider spectral methods that uncover hidden structures in directed networks. We establish and exploit connections between node reordering via (a) minimizing an objective function and (b) maximizing the likelihood of a random graph model. We focus on two existing spectral approaches that build and analyse Laplacian-style matrices via the minimization of frustration and trophic incoherence. These algorithms aim to reveal directed periodic and linear hierarchies, respectively. We show that reordering nodes using the two algorithms, or mapping them onto a specified lattice, is associated with new classes of directed random graph models. Using this random graph setting, we are able to compare the two algorithms on a given network and quantify which structure is more likely to be present. We illustrate the approach on synthetic and real networks and discuss practical implementation issues.

16:15-16:55 Coffee Break & Online Session for Onsite Presenters of Poster Session P1 A-B-C
16:15-16:55 Session Poster P2A: [1-11] Network Analysis & Measures
Analysis of radiographic images of patients with COVID-19 using fractal dimension and complex network-based high-level classification

ABSTRACT. An important task in combating COVID-19 involves the quick and correct diagnosis of patients, which is not only critical to the patient's prognosis but can also help to optimize the configuration of hospital resources. This work aims to classify chest radiographic images to aid the diagnosis and prognosis of patients with COVID-19. In comparison to images of healthy lungs, chest images infected by COVID-19 present geometrical deformations, such as the formation of filamentous structures. Therefore, fractal dimension is applied here to characterize the levels of complexity of COVID-19 images. Moreover, real data often contain complex patterns beyond physical features. Complex networks are suitable tools for characterizing data patterns due to their ability to capture the spatial, topological, and functional relationships between the data. Therefore, a complex network-based high-level data classification technique, capable of capturing data patterns, is modified and applied to chest radiographic image classification. Experimental results show that the proposed method obtains high classification precision on X-ray images. A comparative study between the proposed method and state-of-the-art classification techniques is also carried out; the results show that the performance of the proposed method is competitive. We hope the present work makes relevant contributions to combating COVID-19.
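The fractal dimension of an image is typically estimated by box counting. A minimal sketch on a set of pixel coordinates (our illustration, not the authors' pipeline):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    # Count the occupied s x s boxes at each scale s, then fit the slope
    # of log N(s) against log(1/s) by least squares: that slope is the
    # box-counting estimate of the fractal dimension.
    logs = [(math.log(1.0 / s),
             math.log(len({(x // s, y // s) for x, y in points})))
            for s in sizes]
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# A filled 16x16 square is two-dimensional: N(s) = (16 / s) ** 2.
square = [(x, y) for x in range(16) for y in range(16)]
```

A straight line of pixels gives a slope near 1, and filamentous structures fall in between, which is what makes the measure useful for characterising lesion complexity.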

Dynamical influence driven space system design
PRESENTER: Ruaridh Clark

ABSTRACT. Complex networks are emerging in low Earth orbit, with many thousands of satellites set for launch over the coming decade. These data transfer networks vary based on spacecraft interactions with targets, ground stations, and other spacecraft. While constellations of a few large, precisely deployed satellites often produce simple, grid-like networks, new small-satellite constellations are being deployed on an ad-hoc basis into various orbits, resulting in complex network topologies. By modelling these space systems as flow networks, the dominant eigenvectors of the adjacency matrix identify influential communities of ground stations. This approach provides space system designers with much-needed insight into how differing station locations can better achieve alternative mission priorities and how inter-satellite links will impact constellation design. Maximum flow and consensus-based optimisation methods are used to define system architectures that support the findings of eigenvector-based community detection.

Professional Judgments of Principled Network Expansions
PRESENTER: Ian Coffman

ABSTRACT. As data increase in both size and complexity, it is essential to determine optimal solutions for processing and interacting with network representations by human analysts. In this paper, we explore different ways of expanding networks from a small set of "seed" nodes to create tractable sub-networks for human analysis. In particular, we define a framework for generating different network expansions for preference testing and statistical analysis in a paired-comparison task. To illustrate this framework, we generate a range of expansions from a database of politically exposed persons and conduct the preference study with professional criminal network analysts. The resulting statistical analysis demonstrates a dispreference for purely randomly generated networks and a preference for naive network expansion methods (depth-limited traversals based on one or two "hops" from the seed nodes) over more statistically rigorous methods (e.g., edge- and node-based sampling). From these results and inferential analysis, we propose practice considerations and define future research.
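The "one or two hop" expansions the analysts preferred are simple depth-limited traversals. A minimal sketch, assuming (as a simplification of the paper's setting) that the network is given as an adjacency dictionary:

```python
from collections import deque

def hop_expansion(adj, seeds, hops=2):
    """Depth-limited traversal: keep every node within `hops` of any seed.

    `adj` maps each node to an iterable of its neighbours. Returns the
    node set of the expanded sub-network (a breadth-first search with a
    radius cutoff).
    """
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        if dist[u] == hops:
            continue  # do not expand beyond the hop limit
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)
```

For example, on a path 0-1-2-3-4-5, a two-hop expansion from seed 0 keeps nodes {0, 1, 2}.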

Navigating Multidisciplinary Research Using Field of Study Networks

ABSTRACT. This work proposes Field of Study networks as a novel network representation for use in scientometric analysis. We describe the formation of Field of Study (FoS) networks, which relate research topics according to the authors who publish in them, from corpora of articles in which fields of study can be identified. FoS networks are particularly useful for the distant reading of large datasets of research papers through the lens of exploring multi-disciplinary science. To support this, we include case studies which explore multi-disciplinary research in corpora of varying size and scope, namely 891 articles relating to network science research and 166,000 COVID-19-related articles.

Public Procurement Fraud Detection: A review using Network Analysis

ABSTRACT. Public procurement fraud is a plague that produces significant economic losses in any state and society, but empirical studies on detecting it are still scarce. This article presents a review of the most recent literature on public procurement to identify techniques for fraud detection using network science. Applying the PRISMA methodology and using the Scopus and Web of Science repositories, we selected scientific articles and compared their results over the period from 2011 to 2021. Employing a compiled search string, we found cluster analysis and centrality measures to be the most widely adopted techniques.

Evolution of the world stage of global science from a scientific city network perspective
PRESENTER: Hanjo Boekhout

ABSTRACT. This paper investigates the stability and evolution of the world stage of global science at the city level by analyzing changes in co-authorship network centrality rankings over time. Driven by the problem that there exists no consensus in the literature on how the spatial unit "city" should be defined, we first propose a new approach to delineate so-called scientific cities. On a high-quality Web of Science dataset of 21.5 million publications over the period 2008-2020, we study changes in centrality rankings of subsequent 3-year time-slices of scientific city co-authorship networks at various levels of impact. We find that, over the years, the world stage of global science has become more stable. Additionally, by means of a comparison with degree-respecting rewired networks, we reveal how new co-authorships between authors from previously unconnected cities more often connect 'close' cities in the network periphery.
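Degree-respecting rewired networks of the kind used for comparison above are typically generated with double-edge swaps. A generic sketch of that null model (not the authors' implementation):

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """Randomize a simple undirected network while preserving every
    node's degree.

    Repeatedly picks two edges (a, b) and (c, d) and swaps endpoints to
    (a, d) and (c, b), rejecting swaps that would create self-loops or
    duplicate edges.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges
```

Because only endpoints are exchanged, the degree sequence of the rewired network is identical to the original, which is what makes it a fair baseline for the co-authorship comparison.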

The UK Theatre Ecosystem as Network

ABSTRACT. Policy models in the creative and cultural sectors often share a siloed and mechanistic understanding of social systems, leading to narrow policies that overlook the sectors' path-dependency and the myriad interactions behind their dynamics. Despite qualitative and theoretical contributions from the creative and cultural ecology literature, which precisely highlight such models' limits and stress the need for a complexity-informed framework, the latter lacks quantitative evidence. To fill this gap, this paper sheds light on the interconnectedness of creative and cultural production through a longitudinal analysis of workers' flows among London's not-for-profit, commercial and fringe theatres, abstracted as a series of multiplex networks. The method highlights the frequent interactions among the three areas of the sector – and among the workers employed there – supporting the claim for a holistic analysis of the creative and cultural sectors.

From on-site to online teaching: Analysis of Temporal Social Networks in Higher Education

ABSTRACT. Recent research has focused on the effect of the pandemic and the sudden shift to online learning on students and teachers. Most of the results are obtained with qualitative methodologies such as focus groups, or through surveys. In contrast, this study explores the effect of the COVID-19 pandemic on teaching and learning by analysing changes in the structure of temporal networks created from discussion-forum platform data. The preliminary results show significant changes in the levels and patterns of interaction. Students were more involved in the discussions each week, asking more questions and answering their peers' questions. Nevertheless, the increase in questions and interactions implies a meaningful increase in the workload of teachers and TAs. Current research focuses on splitting the interactions by grade placement to discover and explain differences in homophily.

The Technology Space Conditions the Digital Development of Nations

ABSTRACT. We hypothesise that the digital development of nations can be measured as a network of technologies and the relations between them. Global inequalities in digital development can then be understood as differences in the positions of countries in the ‘technology space’. Using data from Stack Overflow, the world's largest questions-and-answers website for programming related topics, we construct this technology space as a network. Exemplified with contributions to machine learning from India and the United States, we identify the position of the two countries in the network and describe digital inequalities in terms of their network positioning.

Measuring Law Over Time: A Network Analytical Framework with an Application to Statutes and Regulations in the United States and Germany
PRESENTER: Corinna Coupette

ABSTRACT. How do complex social systems evolve in the modern world? This question lies at the heart of social physics, and network analysis has proven critical in providing answers to it. In recent years, network analysis has also been used to gain a quantitative understanding of law as a complex adaptive system, but most research has focused on legal documents of a single type, and there exists no unified framework for quantitative legal document analysis using network analytical tools. Against this background, we present a comprehensive framework for analyzing legal documents as multi-dimensional, dynamic document networks. We demonstrate the utility of this framework by applying it to an original dataset of statutes and regulations from two different countries, the United States and Germany, spanning more than twenty years (1998–2019). Our framework provides tools for assessing the size and connectivity of the legal system as viewed through the lens of specific document collections as well as for tracking the evolution of individual legal documents over time. Implementing the framework for our dataset, we find that at the federal level, the American legal system is increasingly dominated by regulations, whereas the German legal system remains governed by statutes. This holds regardless of whether we measure the systems at the macro, the meso, or the micro level.

Parametric sensitivity in Graph Neural Networks for Urban Cluster Detection: a case study
PRESENTER: Camila Vera

ABSTRACT. Urban cluster detection is a relevant task in urban planning, as it provides valuable input to precisely assess the level and patterns of segregation in a city. Even though there is a considerable body of literature exploring this problem, it is unclear whether the clusters inferred by the different methods are stable or dependent on an adequate selection of the methods' hyperparameters. As ground-truth data to validate the results is scarce, providing an algorithm that is at the same time effective and of low parametric sensitivity is crucial to allow the widespread application of this kind of urban data analysis. In this paper, we use a graph neural network-based clustering (GNNC) approach [3] to uncover urban structure using geocoded data on socioeconomic status (SES) and the aesthetic characteristics of the urban environment. We then perform a case study using data from Santiago, Chile, to examine the stability of the clusters inferred by the GNNC method with respect to changes in the data aggregation scheme and the model's hyperparameters. Santiago is a large city with high segregation [6, 7], where most high-income households are located in the north-east, making it a suitable place to study parametric sensitivity. Results show that the territorial clusters do not suffer substantial variations under different parameter specifications. However, in other case studies the results of this method could vary, so an exhaustive analysis of the effect of the hyperparameters on the results should be conducted.

Exploring network distances for movies comparison
PRESENTER: Majda Lafhel

ABSTRACT. In this paper, we investigate the similarity between movies. Rather than looking at the visual content, we exploit the movie script to build a multilayer network representation. Once this movie summary is built, the problem reduces to comparing networks. This paper extends our previous work by investigating appropriate measures for comparing movie networks. We investigate methods from the three main categories of approaches (statistical, embedding, and spectral) to compare four movie genres (horror, adventure, romance, and comedy).

16:15-16:55 Session Poster P2B: [12-18] Network Medicine
Drug repositioning using multiplex-heterogeneous network embedding: a case study on SARS-CoV2

ABSTRACT. Drug repositioning (also called drug repurposing) is a strategy for identifying new therapeutic targets for existing drugs. This approach is of great importance in pharmacology as it is a faster and cheaper way to develop new medical treatments. In this paper, we present, to our knowledge, the first application of multiplex-heterogeneous network embedding to drug repositioning. Network embedding learns vector representations of nodes, opening the whole machine learning toolbox for a wide variety of applications including link prediction, node labelling or clustering. So far, applications of network embedding for drug repositioning have focused on heterogeneous networks. Our approach for drug repositioning is based on multiplex-heterogeneous network embedding. Such a method allows the richness and complexity of multiplex and heterogeneous networks to be projected into the same vector space. In other words, multiplex-heterogeneous networks aggregate different multi-omics data in the same network representation. We validate the approach on a link prediction task and on a case study for SARS-CoV2 drug repositioning. Experimental results show that our approach is highly robust and effective for finding new drug-target associations.

Network structure of depressive symptoms among adults with and without cerebral small vessel disease living in precarious housing or homelessness
PRESENTER: Andrea Jones

ABSTRACT. Background: Cerebral small vessel disease (cSVD) is a neurodegenerative condition that can manifest as ischemic stroke, intracerebral hemorrhage, cognitive impairment, and neuropsychiatric disorders such as depression. Depression is common among homeless and precariously housed persons and represents an important public health challenge, with limited understanding of the relationship between depressive symptoms and cSVD in this group. Recent studies have examined depressive symptoms as a complex dynamic system, suggesting that greater connectivity may drive disease progression. Methods: This study is part of a naturalistic prospective study of the mental and physical health of a community-based sample (n=408) of precariously housed adults in Vancouver, Canada. Depressive symptoms were measured with the Beck Depression Inventory (BDI). Participants received T1, T2-FLAIR, and SWI 3T MRI sequences. cSVD burden was characterized using a modified cSVD score, with one point each for moderate-severe white matter hyperintensities, at least one cerebral microbleed, and at least one lacune. Participants were divided into groups with (n=103) and without cSVD (n=305). Network analysis was used to investigate relationships between depressive symptoms by constructing Gaussian graphical models, using the least absolute shrinkage and selection operator (LASSO) procedure to visualize significant edges in the network. Global and local network structure were assessed. A bootstrap procedure was used to assess the stability of edge weights. The walktrap algorithm was used for community detection. A network comparison test (NCT) with 10,000 permutations was used to compare global connectivity, global strength, and specific edge strengths between groups. Participants with cSVD burden had total BDI scores similar to those without (W=15070, p=0.523). Among those with cSVD features, sleep disturbance and feeling punished demonstrated the greatest degree and betweenness centrality. Among those without cSVD features, feelings of self-disappointment, crying, and low energy had the highest degree centrality, while crying and anorexia had the highest betweenness centrality. Three communities were identified in each network, and membership differed between the cSVD and no-cSVD groups. Global connectivity was similar between the two networks (cSVD: 9.5, no cSVD: 9.6; p=0.524). Several edges differed significantly between groups, including a positive edge between indecision and suicidal ideation among those with cSVD. Conclusions: While total depressive symptom scores and the global connectivity of depressive symptom networks were similar in those with and without cSVD features, the network architecture differed, with potential implications for clinical onset and progression among precariously housed adults. The complex relationships between cSVD and depressive symptoms require consideration when caring for precariously housed persons.
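As an illustration of the network-estimation step described above, a Gaussian graphical model with LASSO regularization can be fitted with scikit-learn's graphical lasso; the data below are synthetic and purely hypothetical, not the study's BDI scores:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Hypothetical symptom scores: 200 participants x 5 symptoms, where
# symptoms 0 and 1 are driven by a shared latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
scores = rng.normal(size=(200, 5))
scores[:, 0] += latent
scores[:, 1] += latent

# Sparse inverse covariance (precision) matrix via graphical lasso.
model = GraphicalLassoCV().fit(scores)
precision = model.precision_

# Network edges are nonzero partial correlations:
# partial_corr[i, j] = -precision[i, j] / sqrt(precision[i,i] * precision[j,j])
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)
```

The nonzero off-diagonal entries of `partial_corr` define the symptom network on which centrality and community analyses are then run.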

Causal Network Meta-analysis: empirical evidence of biobehavioral influence in cancer progression

ABSTRACT. Understanding how our psychology affects the progression of diseases is of great importance for their treatment, as our psychological health has an important impact on our physical health. To understand how the psychological factors arising from a cancer diagnosis affect the progression of the disease, we developed a method of factor association using social network analysis and bibliographic meta-analysis to untangle this complex association.

Simulating diffusion of Amyloid-β through Gated Recurrent Units (GRU) and Graph Convolutional Neural Networks (GCN)

ABSTRACT. Alzheimer’s Disease (AD) is a neuro-degenerative disease which deteriorates cognitive functions and significantly worsens the quality of life. One hypothesis of its progression is the accumulation and diffusion of amyloid-β protein in the brain. This toxic protein leads to neuronal cell death and, consequently, cognitive decline. Thus, predicting its accumulation is beneficial for early diagnosis or prediction of disease progression. Recent studies simulated the diffusion of amyloid-β with mathematical models, considering its biological characteristics. However, such mathematical models need many tunable hyper-parameters to describe the dynamics of the diffusion process, and it is hard to optimize them simultaneously and accurately. In this study, we suggest using a deep learning model to simulate the diffusion of amyloid-β by applying gated recurrent units (GRU) and graph convolutional neural networks (GCN). The GRU can mimic simulation through Euler’s method, and the GCN can simulate spreading procedures. The learning procedure of the deep learning model efficiently substitutes for the procedure of tuning the hyper-parameters. We validated our model’s efficacy using longitudinal multimodal neuroimaging data from the ADNI dataset. Our model achieved a mean correlation coefficient of 0.6384 (median = 0.6466, IQR = [0.542, 0.7528]) between prediction and ground truth on the test dataset. The hyper-parameters in our model may reflect the diffusion, generation or clearance of amyloid-β over time. This model may serve as a new framework for simulating various network diffusion models in practice.
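The mechanistic models that the GRU/GCN is trained to emulate are typically network heat equations integrated with Euler's method. A generic sketch of such a model (the diffusivity `beta` and step size `dt` are illustrative stand-ins for the hand-tuned hyper-parameters, not values from the paper):

```python
import numpy as np

def simulate_diffusion(adjacency, x0, beta=0.1, dt=0.01, steps=100):
    """Euler integration of the network diffusion model dx/dt = -beta * L x,
    where L is the graph Laplacian of the structural connectome and x holds
    the protein load at each brain region."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * beta * (laplacian @ x)  # one explicit Euler step
    return x
```

Because the Laplacian rows sum to zero, the total protein load is conserved at every step; the GRU in the abstract mimics exactly this kind of step-by-step update, with the update rule learned rather than fixed.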

Alzheimer’s disease classification using Node-centric and Edge-centric Graph Convolution Neural Networks
PRESENTER: Jaehee Park

ABSTRACT. Alzheimer’s disease (AD) is a neuro-degenerative disease with severe memory deficits and cognitive decline due to various structural changes in the brain. In this study, we utilized recent deep learning models to classify AD from normal controls (NC) using differences in brain networks. Since connectivity data are non-Euclidean, we used graph convolutional neural networks (GCN). We propose an edge-centric GCN, in addition to applying a traditional node-centric GCN to the brain networks. The edge-centric GCN interchanges nodes and edges in order to focus more on the edges. While the former propagates the characteristics of the nodes, the latter propagates those of the brain connections. Both performed reasonably well, with over 90% accuracy: the node-centric GCN achieved 94.4% accuracy on average, while the edge-centric GCN achieved 92.6% (LOOCV accuracy). However, they utilized different features to classify the disease. We analyzed which brain regions or connections were employed in their decisions through Grad-CAM. While their results overlapped with those found in traditional statistical analyses, they showed very different characteristics. Since the node-centric GCN uses the connectivity to propagate nodal features, differences in connectivity are collated back into the nodal features, and thus it may identify only the brain regions whose overall connections differ in AD. In contrast, the edge-centric GCN captured specific abnormal brain connections in AD.

Choosing a method for constructing task-modulated functional connectivity matrices for network analysis

ABSTRACT. Building a comprehensive map of human brain connections, or connectome, may be considered one of the largest and most challenging scientific problems in human neuroscience over the last two decades. To date, most neuroimaging research has been dedicated to studying structural connections using diffusion tensor imaging and pairwise functional connectivity (FC) in the resting state, as quantified by functional magnetic resonance imaging (fMRI). In recent years, the key importance of exploring task-modulated functional connectivity and human brain network properties has been increasingly emphasized, and a new line of research has been named "task connectomics". However, while methods for constructing resting-state FC matrices have been well developed in recent years, methods for constructing task-modulated FC have not yet been established. To construct task-modulated FC matrices, the following methods are usually used: 1) generalized psychophysiological interactions (gPPI); 2) correlational psychophysiological interactions (cPPI); 3) beta-series correlations (BSC); 4) simple correlation differences. We found that symmetrization of gPPI matrices is adequate only in the case of block designs. The deconvolution step previously recommended for event-related designs results in a strong asymmetry of gPPI matrices. The cPPI method, which allows us to avoid symmetrization, reveals task-modulated FC only in a generalized form and only for block designs. Both the gPPI and BSC methods revealed similar task-modulated FC patterns; however, the BSC method did not require the questionable symmetrization procedure. Based on the obtained results, we recommend using the gPPI method for block designs and the BSC method for event-related designs to construct symmetrical task-modulated FC matrices.

Networking Brain Networks: Federated Harmonization of Neuroimaging Data
PRESENTER: Biozid Bostami

ABSTRACT. The significance of network neuroscience has reached a global scale, with a growing number of large-scale projects on impactful topics such as brain disease, brain development and aging, and brain-computer interfacing. Maximizing the potential of these large projects to reach their goals requires global knowledge and data sharing. One of the biggest challenges in constructing structural and functional brain networks is accessing and analyzing data. While many studies share data, combining data across studies presents significant challenges and can be highly time-consuming, as some data are kept private due to confidentiality or regulatory constraints. It is thus critical to develop approaches that can seamlessly analyze decentralized data. This is particularly true with widespread efforts in the neuroimaging community to maximize study power through multi-site investigation, data sharing, and team science. Big data can be viewed as a large store of raw information about events, activities, transactions, or other data forms. However, pooling data from various sites into a single analysis introduces additional variability known as “site effects”, due to differences in scanners, imaging protocols, and acquisition methods. These site effects can reduce statistical power or lead to erroneous conclusions. Big data analytics is not specialized in addressing such issues in neuroimaging data. Harmonization techniques aim to combine datasets generated from different sources, e.g., research facilities or laboratories, so that the combined dataset does not capture any source dependency. In this work, we propose a decentralized model of an emerging harmonization technique called ‘Decentralized ComBat’. The original ComBat [1] method was initially proposed in genomics to reduce batch effects in samples of genes collected from various laboratories. Later, it was applied to different neuroimaging modalities [2]. Our model can work with distributed datasets around the world.

16:15-16:55 Session Poster P2C: [19-22] Networks in Finance & Economics
The role of smart contracts in the transaction networks of four key DeFi-collateral Ethereum-based tokens

ABSTRACT. We analyse the transaction networks of four dissimilar but innovative ERC-20 tokens that run on top of the public blockchain Ethereum and can be used as collateral in DeFi: Ampleforth (AMP), Basic Attention Token (BAT), Dai (DAI) and Uniswap (UNI). AMP is a stablecoin with an elastic supply policy. BAT, unrelated to stablecoins but also accepted as collateral in decentralised finance (DeFi), anonymously tracks user attention within the Brave web browser. DAI is a stablecoin heavily used in decentralised finance. UNI is the token of Uniswap, a decentralised exchange that functions as a public good. We use complex network analysis to reveal how distributed these networks really are. We compute their preferential attachment and investigate how critical code-controlled nodes (smart contracts, SC) executed on the blockchain are in comparison to human-owned nodes (externally owned accounts, EOA), which should be controlled by end users with public and private keys or by off-blockchain code. Our findings contribute to the characterisation of these new financial networks. To analyse node criticality, we remove the top 200 nodes with the highest degree in these networks; 200 nodes proved a sufficient threshold to make our point while keeping consistency between the different tokens. We use three network dismantling strategies: smart contract nodes and known exchanges only, EOA nodes only, and a third, hybrid SC-EOA approach. We conclude that smart contract nodes and their related exchange wallets play a structural role in holding up these networks, which are theoretically designed to be distributed but in reality tend towards centralisation. This sheds new light on the structural role that smart contracts and exchanges play in Ethereum and, more specifically, in decentralised finance (DeFi) networks, and casts a shadow on how decentralised these networks really are. From the information security viewpoint, our findings highlight the need to better secure these smart contracts.
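The degree-based dismantling probe described above, removing the highest-degree nodes and tracking the giant component, can be sketched generically with networkx (the paper removes 200 nodes; a toy graph and a small removal count are used here for illustration):

```python
import networkx as nx

def dismantle_top_degree(graph, n_remove):
    """Remove the n_remove highest-degree nodes and report the size of
    the giant (largest connected) component that remains -- a simple
    probe of how critical the hub nodes are to network integrity."""
    g = graph.copy()
    targets = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]
    g.remove_nodes_from(n for n, _ in targets)
    if g.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(g))
```

A hub-dominated network (like a token network held together by a few smart contracts and exchange wallets) collapses after very few removals, whereas a genuinely distributed one degrades gradually.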

Ethnic Network in the U.S. International Trade

ABSTRACT. With globalization, cross-country trade has become more active, and economic dependence has increased. We analyze the dependence among members of the same ethnic group and the role of ethnic networks in economic globalization. We use a U.S. international trade dataset, from which we extract ethnicity using the surnames of corporate executives. We analyze the degree of globalization by ethnic group, targeting companies in the U.S. We find that, despite the globalization of the U.S. market, some ethnic groups do not trade with a variety of other ethnic groups. In other words, international trade between ethnic groups exhibits a different degree of globalization for each ethnic group.

Evolution of cohesion between USA financial sector companies: complex networks approach
PRESENTER: Vojin Stevic

ABSTRACT. This work presents an approach for inferring the structure of economic systems, based on complex networks theory and utilizing time series of prices. Our network is obtained from the correlation matrix between the time series of companies' prices by imposing a threshold on the values of the correlation coefficients. We analyze the community structure of the obtained networks and the relation between communities' inter- and intra-connectivity as an indicator of systemic risk. Our results show how an economic system's behavior is related to its structure and how crises are reflected in the evolution of network cohesion. We show how regulation and deregulation affect the structure of the system.
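The threshold-correlation construction described above can be sketched as follows; the 0.6 threshold and the use of log-returns are illustrative assumptions, not necessarily the authors' choices:

```python
import numpy as np

def threshold_network(prices, threshold=0.6):
    """Build an adjacency matrix by thresholding the correlation matrix
    of log-returns: companies i and j are linked when |corr(i, j)| meets
    the threshold."""
    returns = np.diff(np.log(prices), axis=0)  # log-returns, one column per company
    corr = np.corrcoef(returns, rowvar=False)  # companies x companies
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                   # no self-loops
    return adj
```

Community detection and cohesion measures are then computed on the resulting unweighted network; varying the threshold over time is one way to track the evolution of cohesion around a crisis.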

Uncovering the Structure of Influence in Global Ownership Network
PRESENTER: Takayuki Mizuno

ABSTRACT. In a global complex ownership network, even innocent investors and firms are unexpectedly connected with socially irresponsible firms, such as military and environmentally unfriendly ones. To promote peace and achieve the sustainable development goals, understanding the structure of the global ownership network is essential, and the Network Power Index (NPI) has been developed as a measure of the influence of shareholders. The NPI is an extension to networks of the Shapley-Shubik Power Index, which is defined as the (potential) probability of a player controlling decision-making. This index is defined only for ultimate owners and cannot quantify the importance of intermediary companies through which influence flows from upstream shareholders to downstream companies. In this research, extending the NPI, we propose a new measure, called Network Power Flow (NPF), defined as the average number of companies any ultimate owner can control via a specific intermediary firm, to find key bridging firms in indirect corporate control in an ownership network. For instance, we can identify ultimate shareholders and key intermediaries who are indirectly responsible for the improvement of firms that deteriorate the environment.

16:55-17:35 Session Speaker S2: Yizhou Sun - Graph-based Neural ODEs for Learning Dynamical Systems
Graph-based Neural ODEs for Learning Dynamical Systems

ABSTRACT. Many real-world systems, such as moving planets, can be considered multi-agent dynamical systems, where objects interact with each other and co-evolve over time. Such dynamics are usually difficult to capture, and understanding and predicting them based on observed trajectories of objects has become a critical research problem in many domains. In this talk, we propose to treat dynamical systems as evolving graphs and design graph neural network (GNN)-based ODEs to approximate system dynamics from the observed signals. In particular, we provide solutions to two challenging scenarios. First, we propose to learn system dynamics from irregularly-sampled partial observations with an underlying graph structure. To tackle this challenge, we present LG-ODE, a latent ordinary differential equation generative model for multi-agent dynamical systems with known graph structure, which is the first Graph ODE model in this direction. Second, we propose to learn system dynamics where both node features and edges are evolving. We present a dual Graph ODE, CG-ODE, in which both node and edge dynamics are modeled. Experiments on motion capture, spring system, and charged particle datasets, COVID-19, and simulated social dynamical systems demonstrate the effectiveness of our approaches.

17:35-19:35 Session Oral O3A: Dynamics on/of Networks
Overcoming the complexity barrier of the dynamic cavity method in scale-free networks
PRESENTER: Giuseppe Torrisi

ABSTRACT. The dynamic cavity method provides the most efficient way to evaluate probabilities of dynamic trajectories in systems of stochastic units with unidirectional sparse interactions. It is closely related to sum-product algorithms widely used to compute marginal functions from complicated global functions of many variables, with applications in disordered systems, combinatorial optimization and computer science. However, the complexity of the cavity approach grows exponentially with the in-degrees of the interacting units, which creates a de-facto barrier for the successful analysis of systems with fat-tailed in-degree distributions. In this letter, we present a dynamic programming algorithm that overcomes this barrier by reducing the computational complexity in the in-degrees from exponential to quadratic, whenever couplings are chosen randomly from (or can be approximated in terms of) discrete, possibly unit-dependent, sets of equidistant values. As a case study, we analyse the dynamics of a random Boolean network with a fat-tailed degree distribution and fully asymmetric binary $\pm J$ couplings, and we use the power of the algorithm to unlock the noise dependent heterogeneity of stationary node activation patterns in such a system.

The distribution of first return times of random walks on random regular graphs
PRESENTER: Ido Tishby

We present analytical results for the distribution of first return (FR) times of random walks (RWs) on random regular graphs (RRGs) consisting of $N$ nodes of degree $c \ge 3$ \cite{Tishby2021}. Starting from a random initial node $i$ at time $t=0$, at each time step $t \ge 1$ an RW hops into a random neighbor of its previous node. We calculate the distribution $P ( T_{\rm FR} = t )$ of first return times to the initial node $i$. The distribution of first return times was studied before on the Bethe lattice, which exhibits a tree structure of an infinite size \cite{Hughes1982,Cassi1989,Giacometti1995}. However, no closed-form analytical results were available for $P(T_{\rm FR}=t)$ on random networks of a finite size $N$. A more general problem involves the first passage (FP) time $T_{\rm FP}$, which is the first time at which an RW starting from an initial node $i$ visits a specified target node $j$ \cite{Redner2001,Sood2005,Peng2021}. The first return problem is a special case of the first passage problem, in which the initial node is also chosen as the target node.

A key observation in this work is the distinction between first return trajectories in which the RW retrocedes its own steps backwards all the way to the initial node $i$ and those in which the RW returns to $i$ via a path that does not retrocede its own steps. In the retroceding (RETRO) scenario, each edge that belongs to the RW trajectory is crossed the same number of times in the forward and backward directions [Fig. 1(a)]. In the non-retroceding ($\lnot$RETRO) scenario, the subgraph consisting of the nodes visited by the RW and the edges it has crossed between these nodes includes at least one cycle [Fig. 1(b)]. The overall probability that an RW first returns to its initial node $i$ via a retroceding trajectory is $P( {\rm RETRO} ) = \frac{1}{c-1}$, while the complementary probability that it first returns to $i$ via a non-retroceding path (or, on an infinite RRG, never returns to $i$) is $P( \lnot {\rm RETRO} ) = \frac{c-2}{c-1}$ [Cassi1989, Debacco2015, Martin2010].
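Because a retroceding excursion traces out a tree, a first-return trajectory can be classified in simulation by checking whether the subgraph of crossed edges is acyclic (one fewer edge than visited nodes). The sketch below (function names hypothetical, not from the paper) reproduces $P({\rm RETRO}) = 1/(c-1)$:

```python
import random
import networkx as nx

def retro_fraction(N=100, c=3, trials=2000, seed=0):
    """Fraction of first-return trajectories whose subgraph of crossed edges
    is a tree (retroceding scenario); theory predicts 1/(c-1)."""
    rng = random.Random(seed)
    G = nx.random_regular_graph(c, N, seed=seed)
    nbrs = {v: list(G[v]) for v in G}
    retro = 0
    for _ in range(trials):
        start = rng.randrange(N)
        node, visited, edges = start, {start}, set()
        while True:
            nxt = rng.choice(nbrs[node])
            edges.add(frozenset((node, nxt)))
            visited.add(nxt)
            node = nxt
            if node == start:
                break
        if len(edges) == len(visited) - 1:   # acyclic subgraph => retroceding
            retro += 1
    return retro / trials

frac = retro_fraction()
assert 0.4 < frac < 0.6   # P(RETRO) = 1/(c-1) = 1/2 for c = 3
```

With 2000 trials the statistical error on the fraction is about 0.01, so the band 0.4-0.6 is a loose but safe check of the infinite-$N$ prediction.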

Fig. 1: (a) Schematic illustration of a retroceding first-return trajectory on an RRG, in which the RW returns to the initial node (empty circle) by retroceding its own steps backwards; (b) a non-retroceding first-return trajectory, in which the RW returns to the initial node via a cycle. In this illustration the RRG is of degree $c=4$.

Using a combination of probabilistic and combinatorial methods, we obtained the tail distribution of first return times of RWs on RRGs,

$$P(T_{\rm FR} > t) = P(T_{\rm FR} > t \mid {\rm RETRO}) \, P({\rm RETRO}) + P(T_{\rm FR} > t \mid \lnot {\rm RETRO}) \, P(\lnot {\rm RETRO}),$$

where the conditional tail distribution for retroceding trajectories is

$$P(T_{\rm FR}>t \mid {\rm RETRO}) = C_{ \lfloor \frac{t}{2} \rfloor } \, \frac{c (c-1)}{(c-2)^2} \left( \frac{c-1}{c^2} \right)^{ \lfloor \frac{t}{2} \rfloor } \, {}_2F_1\!\left[ \left. \begin{array}{c} 1, \frac{3}{2} \\ \lfloor \frac{t}{2} \rfloor + 2 \end{array} \right| - \frac{4(c-1)}{(c-2)^2} \right],$$

and the conditional tail distribution for non-retroceding trajectories is

$$P(T_{\rm FR} > t \mid \lnot {\rm RETRO} ) = \begin{cases} 1 & t = 0, 1, 2, \\ \exp\!\left[ - \left( \frac{c-2}{c-1} \right) \frac{t-2}{N-2} \right] & t \ge 3. \end{cases}$$

Here $C_{\lfloor \frac{t}{2} \rfloor}$ is a Catalan number, $\lfloor x \rfloor$ denotes the integer part of $x$ and ${}_2F_1[\,\cdot\,]$ is the hypergeometric function. To leading order, the conditional distributions $P(T_{\rm FR}>t \mid {\rm RETRO})$ and $P(T_{\rm FR}>t \mid \lnot {\rm RETRO})$ are exponential, with characteristic time scales of order $1$ and of order $N$, respectively.
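As a numerical sanity check (not part of the abstract), the retroceding tail can be evaluated with SciPy's hypergeometric function; it must equal $1$ at $t=0$, and for $c=3$ one finds $P(T_{\rm FR}>2 \mid {\rm RETRO}) = 1/3$, consistent with a conditional probability $2/3$ of returning at the earliest possible time $t=2$:

```python
from math import comb, floor
from scipy.special import hyp2f1

def retro_tail(t, c):
    """P(T_FR > t | RETRO) in the Catalan-number form given above."""
    m = floor(t / 2)
    catalan = comb(2 * m, m) // (m + 1)          # m-th Catalan number
    pref = c * (c - 1) / (c - 2) ** 2
    return (catalan * pref * ((c - 1) / c ** 2) ** m
            * hyp2f1(1, 1.5, m + 2, -4 * (c - 1) / (c - 2) ** 2))

for c in (3, 4, 10):
    assert abs(retro_tail(0, c) - 1) < 1e-9      # a tail must start at 1
assert abs(retro_tail(2, 3) - 1 / 3) < 1e-9
```

Both checks follow analytically from the closed form of ${}_2F_1(1, 3/2; 2; z)$, so they verify the transcription of the formula rather than the formula itself.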

In Fig. 2 we present analytical results (solid lines) for the tail distribution of first return times of RWs on an RRG of size $N=1000$ with $c=3$, $c=4$ and $c=10$. The analytical results, obtained from the tail-distribution formula above, are in excellent agreement with the results obtained from computer simulations (circles) on all relevant time scales.

Fig. 2: Analytical results (solid lines) and simulation results (circles) for the tail distribution $P(T_{\rm FR} > t)$ of first return times of an RW on a random regular graph of size $N=1000$ and degrees $c=3$, $c=4$ and $c=10$. The insets show a magnified view of $P(T_{\rm FR} > t)$ at very short times, where the retroceding scenario is dominant.

In the large network limit $N \gg 1$, the mean first return time is given by $\langle T_{\rm FR} \rangle = N + \mathcal{O}(1)$.

Interestingly, this result can be obtained directly from the Kac lemma, which relies only on general properties of discrete stochastic processes [Kac1947].
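The Kac-lemma prediction $\langle T_{\rm FR} \rangle \approx N$ is easy to check by direct simulation; a minimal sketch (helper name hypothetical, assuming networkx):

```python
import random
import networkx as nx

def mean_first_return(N=100, c=3, trials=4000, seed=1):
    """Estimate the mean first return time of a random walk on an RRG."""
    rng = random.Random(seed)
    G = nx.random_regular_graph(c, N, seed=seed)
    nbrs = {v: list(G[v]) for v in G}
    total = 0
    for _ in range(trials):
        start = rng.randrange(N)
        node, t = start, 0
        while True:
            node = rng.choice(nbrs[node])
            t += 1
            if node == start:
                break
        total += t
    return total / trials

est = mean_first_return()
assert 80 < est < 120   # Kac lemma: <T_FR> = N + O(1), here N = 100
```

The sample variance of $T_{\rm FR}$ is of order $N^2$, so a few thousand trials pin the mean to within a few percent of $N$.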

In the limit $N \rightarrow \infty$ the RRG converges towards the Bethe lattice, which exhibits a tree structure in which all first return trajectories belong to the retroceding scenario. Moreover, in the limit $N \rightarrow \infty$ the trajectories of RWs on RRGs are transient, namely they return to the initial node with probability $<1$. In this sense they resemble the trajectories of RWs on regular lattices of dimension $d \ge 3$ [Polya1921].

The first return process is an important landmark in the life-cycle of RWs on networks. The characteristic time scale of this process is of order $t \sim N$. Another landmark is the first hitting process, namely the first time at which the RW enters a previously visited node [Tishby2017, Tishby2021a]. The characteristic time scale of the first hitting process is $t \sim \min \{c, \sqrt{N} \}$, namely $t \sim c$ in dilute networks and $t \sim \sqrt{N}$ in dense networks. In both cases the first hitting time is much shorter than the first passage time [Tishby2017, Tishby2021a]. Yet another important event, occurring on much longer time scales, is the step at which the RW completes visiting all the nodes in the network. The time at which this happens is called the cover time, which scales like $t \sim N \ln N$ [Cooper2005]. The challenges involved in extending this work to a broader class of random networks will be discussed.


[Tishby2021] Tishby I., Biham O., Katzav E.: Analytical results for the distribution of first return times of random walks on random regular graphs. J. Phys. A: Math. Theor. 54, 325001 (2021)

[Hughes1982] Hughes B.D., Sahimi M.: Random walks on the Bethe lattice. J. Stat. Phys. 29, 781 (1982)

[Cassi1989] Cassi D.: Random walks on Bethe lattices. Europhys. Lett. 9, 627 (1989)

[Giacometti1995] Giacometti A.: Exact closed form of the return probability on the Bethe lattice. J. Phys. A 28, L13 (1995)

[Redner2001] Redner S.: A Guide to First Passage Processes. Cambridge University Press (2001)

[Sood2005] Sood V., Redner S., ben-Avraham D.: First-passage properties of the Erdős–Rényi random graph. J. Phys. A 38, 109 (2005)

[Peng2021] Peng J., Sandev T., Kocarev L.: First encounters on Bethe lattices and Cayley trees. Communications in Nonlinear Science and Numerical Simulation 95, 105594 (2021)

[Debacco2015] De Bacco C., Majumdar S.N., Sollich P.: The average number of distinct sites visited by a random walker on random graphs. J. Phys. A 48, 205004 (2015)

[Martin2010] Martin O., Šulc P.: Return probabilities and hitting times of random walks on sparse Erdős–Rényi graphs. Phys. Rev. E 81, 031111 (2010)

[Kac1947] Kac M.: On the notion of recurrence in discrete stochastic processes. Bull. Amer. Math. Soc. 53, 1002 (1947)

[Polya1921] Pólya G.: Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz. Mathematische Annalen 84, 149 (1921)

[Tishby2017] Tishby I., Biham O., Katzav E.: The distribution of first hitting times of random walks on Erdős–Rényi networks. J. Phys. A 50, 115001 (2017)

[Tishby2021a] Tishby I., Biham O., Katzav E.: Analytical results for the distribution of first hitting times of random walks on random regular graphs. J. Phys. A 54, 145002 (2021)

[Cooper2005] Cooper C., Frieze A.M.: The cover time of random regular graphs. SIAM J. Discrete Math. 18, 728 (2005)



The distribution of cover times of random walks on random regular graphs

ABSTRACT. We present analytical results for the distribution of cover times of random walks (RWs) on random regular graphs consisting of $N$ nodes of degree $c$ ($c \ge 3$). Starting from a random initial node at time $t=1$, at each time step $t \ge 2$ an RW hops into a random neighbor of its previous node. In some of the time steps the RW may visit a new, yet-unvisited node, while in other time steps it may revisit a node that has already been visited before. The cover time $T_{\rm C}$ is the number of time steps required for the RW to visit every single node in the network at least once. We derive a master equation for the distribution $P_t(S=s)$ of the number of distinct nodes $s$ visited by an RW up to time $t$ and solve it analytically. Inserting $s=N$ we obtain the cumulative distribution of cover times, namely the probability $P(T_{\rm C} \le t) = P_t(S=N)$ that up to time $t$ an RW will visit all the $N$ nodes in the network. Taking the large network limit, we show that $P(T_{\rm C} \le t)$ converges to a Gumbel distribution. The cover time is relevant to a broad range of random search processes. These include search processes involving multiple targets in which all the targets need to be found. More specifically, in cases that the number of targets is unknown one needs to perform an exhaustive search in which all the nodes in the network must be visited.
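The cover-time process described above is straightforward to simulate; the sketch below (helper name hypothetical, assuming networkx) estimates cover times on a small RRG and checks that they sit at the $N \ln N$ scale:

```python
import math
import random
import networkx as nx

def cover_time(G, rng):
    """Number of steps until a random walk, started at t = 1, has visited
    every node of G at least once."""
    nodes = list(G)
    nbrs = {v: list(G[v]) for v in G}
    node = rng.choice(nodes)
    seen = {node}
    t = 1
    while len(seen) < len(nodes):
        node = rng.choice(nbrs[node])
        seen.add(node)
        t += 1
    return t

rng = random.Random(0)
N, c = 60, 3
G = nx.random_regular_graph(c, N, seed=0)
mean = sum(cover_time(G, rng) for _ in range(200)) / 200
# Known asymptotics place the mean cover time near ((c-1)/(c-2)) N ln N
assert N < mean < 10 * N * math.log(N)
```

Checking convergence of the full distribution to a Gumbel form would require larger $N$ and many more samples; the sketch only verifies the scale.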

Topology controls the emergent dynamics in nonlinear flow networks

ABSTRACT. Flow networks are essential for both living organisms and engineered systems. These networks often present complex dynamics controlled, at least in part, by their topology. Previous works have shown that topologically complex networks interconnecting explicitly oscillatory or excitable elements can display rich emergent dynamics. Here we present a model for complex flow networks with nonlinear conductance that allows for internal accumulation/depletion of volume, without any inherent oscillatory or excitable behavior at the nodes. In the absence of any time dependence in the pressure input and output, we observe emergent dynamics in the form of self-sustained waves that travel through the system. The frequency of these waves depends strongly on the network architecture and can be explained with a topological metric.

From mean-field to complex topologies: network effects on the algorithmic bias model

ABSTRACT. Nowadays, people often form their opinions by accessing and discussing content shared on social networking websites. While these platforms have fostered information access and diffusion, they are also optimal environments for the proliferation of polluted content, which is argued to be one of the co-causes of polarization and radicalization. Moreover, recommendation algorithms - intended to enhance platform usage - are likely to amplify such phenomena, generating the so-called "algorithmic bias". In this work, we study the impact that different network topologies have on the formation and evolution of opinions in the context of a recent opinion dynamics model that includes bounded confidence and algorithmic bias. Mean-field, scale-free and random topologies, as well as networks generated by the Lancichinetti-Fortunato-Radicchi benchmark, are compared in terms of opinion fragmentation/polarization and time to convergence.
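As a rough illustration (one common formulation, not necessarily the authors' exact model), a bounded-confidence update with a bias toward similar-opinion partners can be sketched as follows; all names are hypothetical:

```python
import random
import networkx as nx

def biased_bounded_confidence(G, eps=0.3, gamma=1.0, mu=0.5, steps=20000, seed=0):
    """Bounded-confidence opinion dynamics with algorithmic bias: agent i's
    partner j is drawn among i's neighbors with probability proportional to
    |x_i - x_j|^(-gamma), mimicking a recommender that preferentially exposes
    similar opinions; opinions then average only if they differ by < eps."""
    rng = random.Random(seed)
    x = {v: rng.random() for v in G}       # opinions in [0, 1]
    nodes = list(G)
    for _ in range(steps):
        i = rng.choice(nodes)
        nbrs = list(G[i])
        if not nbrs:
            continue
        w = [(abs(x[i] - x[j]) + 1e-9) ** (-gamma) for j in nbrs]
        j = rng.choices(nbrs, weights=w)[0]
        if abs(x[i] - x[j]) < eps:         # bounded confidence
            x[i], x[j] = (x[i] + mu * (x[j] - x[i]),
                          x[j] + mu * (x[i] - x[j]))
    return x

x = biased_bounded_confidence(nx.erdos_renyi_graph(200, 0.05, seed=1))
assert all(0.0 <= v <= 1.0 for v in x.values())
```

Running the sketch on different topologies (scale-free, LFR, etc.) and counting the surviving opinion clusters reproduces the kind of fragmentation comparison the abstract describes.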

Hypergraph dynamics: assortativity and the expansion eigenvalue
PRESENTER: Nicholas Landry

ABSTRACT. The largest eigenvalue of the matrix describing a network's contact structure is often important in predicting the behavior of dynamical processes. We extend this notion to hypergraphs and motivate the importance of an analogous eigenvalue, the expansion eigenvalue, for hypergraph contagion processes. Using a mean-field approach, we derive an approximation to the expansion eigenvalue and its associated eigenvector in terms of the degree sequence for uncorrelated hypergraphs. We introduce a generative model for hypergraphs that includes degree assortativity, and use a perturbation approach to derive an approximation to the expansion eigenvalue and its corresponding eigenvector for assortative hypergraphs. We validate this result with both synthetic and empirical datasets. We define the dynamical assortativity, a dynamically sensible definition of assortativity for uniform hypergraphs, and describe how reducing the dynamical assortativity of hypergraphs through preferential rewiring can extinguish epidemics.

Sensing enhancement on complex networks
PRESENTER: Markus Brede

ABSTRACT. Previous work has shown that communication between agents with some preference towards adopting the majority opinion can enhance the quality of error-prone individual sensing of dynamic environments. In this paper, we compare the potential of different types of complex networks for such sensing enhancement. Numerical simulations on complex networks are complemented by a mean-field approach for limited connectivity that captures the essential trends in the dependencies. Our results show that, whilst bestowing advantages on a small group of agents, degree heterogeneity tends to impede overall sensing enhancement, while clustering and spatial structure play a more nuanced role depending on overall connectivity. We find that for low connectivity sensing enhancement is maximised by random regular networks, whereas for large connectivity the best sensing enhancement is found for ring graphs.

Cascading Failures and the Robustness of Cooperation in a Unified Scale-Free Network Model
PRESENTER: Mingxuan He

ABSTRACT. In this paper, we examine the effects of the cascading failure phenomenon on unified scale-free complex networks with prisoner's dilemma game (PDG) dynamics. We find that a single defector may severely impact a large network, potentially resulting in a total failure of the entire network. We extend existing results to a network model that unifies the scale-free property and the high-clustering property. Furthermore, we observe that highly connected networks are more vulnerable to the cascading failure effect than less connected ones, as exhibited in both a lower average survival rate and a lower extinction boundary across different network topologies. As we examine the cascading failure effect in a more realistic network that is scale-free and highly clustered, we believe these findings may be beneficial to studying such effects in the real world.

17:35-19:35 Session Oral O3B: Structural Network Measures
Weighted network motifs as random walk patterns

ABSTRACT. Attached file

The distance backbone of complex networks
PRESENTER: Luis M. Rocha

ABSTRACT. Redundancy needs more precise characterization as it is a major factor in the evolution and robustness of networks of multivariate interactions. We investigate the complexity of such interactions by inferring a connection transitivity that includes all possible measures of path length for weighted graphs. The result, without breaking the graph into smaller components, is a distance backbone subgraph sufficient to compute all shortest paths. This is important for understanding the dynamics of spread and communication phenomena in real-world networks. The general methodology we formally derive yields a principled graph reduction technique and provides a finer characterization of the triangular geometry of all edges---those that contribute to shortest paths and those that do not but are involved in other network phenomena. We demonstrate that the distance backbone is very small in large networks across domains ranging from air traffic to the human brain connectome (see Fig. 1), revealing that network robustness to attacks and failures seems to stem from surprisingly vast amounts of redundancy.

A network approach to identification of individual traits in animal groups

ABSTRACT. This work sheds light on the general mechanisms that identify individual traits in animal groups and suggests that basic recordings of collective motion can be sufficient to uncover dominance patterns of individual traits. We also propose a way to construct a feature space that measures individuals' behaviour from the basic trajectories of the group motion. We apply a network-based data analysis method, the diffusion map, to extract leading variables from the reduced feature space, and efficiently identify the individual traits from high dimensional experimental data. Based on the analysis, we aim in future work to implement a similar approach to identify individual traits and potentially discover new trait axes directly from lab recordings of animal groups.

Correlation lag times provide a reliable early-warning indication of approaching bifurcations in regular networks of interacting nodes modelling spatially extended systems
PRESENTER: Giulio Tirabassi

ABSTRACT. Identifying upcoming bifurcations and regime transitions from observations is a significant challenge in time series analysis. Well-known indicators are the increase in spatial and temporal correlation. Here we propose a new indicator that takes into account both. By inspecting the distribution of lag times that maximize the cross-correlation between pairs of adjacent points, we find that the variance of the distribution consistently decreases as the bifurcation approaches. We demonstrate the reliability of this indicator using different models that exhibit different types of bifurcations.
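The indicator is built from the lags that maximize pairwise cross-correlations between adjacent points; a sketch of that lag-extraction step on synthetic data (helper name hypothetical):

```python
import numpy as np

def best_lag(a, b, max_lag=20):
    """Lag of b relative to a (in samples) that maximizes cross-correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.mean(a[max_lag:-max_lag] * np.roll(b, -k)[max_lag:-max_lag])
              for k in lags]
    return lags[int(np.argmax(scores))]

rng = np.random.default_rng(0)
s = np.empty(2000)
s[0] = 0.0
for t in range(1, 2000):                   # AR(1) driving signal
    s[t] = 0.9 * s[t - 1] + rng.normal()
x = s + rng.normal(scale=0.1, size=2000)
y = np.roll(s, 5) + rng.normal(scale=0.1, size=2000)   # y lags x by 5 samples
assert best_lag(x, y) == 5
```

The proposed indicator would then collect `best_lag` over all adjacent node pairs of the spatial network and track the variance of that lag distribution as the bifurcation is approached.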

Method to detect opinion leaders in a political Twitter conversation

ABSTRACT. In many online social network scenarios, opinion leaders dominate the orientation of public opinion and influence information sharing and knowledge diffusion. This is why their identification and characterization can be critical for understanding the dynamics and behavior of an online conversation. The definition of opinion leaders differs between areas of knowledge and contexts of study; however, all definitions agree that opinion leaders influence people's attitudes, opinions and behaviors. In this work, we introduce a method to identify opinion leaders in a Twitter conversation that takes into account information interaction and network structure analysis. The methodology is applied to the political conversation around Catalan independence, with data from 2017. This allows us to infer the general opinion distribution and associated phenomena.

Mining new information from high spectral gap networks

ABSTRACT. A key point in understanding the structure and behavior of a network is the determination of its most important nodes. For this reason, several centrality measures have been implemented, some considering only the local properties of the nodes (such as degree centrality), others looking at the entire network (such as eigenvector centrality). However, the node rankings obtained by applying degree and eigenvector centrality are almost the same for high spectral gap networks, despite the different criteria underlying these metrics. In this work, we propose a global centrality measure and show that it can capture different information than degree and/or eigenvector centrality.
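The near-coincidence of the two rankings is easy to observe; this sketch (assuming networkx and scipy) compares degree and eigenvector centrality on a dense Erdős–Rényi graph, whose spectral gap is large:

```python
import networkx as nx
from scipy.stats import spearmanr

# A dense Erdős–Rényi graph has a large spectral gap; the degree and
# eigenvector centrality rankings then nearly coincide.
G = nx.erdos_renyi_graph(300, 0.2, seed=0)
deg = nx.degree_centrality(G)
eig = nx.eigenvector_centrality_numpy(G)
nodes = list(G)
rho, _ = spearmanr([deg[v] for v in nodes], [eig[v] for v in nodes])
assert rho > 0.8   # rank correlation close to 1
```

A genuinely different global measure, as proposed in the abstract, would need to break this rank agreement precisely in such high-spectral-gap regimes.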

Locality of centrality measures on networks

ABSTRACT. We discuss rank-1 approximations to adjacency matrix functions and the effective locality of centrality measures for dense networks. In particular, we show that both resolvent-based and other nonlinear centrality measures admit a rank-1 approximation based on the local degree. Our technique works for simple, weighted and directed graphs.

Unveiling the higher-order structure of multivariate time series
PRESENTER: Andrea Santoro

ABSTRACT. Time series analysis has proven to be a powerful method to characterize phenomena in biology, neuroscience and economics, and to understand some of their underlying dynamical features. However, to date it remains unclear whether the information encoded in multivariate time series, such as the evolution of financial assets traded in major financial markets, or the neuronal activity in the brain, stems from independent individual entities or rather arises through group interactions (i.e., higher-order structures [1]). A clear example of this issue is provided by the conventional “functional connectivity” between two brain regions: a pairwise connection is drawn irrespective of whether the activities of the two regions peaked as a pair or as part of a larger group of functionally coherent regions.

In this work, inspired by recent advances in computational neuroscience [2, 3], non-linear time series analysis, and Topological Data Analysis (TDA), we propose a novel framework to investigate the higher-order dependencies within a multivariate time series. First, we distinguish the co-fluctuation patterns according to their order (i.e. pairs, triplets, etc.), and then characterise the additional coherence of higher-order co-fluctuation patterns using TDA tools. In particular, (i) we z-score the N original time series; (ii) we calculate the element-wise product of the z-scored time series for all \binom{N}{k} k-order patterns (i.e. edges, triplets, etc.), whose generic elements represent the instantaneous co-fluctuation magnitude of a group interaction. To distinguish concordant group interactions from discordant ones in a k-order product, concordant signs are always mapped to positive values, while discordant signs are mapped to negative values (Fig. 1a-b). (iii) The resulting set of time series encoding the k-order co-fluctuations is then further z-scored across time, to make products comparable across k-orders. (iv) Finally, for each time frame t, we construct a weight filtration by sorting all the k-order co-fluctuations by their weights. The weight filtration proceeds from the top down - in the spirit of persistent homology - so that as k-order patterns are gradually included, weighted holes and cliques start to appear (i.e., descending from more coherent patterns to less coherent ones). To maintain a well-defined weight filtration, only k-order patterns respecting the simplicial closure condition are included, while the remaining ones are considered simplicial violations, or “hypercoherent” states, and analysed separately (Fig. 1c).
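Steps (i)-(iii) can be sketched as follows for groups of m series (function and variable names hypothetical; the TDA filtration of step (iv) is omitted):

```python
import numpy as np
from itertools import combinations

def group_cofluctuations(X, m=3):
    """Steps (i)-(iii): z-score each series; for every group of m series take
    the product of |z|-magnitudes, signed + if all m z-scores agree in sign
    (concordant) and - otherwise; finally z-score each product series in time."""
    Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    out = {}
    for group in combinations(range(X.shape[0]), m):
        zg = Z[list(group)]                                   # m x T block
        mag = np.abs(zg).prod(axis=0)
        concordant = np.all(zg > 0, axis=0) | np.all(zg < 0, axis=0)
        series = np.where(concordant, mag, -mag)
        out[group] = (series - series.mean()) / series.std()
    return out

rng = np.random.default_rng(0)
common = rng.normal(size=500)
X = np.vstack([common + 0.2 * rng.normal(size=500) for _ in range(4)])
cof = group_cofluctuations(X, m=3)
assert len(cof) == 4                                          # binom(4, 3) triplets
assert all(abs(s.mean()) < 1e-9 and abs(s.std() - 1) < 1e-9 for s in cof.values())
```

For strongly coherent inputs like these, most time points are concordant, so the raw triplet products are predominantly positive before the final normalization.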

We show that the instantaneous persistent topological properties and the number of violations uncovered by our framework distinguish different regimes of coupled chaotic maps [4], including the transition between different dynamical phases and the onset of various types of synchronization (Fig. 1e). Armed with these interpretational benchmarks, we also apply our method to resting-state fMRI signals and to financial time series (Fig. 1f). We find that, during rest, the human brain mainly oscillates between chaotic and partially intermittent states, with higher-order structures reflecting Default Mode Network (DMN) and somatomotor regions, respectively. In financial time series, by contrast, higher-order structures are less important in periods of financial stability. Overall, our approach suggests that investigating the higher-order structure of multivariate time series might provide new insights compared to standard methods and might allow us to better characterise group dependencies inherent in real-world data.

[1] F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, Physics Reports 874, 1 (2020). [2] F. Z. Esfahlani, Y. Jo, J. Faskowitz, L. Byrge, D. P. Kennedy, O. Sporns, and R. F. Betzel, Proceedings of the National Academy of Sciences 117, 28393 (2020). [3] J. Faskowitz, F. Z. Esfahlani, Y. Jo, O. Sporns, and R. F. Betzel, Nature neuroscience 23, 1644 (2020). [4] K. Kaneko, Chaos: An Interdisciplinary Journal of Nonlinear Science 2, 279 (1992).

17:35-19:35 Session Oral O3C: Network Embedding
Simple negative sampling for link prediction in knowledge graphs
PRESENTER: Md Kamrul Islam

ABSTRACT. Knowledge graph (KG) embedding methods learn low-dimensional vector representations of the entities and relations of a knowledge graph, facilitating the link prediction task. During the learning of embeddings, sampling negative triples is important because KGs contain only observed positive triples. To the best of our knowledge, uniform-random, generative adversarial network (GAN)-based, and NSCaching are the three negative sampling methods in the literature. Unfortunately, they suffer from computational and memory inefficiency. In addition, their prediction performance is affected by the ‘vanishing gradient’ problem caused by the poor quality of the sampled negative triples. In this paper, we propose a simple negative sampling (SNS) method based on the assumption that the entities which are closer in the embedding space to the corrupted entity are able to provide high-quality negative triples. Furthermore, SNS has good exploitation potential, as it reuses sampled high-quality negatives to improve the quality of negative triples in subsequent steps. We evaluate our sampling method through the link prediction task on five well-known knowledge graph datasets: WN18, WN18RR, FB15K, FB15K-237 and YAGO3-10. The method is also evaluated on a new biological KG dataset (FIGHT-HF-23R). Experimental results show that SNS improves the prediction performance of KG embedding models and outperforms the existing negative sampling methods.
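A minimal sketch of the core SNS idea (names, the Euclidean distance, and the nearest-neighbor pool size are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def sns_negatives(entity_emb, true_entity, k=10, n_samples=5, rng=None):
    """Corrupt a triple by replacing the true entity with one of its k nearest
    neighbors in embedding space, on the assumption that nearby entities
    yield hard (high-quality) negative triples."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(entity_emb - entity_emb[true_entity], axis=1)
    d[true_entity] = np.inf                  # never sample the true entity
    candidates = np.argsort(d)[:k]
    return rng.choice(candidates, size=n_samples, replace=False)

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
negs = sns_negatives(emb, true_entity=7, rng=rng)
assert 7 not in negs and len(negs) == 5
```

In a full pipeline, the candidate pool would be refreshed as the embeddings are trained, which is what gives the method its exploitation behavior over successive steps.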

Hybrid Graph Embedding Techniques in Estimated Time of Arrival Task
PRESENTER: Vadim Porvatov

ABSTRACT. Recently, deep learning has achieved promising results in the calculation of Estimated Time of Arrival (ETA), i.e., predicting the travel time from a start point to a destination along a given path. ETA plays an essential role in intelligent taxi services and automotive navigation systems. A common practice is to use embedding vectors to represent the elements of a road network, such as road segments and crossroads. Road elements have their own attributes, such as length, presence of crosswalks, number of lanes, etc. However, many links in the road network are traversed by too few floating cars, even on large ride-hailing platforms, and are affected by a wide range of temporal events. We explore the generalization ability of different spatial embedding strategies and propose a two-staged approach to deal with such problems.

SSSNET: Semi-Supervised Signed Network Clustering

ABSTRACT. Node embeddings are a powerful tool in the analysis of networks; yet, their full potential for the important task of node clustering has not been fully exploited. In particular, most state-of-the-art methods for generating node embeddings of signed networks focus on link sign prediction, and those that pertain to node clustering are usually not graph neural network (GNN) methods.

Here, we introduce a GNN framework for semi-supervised signed network embedding, called SSSNET, with an efficient Signed Mixed Path Aggregation (SIMPA) scheme. The method is end-to-end in combining embedding generation and clustering without an intermediate step. It includes a novel probabilistic balanced normalized cut loss for training nodes and has node clustering as main focus, with an emphasis on polarization effects arising in networks.

The main novelty of our approach is a new take on the role of social balance theory for signed network embeddings. The standard heuristic for justifying the criteria for the embeddings hinges on the assumption that an "enemy's enemy is a friend". Here, instead, a neutral stance is assumed on whether or not the enemy of an enemy is a friend. Experimental results on various data sets, including synthetic data, in the form of a signed stochastic block model, along with a polarized version of it, and real-world data at different scales, demonstrate that our method can achieve comparable or better results than state-of-the-art spectral clustering methods, for a wide range of noise and sparsity levels. Moreover, our approach complements existing methods through the possibility of including exogenous information, in the form of node-level features or labels.

A Multi-purposed Unsupervised Framework for Comparing Embeddings of Graphs
PRESENTER: Pawel Pralat

ABSTRACT. A graph embedding is a mapping of the nodes of a graph into $k$-dimensional real vectors. Good embeddings should capture the graph topology and node-to-node relationships. Many graph embedding algorithms are available, and for each algorithm hyper-parameters need to be set, such as the dimension of the embedding space. Additionally, many such algorithms are randomized, which means that running them several times generates different embeddings. As a result, selecting the best embedding is challenging, since it is an unsupervised learning task. On the other hand, a carefully selected and properly used embedding can often provide features that improve the predictive power of supervised models for tasks such as node classification or link prediction. The reason is that embedding algorithms are capable of performing highly complex transformations of the information stored in an input graph that would not be easily possible with standard predictive modelling techniques....

Vaccine skepticism detection by network embedding
PRESENTER: Ferenc Beres

ABSTRACT. We demonstrate the applicability of network embedding to vaccine skepticism, a controversial topic with a long history. With the Covid-19 pandemic outbreak at the end of 2019, the topic is more important than ever. Only a year after the first international cases were registered, multiple vaccines were developed and passed clinical testing. Besides the challenges of development, testing and logistics, another factor that might play a significant role in the fight against the pandemic is people who are hesitant to get vaccinated, or who even state that they will refuse any vaccine offered to them. Two groups of people are commonly referred to as a) pro-vaxxers, those who support vaccinating people, and b) vax-skeptics, those who question vaccine efficacy or the need for general vaccination against Covid-19. It is very difficult to tell exactly how many people share each of these views. It is even more difficult to understand all the reasons why vax-skeptic opinions are getting more popular.

In this work, our intention was to develop techniques that are able to efficiently differentiate between pro-vaxxer and vax-skeptic content. After multiple data preprocessing steps, we analysed the tweet text as well as the structure of user interactions on Twitter. From an open-source Python library (Karate Club), we deployed several node embedding and community detection models that scale well to graphs with millions of edges.

Joint Use of Node Attributes and Proximity for Node Classification
PRESENTER: Arpit Merchant

ABSTRACT. Node classification aims to infer unknown node labels from known labels and other node attributes. Standard approaches for this task assume homophily, whereby a node's label is predicted from the labels of other nodes nearby in the network. However, there are also cases of networks where labels are better predicted from the individual attributes of each node rather than the labels of nearby nodes. Ideally, node classification methods should flexibly adapt to a range of settings wherein unknown labels are predicted either from labels of nearby nodes, or individual node attributes, or partly both. In this paper, we propose a principled approach, JANE, based on a generative probabilistic model that jointly weighs the role of attributes and node proximity via embeddings in predicting labels. Experiments on multiple network datasets demonstrate that JANE exhibits the desired combination of versatility and competitive performance compared to baselines.
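As a toy illustration of mixing a proximity signal with an attribute signal, here is a simple Python sketch. To be clear, this is not the authors' JANE model (which is a generative probabilistic model using embeddings); the mixing weight `alpha`, the function name, and the toy data are all invented for illustration:

```python
import numpy as np

def combined_node_classifier(adj, X, labels, known, alpha=0.5):
    """Toy classifier mixing proximity and attribute signals (not JANE itself).

    adj: (n, n) adjacency matrix; X: (n, d) node attributes;
    labels: (n,) integer labels (only entries with known[i] == True are used);
    alpha: weight on the proximity signal vs. the attribute signal.
    """
    classes = np.unique(labels[known])
    scores = np.zeros((len(labels), len(classes)))
    for c_idx, c in enumerate(classes):
        members = known & (labels == c)
        # Proximity signal: fraction of a node's labeled neighbors in class c.
        votes = adj @ members.astype(float)
        votes /= np.maximum(adj @ known.astype(float), 1e-9)
        # Attribute signal: cosine similarity to the class-c attribute centroid.
        centroid = X[members].mean(axis=0)
        sim = X @ centroid / (np.linalg.norm(X, axis=1)
                              * np.linalg.norm(centroid) + 1e-9)
        scores[:, c_idx] = alpha * votes + (1 - alpha) * sim
    return classes[np.argmax(scores, axis=1)]

# Two classes, two components; nodes 1 and 3 are unlabeled (-1).
adj = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, -1, 1, -1])
known = np.array([True, False, True, False])
assert list(combined_node_classifier(adj, X, labels, known)) == [0, 0, 1, 1]
```

Sliding `alpha` between 0 and 1 moves the predictor between a pure-attribute and a pure-homophily regime, which is the spectrum of settings the abstract says a flexible method should cover.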

High-Speed and Noise-Robust Embedding of Hypergraphs Based on Double-Centered Incidence Matrix

ABSTRACT. In this study, for the purpose of clustering that is robust to noise, we propose a fast method of mapping hypernodes and hyperedges onto a unit hypersphere, exploiting the sparsity of the incidence matrix of a hypergraph. Our embedding method quantifies the relative strength of the relationship between a hypernode and a hyperedge from a double-centered incidence matrix, so that the weight of a noisy relationship is suppressed to a small value and the influence of noise on the embedding vectors of neighboring nodes and edges is reduced. In experimental evaluations on synthetic and real-world hypergraph data, we confirmed that our method produces clustering results whose accuracy matches that of a state-of-the-art method, especially for very noisy hypergraphs, while running faster than the other compared methods.
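One plausible reading of the double-centering step, sketched in Python. The exact normalization and the use of an SVD below are assumptions for illustration, not the authors' published algorithm:

```python
import numpy as np

def hypergraph_embedding(H, k=2):
    """Sketch: hypersphere embedding from a double-centered incidence matrix.

    H: (n_nodes, n_edges) binary incidence matrix
       (H[i, j] = 1 iff node i belongs to hyperedge j).
    Returns unit-length k-dim vectors for hypernodes and hyperedges.
    """
    # Double centering: subtract row and column means, add back the grand mean.
    C = H - H.mean(axis=1, keepdims=True) - H.mean(axis=0, keepdims=True) + H.mean()
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    nodes = U[:, :k] * S[:k]      # node coordinates from left singular vectors
    edges = Vt[:k, :].T * S[:k]   # hyperedge coordinates from right singular vectors
    # Project both onto the unit hypersphere.
    nodes /= np.linalg.norm(nodes, axis=1, keepdims=True)
    edges /= np.linalg.norm(edges, axis=1, keepdims=True)
    return nodes, edges

# Toy hypergraph: 4 nodes, hyperedges {0,1}, {1,2,3}, {2,3}.
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 1]], dtype=float)
nodes, edges = hypergraph_embedding(H, k=2)
assert np.allclose(np.linalg.norm(nodes, axis=1), 1.0)  # on the unit sphere
assert np.allclose(nodes[2], nodes[3])  # identical incidence -> identical vectors
```

The centering removes the "background" level of incidence, so only the deviation of a node-edge relationship from what its degrees predict survives into the coordinates.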

What is Learned in Knowledge Graph Embeddings?
PRESENTER: Michael Simkin

ABSTRACT. A knowledge graph (KG) is a data structure which represents entities and relations as the vertices and edges of a directed graph with edge types. KGs are an important primitive in modern machine learning and artificial intelligence. Embedding-based models, such as the seminal TransE [Bordes et al., 2013] and the recent PairRE [Chao et al., 2020] are among the most popular and successful approaches for representing KGs and inferring missing edges (link completion). Their relative success is often credited in the literature to their ability to learn logical rules between the relations.

In this work, we investigate whether learning rules between relations is indeed what drives the performance of embedding-based methods. We define motif learning and two alternative mechanisms: network learning (based only on the connectivity of the KG, ignoring the relation types) and unstructured statistical learning (ignoring the connectivity of the graph). Using experiments on synthetic KGs, we show that KG models can learn motifs and how this ability is degraded by non-motif (noise) edges. We propose tests to distinguish the contributions of the three mechanisms to performance, and apply them to popular KG benchmarks. We also discuss an issue with the standard performance testing protocol and suggest an improvement.
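For reference, the TransE scoring function mentioned above models a relation as a translation in embedding space, so that $h + r \approx t$ for a true triple. A minimal Python sketch follows; the entity and relation vectors here are random placeholders, not trained embeddings:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
dim = 8
# Hypothetical entity embeddings (random, for illustration only).
paris, france, berlin = (rng.normal(size=dim) for _ in range(3))
# An ideal relation vector makes h + r = t hold exactly for the true triple.
capital_of = france - paris

# The true triple (paris, capital_of, france) scores better (lower)
# than the corrupted triple (berlin, capital_of, france).
assert transe_score(paris, capital_of, france) < transe_score(berlin, capital_of, france)
```

Link completion then amounts to ranking candidate tails $t$ by this score, which is also how the benchmarks discussed in the abstract evaluate KG models.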

19:35-20:00 Session ERC A: Konstantina Topouridou - ERC's Funding Opportunities and Evaluation Process
ERC's funding opportunities and evaluation process

ABSTRACT. The European Research Council (ERC) is one of the most prestigious funding bodies in Europe. It supports individual researchers of any nationality and age who wish to pursue their frontier research in an EU member state or associated country. We encourage in particular proposals that cross disciplinary boundaries, pioneering ideas that address new and emerging fields, and applications that introduce unconventional, innovative approaches. In this context, the ERC funds projects of those working in the thematic areas of Complex Networks 2021. In this session, you will be able to hear about the ERC evaluation process and funding opportunities, as well as to pose questions to a scientific officer working for the ERC Executive Agency, which organizes the evaluation.