
09:00-10:30 Session 7: Plenary session
Location: Gran Cancún 1
Collective behaviors at three scales: nanoscale oscillators, interdependent infrastructure, and macaque societies (Springer Complexity Lecture).

ABSTRACT. Results from a multi-year research effort aimed at understanding the behaviors of interdependent complex networks, and ultimately controlling their resulting collective phenomena, will be presented. Our advances focus on systems at three different scales. At the scale of classic nonlinear dynamics, we study both theoretically and empirically the attractor space of synchronization for a ring of reactively coupled nanoelectromechanical (NEMS) oscillators. The goal is to understand attractor switching networks and to design small control interventions. At the scale of critical infrastructure, consisting of collections of power grids, water networks and gas networks, our focus is on understanding interdependence and leveraging it for resilience and restoration efforts. This work relies on system identification techniques, optimization methods and graph products. Finally, at the scale of social systems, we study the multilayered interactions found in macaque monkey societies, including aggression, grooming, policing and huddling networks. Our focus is on multilayered ranking metrics, mechanisms underlying the formation of hierarchy, and multilayered interactions leading to abrupt societal collapse.


This work is a collaboration between research teams at UC Davis, California Institute of Technology, Rice University, and University of Washington, funded by the ARO MURI program with coauthors Brianne Beisner, Airlie Chapman, Jim Crutchfield, Leonardo Duenas-Osorio, Jeff Emenheiser, Warren Fon, Andres Gonzalez, Darcy Hannibal, Matthew Matheny, Brenda McCowan, Mehran Mesbahi, Marton Posfai, Michael Roukes, and Anastasiya Salova. 

Multilayer Interconnected Complex Networks
SPEAKER: Alex Arenas

ABSTRACT. Multilayer networks are attracting large interest because they describe complex systems formed by several networks whose interactions are of different nature. Examples are ubiquitous, from infrastructure to transportation and biological networks. We will describe the state of the art in characterizing and modelling the structure of multilayer networks and in studying their robustness properties.

Neural coding of subjective sensory experience and uncertainty of perceptual decisions
SPEAKER: Ranulfo Romo

ABSTRACT. When a near-threshold sensory stimulus is presented, a sensory percept may or may not be produced. The unpredictable outcome of such a perceptual judgment is believed to be determined by the activity of neurons in early sensory cortex. We found that these responses did not covary with the subjects' perceptual reports. In contrast, the activity of frontal lobe neurons did covary with trial-by-trial judgments. Further control and micro-stimulation experiments indicated that frontal lobe neurons are closely related to the subjects' subjective experiences during sensory detection.

10:30-11:00 Session: Coffee Break

Coffee break & poster session

Location: Cozumel A
11:00-13:00 Session 8A: Foundations of Complex Systems - Structure

Parallel session

Location: Cozumel 1
Modeling structure and resilience of the dark network
SPEAKER: Alex Arenas

ABSTRACT. While the statistical and resilience properties of the Internet are no longer changing significantly across time, the Darknet, a network devoted to keep anonymous its traffic, still experiences rapid changes to improve the security of its users. Here we study the structure of the Darknet and find that its topology is rather peculiar, being characterized by a nonhomogeneous distribution of connections, typical of scale-free networks; very short path lengths and high clustering, typical of small-world networks; and lack of a core of highly connected nodes. We propose a model to reproduce such features, demonstrating that the mechanisms used to improve cyber-security are responsible for the observed topology. Unexpectedly, we reveal that its peculiar structure makes the Darknet much more resilient than the Internet (used as a benchmark for comparison at a descriptive level) to random failures, targeted attacks, and cascade failures, as a result of adaptive changes in response to the attempts of dismantling the network across time.


Manlio De Domenico and Alex Arenas Phys. Rev. E 95, 022313 (2017)

Reconstruction methods for networks: the case of economic and financial networks

ABSTRACT. Partial information is a problem that is systematically encountered in studying complex networks, irrespective of the specific case (typically a social, economic or biological system) that we describe with a graph. In order to compensate for the scarcity of data, researchers have tried to develop algorithms to achieve the best possible reconstruction of the networks under analysis. The techniques proposed to make optimal use of the available information have led to the birth of a research field which is now known as "network reconstruction". Many researchers working in disciplines as different as physics, economics and finance have contributed to it, but each method has been tailored, so far, to the specific needs of each domain, often popularizing a particular algorithm exclusively in the field where it was originally proposed. Therefore, the results achieved by different groups are still scattered across heterogeneous publications, and a systematic comparison of the analytic and numerical tools employed for network reconstruction is currently missing. We provide a unifying framework to present all these studies; we also provide examples from various fields, even if we focus mostly on economic and financial networks, since their structure is particularly difficult to access because of privacy issues. Unfortunately, partial information on the set of interconnections between financial institutions dramatically reduces the possibility of providing a realistic estimate of crucial systemic properties (e.g., the resilience of the considered networks to the propagation of shocks and losses). Therefore, the ability to reconstruct a reliable financial network is important not only from a scientific point of view, but also from a societal one.

Burstiness and ties reinforcement in social temporal networks
SPEAKER: Enrico Ubaldi

ABSTRACT. The growing availability of high-quality, longitudinal datasets recording human activities has allowed us to gain a deeper understanding of how individuals interact and which strategies they apply in exploring their social circles.

In this work [1] we expand and generalise the activity-driven-network (ADN) modelling framework to account for three prominent mechanisms known to shape the evolution of individuals' ego-nets: i) the diverse propensity to engage in social interaction [2] (activity); ii) the heterogeneous time scales between two social events [3] (burstiness); and iii) the different strategies individuals use in allocating interactions among their alters [4,5] (tie reinforcement).

These mechanisms are usually implemented in a data-driven fashion, by directly measuring from empirical data the distribution of inter-event times and the probability of engaging a new social tie. The question is then whether different scenarios of social exploration strategies and bursty inter-event time distributions, featuring different local-scale behaviour but an analogous asymptotic limit, can lead to the same long-time, large-scale structure of the evolving networks.

Here, we tackle this problem in full generality by encoding a general functional form of these components into the ADN analytical framework, so as to account for different strategies of tie activation as well as individual activity patterns. We then analytically solve the model in the asymptotic limit, finding a rich phase diagram shaped by the dynamical interplay of the node-activation mechanism and the tie-reinforcement process. This interplay is nontrivial and, interestingly, the effects of burstiness may be suppressed in regimes where individuals exhibit a strong preference for reinforcing previously activated ties. We also find that the asymptotic network evolution is driven by a few characteristics of the burstiness and reinforcement functional forms that can be extracted from direct measurements on large datasets.
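A minimal sketch of the activation-and-reinforcement dynamics discussed above, assuming the reinforcement kernel p(k) = (1 + k/c)^(-beta) used in the ADN literature [4]; all parameter values here are illustrative, not fitted to any dataset.

```python
import random

def simulate(n=100, steps=2000, activity=0.1, c=1.0, beta=1.0, seed=7):
    """Activity-driven network with tie reinforcement (illustrative sketch)."""
    random.seed(seed)
    ties = {i: {} for i in range(n)}          # ties[i][j] = # contacts i -> j
    for _ in range(steps):
        for i in range(n):
            if random.random() >= activity:   # node i stays inactive this step
                continue
            k = len(ties[i])
            new_alters = [x for x in range(n) if x != i and x not in ties[i]]
            # With probability p(k) = (1 + k/c)^(-beta), contact a brand-new
            # alter; otherwise reinforce an already-established tie.
            if new_alters and (k == 0 or random.random() < (1 + k / c) ** -beta):
                j = random.choice(new_alters)
            else:
                j = random.choice(list(ties[i]))
            ties[i][j] = ties[i].get(j, 0) + 1
    return ties

ties = simulate()
degrees = sorted(len(v) for v in ties.values())
print(degrees[0], degrees[-1])   # smallest and largest ego-net sizes
```

Sweeping `beta` (reinforcement strength) against the burstiness of the activation clock is the kind of experiment the phase diagram in the abstract summarizes.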

The results are tested against numerical simulations and compared with two empirical datasets with very good agreement. Consequently, the framework provides a principled method to classify the temporal features of real networks, and thus yields new insights to elucidate the effects of social dynamics on spreading processes.

References: [1] Ubaldi et al., ArXiv e-prints 1607.08910 (2016); [2] Perra et al., Sci. Rep., 2, 06 (2012); [3] Karsai et al., Sci. Rep., 2, 05 (2012); [4] Ubaldi et al., Sci. Rep., 6, 10 35724 (2016); [5] Miritello et al., Sci. Rep., 3, 06 (2013).

Community Detection with Selective Zooming

ABSTRACT. Many complex networks exhibit hierarchical structure across a wide range of scales. Prior work has shown that time-sweeping using Markov dynamics allows the creation of a “zooming lens” to detect communities at different scales [1]. In this work a continuous-time Infomap algorithm [2] is extended to selectively zoom into community structure based on pre-determined node importance, allowing prior information to be incorporated into the community detection algorithm. The capacity of our method is illustrated with simple block models and image segmentation problems, and applications to information retrieval are discussed.

[1] Schaub, M. T., J.-C. Delvenne, S. N. Yaliraki, and M. Barahona (2012). Markov dynamics as a zooming lens for multiscale community detection: non clique-like communities and the field-of-view limit. PloS one 7 (2), e32210. [2] Schaub, M. T., R. Lambiotte, and M. Barahona (2012). Encoding dynamics for multi-scale community detection: Markov time sweeping for the map equation. Physical Review E 86 (2), 026112.

Graph-based semi-supervised learning for complex networks
SPEAKER: Leto Peel

ABSTRACT. see attached pdf

Efficient detection of hierarchical block structures in networks
SPEAKER: Leto Peel

ABSTRACT. see attached pdf

Topology-dependent rationality and quantal response equilibria in structured populations
SPEAKER: Sabin Roman

ABSTRACT. Nash equilibria of games are frequently used to reason about decision-making. However, the underlying assumption of perfect rationality has been shown to be violated in many examples of real-world decision-making. Accordingly, we explore a graded notion of rationality in socio-ecological systems of networked actors. We parametrise an actor's rationality via their place in a social network and quantify system rationality via the average Jensen-Shannon divergence between the games' Nash and logit quantal response equilibria.
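The rationality metric described above can be illustrated with a short sketch: the Jensen-Shannon divergence between a game's Nash mixed strategy and a logit quantal response distribution. The strategy profiles below are invented for illustration and are not taken from the paper.

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Nash equilibrium of Matching Pennies: uniform mixing over both actions.
nash = [0.5, 0.5]
# A hypothetical logit quantal response distribution for a boundedly
# rational actor, biased away from the Nash mix.
qre = [0.6, 0.4]

# Smaller divergence = the actor plays closer to perfect rationality.
print(jensen_shannon(nash, qre))
```

Averaging this divergence over all actors in the network gives the system-level deviation from rationality the abstract refers to.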

Previous work [1] has argued that scale-free topologies maximise a system's overall rationality in this setup. Here, we show that while, for certain games, increasing the degree heterogeneity of complex networks does enhance rationality, rationality-optimal configurations are not scale-free. For the Prisoner's Dilemma and Stag Hunt games, we provide analytic arguments complemented by numerical optimisation experiments to demonstrate that core-periphery networks, composed of a few dominant hub nodes surrounded by a periphery of very low degree nodes, give strikingly smaller overall deviations from rationality than scale-free networks. If isolated nodes are allowed to form during optimisation, optimal networks are found to consist of a core made up of a complete graph, with all other nodes isolated. Similarly, for the Battle of the Sexes and Matching Pennies games, we find that the optimal network structure is also a core-periphery graph, but with a smaller difference between the average degrees of the core and the periphery. If no connectivity constraints are enforced, then in the case of the Battle of the Sexes a graph with a strongly bi-modal degree distribution emerges, while for the Matching Pennies game we obtain a quasi-regular graph. So, in contrast to [1], we demonstrate that highly heterogeneous degree distributions do not necessarily maximise system rationality for all classes of games.

These results provide insight on the interplay between the topological structure of socio-ecological systems and their collective cognitive behaviour, with potential applications to understanding wealth inequality and the structural features of the network of global corporate control.

[1] Kasthurirathna, D. and Piraveenan, M. (2015). Emergence of scale-free characteristics in socio-ecological systems with bounded rationality. Scientific Reports, 5:10448.

11:00-13:00 Session 8B: Information and Communication Technologies

Parallel session

Location: Xcaret 1
Studying the relation between online social networks, subjective well-being, and mental health
SPEAKER: Johan Bollen

ABSTRACT. Social media are now an integral part of our social lives. They allow billions of people to connect across social, economic, and geographic boundaries, as they weave intricate social networks through which individuals share the most minute details about their lives and conditions with others. Online social networking may have become so popular because it fulfills a basic human need for connection, but does it actually promote our well-being? In my presentation, I will discuss our recent work on two aspects of this puzzle. First, I will discuss our findings that suggest that the prevailing topology of social networks and the distribution of subjective well-being in human populations interact in such a way that we find significant homophily with respect to longitudinal mood states, i.e. people tend to be connected to others with similar mood states. We furthermore find that due to a strong friendship and happiness paradox most people will be surrounded by friends that are both more popular and happier on average than they themselves are. Second, I will discuss our work that leverages large-scale social media to study the mental health dynamics of individuals and populations. The outcomes of this work might lead to a better understanding of how social media affects our well-being and mental health, the creation of early warning indicators for mental health transitions in individuals, and more accurate models of how emotions and mental health issues typically emerge and evolve.

Detecting behavioral groups in sequences of labeled data

ABSTRACT. Zipf-like distributions characterize a wide set of phenomena in physics, biology, economics and social sciences. In many human activities, ranging from the types of purchases made with a credit card to the offenses within criminal careers, the Zipf law is a peculiar property. These datasets not only exhibit highly uneven frequencies, in which the vast majority of the distribution is concentrated on a few elements, but also contain temporal sequences in the appearance of events (purchases/offenses) that are typical of each individual. In this work, we are interested in detecting ubiquitous patterns of collective behavior extracted from this kind of data. A first approach to this problem [hidalgo2009dynamic] proposed to eliminate the redundant information inside the data and classify the users by analyzing the term frequency-inverse document frequency (TF-IDF). Despite its results, this method does not take into account the temporal sequences in the appearance of events. Our goal here is to eliminate redundancy while detecting habits and keeping the sequence of events and their order, which represent an important signature of an individual's routine. With this aim, we apply the Sequitur algorithm [nevill1997identifying] to each user's sequence of events to infer a grammatical rule that generates words, defined as two or more events that frequently appear in chronological sequence. To detect the words that are significant, we generate 100 randomized code sequences for each user and extract, for each user, the set of significant words with z-score greater than 2. These represent the routines, i.e., the sequences of significant events in the user's habits. We calculate the matrix M of user similarity by measuring the Jaccard similarity coefficient between individuals' sets of significant words (Fig. 1 shows an example obtained from purchase sequences). Finally, we identify groups in M by applying the Louvain clustering algorithm.
With this framework, we are able to detect different behavioral groups in the data. Individuals within each group are also similar in a wide range of socio-demographic attributes, such as age, gender, income, and mobility (Fig. 2 shows an example for credit-card data). Taken together, we show that the detection of significant sequences is a critical ingredient in the process, because benchmark methods based on frequency ranking do not detect any habits when applied to these data [roque2011using]. We present a novel method to detect behavioral groups in labeled data [di2017Crime, di2016revealing] that could also be applied to identify groups in other types of labeled datasets with Zipf-like distributions. Paralleling motifs in network science, which represent significant connections hidden within power-law degree distributions, this method uncovers sets of significant sequences within labeled data with Zipf-type distributions.
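The similarity step can be sketched as follows; the habit "words" are invented examples, and the Sequitur extraction and Louvain clustering stages are omitted.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of significant words."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical sets of significant habit "words" for three users.
users = {
    "u1": {"AB", "ABC"},
    "u2": {"AB", "CD"},
    "u3": {"EF"},
}

# Similarity matrix M over all user pairs (input to community detection).
M = {(i, j): jaccard(users[i], users[j]) for i in users for j in users}
print(M[("u1", "u2")])  # one shared word out of three distinct: 1/3
```

Running a community-detection algorithm such as Louvain on M then yields the behavioral groups.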

The laws of forgetting: How time, popularity, and death shape human collective memory

ABSTRACT. Collective memory, the common representation of the past created by a group of people, originates from a balance between the transmission and forgetting of information. Yet, while the literature on the diffusion and transmission of information is vast, the literature describing how information is forgotten is relatively small. Here, we use data on the present and past popularity of songs, and on the daily online popularity of more than one thousand biographies, to study how the attention received by songs and biographies decays with time, initial popularity, and the death of cultural icons. First, we show that a song's present-day popularity decays quickly as a function of its age for the first 1,600 days, and then decays significantly more slowly over decades. Next, we find that initial popularity predicts present-day popularity in songs: a song's highest ranking in Billboard and the number of weeks it was ranked predict its present-day popularity once the effects of time have been removed. Then, we study the impact of the death of cultural icons on the online attention received by their biographies using daily Wikipedia pageviews. We focus on the biographies of famous people who died between July 2008 and April 2016 and show that the pageviews received by these biographies decay after their death following a power law with an exponent of around -1.35. Moreover, we find that the biographies of people who have died experience a small but significant excess of attention, or "attention premium", compared to their pre-death popularity. We show this attention premium scales sublinearly with initial popularity, meaning that the premium is relatively larger for people who were less popular prior to their death. Together, these findings show that the dynamics of human forgetting are characterized by a narrow set of mathematical functions, and they contribute to our understanding of how human collective memory fades.
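As a sketch of how such a decay exponent can be estimated, the slope of a power-law attention curve on log-log axes recovers it; the series below is synthetic, generated with the exponent reported in the abstract, not the authors' Wikipedia data.

```python
import math

# Synthetic daily attention series decaying as t^(-1.35).
days = range(1, 366)
views = [1000 * t ** -1.35 for t in days]

# Least-squares slope on log-log axes estimates the exponent.
xs = [math.log(t) for t in days]
ys = [math.log(v) for v in views]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))  # → -1.35
```

On real pageview data the fit is noisier, but the same log-log regression recovers the decay exponent.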

Urban Mobility and Food Ordering Services: A web mining perspective

ABSTRACT. Despite the popularity of online food ordering services (OFOS), few studies have attempted to understand their value in everyday life. A common motivation for using these services is their convenience: they allow customers to avoid traffic jams and congestion in highly dense cities. Is heavy traffic related to the patterns of online food ordering in highly dense cities? Here we tackle this question by describing possible ways of extracting relevant information from Waze Live Map and online food ordering services in the city of Bogotá. Our approach highlights the potential benefits of using geographic coordinates to locate the exact positions of food providers in the city and to evaluate whether their perceived quality is affected by traffic dynamics as captured by Waze Live Map.

The Gendered Selfie

ABSTRACT. As social media increasingly become part of our everyday lives, they not only shape how we see the world, but also how we see ourselves and how we fit into the social world. Image-based expressions in particular are becoming a powerful source of self-perception, as we rely on others' views, judgments and appraisals to develop our social self. This is emphasized by the psychological concept of the "looking-glass self", which describes how we develop our sense of self based on the perceptions of those with whom we interact. Some psychologists have argued that social media constitutes a powerful new looking-glass of this kind (e.g. Gärdenfors 2017). Through such effects, social media also constitutes a channel for the spread and perpetuation of social norms. Some such norms can be undesirable; in particular, feminine and masculine gender-norms are widely understood to fit with and reinforce the problematic socialization of women into subordinate social roles, contributing to gender inequality (e.g. Millett 1971:26). This paper investigates to what extent the new "looking-glass" of social media distorts our reflection by being gendered, and whether it constitutes a conveyor of problematic gender-norms. As our self-image is one of the clearest expressions of how gender roles affect and limit how we understand and portray ourselves, this paper explores how gender plays out in self-portraits on social media. A large number of "selfies" and associated user comments are collected from the major image-based social networks Instagram and Flickr. These selfies are categorized by subject gender, using name-classification databases and, where this is not possible, deep-learning image classification. Image expressions are furthermore characterized, also using deep learning, and various text-analytic methods are applied to study differences in how men and women are commented on as a function of the image expressions.
Furthermore, as the photos are geotagged, the geographical distribution of gendered speech and visual expression is explored. This allows us to answer pressing questions regarding gendered expressions in social media: do men and women conform to gender-norms in their self-expression? Are there detectable differences on commonly theorized dimensions such as active/passive, hard/soft, powerful/weak, cold/warm, etc.? Can a policing of such gender-norms be detected in comments, in which 'appropriate' self-expression is reinforced and deviant expressions discouraged? Are there geographical differences in gender-expressions? Are there differences between platforms, e.g. following from platform-specific social norms? This contributes useful insights into how social media plays a part in the perpetuation of problematic social norms, and furthermore serves to illustrate how digital trace data can contribute to the large theoretical framework associated with gender studies.

Modeling Social Organizations as Communication Networks
SPEAKER: David Wolpert

ABSTRACT. Human social groups can achieve extraordinary levels of complexity. Examples range from individual firms competing in a market, to ancient city-states, to military units, to modern governmental institutions. One key element of such groups is how they are internally organized. Accordingly, the question of what determines the precise organizational structure that a given group adopts is a central concern of many social sciences, including economics, political science, sociology, and anthropology.

Despite the importance of these issues, there has yet to be a formal and testable theory that explains what properties of the agents and the external environment determine an organization's structure. While never formalized, many have suggested that it is the information requirements of, and constraints on, an organization that determine its need and its structure. For example, Ken Arrow conjectured that "the desirability of creating … partially determined by the characteristics of network information flows." Nevertheless, this conjecture was never formalized and expounded in economics or any other field concerned with organization structure. In this paper, we formalize and propose initial answers to the question: how do informational requirements and limitations impact the optimal organization structure?

We take a group-selection approach and model human organizations as telecommunication networks. Specifically, we focus on how the agents' ability to receive, transmit and synthesize information determines the organization's (approximately) optimal structure in terms of network topology. In the model, agents within an organization, sharing a common goal, receive information from various sources and must decide how to transform that information and transmit the results to other agents in the organization. At the same time, information transmission is costly and noisy. We then use this model to show how the size of the organization, the noise in the communication channels and the informational requirements of each agent determine the organization's optimal structure. We focus on "phase transitions" and show how, at certain parameter specifications, the optimal organization structure switches from relatively flat to hierarchical. An ancillary contribution is that we show how to leverage the computational power of neural networks, regularizers and genetic algorithms to solve for the optimal network structure under each parameterization. Specifically, we use a neural network to determine how each agent should transform and transmit information for a given network topology, we implement information-processing constraints with regularizers, and we use genetic algorithms to optimize over the network topology.

We also discuss several extensions to our baseline model. In particular, we discuss how to extend our model to analyze the dynamics of the optimal network topology. We also discuss augmenting the theory to analyze the organization's optimal network structure when there are multiple competing organizations. To contextualize our approach to modeling social organizations, we suggest other alternatives that include applying the new field of network coding.

11:00-13:00 Session 8C: Cognition and Linguistics - Interaction

Parallel session

Location: Xcaret 2
On the universal structure of human lexical semantics
SPEAKER: Hyejin Youn

ABSTRACT. How universal is human conceptual structure? The way concepts are organized in the human brain may reflect distinct features of cultural, historical, and environmental background in addition to properties universal to human cognition. Semantics, or meaning expressed through language, provides indirect access to the underlying conceptual structure, but meaning is notoriously difficult to measure, let alone parameterize. Here, we provide an empirical measure of semantic proximity between concepts using cross-linguistic dictionaries to translate words to and from languages carefully selected to be representative of worldwide diversity. These translations reveal cases where a particular language uses a single “polysemous” word to express multiple concepts that another language represents using distinct words. We use the frequency of such polysemies linking two concepts as a measure of their semantic proximity and represent the pattern of these linkages by a weighted network. This network is highly structured: Certain concepts are far more prone to polysemy than others, and naturally interpretable clusters of closely related concepts emerge. Statistical analysis of the polysemies observed in a subset of the basic vocabulary shows that these structural properties are consistent across different language groups, and largely independent of geography, environment, and the presence or absence of a literary tradition. The methods developed here can be applied to any semantic domain to reveal the extent to which its conceptual structure is, similarly, a universal attribute of human cognition and language use.
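The network construction described here can be sketched as follows, with toy dictionaries standing in for the cross-linguistic data (all words, languages and concept mappings below are invented for illustration).

```python
from collections import Counter
from itertools import combinations

# Toy dictionaries: each maps a word to the set of concepts it expresses.
# A word covering more than one concept is a polysemy linking those concepts.
dictionaries = {
    "lang1": {"w1": {"SUN", "DAY"}, "w2": {"MOON"}},
    "lang2": {"v1": {"MOON", "MONTH"}, "v2": {"SUN", "DAY"}},
    "lang3": {"u1": {"MOON", "MONTH"}},
}

# Edge weight = number of polysemous words linking a pair of concepts.
edges = Counter()
for words in dictionaries.values():
    for concepts in words.values():
        for pair in combinations(sorted(concepts), 2):
            edges[pair] += 1

print(edges[("MONTH", "MOON")])  # → 2: two toy languages colexify them
```

Heavily weighted edges in the resulting network mark concept pairs with high semantic proximity across languages.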

Dynamics of Social Interactions in a Collective Creativity Experiment

ABSTRACT. The study of the dynamics behind the emergence of novelties and innovation is a relatively recent field in complex systems, fostered by the abundance of data about the creation and sharing of artworks and about on-line activity in general. Despite the recentness of the topic, many works have discovered and characterized interesting statistical patterns related to the emergence of new creative elements, and a very general mathematical framework describing the collective process of discovering and sharing novelties has been developed. However, much remains to be discovered concerning the conditions, both historical and social, that foster the emergence of creative elements from a group of interacting individuals. From a social perspective, many hypotheses have been suggested and tested concerning the relations between individuals, like the presence of "weak ties" in social networks or the "folding" of different social groups into larger ones sharing a common goal. To the best of our knowledge, complex systems science has contributed little to the understanding of how the dynamics of social interactions can foster the emergence of creativity. In this work we present the results of a collective social experiment in which individuals were asked to collaborate in the realization of a number of LEGO brick sculptures. The participants were provided with RFID tags developed in the framework of the SOCIOPATTERNS project, which enabled a rather precise mapping of the social interactions occurring during their activity within the experiment. Interactions with the LEGO sculptures were similarly mapped by means of other RFID tags placed around the sculptures, and the sculptures' growth in volume was recorded with the aid of infra-red depth sensors. The RFID sensors allowed the reconstruction of the dynamical network of social interactions between the participants in the experiment.
We looked for correlations between the evolving structure of this network and the growth patterns of the sculptures, identifying the local social structures most conducive to rapid volume growth over both short intervals and long-term periods. In this way, we were able to identify the social patterns most fruitful in terms of "local consensus" around the development of the collective artwork, indicating a shared vision of the actions to be performed on it. Moreover, we were able to identify how the presence of "influential individuals", characterized by means of information-spreading models, favored the growth of the sculptures in the long term. The novelty of the proposed approach could help shed light on the phenomena related to creativity and could change the way collective creativity experiments are conceived and designed.

Working with Machines: A Complex Dynamical Systems Approach to Human-Machine Interaction
SPEAKER: Maurice Lamb

ABSTRACT. When humans work together, coordination emerges from complex interactions within individual co-actors, between co-actors, and between co-actors and task-relevant environmental properties. In contrast, when machines complete a task, while the mechanics and programming of the machine may be complicated, its behavior does not emerge from system complexity. When it coordinates with humans or other machines, it must do so deliberately, with a specific plan and method. One potential result of these differences is that when humans and machines work in shared spaces, the fundamental dynamics of each are significantly different. When the machine and its behavior are simple or limited within clear spatial boundaries (often defined by brightly colored fences), the difference in dynamics is insignificant. When the machine's behavior is simple, humans can easily learn the machine's patterns and adapt their behavior. When the machine is kept separate, the risk of injury is minimized and demands on close-proximity coordination are eliminated or reduced. However, there are many potential applications where it would be beneficial for humans to be able to quickly interact and coordinate with a machine with little to no training on how to do so, including neurorehabilitation, assistive robotics, local and remote operation of multi-agent robotic systems, and industrial applications. Building on complex dynamical systems research on human-human joint action, we have developed a method for implementing human-inspired dynamics in artificial and machine systems. As a proof of concept, we have implemented a joint-action dynamical pick-and-place algorithm in both a virtual avatar system and a robotic arm. In each case, both the movements and the task-specific decisions of the algorithm-driven system emerge from the interactions of the human-machine co-actors and their environment.
We demonstrate not only that this approach can accomplish the task with a human co-actor in a robust and adaptive way, but also that it can do so in a way that mimics the dynamics that emerge from the complexity of human agents. Theoretically, our approach builds on the fact that in complex systems with many degrees of freedom, interactions among system degrees of freedom at different scales constrain the system, resulting in dynamics that can be characterized by far fewer degrees of freedom, e.g. an order parameter. Along with the implementation and demonstration of a joint-action pick-and-place algorithm, we will discuss possible future extensions and applications of our method in the domains of assistive and therapeutic technologies and teleoperation of multi-agent robotic systems.
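A minimal sketch of the kind of low-dimensional dynamics such an order-parameter description yields; the damped point-attractor form, the relative-distance decision rule, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reach(x0, goal, b=10.0, k=25.0, dt=0.001, steps=5000):
    """Critically damped point attractor: x'' = -b*x' - k*(x - goal)."""
    x, v = x0, 0.0
    for _ in range(steps):
        a = -b * v - k * (x - goal)
        v += a * dt
        x += v * dt
    return x

# Two co-actors reaching for the same object; a simple relative-distance rule
# decides who acts, so task division emerges from the configuration.
obj = 0.3
hand_a, hand_b = 0.0, 1.0
actor = 'A' if abs(obj - hand_a) < abs(obj - hand_b) else 'B'
final = reach(hand_a if actor == 'A' else hand_b, obj)
```

In the full joint-action models the decision variable itself evolves dynamically with the co-actor's movement; the hard threshold above is only a stand-in.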

Association but not Recognition: An Alternative Model for Differential Imitation from 0-2 Months

ABSTRACT. Skepticism toward the existence of neonatal imitation is fostered by views that assign it excessive socio-cognitive significance and generate unwarranted expectations about the kinds of findings experimentalists are supposed to look for. We propose a theoretical analysis that may help address the empirical question of whether early imitation really exists. We distinguish three models. The first posits automatic visuo-motor links evolved for socio-cognitive functions: we call it Genetically Programmed Direct Matching (GPDM). The second is Meltzoff's Active Intermodal Matching (AIM), which postulates a comparison between the acts of self and other. The third is the alternative we propose: we call it Association by Similarity Theory (AST), as it relies on this domain-general process. AST describes early imitation merely as the differential induction or elicitation of behaviors that already tend to occur spontaneously. Focusing on the contrast between AIM and AST (Figure 1), we argue that AST is preferable for three reasons. First, AST is more parsimonious and plausible than AIM. AST does not commit one to superfluous assumptions and to the problematic claim that a specific form of social cognition starts in the newborn period, i.e. the recognition of the similarity between the acts of self and others. In AST, similarity has a tacit functional role but is not the object of recognition experience. Moreover, AST fits better with the common coding/ideomotor approach as advocated by Wolfgang Prinz from 1990-2009. Second, whereas the extant findings tend to disqualify AIM, AST can account for them adequately. AST does not posit a propensity to match the gestures of others indispensable for socio-cognitive functions; hence it explains the extensive variability of the findings and the considerable absence of imitation in naturalistic environments. 
AST better accounts for the narrow range of gestures exhibiting imitation (perhaps just two), the “drop out” after two months, the correlation between imitation and spontaneous behavior, and the progressive increase in amplitude and vigor, which, however, does not exhibit goal-directedness. Additionally, AST does not inflate the operational definition of imitation (differential imitation does not look like ordinary imitation). Third, AST has the potential to give new impetus to empirical research because it discriminates promising lines of inquiry from unproductive ones. In contrast to AIM, AST predicts: (a) imitation will be low or absent in naturalistic (domestic) environments, but present in artificial laboratory settings designed to maximize attention to the kinematic features of the model; (b) imitation will hardly be detectable through research aiming at external validity (averaging data across large numbers of infants), but demonstrable through research emphasizing internal validity (where each infant is taken as its own control); (c) mouth opening will be more helpful for proving the existence of imitation than other facial gestures less clearly differentiated in proprioceptive experience; (d) eye tracking during inactive observation of mouth opening (with no imitative response) will exhibit differentiation from equally arousing, but not-already-executed, gestures. We intend to take up these directions of enquiry and invite other experimentalists to do the same in order to settle the debate on whether early differential imitation exists.

11:00-13:00 Session 8D: Economics and Finance - Institutions, social norms, trade & development

Parallel session

Location: Tulum 4
The Complex Network of Public Policies. An Empirical Framework for Identifying their Relevance in Economic Development

ABSTRACT. The traditional view that a set of common factors precludes the possibility of closing the income gap between developing and developed countries is somewhat misleading. When analyzing the relative relevance of policies and institutions, it is frequently assumed that their impact on countries’ economic growth does not vary with their current stage of development. However, there is much empirical evidence of policy interventions exhibiting large heterogeneity in countries’ outcomes, since they are implemented in a wide array of economic and governance structures. In this paper a data-driven framework for establishing development guidelines is elaborated, based on the idea that societal outcomes are the result of a large set of public policies with many interactions. Consequently, for selecting a particular combination of policies that could help the performance of a country with specific ‘initial conditions’, it is convenient to build a complex network of public policies. In particular, the inclusion of a large number of factors, or development pillars, in such a network allows analyzing the relative relevance of different categories of policy and governance variables. In order to identify which policies might be suitable for a specific country, it is assumed that, as countries evolve, they leave behind a ‘development footprint’ reflected in their set of policy indicators. Therefore, in the first step of the ‘development footprint’ framework, a set of targeted countries has to be selected to specify the values of the policy indicators to be replicated by the country under treatment. Besides choosing targeted countries positioned in the next income category above the treated country, the set is narrowed further by taking into account only those countries whose economic structure is similar to that observed in the treated country.
Then, in a second step, a complex network of public policies is used to simulate the impact that certain combinations of policy interventions have on the value of different policy indicators, with the aim that the treated country can move from its original policy indicators to those exhibited by the targeted countries. Because there is a large set of policy combinations to be attempted, the framework uses a genetic algorithm to find optimal solutions. In this case, the function to be minimized is a mean square error defined as the difference between the simulated values of policy indicators and the corresponding values prevailing in targeted countries. The main results generated when the model is calibrated with a panel of countries in all income categories for the period 2006-2012 are as follows: (i) public policies are context dependent; (ii) there are different development modes that any country can undertake; (iii) policy interventions within each mode are part of a consistent package and, thus, they cannot be easily substituted in isolation; (iv) boosting public governance indicators does not seem to be important for the poorest countries of income group 4, but such actions are critical in the upper-middle-income countries of group 2.
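A toy version of the optimization step described above: an elitist genetic algorithm over binary policy-intervention vectors minimizing the mean square error against targeted indicator values. The linear policy-to-indicator response matrix and all GA settings are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, n_indicators = 8, 5
A = rng.normal(size=(n_indicators, n_policies))   # hypothetical policy->indicator response
target = A @ (rng.random(n_policies) > 0.5)       # indicator values of the targeted countries

def mse(x):
    """Fitness: mean square error between simulated and targeted indicators."""
    return np.mean((A @ x - target) ** 2)

pop = rng.integers(0, 2, size=(40, n_policies))   # population of policy combinations
init_best = min(mse(x) for x in pop)
for _ in range(60):                               # select, cross over, mutate
    pop = pop[np.argsort([mse(x) for x in pop])]
    parents = pop[:20]                            # elitism: keep the best half
    cuts = rng.integers(1, n_policies, size=20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cuts)])
    kids ^= (rng.random(kids.shape) < 0.05).astype(kids.dtype)  # bit-flip mutation
    pop = np.vstack([parents, kids])
best_error = min(mse(x) for x in pop)
```

Because the best half of each generation is carried over unchanged, the best error can only improve over generations.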

Economic Complexity of Special Economic Zones (SEZs) in Mexico: Network analysis for diversification and regional productive sophistication

ABSTRACT. Regional economic development based on public policies aimed at the design, implementation and development of SEZs has gained special relevance at the international level in the last 30 years. Although it is not a new type of public policy, the observed impact on economic growth, the improvement in the incomes received by the local population and the increase in the general welfare in the regions of the world where its implementation has been successful have been sufficient reasons for many developing countries to continue implementing this scheme. However, success in some areas of East Asia and Latin America has not been uniform, and there are even a number of cases, mainly in Africa, where economic zones have failed to achieve their objectives.

In Mexico, the implementation of SEZs is predicated on the belief that this intervention will help close the growing economic gap between Mexico’s northern and southern states. However, the current policy architecture does not consider the existing industrial base in these regions, their differences, and ultimately the intricate web of economic links across the different economic activities in these regions.

This study focuses on quantitatively characterizing the economic complexity of the regions where the SEZs are being implemented in Mexico. For this, we applied the economic complexity framework of Hausmann and Hidalgo (2014) using data from the Economic Censuses generated by the National Institute of Statistics, Geography and Informatics (INEGI). Based on their framework, we estimate the product space of the economic branches in Mexico and quantitatively characterize the existing economic networks. Through this analysis, we determine where the Mexican SEZs are located in the national product space, as well as their possible routes of productive diversification (i.e. using the proximity between economic branches as a proxy for potential diversification). Finally, based on the estimation of distances within the network of capacities and products, and on their historical dynamics, we develop a framework that can be used to evaluate which industries are more likely to succeed in SEZs.
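A minimal sketch of the Hausmann-Hidalgo style computation the study applies (revealed comparative advantage, then branch-to-branch proximity from co-specialization), on an invented toy matrix; the data and the RCA threshold of 1 are illustrative only:

```python
import numpy as np

# Toy region-by-branch output matrix (rows: regions, columns: economic branches)
X = np.array([[10, 0, 5, 0],
              [0, 8, 0, 2],
              [6, 1, 4, 0]], dtype=float)

# Revealed comparative advantage: a region's share in a branch relative to
# the branch's share in the whole economy
rca = (X / X.sum(1, keepdims=True)) / (X.sum(0) / X.sum())
M = (rca >= 1).astype(int)                 # binary specialization matrix

# Proximity between branches p and q: co-specializations divided by the
# larger of the two ubiquities, i.e. min of the conditional probabilities
co = M.T @ M
ubiquity = M.sum(0)
proximity = co / np.maximum.outer(ubiquity, ubiquity)
np.fill_diagonal(proximity, 0)
```

High-proximity pairs are the candidate diversification routes: branches a region does not yet export but which sit close to its current specializations.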

Synergies and scaling in the occupational composition of cities
SPEAKER: Alje van Dam

ABSTRACT. Understanding the economic structure of cities is becoming increasingly important as urbanization keeps increasing throughout the world. It has been proposed that the economic output of a city is the product of the interactions between people with complementary and specialized skill sets and know-how, enabling the development of increasingly complex and hence economically valuable goods and services (Bettencourt et al., 2014). Hence a substantial part of a city's economy is determined by the number of people living in it, the diversity of their skill sets and, perhaps more importantly, how complementary the skill sets of different people within the city are to each other. In this work we study these complementarities by considering the composition of the population of cities within the United States in terms of their occupational specializations. We propose a new measure of complementarity between occupations that determines which combinations of specific occupations account for the synergies that drive the productivity of cities. Our measure is grounded in information theory and naturally extends to measures that we believe capture the ‘economic complexity’ of both occupations and cities, which can be interpreted as the degree of specialization of a given occupation and the division of labor in a city's labor force, respectively. We also relate our measure of the complexity of an occupation to the way specific occupations scale with city size, shedding light on the relation between the scaling behavior of occupations (Bettencourt et al., 2014; Youn et al., 2016) and the concept of economic complexity (Hidalgo & Hausmann, 2009), interpreted here as the level of specialization of a given occupation. We validate our measure by seeing to what extent it can explain a city's economic over- or underperformance given its population size, and the wages of individual occupations.
This work addresses the question of how the scale, diversity and integration of occupations contribute to a city's economic productivity, and improves on existing methodologies (Muneepeerakul et al., 2013) for measuring the pairwise interdependence of occupations by starting from an information-theoretic basis, allowing generalization to higher-order interactions.
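The abstract does not spell out the measure itself, so the following is only one plausible information-theoretic co-occurrence statistic on a toy city-by-occupation table: how much more often two occupations co-locate in the same city, weighted by city size, than independence would predict (a pointwise-mutual-information-style quantity, not necessarily the authors' definition):

```python
import numpy as np

# Toy city-by-occupation employment counts
E = np.array([[30., 10., 0.],
              [5., 20., 25.],
              [0., 15., 35.]])

P = E / E.sum()                          # joint distribution over (city, occupation)
p_city = P.sum(1)                        # marginal over cities
p_occ = P.sum(0)                         # marginal over occupations
cond = E / E.sum(1, keepdims=True)       # p(occupation | city)

# Probability of drawing occupations i and j from the same (size-weighted)
# city, compared against drawing them independently
joint = (p_city[:, None, None] * cond[:, :, None] * cond[:, None, :]).sum(0)
synergy = np.log(joint / np.outer(p_occ, p_occ))
```

In this toy table, occupations 1 and 2 are concentrated in the same cities and score higher than the pair 0 and 2, which never fully co-locate.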

Bettencourt, L.M.A., Samaniego, H. & Youn, H. 2014 Professional diversity and the productivity of cities. Sci. Rep. 4, 5393; DOI:10.1038/srep05393

Hidalgo C.A., Hausmann R. 2009 The building blocks of economic complexity. Proc. Natl Acad. Sci. USA 106, 10570-10575; DOI:10.1073/pnas.0900943106

Muneepeerakul R., Lobo J., Shutters S.T., Gómez-Liévano A., Qubbaj M.R. 2013 Urban Economies and Occupation Space: Can They Get ‘‘There’’ from ‘‘Here’’? PLoS ONE 8(9): e73676; DOI:10.1371/journal.pone.0073676

Youn H., Bettencourt L.M.A., Lobo J., Strumsky D., Samaniego H., West G.B. 2016 Scaling and universality in urban economic diversification. J. R. Soc. Interface 13: 20150937; DOI:10.1098/rsif.2015.0937

Zipf distribution of artists’ income

ABSTRACT. Although the production and consumption of art have long been important elements of human activity, it is only relatively recently, with William J. Baumol and William Bowen’s (1966) Performing Arts: The Economic Dilemma, that cultural economics, or more particularly the economics of the arts, emerged as an independent field of economic analysis. The economics of arts and culture covers several topics, from the participation of art markets in GDP and the sales of major artists in auction houses (Christie’s and Sotheby’s) to the consumption of art as an addiction and the pecuniary and aesthetic nature of artworks. However, the major focus of the economics of the arts is on prices: How do rates of return on investment in art compare with returns elsewhere? What are the main determinants of the price of artworks? This paper aims to contribute to the economics of the arts by examining whether commercial success in the art market conforms to an empirical concentration. To do so, I provide empirical evidence about the dispersion of income among artists. Specifically, using publicly available data on art market trends from 2002 to 2013, in terms of the annual auction sales reported by the firm Artprice, I show that the distribution of rewards artists receive from the sale of their artworks is Zipf-distributed. The paper is organized as follows. In the next section, I provide a general perspective on the literature on artists’ employment and earnings. Section three describes the self-organizing nature of many phenomena and how Zipf’s law can explain them. In sections four and five I show that artists’ income follows Zipf’s law. The paper ends with some final remarks.
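A standard way to check Zipf-like behavior is a rank-size regression in log-log coordinates; here on simulated heavy-tailed incomes rather than the Artprice data (a slope near -1 is the Zipf signature):

```python
import numpy as np

rng = np.random.default_rng(1)
income = rng.pareto(1.0, size=5000) + 1.0   # synthetic heavy-tailed incomes

# Rank-size plot: Zipf's law predicts log(income) = const - alpha * log(rank)
ranked = np.sort(income)[::-1]              # incomes from largest to smallest
ranks = np.arange(1, len(ranked) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(ranked), 1)
alpha = -slope
```

For data drawn from a Pareto distribution with tail index 1, the fitted rank-size exponent is close to 1; substantial deviations in real income data would argue against Zipf's law.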

11:00-13:00 Session 8E: Infrastructure, Planning and Environment - Urban Infrastructure

Parallel session

Location: Xcaret 3
Testing the fundamental assumptions of singly-constrained models of spatial flows

ABSTRACT. Domestic migrations, along with traffic congestion and the spread of infectious diseases, are processes in which the presence of flows induces a net change in the spatial distribution of some quantity of interest (population, vehicles, pathogens). The ability to accurately describe the dynamics of these processes depends on our understanding of the characteristics of the underlying spatial flows. Statistical models of spatial flows have traditionally been developed starting from the principle of entropy maximisation subject to various constraints, such as a finite amount of resources for travel, or using utility theory to describe individual choices over competing alternatives. Despite being derived from different principles, many of these approaches share the same fundamental assumptions, which lead to estimating flows as the product of two types of variables: one type that depends on an attribute of each individual location (e.g. the population), and another that depends on a quantity relating a pair of locations (i.e. a distance). The difference between the various models lies in the kind of variables considered, and in the specific functional forms in which these variables enter. When the estimates of these models are not accurate, it is impossible to determine whether this is due to a poor choice of explanatory variables and functional forms, or because the fundamental assumptions are not satisfied. Resolving this ambiguity requires a methodology to assess whether a set of empirical flow data is compatible with the fundamental assumptions of a class of spatial flow models.

Here we present a general framework to model spatial flows based on a limited number of fundamental assumptions about the model's structure that does not require specifying a priori a particular set of explanatory variables or functional forms; these are determined from the empirical flow data. In particular, we focus on singly-constrained models in which the probability of a unit flow from location $i$ to $j$ is $p_{ij} = w_j f(r_{ij}) \big/ \sum_k w_k f(r_{ik})$, where $r$ denotes a distance and the weights $w$ are characteristic variables of the locations. Unlike traditional approaches, which introduce additional, arbitrary assumptions on the specific functional forms of the deterrence function $f$ (usually exponential or power-law) and the weights $w$ (usually a function of local variables such as population or employment rate), in the proposed approach $f$ is expressed as a sum of adaptive basis functions that can approximate any function with arbitrary precision, while $w$ can assume any positive value. We describe a procedure to calibrate the model using an iterative maximum-likelihood algorithm and analyse its performance on synthetic data and on empirical migration and commuting data.
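The singly-constrained form above can be sketched directly; the example below uses the traditional exponential deterrence as a placeholder for the adaptive basis functions, with arbitrary coordinates and weights:

```python
import numpy as np

def flow_probabilities(w, r, f):
    """Singly-constrained model: p_ij = w_j f(r_ij) / sum_k w_k f(r_ik)."""
    F = w[None, :] * f(r)              # unnormalized attraction of j seen from i
    return F / F.sum(1, keepdims=True)

# Example with exponential deterrence f(r) = exp(-beta * r), beta = 0.5
rng = np.random.default_rng(2)
xy = rng.random((5, 2))                                  # location coordinates
r = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)   # pairwise distances
w = np.array([1.0, 2.0, 0.5, 1.5, 1.0])                  # location weights
P = flow_probabilities(w, r, lambda d: np.exp(-0.5 * d))
```

By construction, each row of `P` sums to one: the model constrains outgoing flow from each origin but not incoming flow at destinations, which is what "singly-constrained" means.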

An Integrated Participatory Computational Framework for Supporting Long-term Infrastructure Planning in Complex Policy Contexts: The Case of The Monterrey Water Master Plan

ABSTRACT. The City of Monterrey in Nuevo Leon is rapidly increasing its demand for potable water due to its growing industrial activity and population. It is widely believed that the expansion of the city's water infrastructure is a key measure needed to support future water demand. However, environmental concerns about different projects and, more importantly, climate change and water-demand uncertainty have increased the complexity of this decision.

This research describes an integrated computational framework that has been developed to support the State of Nuevo Leon's water infrastructure decisions. This framework uses three different computational models in conjunction: a Monte Carlo water-demand simulator, a hydrological water-supply model and a dynamic optimization model. The framework is used in a computational experiment with a large ensemble of future scenarios exploring a vast space of water demand and water supply conditions. The resulting database of future scenarios is then analysed using statistical clustering algorithms to identify the factors that increase or reduce the vulnerability of different infrastructure portfolios. Finally, this vulnerability assessment is used to develop adaptive infrastructure investment plans. Our results show that future water demand in the city can be met progressively through a combination of different projects. In the short term, small-to-medium-scale grey infrastructure that takes advantage of different water sources (i.e. surface and groundwater) can be used to meet future demand in the face of climate uncertainty. In the medium term, the combination of water efficiency measures and medium-size grey infrastructure projects can help the city meet future demand and potentially save close to 1 billion dollars in grey infrastructure investments.
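A stylized sketch of the ensemble-plus-scenario-discovery logic: sample uncertain factors, flag the scenarios where a portfolio fails, then ask which factor ranges concentrate the failures. All models, thresholds and numbers are invented stand-ins for the actual demand, hydrological and optimization models:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000                                          # ensemble of future scenarios
demand_growth = rng.uniform(0.0, 0.05, n)         # annual demand growth rate
supply_factor = rng.uniform(0.6, 1.2, n)          # climate-driven supply multiplier

capacity = 100.0 * supply_factor                  # stylized portfolio supply
demand_2040 = 80.0 * (1 + demand_growth) ** 20    # stylized 20-year demand projection
vulnerable = demand_2040 > capacity               # scenarios where the portfolio fails

# Scenario discovery: does a candidate factor range ("high demand growth")
# explain the vulnerable cases?
box = demand_growth > 0.03
coverage = (vulnerable & box).sum() / vulnerable.sum()  # share of failures captured
density = (vulnerable & box).sum() / box.sum()          # failure rate inside the box
```

High density with moderate coverage signals that the candidate factor range is one driver of vulnerability among several, which is exactly the kind of diagnosis that guides adaptive investment plans.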

Spatially-dependent growth model for lodgings-services network in urban areas

ABSTRACT. We present a spatially embedded growing network model that reproduces the interrelationship between lodgings and services in urban areas. We assume that a lodging is linked to a specific service if a representative person hosted in the lodging enjoys/consumes the service. These nodes (lodgings and services) are located in a metric space represented by a planar network where distance is defined by the length of geodesic paths. The growing process assumes that, at every time step, a new lodging and m new services are created and located in the metric space. The probability that the new lodging is connected to a specific service follows a preferential attachment rule inversely weighted by the length of the geodesic path between the two nodes.

This attachment law is similar to the one proposed by Xulvi-Brunet and Sokolov (Phys. Rev. E 66, 026118, 2002), but adapted to the case of a bipartite network. Moreover, we assume that the major determinant of the social distance involved in consuming a service is the intermediate attractions and lodgings; thus, the length of the geodesic path is preferred over the usually chosen Euclidean distance.
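A minimal simulation of the growth rule just described, using Euclidean distance as a stand-in for the geodesic distance on the planar network; the initial conditions, m, the number of links per lodging and the degree smoothing are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 3                                   # new services per time step
k_links = 5                             # links created by each new lodging

pos = [rng.random(2) for _ in range(5)] # seed services (positions in the unit square)
deg = [1.0] * 5                         # service degrees (+1 smoothing for newcomers)
edges = []

for t in range(1, 200):
    lodging = rng.random(2)                             # new lodging location
    for _ in range(m):                                  # m new services appear
        pos.append(rng.random(2))
        deg.append(1.0)
    # attachment probability: degree / distance (preferential, distance-penalized)
    d = np.linalg.norm(np.asarray(pos) - lodging, axis=1) + 1e-9
    p = np.asarray(deg) / d
    p /= p.sum()
    for s in rng.choice(len(pos), size=k_links, replace=False, p=p):
        deg[s] += 1.0
        edges.append((t, int(s)))
```

The competition between degree (favoring established attractions) and distance (favoring nearby services) is what lets a few major services become hubs, as observed in the empirical network.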

We have tested the model with real data. Specifically, we have sampled a real lodgings-services network in a tourist area (Maspalomas, Spain) from online recommendation data published during the period 2005-2016. The sample comprises about 78,000 opinions on 223 lodgings and 3,003 services/attractions. The geographic locations of lodgings and services in the real data were used to build the planar network in which the spatial network model is embedded. This planar network is not homogeneously distributed, since a few major services (e.g. monuments or man-made attractions) are much more highly connected to lodgings than the rest. The comparison between the simulated and real networks shows some topological similarities, such as the clustering coefficient and average path length, but also differences, such as the degree distribution. Other economic characteristics on the supply side (promotion, quality of service) or demand side (visitors' preferences) may explain part of the gap between numerical and empirical results.

Vulnerability tradeoffs in an urban socio-hydrological agent-based model due to the decision-making dynamic of influential actors
SPEAKER: Andres Baeza

ABSTRACT. As urban environments become larger and more heterogeneous, they also become highly vulnerable to water-related hazards. Investment in “hard” or grey infrastructure has historically been the primary response by local, regional, and central governments to address such risks. These investments in turn respond to social and political factors (social-political infrastructure) associated with narratives or mental models about how the world is perceived by urban authorities, stakeholders and residents. While techniques to elicit these ontologies have been developed, few studies have gone beyond representing these priorities in space to include them in the dynamic processes that generate a continual feedback among social-political infrastructure, hard infrastructure and the production of urban vulnerability. In this talk we present an agent-based model to illustrate potential consequences for socio-hydrological vulnerability in a stylized urban landscape when considering feedback between social-political infrastructure and geospatial patterns of vulnerability outcomes, via different strategies for investment in physical “hard” infrastructure. The model is motivated by the Mexico City water management system and its socio-hydrological vulnerability. In the model, a water authority agent decides where to invest limited resources, either to create new infrastructure or to invest in maintenance to reduce flooding or scarcity. These decisions are made by calculating a multi-criteria decision metric, constructed from how authorities prioritize a set of indicators of system performance, including the demand for immediate attention by neighborhoods. We simulated scenarios representing contrasting prioritization schemes by water authorities, and we conducted numerical experiments under similar biophysical and budgetary constraints.
Our results indicate that minimal changes in prioritization can have significant consequences for the transient and steady state of sustainability indicators. We show that tradeoffs in performance can emerge under managers with different priorities, even under similar biophysical conditions. Finally, we observed that, because of the complex interactions between the biophysical environment and neighborhood responses, managers can unwittingly exacerbate the very problem that most concerns them when they prioritize specific criteria to guide their decision-making. We contrast these theoretical findings with insights gained from empirical findings on Mexico City’s water governance. We discuss the development of new methods to elucidate the specifications of the cognitive processes that can mechanistically connect the decisions of dominant actors with the dynamics of the biophysical environment in complex urban systems.
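A toy version of the water authority's prioritization step: a weighted multi-criteria score over neighborhood indicators, with investment going to the highest-scoring neighborhood. Indicator values and weights are invented; in the actual model these decisions feed back into a biophysical landscape:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12                                  # neighborhoods
flood_risk = rng.random(n)              # system-performance indicators observed
scarcity = rng.random(n)                # by the water-authority agent
demand_for_attention = rng.random(n)    # neighborhood pressure on the authority

def invest_target(w_flood, w_scarcity, w_attention):
    """Multi-criteria decision metric: invest where the weighted score peaks."""
    score = (w_flood * flood_risk + w_scarcity * scarcity
             + w_attention * demand_for_attention)
    return int(np.argmax(score))

# Contrasting prioritization schemes, as in the simulated scenarios
flood_first = invest_target(0.8, 0.1, 0.1)
attention_first = invest_target(0.1, 0.1, 0.8)
```

Even this stripped-down rule makes the point of the abstract: shifting weight among criteria can redirect investment to a different neighborhood, and over repeated rounds such shifts reshape the spatial pattern of vulnerability.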

Detecting the Relationship Between Heuristic Decision-Making and the Complexity of Planning Processes

ABSTRACT. A common observation in the study of complex systems is that those systems frequently become more complex as they evolve over time. Following John Holland, we can attribute this development to the application of simple rules; they can be a catalyst for the complexity of systems. This paper aims to shed light on the relationship between simple rules and increasing complexity in the planning of transport infrastructure. I propose that human decision-makers deploy heuristics – simple decision-making rules – to deal with the complexity of their environment. I further argue that through this heuristic deployment, decision-makers contribute to the increasing complexity of the planning situation. Planning processes involve continuous public decision-making where actors – including project managers and their teams, local politicians and administrators, and citizens – must deal with ever-changing technical details as well as with each other’s desires and beliefs. They eventually need to decide on one planning option, taking into account environmental implications, effects on the population, questions of urban development, constructional feasibility, etc. Often discussions take several twists, previously discussed options are eliminated, and new options and discussion points evolve over time: the situation becomes more complex. Actors deal with this complex environment, but being the human beings they are, they cannot possibly consider all the information available to them and arrive at some rationally correct decision. As extensive psychological research has shown, they instead deploy heuristics, i.e., intuitively, often subconsciously activated decision-making mechanisms. This paper’s aim is to take a selection of experimentally well-researched heuristics and to investigate their operation in, and their influence on the complexity of, real-life decision-making processes.
To this end, I analyze the decisions and interactions of stakeholders in the ongoing railway planning processes in the cities of Bamberg (Germany) and Bergen (Norway). The analysis is based on secondary data, semi-structured in-depth interviews with stakeholders, and observations from city council meetings. In a first step, both cases are analyzed regarding (1) the stakeholders’ heuristic deployment, with the help of systematic coding and content analysis, and (2) the complexity of the planning process at t₀ and t₁, by application of a coding scheme based on Nicholas Rescher’s taxonomy of complexity. In a second step, the influence of heuristics on the respective planning process is analyzed using within-case process tracing. Finally, both cases are compared, attempting to uncover potential patterns in heuristic deployment and their influence on the evolution of the process. Preliminary findings show that actors’ perceptions of the process vary due to their heuristic deployment, leading to distorted communication and decisions that contribute to the stagnation of the process rather than to a straightforward development towards its final decision. The analysis shows that both planning processes were considerably delayed due to the heuristic decision-making of all actors. Both situations became more complex to the actors involved as more options and opinions arose during the decision-making.

The complexity of environmental infrastructure

ABSTRACT. Most would agree that the majority of designs for contemporary buildings and the built environment are simplistic in a negative sense of the word. Generally they lack the materiality, scale and sense of life that many vernacular cities and spaces possess, not to mention issues of sustainability.

Concepts of emergence and adaptation are ubiquitous in complex systems whose dynamics interact in non-linear ways. How can our design of the built environment from infrastructure to the design of cities, take advantage of this knowledge to promote more sustainable futures? Factoring in more variables and parameters is part of the equation; understanding the true costs and connections of what we do on a socio, economic and environmental level is a start. Even if we cannot measure or factor in everything, we need to start by incorporating more and not oversimplifying (being reductive). Adaptability and resiliency are also key in our current climate of multiplicity, inter-connectivity and indeterminacy.

Warren Weaver, in his pivotal 1948 essay in American Scientist, spoke of the scientific methodologies for dealing with organized complexity. He saw this as a historical development from the focus on problems of simplicity during the nineteenth century to the developments in disorganized complexity during the early twentieth century. He saw organized complexity as a middle region between the two prior extremes, unsolvable by the statistical approach suited to disorganization. Organized complexity comprised all “problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole”. Is this where we are now in the contemporary design fields? Have we finally moved past the false simplification of classicism and modernism, beyond the realms of deconstruction and meaningless parametricism, to the study of organized complexity? In the same article Weaver states that two promising developments had come out of World War II to aid in this new field of organized complexity: computation and interdisciplinary teamwork.

This paper will present various connections between complexity theory and designs for the built environment, highlighting some of the most promising examples of digital methodologies. It hypothesizes on how we can evaluate the success of these strategies in a world that is more pluralistic and open-ended, showing that disciplinary boundaries are being dissolved and that computation is the necessary glue binding these elements together.

Bayesian networks for air quality analysis in Mexico City
SPEAKER: Carlos Perez

ABSTRACT. The application of Bayesian network learning has been useful for gaining a better understanding of certain domains and for making predictions based on partial observations; for example, in applied fields like medicine, finance, industry, the environment and, recently, the social sciences. Air quality in Mexico City is a major problem because the levels of air pollution are among the highest in the world, with high average daily emissions of several primary pollutants, such as hydrocarbons, nitrogen oxides and carbon monoxide. The pollution is due primarily to transportation and industrial emissions, and when these pollutants are exposed to sunshine, they undergo chemical reactions and yield a variety of secondary pollutants, ozone being the most important. People at risk from breathing air containing ozone include people with asthma, children, older adults, and people who are active outdoors, especially outdoor workers. This indicates a real need for models able to forecast the pollution level several hours, or even a day, in advance, in order to take emergency measures, make contingency plans, estimate pollution where there are no measurements, and take preventive actions to reduce the health hazard produced by high pollution levels. This work presents some straightforward results that provide a deeper and better understanding of the air quality problem in Mexico: it primarily determines which factors are most important for the ozone concentration, leading to a data-driven estimation that takes into account only the relevant information and, as a byproduct, sheds light on the discovery of critical causes of pollution. Learning algorithms are applied to obtain an initial structure of the phenomena, and Bayesian networks are used to model the variables' interactions and reveal the relevance of these factors for estimating pollutant levels. The analysis is extended to other major cities and its potential for generalization is assessed.
The most relevant results form a cornerstone for environmental policy in terms of their potential application to planning and evaluation. The complex-systems framework makes it possible to discover unknown relations, estimate unobserved measurements, and evaluate different scenarios through the information revealed about data interactions. In other words, this Bayesian network approach constitutes solid ground for decision making in environmental policy.
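The kind of query a Bayesian network answers here can be sketched with a toy model. The structure (Traffic -> NOx -> Ozone <- Sunshine) and all probabilities below are illustrative assumptions, not the network learned in the study; inference is done by brute-force enumeration over the hidden variables.

```python
# Toy Bayesian network for ozone estimation -- a minimal sketch.
# The structure and probability tables are invented for illustration,
# not the network learned in the study.

P_traffic = {True: 0.6, False: 0.4}            # P(heavy traffic)
P_sun     = {True: 0.7, False: 0.3}            # P(strong sunshine)

def p_nox(nox, traffic):                        # P(NOx high | traffic)
    p = 0.8 if traffic else 0.2
    return p if nox else 1 - p

def p_ozone(ozone, nox, sun):                   # P(ozone high | NOx, sun)
    table = {(True, True): 0.9, (True, False): 0.3,
             (False, True): 0.2, (False, False): 0.05}
    p = table[(nox, sun)]
    return p if ozone else 1 - p

def p_ozone_given_traffic(traffic):
    """P(ozone high | traffic) by enumerating the hidden variables."""
    num = den = 0.0
    for nox in (True, False):
        for sun in (True, False):
            joint = P_traffic[traffic] * P_sun[sun] * p_nox(nox, traffic)
            num += joint * p_ozone(True, nox, sun)
            den += joint
    return num / den

print(p_ozone_given_traffic(True))   # ozone risk under heavy traffic
print(p_ozone_given_traffic(False))  # ozone risk under light traffic
```

Real applications would learn both the structure and the conditional tables from discretized monitoring data rather than fixing them by hand.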

11:00-13:00 Session 8F: Biological and (Bio)Medical Complexity - Complex diseases and public health I

Parallel session

Location: Tulum 1&2
Using heterogeneous contact patterns in the modeling of infectious diseases
SPEAKER: Dina Mistry

ABSTRACT. The dynamics of infectious diseases depend strongly on the structure of the social contact patterns among individuals. In order to accurately estimate the impact of epidemic outbreaks and identify effective control measures, we need an appropriate description of these patterns. A simple way to improve on the homogeneous mixing assumption is to introduce age-structured contact patterns. Here we follow the approach of Fumanelli et al. (PLoS Computational Biology, 8(9):e1002673, 2012) to estimate the age mixing patterns of virtual populations using highly detailed census data for Argentina, Australia, Brazil, Canada, India, Mexico, Turkey and the United States. Using the age contact matrices for these countries, we study the epidemiologically relevant quantities and their relation to the sociodemographic data. Our results show that, even within the same country, the impact of epidemic outbreaks can differ considerably once age contact matrices are taken into account. These differences can be explained by changes in the average age of the population across the regions of each country. This study also provides the first estimates of contact matrices for the countries listed above.
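How an age contact matrix enters an epidemic model can be sketched with a small age-structured SIR system. The 3x3 contact matrix, population fractions and rates below are made-up numbers, not the estimates for any of the countries in the study.

```python
import numpy as np

# Age-structured SIR sketch: the contact matrix C couples the force of
# infection across age groups.  All numbers are illustrative.

C = np.array([[18.0,  6.0, 3.0],    # contacts/day: children, adults, elderly
              [ 6.0, 10.0, 4.0],
              [ 3.0,  4.0, 5.0]])
N = np.array([0.25, 0.55, 0.20])    # population fraction per age group
beta, gamma, dt = 0.02, 0.2, 0.1    # transmission prob., recovery rate, step

S = N - np.array([1e-4, 1e-4, 0.0]) # seed a few infections (children, adults)
I = N - S
R = np.zeros(3)

for _ in range(int(300 / dt)):      # 300 days, forward Euler
    force = beta * C.dot(I / N)     # force of infection felt by each group
    dS = -force * S
    dI = force * S - gamma * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * (gamma * I)

attack_rate = R / N                 # final fraction infected per age group
print(attack_rate)
```

Swapping in a different country's matrix C (and age structure N) changes the attack rates even when beta and gamma are held fixed, which is the comparison the abstract describes.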

New Insights on HIV-1 infection dynamics

ABSTRACT. We present a multi-compartment model for HIV-1 infection dynamics. The model considers the interactions of HIV-1 with naive, activated and memory T-cells of the immune system in different body compartments: blood plasma, the interstitial spaces of lymphoid tissue, and virus attached to follicular dendritic cells in the form of immune complexes. We show that the viral and T-cell dynamics observed in clinical trials under the administration of potent antiviral drugs can be understood in terms of virus and cell creation, destruction, and circulation among the different compartments. Our study suggests that the main features and characteristic parameters of HIV-1 dynamics measured in the blood of infected patients reflect a complex dynamics taking place in lymphoid tissues.
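The single-compartment backbone of such models is the standard target-cell-limited system of viral dynamics; the talk's model extends this across several compartments. The sketch below uses textbook-style parameter values (assumptions, not the fitted values of the study) and simulates fully effective therapy.

```python
# Standard target-cell-limited model of HIV-1 dynamics under therapy --
# a single-compartment sketch.  Parameters are illustrative, not those
# fitted in the multi-compartment study.

lam, d = 1e4, 0.01          # T-cell production (cells/ml/day), death rate
k, delta = 8e-7, 0.7        # infection rate, infected-cell death rate
p, c = 100.0, 3.0           # virion production and clearance rates

T, I, V = 5e5, 1e4, 1e5     # pre-therapy state (cells/ml, virions/ml)
dt, eps = 0.001, 1.0        # time step (days); eps = drug efficacy (100%)

traj = []
for _ in range(int(30 / dt)):               # 30 days of potent therapy
    infection = (1 - eps) * k * V * T       # fully blocked when eps = 1
    dT = lam - d * T - infection
    dI = infection - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    traj.append(V)

print(traj[0], traj[-1])    # viral load decays under effective therapy
```

With eps = 1 the infected-cell pool decays exponentially and the viral load follows, the classic post-treatment decline whose measured rates constrain delta and c.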

Lymphocytes differentiation, plasticity, and immune-mediated diseases

ABSTRACT. We study the differentiation process of CD4 T lymphocytes by means of a regulatory network that integrates transcriptional regulation, signaling pathways, and a micro-environment determined by cytokine expression. The network interactions are characterized by dynamic Boolean propositions involving fuzzy logics. As a result, the model yields immune cell phenotypes giving rise to cellular, humoral, inflammatory, and regulatory responses, as well as other reported T-cell types. Plasticity and reprogramming of T-cell fates are attained by altering the micro-environmental conditions. We discuss the results of our model within the context of several immune-mediated diseases.
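The flavour of a fuzzy-logic Boolean regulatory network can be conveyed with a two-node mutual-inhibition motif. The Tbet/GATA3 caricature below, with cytokine inputs as fixed fuzzy values, is an invented toy and not the CD4 T-cell network of the talk; fuzzy AND/OR/NOT are the Zadeh operators min/max/complement.

```python
# Minimal fuzzy-logic regulatory-network sketch (illustrative only).
# Fuzzy AND = min, OR = max, NOT = 1 - x (Zadeh operators).

def update(state, ifng, il4):
    tbet, gata3 = state["Tbet"], state["GATA3"]
    return {
        # Tbet is promoted by IFN-gamma or itself, inhibited by GATA3
        "Tbet":  min(max(ifng, tbet), 1 - gata3),
        # GATA3 is promoted by IL-4 or itself, inhibited by Tbet
        "GATA3": min(max(il4, gata3), 1 - tbet),
    }

def run(ifng, il4, steps=50):
    """Iterate synchronous updates until a (numerical) fixed point."""
    state = {"Tbet": 0.0, "GATA3": 0.0}
    for _ in range(steps):
        state = update(state, ifng, il4)
    return state

print(run(ifng=0.9, il4=0.1))   # Th1-like attractor: high Tbet, low GATA3
print(run(ifng=0.1, il4=0.9))   # Th2-like attractor: the opposite
```

Changing the micro-environment (the cytokine inputs) moves the system between attractors, which is the mechanism of plasticity and reprogramming the abstract describes.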

Public health monitoring of drug interactions, patient cohorts, and behavioral outcomes via network analysis using multi-source user timelines

ABSTRACT. Social media and mobile application data enable population-level observation tools with the potential to speed translational research. We have recently demonstrated Instagram's importance for public-health surveillance of drug interactions [1]. Our methodology is based on the longitudinal analysis of social media user timelines at different timescales: day, week and month. Weighted graphs are built from the co-occurrence of terms from various biomedical dictionaries (drugs, symptoms, natural products, side-effects, and sentiment) at these timescales. We have shown that spectral methods, shortest paths, and distance closures [2,3] reveal relevant drug-drug and drug-symptom pairs, as well as clusters of terms and drugs associated with the complex pathology of depression [1]. Here we extend the approach to include validation measures for discovered drug interactions and adverse reactions via curated databases (DrugBank & SIDER); additional social media sources: Twitter, Facebook, ChaCha and the Epilepsy Foundation public forums; and multi-level network analysis with data from single patients across multiple media. We present preliminary results on the prediction of behavioral transitions for patient cohorts at risk of "Sudden Unexpected Death in Epilepsy" (SUDEP) from Facebook, where we scored their written text on the sentiment dimensions used in the prediction. We also present new links between drugs and symptoms related to depression, epilepsy, and the recent opioid epidemic in the US using user data from Twitter and Instagram. We present a methodology to identify user biographical information solely from self-reported information – available in clinical settings but often lacking in online settings – needed for estimating population-level statistics such as the age and gender of cohorts of interest.
Finally, we will showcase a general-purpose web-tool environment – featuring a virtual reality 3D knowledge network visualization – that can facilitate public health monitoring of social media for conditions, drugs, and cohorts of interest, expanding upon our previous Instagram Drug Explorer tool [1].
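The windowed co-occurrence step underlying these weighted graphs can be sketched in a few lines. The tiny dictionary and the mock posts below are invented stand-ins; the study uses curated biomedical dictionaries and real user timelines.

```python
from collections import defaultdict
from itertools import combinations

# Sketch of building a weighted term co-occurrence graph from user
# timelines at the weekly timescale.  All data below are invented.

dictionary = {"sertraline", "insomnia", "nausea", "ibuprofen"}

posts = [                                   # (user, week, text) records
    ("u1", 1, "started sertraline, awful insomnia this week"),
    ("u1", 1, "sertraline nausea is real"),
    ("u2", 1, "ibuprofen for the headache"),
    ("u2", 2, "insomnia again, no ibuprofen left"),
]

# collect matched dictionary terms per (user, week) window
windows = defaultdict(set)
for user, week, text in posts:
    tokens = {w.strip(",.") for w in text.lower().split()}
    windows[(user, week)] |= tokens & dictionary

# weight an edge by the number of windows where both terms co-occur
edges = defaultdict(int)
for terms in windows.values():
    for a, b in combinations(sorted(terms), 2):
        edges[(a, b)] += 1

print(dict(edges))
```

The resulting weighted graph is what spectral methods, shortest paths and distance closures are then applied to.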

[1] R.B. Correia, L. Li, L.M. Rocha [2016]. Pac. Symp. Biocomp. 21:492-503. (PMCID: PMC4720984)
[2] T. Simas and L.M. Rocha [2015]. Network Science, 3(2):227-268.
[3] G.L. Ciampaglia, P. Shiralkar, L.M. Rocha, J. Bollen, F. Menczer, A. Flammini [2015]. PLoS One, 10(6):e0128193.

Lifestyle diseases as Complex Adaptive Systems: Perspectives and Challenges

ABSTRACT. The phenomenology of Complex Adaptive Systems is immensely richer than that of physical systems, both structurally and, more importantly, behaviourally. Although the possible microscopic configurations of the ~10^23 atoms that make up a gas may be comparable in number to those of the atoms that make up a living organism, the phenomenology of the organism is immensely more difficult to describe than that of the gas. Correspondingly, the amount of data required to organize, understand and adequately describe the organism's phenomenology is much greater. Although this is a seemingly abstract consideration, it lies at the very heart of developing a better understanding of disease. Any disease, seen correctly as a Complex Adaptive System, is immensely multifactorial. However, chronic diseases that are highly lifestyle dependent, such as obesity, type 2 diabetes and many cancers, are even more challenging, involving factors across the entire spectrum of scales and disciplines: genetics, epigenetics, cell biology, physiology, psychology, neuroscience, epidemiology, sociology, economics, politics and ethics. An important reason why, in spite of huge investment in research, such diseases are still on the increase is that there is no "cure" associated with a particular discipline, such as a vaccine. Rather, the risk factors form a complex causal network of interactions, and no particular factor is much more important than the others. In this presentation we will discuss the challenges of the predictive modelling of chronic diseases. In particular, we will discuss those challenges in the context of the construction of predictive models for obesity developed from data obtained from a cohort of 1,076 workers and researchers at UNAM, for whom over 3,000 variables were measured: epidemiological, social, physiological and genetic.
From the analysis of these data we show explicitly how different factor types – nutrition, lifestyle, personal and family antecedents, genetics, health knowledge and demographics – all contribute to the overall risk, emphasising that each risk factor can be individually quantified and contrasted with others. Such risk depends on an enormous spectrum of factors, indicating that there is no "magic bullet" solution. We will show that, although highly predictive models can be created, there are significant challenges in understanding what a model is telling us and how to turn it into actionable information at both the clinical and public policy levels. We believe that this type of model also illustrates many of the difficulties to be faced in modelling any Complex Adaptive System.

Animal social networks relevant to disease transmission among free-roaming dogs in Chad
SPEAKER: Laura Ozella

ABSTRACT. Animal social network analysis is increasingly used to understand many ecological and epidemiological processes. Knowledge of the contact structure of an animal population is the first step in predicting and controlling disease outbreaks, including zoonoses. Free-roaming domestic dogs (Canis familiaris) are hosts of a variety of zoonoses (e.g., rabies) in most of Africa, and little is known about the population dynamics of these animals. Modeling efforts are usually challenged by the limited availability of data on dog movements and mixing patterns. We used wearable proximity sensors to detect close-range interactions between free-roaming domestic dogs living in 4 villages in rural Chad. The study included 138 dogs, 60 females and 78 males, classified by age as pups (birth to 6 months), juveniles (6 months to 1 year), subadults (1-2 years), and adults (more than 2 years). Every dog belonged to one of 91 households, with 32 households hosting more than one dog. The experimental period lasted from 2 to 12 days, depending on the village. We also used GPS trackers to estimate the dogs' home ranges and to understand their movement behavior. Furthermore, we combined the proximity sensor data and the GPS data to obtain a multi-layer proximity network description based on different contact definitions. We defined temporal contact networks and contact matrices, and determined mixing patterns by age and by gender. Contacts occurred mostly between dogs living in the same household; however, 98% of the dogs also had contacts with dogs living in a different household. Contacts within households occurred mainly among pups, while inter-household contacts were mostly between subadults and juveniles and between subadults and adults. The results show that pups are more sociable with dogs of the same litter, while dogs older than 6 months tend to interact with dogs living in different households.
With respect to gender, contacts occurred mainly between males and females, both within and across households. The temporal evolution of the number of contacts showed, in each village, distinct temporal features, specifically daily oscillations with two activity peaks, one in the morning (6AM - 8AM) and one in the evening (6PM - 8PM), consistent with the typical daily activity patterns of dogs. Moreover, our results showed a positive and significant correlation between the time dogs spent in proximity as detected by the proximity sensors and as estimated from the GPS data. Our study shows the feasibility of accurately measuring contact patterns among free-roaming dogs in a rural African context, providing novel insights into the structure and behavior of animal contact networks and their implications for disease transmission.

True miRNOME Landscape in Retinoblastoma Analysis Reveals a Critical 30 miRNA Core

ABSTRACT. miRNAs exert their effect through a negative regulatory mechanism, silencing protein expression upon hybridizing to their target mRNAs; miRNAs therefore occupy a prominent position in the control of many cellular processes, including carcinogenesis. High-throughput tools to assess a miRNome are mainly based on microarrays and RNA-seq. Published analyses of expression microarrays, whether of miRNA or mRNA species and especially in the cancer field, largely lack a proper and robust integrative approach to describing their findings. In this work we examine, with a broad perspective, whole-miRNome expression using a high-throughput microarray platform covering 2578 mature miRNAs in 12 samples of primary human retinoblastoma, an intraocular malignant tumor of early childhood and probably the most robust clinical model of genetic predisposition to cancer, in which the first tumor suppressor gene, RB1, was identified. miRNA studies on retinoblastoma have been limited to specific miRNAs previously reported in other tumors or to medium-density arrays. This work delineates the miRNA landscape in human retinoblastoma samples with a non-biased approach, using as an initial guide discretized data from detection call scores as an approximation to the "ON"/"OFF" (expressed/not expressed) state of each miRNA. With these data we generated a highly informative hierarchical map of miRNAs, in which we discovered a core cluster of 30 miRNAs highly expressed in all cases, a cluster of 993 expressed in no case, and 1022 variably detected across samples, accounting for inter-tumor heterogeneity. We explored the mRNA targets, pathways and biological processes affected by some of these miRNAs; from this exploration we propose that the 30-miRNA core represents a shared miRNA machinery in retinoblastoma affecting most pathways considered hallmarks of cancer.
Interestingly, 36 miRNAs were differentially expressed between males and females; some of their potential pathways are associated with hormones and developmental processes. We also identified miR-3613 as a potential down-regulator hub, because it is highly expressed in all the samples and has at least 36 tumor suppressor genes, including the RB1 gene itself, as potential mRNA targets. Our results indicate that human retinoblastomas share a common and fundamental miRNA expression profile regardless of heterogeneity. This work also shows how relevant oncology concepts like inter-tumor heterogeneity or oncogene addiction can be uncovered and described with high-throughput data, closing the vocabulary and tooling gap frequently found when different fields intersect.

11:00-13:00 Session 8G: Socio-Ecological Systems (SES) - Social Systems and Human Interactions

Parallel session

Location: Cozumel 3
Leveraging Computational Social Science to address Grand Societal Challenges

ABSTRACT. The increased access to big data about social phenomena in general, and network data in particular, has been a windfall for social scientists. But these exciting opportunities must be accompanied with careful reflection on how big data can motivate new theories and methods. Using examples of his research in the area of networks, Contractor will argue that Computational Social Science serves as the foundation to unleash the intellectual insights locked in big data. More importantly, he will illustrate how these insights offer social scientists in general, and social network scholars in particular, an unprecedented opportunity to engage more actively in monitoring, anticipating and designing interventions to address grand societal challenges.

Humans display a reduced set of consistent behavioral phenotypes in dyadic games

ABSTRACT. Socially relevant situations that involve strategic interactions are widespread among animals and humans alike. These situations are commonly studied in economics, psychology, political science, and sociology, typically using a game theoretic framework to understand how decision-makers approach conflict and cooperation under highly simplified conditions, generating valuable insights about human behavior. However, most of the results reported so far have been obtained from a population perspective and considered one specific conflicting situation at a time. This makes it difficult to extract conclusions about the consistency of individuals’ behavior when facing different situations and to define a comprehensive classification of the strategies underlying the observed behaviors.

Here, we attempt to shed light on this issue by focusing on a wide class of simple dyadic games that capture two important features of social interaction, namely, the temptation to free-ride and the risk associated with cooperation. For this purpose, we present the results of a lab-in-the-field experiment in which subjects face four different dyadic games, with the aim of establishing general behavioral rules dictating individuals' actions. The games used in our study are the Prisoner's Dilemma, the Stag Hunt, the Snowdrift game and the Harmony game. We recruited 541 subjects of different ages, educational levels, and social statuses. The experiment consisted of multiple rounds in which participants were randomly assigned partners and randomly chosen payoff values. By varying two parameters of the payoff matrix, we obtained 121 different games, which allowed us to study the behavior of the same subject in a wide range of situations while simultaneously obtaining data on various observables, such as the tendency to cooperate and the risk aversion of the subjects.

By analyzing our data with a robust unsupervised classification method, the K-means clustering algorithm, we find that all the subjects conform, with a large degree of consistency, to a limited number of behavioral phenotypes (Envious, Optimist, Pessimist, and Trustful), with only a small fraction of undefined subjects. In agreement with abundant experimental evidence, we have not found any purely rational phenotype: the strategies used by the four relevant groups are, to different extents, quite far from self-centered rationality. We also discuss possible connections to existing interpretations based on a priori theoretical approaches. Our findings provide a relevant contribution to the experimental and theoretical efforts toward the identification of basic behavioral phenotypes in a wider set of contexts without aprioristic assumptions regarding the rules or strategies behind actions. From this perspective, our work contributes to a fact-based approach to the study of human behavior in strategic situations, which could be applied to simulating societies, scenario building for policy-making, and even a variety of business applications.

Poncela-Casasnovas, Julia, et al. "Humans display a reduced set of consistent behavioral phenotypes in dyadic games."Science Advances 2.8 (2016): e1600451.
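The K-means step used to extract phenotypes can be sketched directly. The synthetic four-blob data below merely imitates well-separated clusters in a two-dimensional space of behavioral observables; it is not the experimental data set, and the pure-NumPy implementation is a minimal version of the algorithm.

```python
import numpy as np

# Minimal k-means on two behavioral observables (e.g. cooperation
# tendency vs risk aversion).  Data and dimensions are illustrative.

rng = np.random.default_rng(0)
centers = np.array([[0.2, 0.2], [0.2, 0.8], [0.8, 0.2], [0.8, 0.8]])
X = np.vstack([c + 0.05 * rng.standard_normal((50, 2)) for c in centers])

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), k, replace=False)]   # init from data points
    for _ in range(iters):
        # assign each subject to its nearest centroid
        labels = np.argmin(((X[:, None] - mu) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its cluster
        new_mu = np.array([X[labels == j].mean(0) if (labels == j).any()
                           else mu[j] for j in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    return labels, mu

labels, mu = kmeans(X, k=4)
print(mu)   # recovered centroids, one per phenotype (order arbitrary)
```

In the study the number of clusters and their robustness were of course validated against the data rather than fixed a priori as here.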

Mixing, homophily and avoidance in a scientific conference

ABSTRACT. During the GESIS Winter Symposium 2016, we set up a SocioPatterns experiment in order to study how people mix during a scientific conference. Using the SocioPatterns system, we recorded the face-to-face contacts between participants. We also asked them to answer a survey covering general sociodemographic information, along with a personality test based on the Big Five model. The Big Five model is a standard test in psychology that scores five aspects of an individual's personality: openness, conscientiousness, extraversion, agreeableness, and neuroticism.

We thus gathered data on how participants interacted during the conference, and measured their mixing behaviour based on sociodemographic attributes. We found, for instance, that no gender bias could be measured, but that there existed avoidance strategies based on age class. We were also interested in investigating the existence of homophilic or heterophilic behaviours between individuals of various personality types: for example, whether extraverted individuals are more likely to connect with other extraverts, or whether neurotic individuals avoid other personality types. We could not find evidence of such homophilic or heterophilic behaviour linked to personality types.

Finally, we aimed to determine whether personality types, as defined and measured by the Big Five model, could predict features of real social interactions, such as the number of persons encountered, the average duration of interactions, and so on. Preliminary results seem to show that personality traits have no predictive power for social behaviour in this context. These results need to be confirmed by future studies.

Hunter-gatherer networks and cumulative culture

ABSTRACT. Social networks in modern societies are highly structured, usually involving frequent contact with a small number of unrelated friends. However, contact network structures in traditional small-scale societies, especially hunter-gatherers, are poorly characterized. We developed a portable wireless sensing technology (motes) to study within-camp and inter-camp proximity networks among Agta and BaYaka hunter-gatherers in fine detail. We show that hunter-gatherer social networks exhibit signs of increased efficiency for potential information exchange (see Refs.[1,2] for full details).

In particular, to estimate global network efficiency[3], we first built weighted social networks using our motes proximity data from Agta and BaYaka camps, and subdivided the networks into three decreasing levels of relatedness: close kin, extended family and non-kin. We estimated the contribution of each relatedness level to global network efficiency by comparing our hunter-gatherer network structures with randomly permuted networks. Our analyses show that randomization of interactions among either close kin or extended family (including affinal kin) does not affect the global efficiency of hunter-gatherer networks. In contrast, randomization of non-kin relationships (friends) greatly reduces global network efficiency. Therefore, increased global efficiency in our networks results from investing in a few strong close friends in addition to an extended net of social acquaintances, or a combination of strong and weak ties[4]. In agreement with classic studies of small-world networks[5], our results show that only a few shortcuts (friendships) connecting closely knit clusters (households consisting mostly of close kin) suffice to significantly reduce the average path length or distance between any two points across the whole network, thus reducing redundancy and the cost of maintaining strong links with a large number of unrelated individuals. Since unrelated individuals often live in different households, they provide a small number of reliable shortcuts between households. Both the Agta and BaYaka had between one and four unrelated close friends with whom they interact as frequently as with close kin. This number is consistent across ages and camps, and with the finding that people in western societies are in close contact with an average of four friends[6].

We also show that interactions with non-kin appear in childhood, creating opportunities for collaboration and cultural exchange beyond family at early ages. We also show that strong friendships are more important than family ties in predicting levels of shared knowledge among individuals. We hypothesize that efficient transmission of cumulative culture[7-10] may have shaped human social networks and contributed to our tendency to extend networks beyond kin and form strong non-kin ties.
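The global-efficiency comparison described above can be sketched on a toy graph. Global efficiency is the mean over ordered pairs of 1/d_ij, with d_ij the shortest-path distance; the two-household toy below (with an optional bridging friendship) is an invented illustration, not the Agta/BaYaka data.

```python
import numpy as np

# Global efficiency E = mean over pairs of 1/d_ij -- a sketch of the
# metric used to compare kin and non-kin ties.  Toy graph only.

def global_efficiency(adj):
    n = len(adj)
    dist = np.where(adj > 0, 1.0, np.inf)      # unweighted: edge length 1
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                          # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    inv = 1.0 / dist[~np.eye(n, dtype=bool)]    # 1/inf = 0 if disconnected
    return inv.mean()

def two_households(bridge):
    """Two 4-person cliques; optionally one non-kin tie between them."""
    A = np.zeros((8, 8))
    A[:4, :4] = A[4:, 4:] = 1
    np.fill_diagonal(A, 0)
    if bridge:
        A[0, 4] = A[4, 0] = 1
    return A

e_without = global_efficiency(two_households(bridge=False))
e_with = global_efficiency(two_households(bridge=True))
print(e_without, e_with)   # one friendship tie raises global efficiency
```

This is the small-world effect the abstract invokes: a single non-kin shortcut between households sharply increases whole-network efficiency, whereas shuffling within-household (kin) ties leaves it essentially unchanged.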

Evidence for a conserved quantity in Human Mobility

ABSTRACT. Faced with effectively unlimited choices of how to spend their time, humans are constantly balancing a trade-off between exploitation of familiar places and exploration of new locations. Previous analyses have shown that at the daily and weekly timescales individuals are well characterized by an activity space of repeatedly visited locations. How this activity space evolves in time, however, remains unexplored. Here we analyse high-resolution spatio-temporal traces from 850 individuals participating in a 24-month experiment. We find that, although activity spaces undergo considerable changes, the number of familiar locations an individual visits at any point in time is a conserved quantity. We show that this number is similar for different individuals, revealing a substantial homogeneity of the observed population. We point out that the observed fixed size of the activity space cannot be explained in terms of time constraints, and is therefore a distinctive property of human behavior. This result suggests an analogy with the so-called Dunbar number describing an upper limit to an individual's number of social relations, and we anticipate that our findings will stimulate research bridging the study of Human Mobility with the Cognitive and Behavioral Sciences.

Self-Organized Anticipatory Synchronization of Chaotic Human Behavior by Artificial Agents During Real Time Interaction

ABSTRACT. Rapid advances in cyber-technologies and robotics present increasing opportunities for the implementation of interactive, artificial agents within contexts of human behavior. This includes, but is not limited to, assistance during the performance of everyday tasks and the development of new skills. Work has already been done, for example, on virtual agents able to assist elderly individuals with the organization of daily activities, and on robots whose structured interaction may help to improve interpersonal coordination in children with autism spectrum disorders. However, researchers have recently drawn attention to the fact that engineers designing virtual and robotic agents do not always prioritize the aspects that allow for smooth, effortless human interaction, while psychologists studying interpersonal or joint action do not always take technical realizability into account when describing what they see as the fundamental aspects of successful multi-agent coordination. One potential solution to this issue is to identify and model the behavioral dynamics of natural human-human interaction using low-dimensional differential equations that can be easily implemented within interactive robotic or machine systems. Recent work has provided support for the idea that relatively simple self-sustaining, nonlinear dynamical systems can be used to construct virtual interaction partners capable of successful, flexible coordination with human actors. The development of these agents has primarily focused on their ability to coordinate with periodic behaviors, or to synchronize with fluctuating movement speeds using a velocity estimation algorithm. However, one only has to consider a pedestrian navigating a busy city sidewalk to be reminded that people are often capable of prospectively coordinating their behavior with highly variable, seemingly unforeseeable events in an effortless manner.
Our own recent research in human motor control and joint-action has demonstrated that small perceptual-motor feedback delays, such as those known to exist within the human nervous system, may actually facilitate the ability to achieve anticipation of such continuous chaotic events. This phenomenon, referred to as strong anticipation or self-organized anticipatory synchronization, has been found to emerge when a unidirectional coupling exists between a “slave” system and a chaotically behaving “master” system. Surprisingly, as the slave system begins to synchronize with the chaotic behavior of the master system, the introduction of small temporal feedback delays results in the slave system anticipating the ongoing behavior exhibited by the chaotic master system. Understanding human anticipatory behavior as defined by the same universal dynamical laws as other physical systems provides a novel opportunity to inform the advancement of artificial agents. The goal of the current project was therefore to harness the phenomenon of anticipatory synchronization in developing an artificial agent capable of achieving adaptive anticipation during interaction with a human co-actor. Here individuals interacted with a robot avatar defined by a time-delayed, low-dimensional dynamical model via a virtual reality headset. This agent displayed prospective coordination with seemingly unpredictable human behavior, making this work the first to employ the understanding of anticipatory synchronization in physical systems for the creation of an artificial agent capable of anticipating complex human behavior in real time.
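The delay-coupling scheme behind anticipatory synchronization can be sketched numerically. In the Voss-style scheme, a slave system y' = f(y) + K(x(t) - y(t - tau)) driven by a chaotic master x can, for suitable K and tau, synchronize so that y(t) tracks x(t + tau), i.e. the slave anticipates the master. The Rossler system, the gain K and the delay tau below are illustrative choices for demonstration, not the human-interaction model of the talk.

```python
import numpy as np

# Delay-coupled "anticipating synchronization" toy (Voss-style scheme).
# All parameter values are illustrative assumptions.

def rossler(v, a=0.2, b=0.2, c=5.7):
    x, y, z = v
    return np.array([-y - z, x + a * y, b + z * (x - c)])

dt, tau_steps = 0.01, 20              # delay tau = 0.2 time units
K = np.array([0.3, 0.3, 0.3])         # diagonal coupling gain
steps = 60000

x = np.array([1.0, 1.0, 1.0])         # master state
s = np.array([0.9, 1.1, 1.0])         # slave state
buf = [s.copy() for _ in range(tau_steps)]   # slave delay line

xs, ss = [], []
for _ in range(steps):                # forward Euler with a delay buffer
    s_delayed = buf.pop(0)
    x_new = x + dt * rossler(x)
    s_new = s + dt * (rossler(s) + K * (x - s_delayed))
    buf.append(s_new.copy())
    x, s = x_new, s_new
    xs.append(x); ss.append(s)

xs, ss = np.array(xs), np.array(ss)
# anticipation error: compare the slave's past with the master's present;
# small values mean y(t - tau) tracks x(t), i.e. the slave leads the master
err = np.abs(ss[-5000 - tau_steps:-tau_steps, 1] - xs[-5000:, 1]).mean()
print(err)
```

Whether anticipation is achieved depends sensitively on K and tau; the point of the sketch is only the structure of the scheme: a small temporal feedback delay in the slave's own state, not a predictive model, is what produces the prospective coordination.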

Options for transfer of ecosystem research methodologies from ecology to innovation research
SPEAKER: Dries Maes

ABSTRACT. During the last decades, companies have increasingly moved from short-term-oriented closed innovation towards more lasting open innovation structures. Open innovation denotes the use of external knowledge in R&D processes, as well as the application of internal knowledge for the benefit of external partners. These growing networks are described as innovation ecosystems featuring higher innovation capacities and regional stability. In parallel, the study of open innovation has moved towards system-level thinking in recent years, reinforcing the use of the innovation ecosystem concept to characterize the intricate, complex structure of innovation network activities in a region. The ecosystem metaphor reflects the increasing interconnectedness of innovation networks and the ever-present changes in network structures and dynamics.

The study of innovation ecosystems has lately been criticized for using a mere ecosystem analogy as a research concept. This criticism has led scholars to review the merits of the innovation ecosystem concept and to call for increasingly rigorous applications. This discussion, however, is largely confined to the domain of innovation research. Reviews and conceptual discussions rarely include the latest developments in concepts or methodologies from ecology and the biological sciences. This not only limits the scope of the discussion, it also forecloses opportunities for interdisciplinary learning. Other concepts, like resilience, have shown that crossing disciplinary boundaries can open up an entirely new perspective and may induce new avenues of research, such as the work on resilience of socio-ecological systems.

In this paper we review, side by side, the methodologies used for ecosystem research in ecology and in economic science, with the aim of identifying where innovative methodologies from ecology can be transferred to the study of innovation ecosystems. We focus on methodologies in order to go beyond concepts and thinking frameworks. For each methodology, the theoretical foundation, underlying assumptions and benefits are reviewed. The review of the underlying assumptions takes the differences between the two scientific domains into account, and checks whether ecological methods are applicable to systems that exhibit markets, coordination and foresight. Some methodologies, such as input-output analysis, seem to have evolved separately in the two domains. Others, such as ascendency measurements, are attracting growing interest in ecology but are only scarcely applied to socio-ecological systems, revealing options for new approaches in innovation research.
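One of the candidate transfers mentioned above, ascendency, is straightforward to compute: it is the total system throughput (TST) times the average mutual information (AMI) of the flow structure (Ulanowicz's index). The 4-node flow matrix below, standing in for knowledge or money flows between innovation actors, is invented for illustration.

```python
import numpy as np

# Ulanowicz-style ascendency of a weighted flow network -- a sketch.
# The flow matrix is invented; T[i, j] = flow from node i to node j.

T = np.array([[0., 10., 5., 0.],
              [0.,  0., 8., 2.],
              [3.,  0., 0., 6.],
              [1.,  4., 0., 0.]])

TST = T.sum()                          # total system throughput
Ti, Tj = T.sum(axis=1), T.sum(axis=0)  # outflows and inflows per node

# average mutual information of the flow structure (in bits)
mask = T > 0
AMI = (T[mask] / TST * np.log2(T[mask] * TST
                               / np.outer(Ti, Tj)[mask])).sum()

ascendency = TST * AMI
print(TST, AMI, ascendency)
```

High ascendency indicates an organized, channelled flow structure; in an innovation-ecosystem reading it would quantify how constrained and articulated the knowledge flows of a region are.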

11:00-13:00 Session 8H: Complexity in Physics and Chemistry - Soft matter

Parallel session

Location: Xcaret 4
Scaling laws and collective dynamics in far from equilibrium growth-fragmentation processes

ABSTRACT. The kinetic theory of far-from-equilibrium growth-fragmentation processes provides natural mathematical models for many types of complex systems, and the phenomenology of these models is very rich. They sometimes reach steady states that exhibit power-law scaling of the cluster size distribution over some range of scales. However, depending on the details and relative strength of the growth and fragmentation mechanisms, we may find a transition between a stationary stable phase, in which some maximum characteristic size is reached, and a non-stationary growing phase, in which the characteristic cluster size grows indefinitely. In some growing phases, the characteristic cluster size can diverge in finite time – a mechanism for very rapid formation of very large clusters. In some stable phases, the stationary cluster size distribution can be unstable, giving way to a regime in which the kinetics become oscillatory, with the largest clusters appearing and disappearing periodically. In this talk, I will discuss recent developments in this area and relate them to some of the commonly discussed phenomenology of complex systems.
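The growth/fragmentation competition can be caricatured with a tiny Monte Carlo simulation: clusters gain one unit at rate g and split uniformly at rate f. The rates and kernel below are illustrative assumptions; the talk's kinetic theory treats far more general growth and fragmentation kernels.

```python
import random

# Monte Carlo caricature of a growth-fragmentation process.
# g, f and the uniform binary splitting kernel are illustrative.

random.seed(42)
g, f = 1.0, 0.3                     # growth vs fragmentation strength
p_grow = g / (g + f)
clusters = [1] * 50                 # start from 50 monomers

for _ in range(20000):
    i = random.randrange(len(clusters))      # pick a cluster at random
    if random.random() < p_grow:             # growth: add one unit
        clusters[i] += 1
    elif clusters[i] > 1:                    # binary fragmentation
        left = random.randint(1, clusters[i] - 1)
        clusters.append(clusters[i] - left)  # (monomers cannot split)
        clusters[i] = left

sizes = sorted(clusters, reverse=True)
print(len(clusters), sizes[:5])    # cluster count and largest clusters
```

Sweeping the ratio g/f in such a toy moves the system between a phase where sizes stay bounded and one where the largest clusters keep growing, the transition discussed in the abstract.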

Universality of fractal to non-fractal transitions in stochastic growth processes

ABSTRACT. From the formation of lightning paths to vascular networks, stochastic growth processes of pattern formation give rise to intricate structures spread everywhere and at all scales in nature, often referred to as fractals. One striking feature of these growth processes is the fractal to non-fractal morphological transitions that they undergo as a result of the interplay of the entropic and energetic aspects of their growth dynamics, which ultimately manifest in their structural geometry. However, due to the lack of a complete far-from-equilibrium scaling theory to describe them, an important aspect of the theory, dealing with the nature of these transitions and the best quantities to characterize them, is still in need of a comprehensive description. In this work, we present a framework for the study of these transitions based on the concepts and tools of information theory and fractal geometry. First, by means of three fundamental two-dimensional aggregation models (Diffusion-Limited Aggregation, Ballistic Aggregation, and the Mean-Field infinite interaction model), we present four fractal to non-fractal transitions that are able to reproduce all the main morphologies observed in fractal growth. Second, we present a general dynamical model for the information dimension of the clusters, whose solution is able to describe their fractality along the respective transition. As the main result, we find that the effective scaling and fractality of all these transitions, including that of the paradigmatic Dielectric Breakdown Model (the basis of the fundamental Laplacian growth theory), can be described by a single universal equation, regardless of the symmetry-breaking process that governs the transition, the initial configuration of the system, and the Euclidean dimension of its embedding space.

References: Nicolás-Carlock, J.R., Carrillo-Estrada, J.L., and Dossetti, V. "Fractality a la carte: a general aggregation model," Scientific Reports 6, 19505 (2016); "Universality in fractal to non-fractal morphological transition of stochastic growth processes," e-print arXiv:1605.08967 (2016) (under revision).
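The Diffusion-Limited Aggregation model referenced above can be sketched in a few lines. The following minimal on-lattice version (an illustration only; the authors' work also covers ballistic and mean-field variants and off-lattice growth) grows a cluster by releasing random walkers that stick irreversibly on first contact:

```python
import random

def dla_cluster(n_particles=120, size=41, seed=2):
    """Minimal on-lattice DLA on a periodic lattice: each walker spawns at
    a random empty site, performs a nearest-neighbour random walk, and
    sticks as soon as it is adjacent to the cluster."""
    rng = random.Random(seed)
    c = size // 2
    occupied = {(c, c)}                               # seed particle at centre
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        x, y = rng.randrange(size), rng.randrange(size)
        while (x, y) in occupied:                     # do not spawn on the cluster
            x, y = rng.randrange(size), rng.randrange(size)
        # walk until adjacent to an occupied site, then stick
        while not any(((x + ax) % size, (y + ay) % size) in occupied
                      for ax, ay in moves):
            dx, dy = rng.choice(moves)
            x, y = (x + dx) % size, (y + dy) % size
        occupied.add((x, y))
    return occupied
```

The resulting set of occupied sites forms the branched, ramified cluster whose information dimension the abstract's dynamical model describes.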

Universal modeling approach for the topological structure of multifunctional polymer networks using random graphs

ABSTRACT. When speaking about soft matter and its physical properties, the material is most of the time thought of and modeled as continuous. However, if one zooms in on the material, its discrete structure comes into sight: single molecules connected with each other. Often these molecules form not only simple chains but strongly crosslinked, complex networks. It is this topology that defines the macroscopic physical properties of a material. Considering the example of a polymerization process, the emerging topology defines the phase transition from the liquid to the solid state. We developed a novel mathematical model to predict the topology of highly crosslinked growing networks and to derive global properties of these networks. The complex topology of these networks can be described by a graph. If we model polymer networks, the nodes of the graph are interpreted as monomer units, the edges as chemical bonds between monomer units, and the connected components as whole molecules consisting of connected monomers. Further, we view the evolution of the growing network as a random process. Hence, the network is modeled as a configuration model for random graphs with directed and undirected edges, using generating functions. The trivariate degree distribution that defines the random graph is given for every moment of time and obeys the statistics of the local chemical reactions. Utilizing our extended random graph formalism, we derive an expression for the weak component in the generating function domain. An exact analytic criterion for a phase transition in the network (the emergence of the giant component) is obtained, which corresponds to the physical phase transition in the chemical system from the pre-gel to the gel regime. This criterion only requires the moments of the trivariate degree distribution as input.
Furthermore, global properties are calculated, such as the component size distribution, the gel fractions for different classes of species, and the distribution of distances between crosslinks. The method is illustrated with the example of photocuring of hexanediol diacrylate (HDDA), including initiation, propagation and termination reactions. As the method is highly universal, it can be applied to various other chemical systems.
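The role of degree-distribution moments in the gelation criterion can be illustrated in the simplest undirected case, where the criterion reduces to the classical Molloy-Reed condition <k^2> - 2<k> > 0; the authors' criterion generalizes this to moments of a trivariate distribution. The trifunctional monomer below is a hypothetical example:

```python
from math import comb

def gel_point_reached(degree_probs):
    """Giant-component (gel) criterion for an undirected configuration
    model: a gel exists when <k^2> - 2<k> > 0 (Molloy-Reed)."""
    k1 = sum(k * p for k, p in degree_probs.items())
    k2 = sum(k * k * p for k, p in degree_probs.items())
    return k2 - 2 * k1 > 0

def trifunctional(conversion):
    """Hypothetical trifunctional monomer: each of its 3 reactive groups
    has reacted independently with probability `conversion`, giving a
    binomial degree distribution."""
    return {k: comb(3, k) * conversion**k * (1 - conversion)**(3 - k)
            for k in range(4)}
```

For this monomer the criterion reproduces the classical Flory gel point at conversion 1/2: below it the system is in the pre-gel regime, above it a giant molecule appears.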

Topological properties of hydrogen bond networks for water in different thermodynamic phases

ABSTRACT. The study of water has always been a very active area of research in the scientific community because it is the most abundant liquid on the planet, it is fundamental for life, and it presents a very rich phase diagram. Besides, although water has a very simple molecular structure, it exhibits a great number of anomalies not found in most simple one-component fluids. These anomalies are mainly related to the formation of hydrogen bonds among molecules, so a good water model should be able to reproduce the hydrogen bond network at different thermodynamic states. In this work, we have used the Molecular Dynamics simulation technique to study several water models (TIP5P, TIP4P/2005 and TIP4P/Ice) at different temperature and pressure conditions to simulate single-phase properties and their transitions. From the equilibrated simulated configurations we have built networks for the structure of water, characterizing the hydrogen bonds with three different geometrical criteria. Once the corresponding networks are established, we computed topological properties such as the average degree, the clustering coefficient (C), the average path length (L) and the degree distributions. The networks were created with the purpose of analyzing the behavior of the topological properties in different single thermodynamic phases (gas, liquid and solid) and in the neighborhood of the transitions between them. In general, we observed that the topological properties are sensitive to the selected water model and/or hydrogen bonding criterion in the different single phases. Besides, some of the topological properties, such as the clustering coefficient or the average degree, can detect a change of phase. The single-phase properties near the coexistence lines can differ by approximately one order of magnitude.
As a conclusion, the topological properties of the hydrogen bond networks are a good indicator for characterizing the distinct thermodynamic phases of water (solid, liquid and vapor) and their transitions. Besides, the topological properties provide an economical way of testing different hydrogen bonding criteria while building better water models.
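The network-building and measurement steps can be sketched as follows, assuming a purely distance-based hydrogen bond criterion (real geometrical criteria also involve angles, and the `cutoff` value is a hypothetical stand-in):

```python
from itertools import combinations
from math import dist

def hbond_network(positions, cutoff=3.5):
    """Build an undirected bond network from molecular positions: two
    molecules are bonded when their distance is below `cutoff`."""
    adj = {i: set() for i in range(len(positions))}
    for i, j in combinations(range(len(positions)), 2):
        if dist(positions[i], positions[j]) < cutoff:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def average_degree(adj):
    """Mean number of bonds per molecule."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def clustering(adj):
    """Mean local clustering coefficient over all molecules."""
    cs = []
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)
```

Applied to equilibrated configurations at different state points, phase-sensitive changes would show up as jumps in quantities like the average degree or clustering coefficient.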

Auto-generated Reaction Networks for Polymerization of Triacylglycerides
SPEAKER: Yuliia Orlova

ABSTRACT. Many phenomena in life and nature can be viewed as stochastic processes in which the system goes from one state to another following some finite number of rules. Knowing the initial state of the system and the mechanism of transformation between states, one can recover the whole configuration space of the process. In mathematical linguistics, this principle has long been exploited to generate a set of well-formed sentences by applying a generative grammar to an initial dictionary of simple words. In the same way, we define rules on molecules and automatically form new species, which are able to react further and keep the system alive until the whole reaction network is completely recovered. In this manner we explore the polymerization process of triacylglycerides, where thousands of competing reactions are happening from the very beginning of the polymerization process. Our main purpose is to construct a kinetic model that captures the distribution of concentrations of all intermediate species and products over time. We suggest a new methodology, consisting of automated generation of reaction mechanisms, which results in a complete chemical reaction network. In order to generate this network, we define reaction rules for each type of reaction in the system. Molecules are viewed as molecular graphs; thus, the reaction rules are described as a grammar on patterns (subgraphs of molecular graphs) that correspond to the reactive sites of the molecule. In this setup, a reaction happens only between the patterns of a reactant and a product. The next issue to tackle is recognizing these patterns in a molecule, which brings us to the subgraph isomorphism problem. For this purpose we utilize the FastOn algorithm, which is based on the Ullmann algorithm and aims to reduce the subgraph search space. Every time a transformation of patterns happens, it is recorded in the reaction network.
Thus, we end up with a list of all configurations of the triacylglyceride monomer and the complete reaction network. With this information we convert the network into a kinetic model, which provides detailed information about the concentrations of all species in our system. Furthermore, one can extract and investigate species of interest and analyze the behavior of the system at different points in time. In particular, we focus on obtaining the distributions of crosslinked species. These distributions will then be used as input for a random graph model that recovers the macroscopic structure of the resulting polymer network.
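The pattern-recognition step can be sketched with a naive backtracking subgraph matcher; the FastOn and Ullmann algorithms used by the authors are optimized versions of this same search. The label scheme and adjacency encoding below are invented for illustration:

```python
def find_pattern(mol_adj, mol_labels, pat_adj, pat_labels):
    """Naive backtracking subgraph matcher: yields all mappings from
    pattern nodes to molecule nodes that preserve atom labels and bonds
    (subgraph monomorphism). A stand-in for Ullmann-style matchers."""
    pat_nodes = list(pat_adj)

    def extend(mapping):
        if len(mapping) == len(pat_nodes):
            yield dict(mapping)
            return
        p = pat_nodes[len(mapping)]
        for m in mol_adj:
            if m in mapping.values() or mol_labels[m] != pat_labels[p]:
                continue
            # every already-mapped pattern neighbour must map to a molecule neighbour
            if all(mapping[q] in mol_adj[m] for q in pat_adj[p] if q in mapping):
                mapping[p] = m
                yield from extend(mapping)
                del mapping[p]

    return list(extend({}))
```

For example, matching a two-carbon pattern against a small C-C-C-O chain finds every directed placement of the pattern; each such match would mark a reactive site where a rule can fire and a new edge be recorded in the reaction network.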

Branched Polymers: The Most Real-world of All Networks
SPEAKER: Ivan Kryven

ABSTRACT. During the last two decades, the community that gathered around the freshly coined concept of network science has been chasing new models explaining "real-world" networks. This process has involved constantly redefining what 'real-world' means, by trying to associate the concept with the social sciences: a power law degree distribution, small average shortest path, clustered structure. This talk focuses on a different type of network, no less 'real' and certainly more tangible: polymer networks. From materials for tables and chairs to nano-robots transporting drugs in the human body, polymer networks comply with the most extraordinary demands due to the broad range of physical properties they exhibit. In conditions of fluidity and constant change of shape, it is mostly the connections between molecules (the molecular topology) that make a molecular network the same network at different points in time. We model the formation of a polymer network from multifunctional precursors with a temporal random graph process. The process does not account for the spatial positions of the monomers explicitly, yet the Euclidean distances between monomers are derived from the topological information by applying a self-avoiding random walk formalism. This makes it possible to favour the reactivity of monomers that are close to each other and to disfavour it for monomers obscured by their surroundings. The phenomena of conversion-dependent reaction rates, gelation, and structural inhomogeneity are predicted by the model.

[1] I. Kryven, J. Duivenvoorden, J. Hermans, P.D. Iedema. "Random graph approach to multifunctional molecular networks." Macromolecular Theory and Simulations 25.5 (2016): 449-465.

11:00-13:00 Session 8I: Economics and Finance - Banking, financial markets, risk & regulation II

Parallel session

Location: Gran Cancún 1
Predicting stock market movements using network science: An information theoretic approach
SPEAKER: Hiroki Sayama

ABSTRACT. A stock market is a highly complex system, consisting of many components whose publicly known market values move up, move down, or stay still, interdependently with each other, over time. This complex nature of stock markets makes reliable prediction of their future movements challenging. In this research, we aim to build a new method to forecast the future movements of the stock market by constructing complex networks of the underlying constituents of the Standard & Poor's 500 Index (S&P 500), with companies as network nodes and the mutual information of 60-minute price movements of pairs of companies as link weights. By studying the relationship between the dynamics of degree-based network measurements and the S&P 500 itself, we show that changes in the degree distributions of the networks provide important information on the index's future movements. To demonstrate that changes of the degree distribution over time are predictive of the S&P 500, we built two predictors using degree distribution information: the relative strength (the strength, i.e. average degree over all nodes, of a network relative to the average of the previous networks) and the Kullback-Leibler divergence (KLD) between the degree distributions of consecutive networks. We found that, through a linear combination of the two metrics, the combined predictor and the future (one hour ahead) changes in the S&P 500 show a quadratic relationship, from which we can predict the amplitude of the future change given a new observation of the predictor. The results show large fluctuations in the S&P 500 Index when the predictor spikes. These findings are useful for financial market policy makers as an indicator based on which they can intervene in the markets before a drastic change occurs.
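The two core quantities, mutual information between price-movement series (for link weights) and KL divergence between degree distributions (for the predictor), can be sketched as follows. The discretization into +1/-1 moves and the base-2 logarithm are assumptions for illustration, not details taken from the talk:

```python
from collections import Counter
from math import log

def mutual_info(x, y):
    """Mutual information (bits) between two discretized movement series,
    e.g. sequences of +1 (up) and -1 (down)."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)), 2)
               for (a, b), c in pxy.items())

def kld(p, q, eps=1e-9):
    """KL divergence D(p || q) between two degree distributions given as
    dicts mapping degree -> probability; `eps` guards missing degrees."""
    return sum(pk * log(pk / q.get(k, eps), 2) for k, pk in p.items() if pk > 0)
```

A large `mutual_info` value would produce a heavy link between two companies, and a spike in `kld` between consecutive hourly networks would flag a structural change of the kind the abstract associates with large index fluctuations.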

Analytic solution to variance optimization with no short positions

ABSTRACT. A portfolio of independent, but not identically distributed, returns is optimized under the variance risk measure with a ban on short positions, in the high-dimensional limit where the number N of the different assets in the portfolio and the sample size T are assumed large with their ratio r=N/T kept finite. To the best of our knowledge, this is the first time such a constrained optimization is carried out analytically, which is made possible by the application of methods borrowed from the theory of disordered systems. The no-short-selling constraint acts as an asymmetric L1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the ban on short positions does not prevent the phase transition in the optimization problem, only shifts the critical point from its non-regularized value of r=1 to 2, and changes its character: at r=2 the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes, while the susceptibility diverges at the critical value r=2. We have performed numerical simulations to support the analytic results and found perfect agreement for N/T<2 in the large N limit. Numerical experiments on finite size samples of symmetrically distributed returns show that above r=1 solutions with zero in-sample variance start to sporadically arise, their probability of appearance increasing as r approaches 2, steeply rising around the critical point, and becoming nearly one beyond r=2. A closed formula obtained for this probability shows that in the large N limit the transition becomes sharp. The zero in-sample variance solutions are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters, in particular they will wildly fluctuate from sample to sample. 
With some narrative license we may say that the no-short constraint, by prohibiting large compensating positions, takes care of the longitudinal (length) fluctuations of the optimal weight vector, but does not eliminate the divergent transverse fluctuations of its direction arising from the reshuffling of the vector components. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.
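The constrained optimization itself (separate from the replica analysis in the talk) can be sketched numerically with projected gradient descent onto the simplex of non-negative weights summing to one; the step size and iteration count below are arbitrary illustrative choices:

```python
def project_simplex(v):
    """Euclidean projection onto the simplex {w >= 0, sum(w) = 1}."""
    u = sorted(v, reverse=True)
    css, rho = 0.0, 1
    for i, ui in enumerate(u, 1):
        css += ui
        if ui + (1 - css) / i > 0:
            rho = i
    theta = (1 - sum(u[:rho])) / rho
    return [max(0.0, vi + theta) for vi in v]

def min_variance_no_short(cov, iters=2000, lr=0.05):
    """Minimize w' cov w subject to w >= 0 and sum(w) = 1 by projected
    gradient descent (a numerical sketch, not the analytic solution)."""
    n = len(cov)
    w = [1.0 / n] * n
    for _ in range(iters):
        grad = [2 * sum(cov[i][j] * w[j] for j in range(n)) for i in range(n)]
        w = project_simplex([wi - lr * g for wi, g in zip(w, grad)])
    return w
```

For independent assets the optimum puts weights proportional to inverse variances, so high-variance assets receive small (and, with the L1-like effect of the constraint, often exactly zero) weight, consistent with the behaviour described above.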

Construction of stochastic order book models based on real data analysis
SPEAKER: Kenta Yamada

ABSTRACT. We introduce a new order book model which reproduces all major stylized facts, such as the power law distribution of price changes, anomalous diffusion and the distribution of transaction intervals.

Maslov introduced a basic order book model [1] which describes the dynamics of an order book as a stochastic process: the arrivals of limit orders and market orders are given stochastically. The Maslov model reproduces a power law distribution of price changes; however, the market price it produces oscillates much more strongly than real data, and its price diffusion properties also differ from the real data.
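A bare-bones order book of this stochastic type can be sketched as follows (the tick size, initial quotes and order probabilities are arbitrary choices; Maslov's original model has more structure):

```python
import random

def toy_order_book(steps=5000, seed=7):
    """Minimal Maslov-style order book: each arriving trader submits, with
    equal probability, a market order (executed at the best opposite
    quote) or a limit order one tick away from the last traded price.
    Returns the sequence of transaction prices."""
    rng = random.Random(seed)
    bids, asks = [99], [101]      # sorted ascending; best bid last, best ask first
    price, prices = 100, []
    for _ in range(steps):
        buy = rng.random() < 0.5
        if rng.random() < 0.5:    # market order: hit the best opposite quote
            if buy and asks:
                price = asks.pop(0)
            elif not buy and bids:
                price = bids.pop()
            prices.append(price)
        else:                     # limit order one tick from the last price
            if buy:
                bids.append(price - 1)
                bids.sort()
            else:
                asks.append(price + 1)
                asks.sort()
    return prices
```

Even this stripped-down version produces a wandering price series; the revisions described below (volatility-dependent placement depth, trend following, psychological time) are what bring its statistics close to real markets.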

In order to construct a more realistic model, we analyzed order book data from financial markets and gave the base model the characteristics observed in the data. We find an important rule for the position where a new limit order is placed: it depends on the volatility. When the market is volatile, limit orders tend to be placed in deep positions, far from the best price, while when the market is stable, limit orders tend to be placed in shallow positions. We also find that the distance normalized by volatility follows a single exponential distribution, except on special days such as those with government interventions. This volatility feedback implies that dealers observe the volatility and tend to widen their spread between the bid (buying) and ask (selling) prices when the market is volatile.

The revised model, which incorporates these properties of new limit order positions, reproduces the power law distribution of price changes. We also find volatility clustering and temporal non-uniformity of dispersion in the time series of price changes, as seen in real data.

We add two further effects to the revised model in order to reproduce the anomalous diffusion of the market price, the potential properties [2] observed in market prices, and the statistical properties of transaction intervals: the first is a trend-following effect, i.e. a feedback effect of price changes, and the second is the expansion and contraction of psychological time, i.e. a feedback effect of transaction intervals. These effects are discussed in the dealer model [3].

From our analysis, these three feedback effects are very important for describing real fluctuations in market prices and transaction intervals; these endogenous feedback effects are caused by dealers' observation of trends and volatility in market prices and market activity. By adding special properties observed in the data during government interventions, the new model can also simulate such situations.

[1] S. Maslov, Physica A: Statistical Mechanics and Its Applications 278, 571 (2000).
[2] M. Takayasu, T. Mizuno, and H. Takayasu, Physica A: Statistical Mechanics and Its Applications 370, 91 (2006).
[3] K. Yamada, H. Takayasu, T. Ito, and M. Takayasu, Phys. Rev. E 79, 051120 (2009).

Multiplex financial networks: revealing the level of interconnectedness in the banking system

ABSTRACT. The network approach has been useful for the study of systemic risk; however, most studies have ignored the true level of interconnectedness in the financial system. In this work we address this missing part of the study of interconnectedness in the banking system. Recently, complexity in modern financial systems has become an important subject of study, as has the so-called high degree of interconnectedness between financial institutions. However, we still lack appropriate metrics to describe such complexity, and the data available to describe it are still scarce. In addition, most of the work on interconnectedness has focused on a single type of network: interbank (exposure) networks. In order to have a more complete view of the complexity of the Mexican banking system, we use a comprehensive set of market interactions that includes transactions in the securities market, repo transactions, payment system flows, interbank loans, cross-holdings of securities, foreign exchange exposures and derivatives exposures for banks. This is, to the best of our knowledge, the first attempt to describe the complexity and interconnectedness of a banking system so comprehensively. By resorting to the multiplex paradigm, we are able to identify the most important institutions in the whole structure, the most relevant layer of the multiplex, and the community structure of the Mexican banking system.

Systemic effects of homogeneity of risk models in the insurance sector

ABSTRACT. While the insurance sector has been extensively studied from game theory and risk modelling perspectives, systemic risk in the insurance sector has not been sufficiently investigated. Risk models are accurate only up to a certain degree. In particular, they may err with respect to the correlation of risk events. With a small but significant probability, they would therefore lead to the insolvency and bankruptcy of the company. If all insurance companies use the same risk model, bankruptcies in the insurance sector would happen in clusters. This results in structural problems for the entire sector. Nevertheless, the number of risk models employed in the insurance sector remains very small. Risk models are research-intensive and must be carefully maintained. Official accreditation, a densely connected professional network and cautious attitudes in the face of considerable potential losses add to the entry barriers in this field.

We develop an agent-based model of the insurance sector and study the effects of unanticipated correlations of risk events in settings of varying diversity. We characterize the conditions under which bankruptcies result in structural effects on the system level and may thus be seen as systemic risk. We consider how different types of risks - hence different distributions - influence the systemic effects. We also study the effects of regulatory instruments.

We further provide some insights from historical cases of property insurance (hurricane, maritime, earthquake, flood, etc.) and associated claims and discuss causes for the lack of diversity in terms of risk models employed in the insurance business.

Event Driven Game Theory
SPEAKER: Justin Grana

ABSTRACT. Traditional game theory has considered time-extended, non-simultaneous move scenarios at least since the introduction of extensive form games. Almost all of this work assumes that the order and timing of the players' actions are pre-fixed. This is the case even in games with important stochastic elements: in particular, in Markov games, each player moves at each (integer-valued) time step. However, in many realistic scenarios the assumption that all actions occur at pre-fixed times is wrong.

For example, in financial markets the times at which players act are not pre-fixed but are determined dynamically as players react to randomly arriving information. Similarly, in computer network security games, the underlying computer network that provides the strategic environment is stochastic and subject to asynchronous timing. Crucially, this stochasticity and resulting uncertainty regarding the timing of events has important strategic implications. In this paper, we present an event driven game formulation in which the timing and sequence of actions are not pre-fixed by the modeler but are governed by a stochastic process that unfolds as the game progresses. Our formulation allows players to act and receive information at random and asynchronous times that are determined by the underlying stochastic process as well as players' past actions.

After introducing the specification, we illustrate the applicability of the formulation in a variety of domains, including collusive cartel formation in industrial organization and computer network security. In our examples, we show how the parameters that govern the underlying stochastic process have important strategic implications that do not arise in traditional discrete-time formulations. For example, in our industrial organization model, firms compete for customers by setting prices. However, each firm receives information at random times and can change its price at random times. At the same time, the demand for the firms' product evolves according to a stochastic process that the firms only imperfectly observe. We prove that the rate at which firms monitor one another and the rate at which they receive information regarding demand determine their incentives to form a collusive cartel. In our computer network security model, an attacker attempts to traverse a computer network to gather valuable information, while the defender's goal is to detect malicious activity while also limiting the number of false alarms. We show how to use our event-driven game formulation to solve for equilibrium attacker and defender policies, and show that a defense strategy based on the event-driven game outperforms a baseline anomaly detector. We also explicitly treat the issue of solving event-driven games and show how to use state-of-the-art recurrent neural networks to approximate both fully rational and behavioral solutions of event-driven games. Finally, we discuss future directions for event-driven games, including establishing folk theorems, future computational challenges and a litany of other possible application domains.
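The timing primitive underlying such event-driven formulations can be sketched with competing exponential clocks, in the spirit of the Gillespie algorithm: the next player to act, and when, is drawn from the rates of the underlying stochastic process. The rates below are hypothetical:

```python
import random
from math import log

def next_event(rates, rng):
    """Competing exponential clocks: returns (waiting time, index of the
    player/process that fires next). Waiting time is exponential with the
    total rate; the firing index is chosen proportionally to each rate."""
    total = sum(rates)
    t = -log(1 - rng.random()) / total
    r, acc = rng.random() * total, 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            return t, i
    return t, len(rates) - 1

def simulate_events(horizon=1000.0, rates=(1.0, 3.0), seed=3):
    """Count how often each of two players gets to act before `horizon`."""
    rng = random.Random(seed)
    t, counts = 0.0, [0, 0]
    while True:
        dt, i = next_event(rates, rng)
        if t + dt > horizon:
            return counts
        t += dt
        counts[i] += 1
```

A player monitoring at three times the rate of its opponent acts roughly three times as often, which is exactly the kind of asymmetry the abstract ties to incentives for collusion or to detection performance.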

11:00-13:00 Session 8J: Foundations of Complex Systems - Spreading

Parallel session

Location: Cozumel 2
Dynamical analogues of rank distributions

ABSTRACT. We present an equivalence between stochastic and deterministic variable approaches to represent ranked data and find the expressions obtained to be suggestive of statistical-mechanical meanings [1,2,3]. We show that size-rank distributions $N(k)$ from real data sets can be reproduced by straightforward considerations based on the assumed knowledge of the background probability distribution function $P(N)$ that generates samples of random variable values similar to the real data. The choice of different functional expressions for $P(N)$, such as power law, exponential, Gaussian, etc., leads to different classes of size-rank distributions $N(k)$ for which we find examples in nature. We show that all of these types of rank distributions can be alternatively obtained from deterministic dynamical systems. These correspond to one-dimensional nonlinear iterated maps near a tangent bifurcation whose trajectories are proved to be precise analogues of the rank distributions. We provide explicit expressions for the maps and their trajectories and find that they operate under conditions of small Lyapunov exponent and therefore near a transition out of chaos. We give explicit examples that range from exponential to logarithmic behavior, including Zipf's law.

[1] Robledo A. Laws of Zipf and Benford, intermittency, and critical fluctuations. Chinese Sci Bull. 2011;56(34):3645–3648.
[2] Yalcin GC, Robledo A, Gell-Mann M. Incidence of q statistics in rank distributions. Proc Natl Acad Sci USA. 2014;111(39):14082–14087.
[3] Yalcin GC, Velarde C, Robledo A. Generalized entropies for severely contracted configuration space. Heliyon. 2015;1:e00045.
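The stochastic side of the construction, drawing samples from an assumed $P(N)$ and reading off the size-rank distribution $N(k)$, can be sketched for the power-law case (the exponent and sample size are illustrative choices):

```python
import random

def power_law_sample(n, alpha=2.0, xmin=1.0, seed=5):
    """Inverse-transform sampling from P(N) ~ N^-alpha for N >= xmin."""
    rng = random.Random(seed)
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def size_rank(sample):
    """Size-rank distribution N(k): the sampled sizes sorted in
    decreasing order, so index k holds the k-th largest value."""
    return sorted(sample, reverse=True)
```

For $P(N) \sim N^{-2}$ the resulting $N(k)$ decays approximately as $k^{-1}$, i.e. Zipf's law; choosing exponential or Gaussian $P(N)$ instead yields the other classes of rank distributions mentioned above.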

SPEAKER: Clara Granell

ABSTRACT. The spreading of infectious diseases has been shown to depend radically on the networked structure of interactions in the population and on the mobility of individuals. Network scientists have made significant progress assessing the critical behavior of spreading dynamics at large geographic scales, but predicting the incidence of epidemics at smaller scales, in localized environments, is still a challenge.

Representative examples of localized environments are university campuses, schools and work offices, to mention a few. The problem of modeling such realistic scenarios lies in finding the appropriate level of abstraction to grasp the main singularities of the epidemic spreading process for the individuals using the particular environment. The analysis of these simplified model abstractions is of utmost importance for separating the effect of single parameters on the incidence of the spreading process, while still allowing an analytical approach that can be used for prediction purposes and to test prevention actions.

In particular, we are interested in studying the spreading dynamics of influenza-like illnesses (ILI) inside university campuses. In most U.S. universities, the majority of students live in university residence halls and dorms. The main activity of students within campuses is dominated by a recurrent pattern of mobility that consists of attending classes and residing in dorms. This recurrent pattern of mobility over the bipartite structure of dorms and classes is identified as a major driver of the endogenous spreading of diseases between students. We propose a metapopulation model on a bipartite network of locations that accounts for the interplay between mobility and disease contagion in this particular scenario. The model is as follows: there are two types of nodes (populations), dorms and classes. Each individual belongs to exactly one dorm, while classes are shared by individuals from any dorm. Each individual returns to their dorm after their academic activities are over. This recurrent pattern turns out to be essential for understanding the impact of quarantine-like policies on sick students, and especially for determining the proper duration of these isolation strategies.

The results of our analysis for an SIS dynamics in this particular scenario allow us to test different strategies to contain the spreading of epidemics, identifying, for example, the lowest quarantine bound to be applied to sick students for containment of the disease. We find analytical expressions amenable to quantifying the final incidence of the epidemic in these localized scenarios with recurrent mobility patterns.
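A toy discrete-time version of the dorm/class metapopulation can show the basic mechanics of recurrent mobility plus SIS dynamics. This is purely illustrative: the talk's model is analytical, and every parameter value below is made up:

```python
import random

def phase(infected, group_of, n_groups, beta, rng):
    """One mixing phase: well-mixed contagion inside each location, with
    per-contact infection probability beta times local prevalence."""
    count, size = [0] * n_groups, [0] * n_groups
    for i, g in enumerate(group_of):
        size[g] += 1
        count[g] += infected[i]
    return [inf or rng.random() < beta * count[g] / size[g]
            for inf, g in zip(infected, group_of)]

def campus_sis(beta, n_students=300, n_dorms=6, n_classes=10,
               mu=0.2, days=150, seed=4):
    """Each day students mix in their class, then in their dorm; infected
    students recover with probability mu per day. Returns the final
    fraction infected (an endemic-prevalence estimate)."""
    rng = random.Random(seed)
    dorm = [i % n_dorms for i in range(n_students)]
    cls = [rng.randrange(n_classes) for _ in range(n_students)]
    infected = [i < 5 for i in range(n_students)]
    for _ in range(days):
        infected = phase(infected, cls, n_classes, beta, rng)
        infected = phase(infected, dorm, n_dorms, beta, rng)
        infected = [inf and rng.random() > mu for inf in infected]
    return sum(infected) / n_students
```

Sweeping `beta` (or, analogously, the fraction of sick students isolated) in such a sketch exhibits the same qualitative threshold between extinction and endemic spreading that the analytical model locates exactly.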

Not all friends are equal: How heterogeneous social influence promotes or hinders behavioural cascades in complex networks

ABSTRACT. Social influence is arguably among the main driving mechanisms of many collective phenomena in society, including the spreading of innovations, ideas, fads, or social movements. Many of these processes have been studied empirically in the past, particularly with regard to the existence of so-called adoption cascades, where large numbers of people adopt the same behaviour in a relatively short time. These phenomena have been commonly modelled either as simple contagion (where adoption is driven by independent contagion stimuli, as in the Bass model of innovation diffusion [Bass 1969]), or as complex contagion (where a threshold on the number of adopting neighbours in a social network determines spreading, as in the Watts model of adoption cascades [Watts 2002]). However, in these models social influence is usually considered homogeneous across ties in the network, implying that all acquaintances are equally likely to influence an ego while making decisions. In reality, the strength of social influence may vary from neighbour to neighbour, as it depends on the intimacy, frequency, or purpose of interactions between acquaintances. Neglecting such local heterogeneities may lead to overly simplistic models and potentially undermine a detailed understanding of real spreading phenomena.

We address this problem by studying a dynamical cascade model on weighted networks, where tie heterogeneities capture diversity in social influence. First we focus on a bimodal weight distribution, such that spreading is determined by the adoption threshold $\phi$ of nodes (defined as the sum of link weights to adopting neighbours relative to the total strength of the node) and the standard deviation $\sigma$ of the weight distribution. We find that the presence of tie weight heterogeneities induces unexpected dynamical behaviour, either speeding up or slowing down contagion with respect to the unweighted case, depending on $\phi$ and $\sigma$. We demonstrate this effect in synthetic and data-driven simulations of adoption dynamics on various artificial and real networks. We show that the structure of this non-monotonous parameter space can be understood by combinatorial arguments, and we provide an analytical solution of the problem for networks with arbitrary degree and weight distributions, using approximate master equations [Gleeson 2013]. These results may be instrumental in developing more accurate spreading models that manage to gauge the rise and extent of real behavioural cascades in society.
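A minimal sketch of threshold dynamics with heterogeneous tie weights (the adjacency format and the example weights below are invented for illustration):

```python
def weighted_cascade(adj, thresholds, seeds):
    """Watts-style threshold dynamics on a weighted network: a node adopts
    once the summed weight of its adopting neighbours reaches a fraction
    `thresholds[i]` of its total strength. `adj` maps node -> {neighbour: weight}."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for i, nbrs in adj.items():
            if i in adopted:
                continue
            strength = sum(nbrs.values())
            adopting_weight = sum(w for j, w in nbrs.items() if j in adopted)
            if adopting_weight >= thresholds[i] * strength:
                adopted.add(i)
                changed = True
    return adopted
```

With the same topology and thresholds, placing a strong tie on the seed's link can trigger a full cascade, while placing it elsewhere stalls the cascade at the seed, a small-scale version of the promoting/hindering effect of weight heterogeneity described above.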

Onset of Global Cascades in Correlated Networks
SPEAKER: Xin-Zeng Wu

ABSTRACT. Influence maximization aims to identify nodes which, when seeded, produce an outbreak that cascades to a large fraction of the network. A variety of heuristic strategies and optimization-based approaches have been proposed for seeding maximal cascades of information, product adoption, and other social contagions. We explore the impact of higher-order network structure on the dynamics of cascades on random networks in which pairs of connected nodes have correlated degrees. We show that the onset of large outbreaks, as well as their size, depends sensitively on higher-order network structure.

The random nature of rank dynamics
SPEAKER: Carlos Pineda

ABSTRACT. Any set can be ranked by comparing a common property, such as size, age, or wealth. Ranks indicate how one object compares to others of the same set. People have analyzed the rank distribution of words, cities, earthquakes, and networks, to name a few. Rank distributions seem prevalent because they are general descriptions of diverse phenomena. As such, they have applications in many areas, from science to business. How does rank change in time? To explore this question, we have proposed the measure "rank diversity". Assuming that elements change their rank in time, the elements' trajectories can be tracked in rank space. Rank diversity is the normalized number of different elements that appear at a specific rank at different times.

We have measured the rank diversity of a broad range of phenomena: languages, sports, earthquakes, economic systems, transportation systems, and social systems. We have found two different universality classes of rank diversity curves: for open systems (where elements enter and leave the ranking in time), rank diversity increases as a sigmoid with rank. The second class is for closed systems (where most elements do not leave or enter the ranking during the evolution); the diversity behaves as a semicircle.

If rank diversity is so similar for different phenomena, and considering that the mechanisms to determine rank change in every system might differ, what are the minimal assumptions required for reproducing the two classes of rank diversity? To answer this, we present a single null model, for both classes.

In this model an element from a list is picked at random and placed at a new random position. The solutions have a drift component that obeys a leaking diffusion-like equation with quadratic coefficients, and a Lévy-type component that increases in size linearly with time. The model's predictions show that a good portion of the data analyzed can be explained with it. Important quantities such as the first-step probability can be accurately described with such a model, in both the open and closed situations.
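A minimal sketch of the rank-diversity measure and of the null model's elementary move (an illustrative reimplementation; variable names are ours):

```python
import random

def rank_diversity(snapshots):
    """snapshots: list of rankings taken at successive times, each a list
    whose index is the rank. Returns, per rank, the number of distinct
    elements ever seen at that rank, normalised by the number of snapshots."""
    T = len(snapshots)
    return [len({snap[r] for snap in snapshots}) / T
            for r in range(len(snapshots[0]))]

def null_model_step(ranking, rng):
    """One step of the null model: a randomly chosen element is removed
    and re-inserted at a uniformly random position."""
    elem = ranking.pop(rng.randrange(len(ranking)))
    ranking.insert(rng.randrange(len(ranking) + 1), elem)
```

Iterating `null_model_step` over a list and recording snapshots yields rank-diversity curves that can be compared against empirical data.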

Epidemic Spreading on Activity-Driven Networks with Attractiveness
SPEAKER: Nicola Perra

ABSTRACT. We study SIR epidemic spreading processes unfolding on a recent generalisation of the activity-driven modelling framework. In this model of time-varying networks each node is described by two variables: activity and attractiveness. The first describes the propensity to form connections; the second defines the propensity to attract them. We derive analytically the epidemic threshold, considering the timescale driving the evolution of contacts and the contagion as comparable. The solutions are general and hold for any joint distribution of activity and attractiveness. The theoretical picture is confirmed via large-scale numerical simulations performed considering heterogeneous distributions and different correlations between the two variables. We find that heterogeneous distributions of attractiveness facilitate the spreading of the contagion process. This effect is particularly strong in realistic scenarios where the two variables are positively correlated. The results presented contribute to the understanding of the dynamical properties of time-varying networks and their effects on contagion phenomena unfolding on their fabric.
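The contact-generation step of an activity-driven model with attractiveness can be sketched as follows. This is a minimal illustration under the usual activity-driven conventions; the per-step link count `m` and the sampling scheme are our reading of the framework, not the authors' code:

```python
import random

def activity_attractiveness_snapshot(activity, attract, m, rng):
    """One time-step of an activity-driven network with attractiveness.

    activity[i]: probability that node i activates in this step.
    attract[j]: weight with which node j attracts incoming links.
    Each active node draws m contacts proportionally to attractiveness
    (self-contacts discarded). Returns undirected edges (i, j), i < j.
    """
    nodes = list(range(len(activity)))
    edges = set()
    for i in nodes:
        if rng.random() < activity[i]:
            for j in rng.choices(nodes, weights=attract, k=m):
                if i != j:
                    edges.add((min(i, j), max(i, j)))
    return edges
```

An SIR process would then be run over a sequence of such snapshots, regenerating the contacts at every step so that the two timescales remain comparable.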

A Network-Based Model of Exposure Risk Among University Students
SPEAKER: Cree White

ABSTRACT. Contacts between individuals influence the dynamics and spreading processes of disease. Data about these interactions can be used to build a network of contacts for simulating disease dynamics and facilitating disease mitigation strategies. These contact networks give insight into the underlying properties that affect the spread of disease and provide valuable information on which individuals may be at greatest risk of exposure to disease. Data for these contact networks can be obtained from university registrar records to model the exposure risk of students attending the university. Students are especially susceptible to outbreaks of many different types of diseases, including influenza, meningitis, and measles. We propose methods for creating reliable contact networks of students and identify corresponding network communities in order to quantify their exposure risk. Using these strategies, we are able to identify candidates for mitigation or preemptive treatment once an index case has been recognized. For this study, we obtained course schedule data from two sources, the University of North Texas (UNT) and the University of Nebraska at Omaha (UNO). Each dataset includes enrollment information pertaining to the student ID, the course number of the class the student is enrolled in, the room number, and the class meeting times. Preliminary work has been done with the UNT registrar data to create a contact network of the UNT engineering campus. These data were used to create a contact network where nodes represent students and edges were assigned when a pair of students were in the same room at the same time. From this contact network, distinct clusters and communities were observed that exhibit a wide variety of characteristics which might affect exposure risk.
Several types of analysis and experiments can be performed on such networks for various schools to determine students who might be at disproportionate risk of exposure to disease given their membership in network communities. Topological metrics of the vertices, including degree, local clustering coefficient, closeness, and centrality, were used to calculate and assign risk vectors. By comparing these local properties across the vertices of the network, we were able to identify the students most likely to become infected in the event of an outbreak starting at a particular seed. We also identify the community membership of all vertices when assigning the risk vector, since a student is at greater risk of exposure if someone in their community is infected. Lastly, different types of network models, static and dynamic, have been generated depending on how we defined contacts and the frequency of contacts between individuals. We then performed experiments to evaluate how these network variations affected the values assigned to the exposure risk vector. Using these methods, the proposed model can identify students who may need to be treated during an outbreak of a disease and can be used to determine how different parameters and course scheduling data affect the overall network structure.
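The basic co-location construction described above (students linked when they share a room at the same time, with weights counting shared meetings) can be sketched as follows; the record fields are hypothetical stand-ins for the actual registrar data:

```python
from collections import defaultdict
from itertools import combinations

def contact_network(enrollments):
    """Build a weighted student contact network from registrar-style
    records. Field names are hypothetical stand-ins for the real data.

    enrollments: iterable of (student_id, course, room, time) tuples.
    Two students are linked when they occupy the same room at the same
    time; the edge weight counts the number of shared meetings.
    """
    occupancy = defaultdict(set)
    for student, _course, room, time in enrollments:
        occupancy[(room, time)].add(student)
    weights = defaultdict(int)
    for students in occupancy.values():
        for a, b in combinations(sorted(students), 2):
            weights[(a, b)] += 1
    return dict(weights)
```

Community detection and the vertex metrics listed above (degree, clustering, closeness) would then be computed on the resulting weighted graph.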

Equivalence between non-Markovian and Markovian dynamics in epidemic spreading processes

ABSTRACT. A general formalism is introduced to allow the steady state of non-Markovian processes on networks to be reduced to equivalent Markovian processes on the same substrates. The example of an epidemic spreading process is considered in detail, where all the non-Markovian aspects are shown to be captured within a single parameter, the effective infection rate. Remarkably, this result is independent of the topology of the underlying network, as demonstrated by numerical simulations on two-dimensional lattices and various types of random networks. Furthermore, an analytic approximation for the effective infection rate is introduced, which enables the calculation of the critical point and of the critical exponents for the non-Markovian dynamics.

11:00-13:00 Session 8K: Socio-Ecological Systems (SES) - Social Networks

Parallel session

Location: Cozumel 5
Effects of temporal correlations in social multiplex networks

ABSTRACT. Multi-layered networks represent a major advance in the description of natural complex systems, and their study has shed light on new physical phenomena. Despite its importance, however, the role of the temporal dimension in their structure and function has been barely scratched. Here we show that empirical social multiplex networks exhibit temporal correlations between layers, which we quantify by extending entropy and mutual information analyses proposed for the single-layer case. We demonstrate that such correlations are a signature of a 'multitasking' behavior of network agents, characterized by a higher level of switching between different social activities than expected in an uncorrelated pattern. Moreover, temporal correlations significantly affect the dynamics of coupled epidemic processes unfolding on the network. Our work opens the way for the systematic study of temporal multiplex networks, and we anticipate it will be of interest to researchers in a broad array of fields.

Higher Order Structure Distorts Local Information in Networks
SPEAKER: Xin-Zeng Wu

ABSTRACT. The information that is locally available to individual nodes in a network may significantly differ from the global information. We call this effect local information bias. This bias can significantly affect collective phenomena in networks, including the outcomes of contagious processes and opinion dynamics. To quantify local information bias, we investigate the strong friendship paradox in networks, which occurs when a majority of a node's neighbors have more neighbors than the node itself.

Our analysis identified certain properties that determine the strength of the paradox in a network: attribute-degree correlation, network assortativity, and neighbor-neighbor degree correlation. We also discovered that the neighbor-neighbor degree correlation is significant in real-world networks. Understanding how the paradox biases local observations can inform better measurements of network structure and our understanding of collective phenomena.
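As an illustration, the strong friendship paradox can be measured on any undirected graph with a few lines of code (our sketch; the strict-majority rule follows the definition in the abstract):

```python
def strong_paradox_fraction(adj):
    """Fraction of (non-isolated) nodes for which a strict majority of
    neighbours have higher degree than the node itself.

    adj: dict mapping each node to its set of neighbours (undirected).
    """
    nodes = [n for n in adj if adj[n]]
    if not nodes:
        return 0.0
    count = 0
    for node in nodes:
        deg = len(adj[node])
        higher = sum(1 for nbr in adj[node] if len(adj[nbr]) > deg)
        if higher > deg / 2:
            count += 1
    return count / len(nodes)
```

In a star graph every leaf experiences the paradox while the hub does not, so the fraction is (n-1)/n; in a regular graph no node experiences it.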

The Effect of Interlayer Links in Multiplexed Artist Networks

ABSTRACT. The way information goes from a decoupled state into a coupled one in a multiplex network has been widely studied by means of in-silico experiments involving models of artificial networks. Those experiments assume uniform interconnections between layers, offering, on the one hand, an analytical treatment of the structural properties of multiplex networks but, on the other hand, losing generality for real networks. In this work, we study multiplex (2-layer) networks of musicians whose layers correspond to: (i) collaboration between them and (ii) musical similarities. In our model, connections between the collaboration and similarity layers exist, but they are not ubiquitous for all nodes. Specifically, interlayer links are created (and weighted) based on structural similarities between the neighborhoods of a node, taking into account the level of interaction of each artist at each layer. Next, we evaluate the effect that the weight of the interlayer links has on the structural properties of the whole network, namely the second-smallest eigenvalue of the Laplacian matrix (also known as the algebraic connectivity). Our results show a transition in the value of the algebraic connectivity that can only be adequately predicted when the real distribution of the weights of the interlayer links is taken into account.

Towards the end of a "chicken-egg" problem: nestedness or degree distribution?

ABSTRACT. In mutualistic ecosystems, typically plant-pollinator or plant-seed disperser networks, the interaction between two agents is naturally beneficial for both (pollinators eat and plants increase their reproductive efficiency). They may be described in terms of bipartite networks, where the two different kinds of nodes correspond to the plant species and the animal species, and interaction takes place only between elements of different kinds. A widespread particular ordering called nestedness has been observed in these systems: when ordering one of the guilds by decreasing degree, the other appears automatically ordered in the same way. This order reveals that the ecosystem is composed of generalist species, which hold contacts with many different counterparts, and specialist species, which prefer contacts with generalist counterparts; specialist-specialist interactions are very rare. The ubiquity of nestedness in mutualistic ecosystem data has triggered intensive research aimed at measuring this particular ordering and explaining its origin. In the ecological community it is widely admitted that this organization is responsible for the robustness of ecosystems, as well as the persistence of biodiversity. However, in recent years the relevance of nestedness as the pertinent magnitude to describe ecosystems has been questioned. In other words: is nestedness a relevant property issued from the ecosystem dynamics, or does it just derive from lower-order statistical properties of the network, such as the observed (truncated) power-law-like degree distributions? In this talk we present a theoretical work showing that nestedness is a consequence of the degree distributions of both guilds.
Unlike methods based on the randomization of real networks under different schemes, our work is based on the Exponential Random Graph (ERG) model, where one obtains the probability distribution of networks with a given average degree distribution as constraint. Following the work of Squartini and Garlaschelli [5], we obtain with maximum likelihood the grand canonical ensemble to which a given real network belongs. We measure nestedness using the NODF index (nestedness based on overlap and decreasing fill), which can be written analytically in terms of the adjacency matrix, and we obtain the theoretical expectation value and the standard deviation of the nestedness distribution in such an ensemble. Our results show that the nestedness of the observed network is statistically equivalent to the expected value of the generated random ensemble, thus showing that this global organization is a consequence of the degree distribution. Additionally, we can use the obtained probability distribution to simulate the ensemble to which each observed real network belongs, and we can measure the average nestedness and standard deviation over the sampling. The results, which are consistent with the theoretical model, show the importance of finite-size effects, concerning the network size, in simulated ensembles. These size effects, which may depend on the chosen nestedness metric, are investigated in another work.
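For reference, the NODF index can be computed directly from the binary bipartite matrix. The sketch below follows the standard overlap-and-decreasing-fill definition (our illustrative implementation, not the authors' code):

```python
from itertools import combinations

def nodf(matrix):
    """NODF nestedness of a binary bipartite matrix (list of 0/1 rows),
    on a 0-100 scale. For each pair with strictly decreasing marginal
    totals, the paired term is the percentage of the sparser set's links
    contained in the denser one; pairs with equal totals contribute zero.
    The index averages the paired terms over all row and column pairs.
    """
    def paired(vectors):
        total = 0.0
        for a, b in combinations(vectors, 2):
            ka, kb = sum(a), sum(b)
            if ka > kb > 0:
                overlap = sum(x and y for x, y in zip(a, b))
                total += 100.0 * overlap / kb
        return total
    cols = [list(c) for c in zip(*matrix)]
    n_pairs = (len(matrix) * (len(matrix) - 1)
               + len(cols) * (len(cols) - 1)) / 2
    return (paired(matrix) + paired(cols)) / n_pairs
```

A perfectly nested matrix scores 100, while a checkerboard pattern scores 0.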

Mutation differently affects cooperation depending on social network structures

ABSTRACT. Cooperation is ubiquitous at every level of living organisms. In principle, cooperators benefit others by incurring some cost to themselves, while defectors do not pay any cost. Therefore, cooperation cannot be an evolutionarily stable strategy for a noniterative game in a well-mixed population. In such a situation, spatial (network) structure is a viable mechanism for cooperation to evolve. However, until quite recently, it has been difficult to predict whether cooperation can evolve at a network (population-wide) level. To address this problem, Pinheiro et al. proposed a numerical metric, called the Average Gradient of Selection (AGoS), to characterize and forecast the evolutionary fate of cooperation at a population-wide level [1]. AGoS can analyze the dynamics of the evolution of cooperation even when nontrivial selection pressure is introduced [2], and also when the structures and states of networks change over time (adaptive social networks) [3]. In these earlier studies, however, stochastic mutation of strategies was not considered. It is important to incorporate such mutation because it occurs frequently in real societies, and because results obtained with stochastic fluctuations of strategies provide more robust observations and conclusions. Here we analyzed the evolution of cooperation using AGoS where mutation may occur in the strategies of individuals in networks. Our analyses revealed that mutation always has a negative effect on the evolution of cooperation regardless of the fraction of cooperators and network structures, because local clusters of cooperators can easily be destroyed by mutation. Interestingly, we found that mutation is particularly harmful to cooperation when the fraction of cooperators is high in homogeneous networks (e.g., homogeneous random networks, regular lattices), but when it is low in heterogeneous networks (e.g., scale-free networks). This may be because hubs surrounded by cooperators are robust to mutated defectors. These results indicate the importance of considering random noise (mutation), which has been largely overlooked in the literature, in studying the evolution of cooperative behavior in social networks.

Coupling Patterns of Virtual and Physical Behavior

ABSTRACT. The relationship between the way people explore urban spaces and the way they behave on the Internet is not yet fully understood. In this work, we analyze multiple types of human activities, i.e., urban mobility, online communication, and shopping, in order to characterize patterns of collective behavior that emerge from virtual and physical interactions. We find that the multiple types of data are consistent with each other and reveal an interplay between the way people inhabit geographical space and the way they behave online. Even though physical limitations are not present on the Internet, we show that online collective behaviors are highly coupled to physical ones.

Opinion leaders on social media: A multilayer approach

ABSTRACT. Twitter is a social media outlet where users are able to interact in three different ways: following, mentioning, or retweeting. Accordingly, one can define Twitter as a multilayer social network where each layer represents one of the three interaction mechanisms. We analyzed user behavior on Twitter during politically motivated events: the 2010 Venezuelan protests, the death of a Venezuelan president, and the Spanish general elections. We found that the structure of the follower layer conditions the structure of the retweet layer. A low number of followers constrains the effectiveness of users in propagating information. Politicians dominate the structure of the mention layer and shape large communities of regular users, while traditional media accounts are the sources from which people retweet information. This behavior is manifested in the collapsed directed multiplex network, which does not present a rich-club ordering. However, when considering reciprocal interactions the rich-club ordering emerges, as elite accounts preferentially interacted among themselves and largely ignored the crowd. We also explored the relationship between the community structures of the three layers. At the follower level users cluster in large and dense communities containing several hubs, which break into smaller and more segregated communities in the mention and retweet layers. We also found clusters of highly polarized users in the retweet networks. We analyze this behavior by proposing a model to estimate the propagation of opinions on social networks, which we apply to measure polarization in political conversations. Hence, we argue that to fully understand Twitter we have to analyze it as a multilayer social network, evaluating the three types of interactions.

J Borondo, AJ Morales, RM Benito, JC Losada, Multiple leaders on a multilayer social media, 2015, Chaos, Solitons & Fractals 72, 90-98.

AJ Morales, J Borondo, JC Losada, RM Benito, Measuring political polarization: Twitter shows the two sides of Venezuela, 2015, Chaos: An Interdisciplinary Journal of Nonlinear Science 25 (3), 03311

Rosa M. Benito

13:00-14:20 Session : Lunch

Buffet lunch & poster session

Location: Gran Cancún 2
14:20-16:00 Session 9: Plenary session
Location: Gran Cancún 1
Collective Learning in Society and the Economy

ABSTRACT. How do teams, organizations, cities, and nations learn? How can we create tools to facilitate collective learning? In this presentation I will show research establishing the universal role of relatedness in the diffusion of productive knowledge and in the creation of commercial relationships. I will also present software tools designed to improve the collective learning capacities of teams, organizations, and nations.

Cognitive Biases and the Limits of Crowd Wisdom

ABSTRACT. The many decisions people make about what information to attend to affect emerging trends, the diffusion of information in social media, and performance of crowds in peer evaluation tasks. Due to constraints of available time and cognitive resources, the ease of discovery strongly affects how people allocate their attention. Through empirical analysis and online experiments, we identify some of the cognitive heuristics that influence individual decisions to allocate attention to online content and quantify their impact on individual and collective behavior. Specifically, we show that the position of information in the user interface strongly affects whether it is seen, while explicit social signals about its popularity increase the likelihood of response. These heuristics become even more important in explaining and predicting behavior as cognitive load increases. The findings suggest that cognitive heuristics and information overload bias collective outcomes and undermine the “wisdom of crowds” effect.


Networks, Complexity and Disease Dynamics

ABSTRACT. The spread and proliferation of emergent human-to-human transmissible infectious diseases are complex phenomena driven by a broad range of factors that act on different time and spatial scales. On a basic level we have to understand how individuals interact within populations and how they move between them. I will report on ways in which network science has contributed to our understanding of disease dynamics on a global scale, based on the notion of effective distance, and how network science can help understand contagion processes within single populations. I will also report on how new technologies permit epidemiological experiments in human populations that can aid ongoing eradication programs, e.g. for polio and measles.

16:00-16:30 Session : Coffee Break

Coffee break & poster session

Location: Cozumel A
16:30-18:30 Session 10A: Foundations of Complex Systems - Multiplex

Parallel session

Location: Cozumel 1
Congestion induced by the multiplex structure of complex networks
SPEAKER: Albert Sole

ABSTRACT. Multiplex networks are representations of multilayer interconnected complex networks where the nodes are the same in every layer, while different kinds of edges connect them, forming the layers. They turn out to be good abstractions of the intricate connectivity of multimodal transportation networks and of the activity of individuals using multiple online social platforms, among other types of complex systems.

One of the most important critical phenomena arising in transportation networks is the emergence of congestion when nodes are forced to work beyond their processing capacity. Here we prove analytically that the structure of multiplex networks can induce congestion for flows that would otherwise be decongested if the individual layers were not interconnected [1]. We provide explicit equations for the onset of congestion, and approximations that allow this onset to be computed from descriptors of the individual layers. The observed cooperative phenomenon is reminiscent of Braess's paradox, in which adding extra capacity to a network, when the moving entities selfishly choose their route, can in some cases reduce overall performance. Similarly, in the multiplex structure, the migration of paths to the more efficient layer yields an unbalanced load that results in unexpected congestion.

[1] Albert Solé-Ribalta, Sergio Gómez and Alex Arenas: Congestion induced by the structure of multiplex networks, Physical Review Letters 116 (2016) 108701.

Joint effect of ageing and multilayer structure prevents ordering in the voter model
SPEAKER: Oriol Artime

ABSTRACT. Recent years have witnessed an increasing interest in multilayer networks. In many cases, the multilayer structure is necessary to describe the topology of real-world systems that may include different levels of interaction. The multilayer topology induces profound changes in the dynamics of systems running on networks. Jointly with the structural part, the temporal dimension of the interactions also plays a crucial role in the dynamics. Given the strong temporal inhomogeneities found in empirical data, it is interesting to include this ingredient in the models and to explore in detail the effects of the time-topology relationship.

In particular, in this work we study the voter model embedded in a two-layered network with an ageing mechanism in the nodes. The voter model is an opinion dynamics model, where agents (nodes) hold a state (opinion) and interact by copying the opinion of a randomly selected nearest neighbor. A fundamental question is under which conditions it is possible to attain global consensus, which is the absorbing state of the system dynamics. To take into account time heterogeneities, the model can be modified by adding ageing in the nodes: the probability of agents to switch opinion depends on the time elapsed since the last change \cite{stark,juan}. We use as a control parameter the fraction of nodes sharing states in both layers, the so-called multiplexity parameter $ q $. We find that the dynamics of the system undergoes a notable change at an intermediate value of $q$, $ q^{*} $. Above it, the voter model always reaches global consensus through a coarsening process. Below it, a fraction of the realizations fall into dynamical traps that indefinitely delay the arrival at the absorbing state (Figure \ref{fig}). These traps are associated with a spontaneous symmetry breaking process leading to a dominant layer with old nodes and a dominated layer with younger nodes. Once this configuration is reached, the system dynamics slows down indefinitely. We are able to give analytical insights into the asymptotic values of the voter model's order parameter. We further explore the competition of time scales driving the evolution of the dynamics by employing different update rules in the layers, namely the standard update rule and the ageing one. Our results will help to better understand the interplay between topology and time, and confirm the relevance of combining both ageing and multilayer topology to give rise to new phenomenology. This phenomenon does not appear when these ingredients are considered separately, but emerges as the consequence of their interaction. Further details on our results can be found in Ref. \cite{ours}.
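A single-layer sketch of the ageing ingredient alone may help fix ideas; the activation kernel 1/(age + 2) below is a hypothetical choice for illustration, not necessarily the one used in \cite{stark,juan}:

```python
import random

def ageing_voter_step(opinions, ages, adj, rng):
    """One asynchronous update of a single-layer voter model with ageing.

    A random node activates with probability 1/(age + 2) (hypothetical
    kernel): the longer a node has held its opinion, the less likely it
    is to update. On an opinion change the node's age resets to zero;
    otherwise it grows by one.
    """
    node = rng.randrange(len(opinions))
    if rng.random() < 1.0 / (ages[node] + 2):
        nbr = rng.choice(adj[node])
        if opinions[nbr] != opinions[node]:
            opinions[node] = opinions[nbr]
            ages[node] = 0
            return
    ages[node] += 1
```

In the two-layer setting of the abstract, a fraction $q$ of nodes would additionally share their state across layers, which is where the dynamical traps arise.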

Scaling in the recovery of cities from special events

ABSTRACT. The aim of this work is to understand the resilience and recovery of public transport networks after special events in which a huge number of people concentrates in a small part of a city. We use a packet-based model of individuals going from origin to destination using the different lines of transport, taking into account the limited capacity of real transport networks. For this purpose we study the performance of networks under two different routing protocols: shortest-path routing, where individuals follow the shortest paths in the weighted network, and adaptive routing with local knowledge, where they are able to adapt their trajectory depending on the local information available at the current stop. With these two protocols we are able to simulate the normal flow of people in multimodal public transport networks, considering the real capacity and speed of the different modes of transport. On top of this normal flow we introduce different numbers of agents in small areas of the cities (200m x 200m), with a distribution of destinations according to the places of residence. We study, depending on the routing protocol, the scaling of recovery times with the number of agents in the perturbation and the place in the city, as well as the average delay and the number of individuals and origins affected. We first observe that with shortest-path routing the recovery of the networks is linear in the number of individuals, independently of the network, whereas with adaptive routing the structure of the network and the embedding space become relevant. As a first approximation we apply our model to 1D and 2D lattices, studying how the recovery time and delays change with the position of the perturbation and the number of individuals in it.
In the case of the scaling of recovery times, we solve the model analytically and use simulations to prove that the scaling is related to the dimension of the embedding space, finding an exponent of 1 in the 1D case and 0.5 in the 2D case. While the average delay is not related to the place of the perturbation, for the agents affected we find that peripheral nodes have more influence on the number of origins and packets affected. When studying the recovery of cities we observe a scaling below 0.5, due to the different modes in public transport networks. While a public transport network with only one mode of transport has a dimension similar to a 2D lattice, the combination of modes with different speeds and coverage increases the local dimension. We propose a new metric of local dimension which is related to the exponent governing the recovery of cities. We show for different cities that our new definition of local dimension in weighted networks with capacity is able to predict the scaling, and that, in a similar way, perturbations in peripheral areas affect a larger number of origins and individuals.

Multidimensional Networks with Arbitrary Degree Distribution
SPEAKER: Ivan Kryven

ABSTRACT. In the infinite configuration network the links between nodes are assigned randomly, with the only restriction that the degree distribution is predefined. Among the most interesting results derived from such models are the distribution of component sizes and the phase transition connected to the emergence of the giant component. It is generally thought that the distribution of component sizes has exponential decay before and after the phase transition, whereas the decay is algebraic with universal exponent -3/2 precisely at the phase transition itself. Another widespread idea is that if the links have direction, then the phase transition for the weak giant component coincides with the phase transition in the non-directional network. In this talk, by applying tools borrowed from analytic combinatorics, I will show that these two ideas are misconceptions. Firstly, I will demonstrate that heavy-tailed degree distributions lead to a whole zoo of asymptotic classes of component-size distributions. Within these classes are component-size distributions with exponential, sub-exponential, and power-law decays with arbitrary exponents below -3/2. Secondly, I will show that if links have direction, the associated phase transitions cannot be related to the phase transition in non-directional networks. Finally, I will present an effective analytical toolbox for studying connected components in the case when links have many different types, i.e. multidimensional networks.

[1] I. Kryven. “General expression for the component size distribution in infinite configuration networks” Physical Review E 95 (2017): 052303

[2] I. Kryven. "Emergence of the giant weak component in directed random graphs with arbitrary degree distributions." Physical Review E 94 (2016): 012315.

Cooperation Network Responses to Shocks

ABSTRACT. Cooperation networks, such as international alliance and trade networks, are constantly in flux due to long-term trends as well as short-term "shocks", such as coups, wars, and economic collapse, that can dramatically affect how interactions occur both locally and globally. An open problem, however, is determining the mechanism of network evolution, especially after shocks. Furthermore, the coupling between many cooperation network layers, which can be modeled as a multiplex network, complicates this mechanism: alliances can occur with trading partners, or the social networks of CEOs could affect how businesses collude. We begin to understand the effect of shocks on coupled networks using both simulations and human experiments, in which agents try to make links with other agents that maximize their individual utility. We define shocks in this context as an increase or reduction in the marginal utility of each link. We find that, in simulations, agents acting in their own best interest can create emergent network "hysteresis" or resilience: a network will resist a change in its topology even after a significant shock. Therefore, a network where the cost of links was always high and a network where the cost of a link was low but then increased will look very different, despite having the same utility function in equilibrium. We measure resilience through the size of the largest connected component, average degree, clustering, and mean profit as we vary the utility that can be gained from clustering or from link correlations across network layers. We compare the simulation results to experiments with human subjects, where shocks appear to affect emergent networks, but in ways that can sometimes differ from a utility-maximizing model. In this experiment, users add or drop links with others in two separate networks in order to maximize their utility, measured in game "points" that are converted to dollars at the end of the experiment. Utility increases when users make links that are the same in both networks, or when a link completes a "triangle" in which a neighbor of a neighbor is a neighbor, while utility decreases when users make too many links. After some period of time, a shock occurs which either increases or decreases the utility of creating a link. We find that users who undergo a shock from low to high link utility tend to drop more links when the points they gain per round are low, while simulations do not predict a strong correlation; thus the utility users see before they make a decision affects their behavior even as they separately attempt to maximize their utility. Adding realistic features to our utility-maximizing model does not completely close the gap between it and our experiment, which warrants further investigation. Overall, we have created novel ways to elucidate how networks evolve, especially for networks of cooperating agents.

16:30-18:30 Session 10B: Information and Communication Technologies

Parallel session

Location: Xcaret 1
Collective navigation of complex networks: Participatory greedy routing

ABSTRACT. Many networks are used to transfer information or goods; in other words, they are navigated. The larger the network, the more difficult it is to navigate efficiently. Indeed, information routing in the Internet faces serious scalability problems due to its rapid growth, recently accelerated by the rise of the Internet of Things. Large networks like the Internet can be navigated efficiently if nodes, or agents, actively forward information based on hidden maps underlying these systems. In reality, however, most agents will refuse to forward messages, since forwarding incurs a cost, and navigation then becomes impossible. Can we design appropriate incentives that lead to participation and global navigability? Here, we present an evolutionary game where agents share the value generated by successful delivery of information or goods. We show that global navigability can emerge, but that its complete breakdown is possible as well. Furthermore, we show that the system tends to self-organize into local clusters of agents who participate in the navigation. This organizational principle can be exploited to favor the emergence of global navigability in the system.
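Greedy routing as described, forwarding each message to the neighbor closest to the destination in an underlying metric space, can be sketched as follows. The toy geometric network and all parameters are illustrative assumptions, not the authors' model (which uses hidden hyperbolic-style maps and game-theoretic participation):

```python
import math
import random

random.seed(42)

# Toy "hidden map": nodes are random points in the unit square,
# linked when closer than a radius.
N, radius = 200, 0.18
pos = {i: (random.random(), random.random()) for i in range(N)}
dist = lambda a, b: math.hypot(pos[a][0] - pos[b][0], pos[a][1] - pos[b][1])
nbrs = {i: [j for j in range(N) if j != i and dist(i, j) < radius] for i in range(N)}

def greedy_route(src, dst, max_hops=50):
    """Forward greedily to the neighbor nearest the destination in the
    hidden metric; return the path, or None if the message gets stuck."""
    path = [src]
    while path[-1] != dst and len(path) <= max_hops:
        here = path[-1]
        if not nbrs[here]:
            return None
        nxt = min(nbrs[here], key=lambda j: dist(j, dst))
        if dist(nxt, dst) >= dist(here, dst):   # local minimum: delivery fails
            return None
        path.append(nxt)
    return path if path[-1] == dst else None

trials = [(random.randrange(N), random.randrange(N)) for _ in range(300)]
success = sum(greedy_route(s, d) is not None for s, d in trials) / len(trials)
```

In the talk's setting, each hop additionally requires the intermediate agent to choose to participate, which is what the evolutionary game governs.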

Collective attention patterns during public health emergencies: the 2015-2016 Zika virus epidemic in the USA

ABSTRACT. Background: The influence of human behavior on epidemic spreading has long been recognized as a key component in infectious disease modeling and epidemiology. In past years, a number of studies have addressed the impact of awareness and information spread during epidemic outbreaks, and it has been reported that the degree of public attention and concern induced by a health threat, such as an outbreak of an infectious disease, might play an important role in disease dynamics. Individual behavior has also been key in the 2015-2016 Zika outbreak, which posed peculiar communication challenges to the public due to its association with microcephaly in newborns, its transmission modalities, and its prevalence in areas, such as the American continent, where surveillance had never detected the presence of Zika virus before and which experienced intense international travel due to the Rio Olympics in 2016.

Objective: To quantify and characterize the patterns of public attention and awareness during the Zika epidemic through the analysis of Wikipedia pageview data, with a focus on the United States which, from December 2015 until the end of 2016, experienced an importation of travel-related Zika virus confirmed cases.

Methods: We analyzed geolocalized pageview data for a number of selected Zika-related Wikipedia pages and studied the dynamics of collective attention in relation to the timeline of importation of Zika cases, the global timeline of the Zika epidemic worldwide, and the risk of local transmission due to the presence of the vector. Moreover, to understand the interplay between the attention measured by Wikipedia and media coverage of the events, we compared pageview data with the coverage of the Zika epidemic in US local and national media, obtained by mining about 110,000 news items from the GDELT project, and with ∼900,000 Zika-related Twitter posts generated in 2016.

Results: The temporal dynamics of attention to the Zika outbreak displayed three main phases: a pre-epidemic phase, where Zika-related pageviews were constantly below 1% of the total pageviews in the US; a high-attention phase with two distinct peaks in pageview data, correlated with global events such as the WHO international alert; and a declining phase, from June 2016 until the end of the year. Although the temporal profile of attention was consistent across the 50 States, spatial patterns of collective attention were highly heterogeneous. As shown in Figure 1, differences in attention among States appeared to be highly correlated with the volume of Zika-related media coverage in each State (Spearman correlation ρ = 0.76) and also with the number of Zika-related tweets mentioning each State (ρ = 0.66).

Conclusions: Wikipedia geolocalized pageview data can be harnessed to capture the dynamics of collective attention during epidemic outbreaks, with potential implications for the calibration of epidemic-behavior models.

A citation impact indicator based on author network distances

ABSTRACT. Scientists are embedded in social and information networks. These networks influence and are influenced by the scientific ideas that scientists are exposed to and give credit to. The network of scientific collaborations has an important impact on the potential audience for a publication and therefore on how it is cited. While it is already common practice to exclude self-citations when computing bibliometric indicators, we argue that it is even more important to control for effects generated by citations from co-authors, co-authors of co-authors, and so forth. We introduce an indicator that controls for the citation potential authors have due to their position in the co-authorship network. Such an indicator allows for the detection of scientists and publications that far exceed this potential, even when absolute numbers would obscure their performance.

This large-scale empirical study analyzes network data from over 13 million scientific careers with at least 2 publications, extracted from Thomson Reuters' Web of Science, name-disambiguated, and covering a wide range of scientific disciplines and time periods. We construct a growing collaboration network of authors based on pair-wise co-authorships, accumulated up to the year of evaluation. Links are unweighted and indicate only whether any collaboration between a pair of authors happened in the past. For a reference between a citing and a cited publication, we determine the degree of separation by computing the shortest path between the two corresponding sets of authors.
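The degree-of-separation computation described above, the shortest path between the citing and cited author sets, can be sketched with a multi-source breadth-first search; the toy graph is a made-up example, not the Web of Science data:

```python
from collections import deque

def coauthor_distance(graph, citing_authors, cited_authors):
    """Degrees of separation between two publications' author sets:
    the shortest path in the co-authorship graph from any citing author
    to any cited author (0 = shared author, i.e. a self-citation)."""
    if set(citing_authors) & set(cited_authors):
        return 0
    frontier = deque((a, 0) for a in citing_authors)  # multi-source BFS
    seen = set(citing_authors)
    targets = set(cited_authors)
    while frontier:
        node, d = frontier.popleft()
        for nb in graph.get(node, ()):
            if nb in targets:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return float("inf")   # disconnected: no social path at all

# Toy accumulated co-authorship network: A-B, B-C, C-D
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
d1 = coauthor_distance(graph, ["A"], ["A", "B"])  # shared author
d2 = coauthor_distance(graph, ["A"], ["C"])       # path A-B-C
d3 = coauthor_distance(graph, ["A"], ["D"])       # path A-B-C-D
```

The distribution of these distances over all citations to a publication is what the abstract calls its reach.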

First, we show that distance on the collaboration network correlates with the likelihood of citation: the better connected a publication's authors are, the more it is cited. The distribution of distances to all citing publications determines the reach of a publication, or its spread through the network of scientists. Second, from the average distance to the author sets of all publications of a given year, we can measure a citation potential for a previously published article or an author. We see that for a low average distance, chances are significantly higher to receive more citations. While there is still high variance in the success of an individual publication, we find that social distance explains the differences in average citation rates better than other reputation-based factors. Third, with these insights, we propose a bibliometric indicator which normalizes according to the citation potential. This allows a fairer comparison between researchers publishing in different scientific communities. The citation potential is computed for each publication or author individually by taking the whole focal network into account and, contrary to typical field-normalized indices, without the need to explicitly specify scientific disciplines.

With this work, we quantify the boost in citations that can result from the social networking and collaboration that are part of the scientific profession. This points to the importance of these social processes in channeling the spread of information, which must be considered when judging merit based on citations, especially when the degree of collaboration varies.

Human Behavioral Patterns in Online Games
SPEAKER: Anna Sapienza

ABSTRACT. Multiplayer online battle arena games have become a popular genre. They have also received increasing attention from the research community because they provide a wealth of information about human interactions and behaviors. A major problem is extracting meaningful patterns of activity from this type of data in a way that is also easy to interpret. Here, we propose to exploit tensor decomposition techniques, in particular Non-negative Tensor Factorization, to discover hidden correlated behavioral patterns of play in a popular game: League of Legends. We first collect the entire gaming history of a group of about one thousand players, totaling roughly 100K matches. By applying our methodological framework, we then separate players into groups that exhibit similar features and playing strategies, as well as similar temporal trajectories, i.e., behavioral progressions over the course of their gaming history: this allows us to investigate how players learn and improve their skills.
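Non-negative Tensor Factorization of, say, a players x features x time tensor can be sketched with the standard multiplicative-update CP decomposition. This is a generic sketch under illustrative assumptions (a synthetic rank-2 tensor), not the authors' pipeline:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of (J, R) and (K, R) -> (J*K, R)."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def ntf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative CP decomposition of a 3-way tensor via the standard
    multiplicative updates (non-negativity is preserved automatically)."""
    rng = np.random.default_rng(seed)
    A = [rng.random((d, rank)) + 0.1 for d in X.shape]
    unfold = lambda T, n: np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
    for _ in range(n_iter):
        for n in range(3):
            B, C = [A[m] for m in range(3) if m != n]
            KR = khatri_rao(B, C)                    # ordering matches unfold
            numer = unfold(X, n) @ KR
            denom = A[n] @ ((B.T @ B) * (C.T @ C)) + eps
            A[n] *= numer / denom
    return A

# Synthetic non-negative, exactly rank-2 tensor (e.g. players x features x time)
rng = np.random.default_rng(1)
F = [rng.random((d, 2)) for d in (6, 5, 4)]
X = np.einsum('ir,jr,kr->ijk', *F)
A = ntf(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', *A)
rel_err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

Each non-negative factor column then reads as one latent behavioral pattern: a group of players, a profile of play features, and its temporal trajectory.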

Internet of Autonomous/Intelligent Agents: Some Aspects of Distributed Computational Intelligence Behind the Next-Generation of Internet-of-Things
SPEAKER: Predrag Tosic

ABSTRACT. The Internet-of-Things (IoT) is arguably one of the most important new paradigms and technological advances in the realm of the "consumer", present-in-every-household Internet-powered cyber-physical systems of the early 21st century. While IoT will likely undergo many architectural and other changes, the paradigm and the actual IoT technologies and consumer products are clearly here to stay, which is why IoT has been an R&D focus among industry leaders in Internet technologies, smart devices and platforms, and web-based applications, as well as an increasingly important area of fundamental and applied academic research. The list of technological and research challenges behind enabling the next-generation IoT is long and varied. Our present focus is on the distributed intelligence, autonomous interacting agents, and multi-agent systems (MAS) aspects of IoT, as well as on appropriate software design abstractions suitable for IoT. Some in the scientific community refer to this subset of IoT-related research problems and technology challenges as the "Internet-of-Agents" (IoA).

In this talk, we discuss three important aspects of "IoA-for-IoT". First, we review appropriate design principles and abstractions for the software agents providing inter-operability within IoT, enabling different devices and platforms to communicate, cooperate, and exchange data with each other. In that context, we revisit the agent-oriented programming paradigm, and focus on the classical Actor model as a suitable programming abstraction for an intrinsically open, decentralized, and highly heterogeneous infrastructure such as the IoT. Second, we reflect on the cyber-security aspects of IoT, and outline some of the key elements of computational intelligence that could enable the sought-after self-healing and self-recovery capabilities of the next-generation IoT. The main purpose behind a self-healing/recovery design of IoT is to guarantee that future cyber-attacks (like the Distributed-Denial-of-Service attack that considerably disrupted the eastern USA in the fall of 2016) cause much less damage, and in particular have a strictly "sand-boxed", short-lasting adverse impact on users and IoT platforms. Third, closely related to the cyber-security aspect, and taking explicitly into account how humans interact with IoT devices and technologies, we outline how recent and ongoing research on reputation and trust in multi-agent systems could enable i) a higher level of trust among different autonomous devices engaging in interaction and cooperation with each other, as well as ii) higher confidence and trust of users in their various devices and platforms within an IoT environment. In particular, it is our view that one of the most important challenges of the next-generation IoT will be how to design future IoT solutions so that "ordinary folks" (that is, people who are not experts in Internet computing, mobile technologies, or cyber-security) can build confidence and trust that their "hooked-into-IoT" devices will never do anything malicious or detrimental to their end users. We outline and discuss some practical IoT scenarios where ensuring that humans can trust their devices and platforms is absolutely critical for the long-term success and massive-scale adoption of several promising IoT-based technologies.

16:30-18:30 Session 10C: Cognition and Linguistics - Cooperation

Parallel session

Location: Xcaret 2
Collective Computation in Nature & Society
SPEAKER: Jessica Flack

ABSTRACT. Biological (and social) systems are organized into multiple space and time scales. I have proposed that this multi-scale structure functions as an information hierarchy resulting from the collective effects of components (cells, neurons, individuals) estimating regularities and using these perceived regularities to tune strategies in evolutionary, developmental, or ecological time. As coarse-grained (slow) or compressed variables become better predictors for components than microscopic behavior (which fluctuates), and as component estimates of these variables converge, new levels of organization consolidate and components collectively construct their macroscopic worlds. This gives the appearance of downward causation. This intrinsic subjectivity suggests that the fundamental macroscopic properties in biology will be informational in character. If this view is correct, a natural approach is to treat the micro-to-macro mapping as a collective computation performed by components in a search for configurations that reduce environmental uncertainty. In this talk I will discuss what it means for biological systems to perform collective computations, give examples of dynamical and structural features resulting from collective computation, and outline the major open questions and challenges as I see them.

Modelling decision times in game theory experiments

ABSTRACT. What makes us decide whether or not to cooperate? The answer to this fundamental question necessarily goes beyond a simple maximisation of individual utility. Recent studies have contributed in this direction by using decision times to claim that intuitive choices are pro-social while deliberation yields anti-social behavior. These analyses are based on the rationale that short decisions are more intuitive than long ones, and amount to keeping track of the average time taken by the subjects of game-theory experiments to make their decisions under different conditions. Lacking any knowledge of the underlying dynamics, this simple approach might however lead to erroneous interpretations, especially in light of our experimental evidence that the distribution of decision times is skewed and its moments strongly correlated.

Here we use the Drift Diffusion Model (DDM) to outline the cognitive basis of cooperative decision making and to characterise the evolution of subjects' behavior when facing strategic choices in game-theory experiments. In the DDM, at each moment subjects randomly collect evidence in favour of one of two alternative choices, which are in our case cooperation and defection. This accumulation has a stochastic character as a consequence of the noisy nature of the evidence. The continuous integration of evidence in time is described by the evolution of a one-dimensional Brownian motion

$dx = v\,dt + \sqrt{D}\,\xi(t)\,dt$, where $\xi(t)$ is Gaussian white noise,

equivalent to the classic ``gambler's ruin'' problem, where $x(0)= z\cdot a$ represents the initial bankroll of the gambler, absorption at $x=a$ represents the gambler leaving a possibly unfair game (if $v\neq 0$) after collecting her target winnings $a$, and absorption at $x=0$ represents the gambler's ruin. The probability distribution of the times at which the process reaches the origin $x=0$ before reaching the exit value $x=a$ is known as the Fürth formula for first passages.
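The first-passage process just described is easy to simulate; the sketch below (parameter values are illustrative, not fitted to the experiment) compares the simulated probability of absorption at the upper boundary with the closed-form expression for a biased diffusion between two absorbing boundaries:

```python
import math
import random

random.seed(7)

def ddm_trial(v, a, z, D, dt=1e-3):
    """One drift-diffusion trial (Euler-Maruyama): returns (choice, time),
    where choice is 1 if x hits the upper boundary a (one option) and
    0 if it hits 0 (the other), starting from x(0) = z * a."""
    x, t = z * a, 0.0
    s = math.sqrt(D * dt)
    while 0.0 < x < a:
        x += v * dt + s * random.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= a else 0), t

v, a, z, D = 0.5, 1.0, 0.5, 1.0      # illustrative drift, threshold, bias
trials = [ddm_trial(v, a, z, D) for _ in range(2000)]
p_upper = sum(c for c, _ in trials) / len(trials)

# Closed-form absorption probability at the upper boundary, for comparison:
# P = (1 - exp(-2 v z a / D)) / (1 - exp(-2 v a / D))
p_exact = (1 - math.exp(-2 * v * z * a / D)) / (1 - math.exp(-2 * v * a / D))
```

Fitting $v$, $a$, and $z$ to observed decision-time distributions, rather than fixing them as here, is what lets the authors separate deliberation from intuition.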

This distribution has been successfully used to model decision times in a wide range of contexts. Our findings extend this use to the strategic choices of iterated Prisoner's Dilemma experiments. Analyzing the results of large-scale experiments (169 subjects making 165 decisions each) through the new lens of the DDM and its characteristic free parameters (drift $v$, threshold $a$, and initial bias $z$) allows us to clearly distinguish between deliberation (described by the drift) and intuition (associated with the initial bias). Our results show that rational deliberation quickly becomes dominant over an initial intuitive bias towards cooperation, which is fostered by positive interactions as much as it is frustrated by negative ones. This bias appears, however, resilient: after a pause it resets to its initial positive tendency.

Drunk Game Theory: An individual perception-based framework for evolutionary game theory
SPEAKER: Cole Mathis

ABSTRACT. We present Drunk Game Theory (DGT), a framework for individual perception-based games, where payoffs change according to players' previous experience. We introduce this novel framework with the narrative of two individuals in a pub choosing independently and simultaneously between two possible actions: offering a round of drinks (cooperation) or not (defection), which we dub the Pub Dilemma. The payoffs of these interactions are perceived by each individual depending on her current state. We represent these perceptions using two different games. The first constitutes the classic Prisoner's Dilemma (PD) situation, in which utility is computed as the amount of saved money and free drinks. In this case, one-shot game theory states that mutual defection is the only Nash equilibrium. The second game takes the form of the Harmony game, in which payoffs are computed solely as the number of received drinks. Players perceive one of the two games according to their current cognitive level. In an ordinary state players are more likely to perceive the PD payoffs, while players with altered cognition tend to play the Harmony game. The cognitive level of a player evolves according to the outcomes of her previous interactions: it is diminished with the combined number of cooperative acts and heightened by defection. We use evolutionary game theory to model the evolution of cooperation within well-mixed and structured populations. Our analytical results in well-mixed populations agree with agent-based simulations. After fully exploring the Pub Dilemma, we consider all other possible pairings of 2-player, 2-strategy symmetric games. We explore the role of network-constrained interactions on the overall level of cooperation. In particular, we investigate the case of heterogeneous social networks in order to determine the effect of hubs on the stability of cooperation. We find that, for hubs, initial perception and initial tendency to cooperate play different roles in determining whether cooperation or defection is favored. By accounting for heterogeneous and feedback-dependent individual perceptions, this new framework opens new horizons to explore the emergence of cooperation in social environments when individuals' perceptions differ over time.

Are Human Agents Myopic or Far-Sighted Under Differential Conditions of Risk and Ambiguity? A Bayesian Network Model of Biosecurity State Transitions in a Sequential Decision Experiment

ABSTRACT. Situated in the interdisciplinary literature on sequential decision games, Markov decision models, and human risk perceptions under conditions of ambiguity and uncertainty, this paper addresses two research questions. Given uncertainty and ambiguity about the behaviors of agents in their social/spatial networks, do agents behave with myopia or far-sightedness in perceiving biosecurity risk and adopting biosecurity practices in sequential games? And how does a system-wide biosecurity risk state evolve when the majority of agents are far-sighted versus myopic? A sequential decision experiment was designed with one control round where agents receive perfect information both about disease prevalence and about the level of biosecurity adoption (either high or low) in a fixed network of 50 hog production facilities. Each agent can produce a maximum of 2,500 hogs on a production facility. Seventeen treatment rounds expose agents to the other possible combinations of perfect, partial, and no information about disease prevalence and the level of biosecurity adoption in the hog production network. In addition to the control and 17 treatment rounds, subjects also played two practice rounds. Each round consisted of 11 sequential biosecurity adoption decisions, simulating monthly decisions from February through December. During each of these 11 simulated months, each agent can sequentially implement three levels of biosecurity, moving only from lower to higher levels (and not the other way around): (i) low biosecurity: development of a disease management protocol; (ii) medium biosecurity: adoption of cleaning and disinfecting protocols; (iii) high biosecurity: requiring a shower-in, shower-out protocol for all workers. Adoption of each level of biosecurity cost each agent $10,000 experimental dollars, while the absence of biosecurity randomly increased the probability of disease infection at the hog facility, exposing agents to loss of revenues if infected. The effectiveness of sequential biosecurity adoption in reducing infection probability at each production facility remained ambiguous. The sequential decision experiment was played by 110 subjects on Microsoft Surface Pro tablets and written in the R language, yielding (110 × 18 =) 1,980 rounds of observations. Subjects were paid a monetary reward as a fixed scaling factor of the experimental dollars they earned. Two supervised (Naïve Bayes and Augmented Naïve Bayes) and two unsupervised (maximum spanning tree and equivalence EQ framework) Bayesian network algorithms were applied to the dataset. K-fold validation of all four algorithms revealed that the equivalence (EQ) Bayesian network algorithm had the highest contingency-table fit, at 78.57%. Additional Bayesian network models are being applied to this dataset, and results from the best-fit network model will be presented at the conference. Further, the best-fit Bayesian network model is being incorporated into a multi-level agent-based model that simulates hog farmers' adoption of biosecurity practices under alternative information and incentive regimes.

Does classroom cooperation promote learning?

ABSTRACT. Does classroom cooperation promote learning? The literature on social learning has shown that people are more likely to learn from those who are seen as prestigious or talented, or who share demographic attributes with learners. Yet the connection between cooperation and learning is relatively understudied. Here, we explore the connection between student performance and classroom cooperation by mapping six classroom networks using a non-anonymous dyadic cooperative game. In our game, a variation of the prisoner's dilemma, students are endowed in every round with tokens that they can share or keep (cooperate or defect). The total number of tokens a student gets is equal to the number of tokens they kept plus twice the number of tokens they received. Hence, the group maximizes the total number of tokens earned when everyone cooperates, but a student maximizes their own tokens when they defect and everyone else cooperates. We use this game to map a weighted network of cooperation for each classroom, with weights equal to the number of tokens received by each student in each dyadic game. Finally, we compare the centrality of each student with their classroom grades (GPA) and find a positive and statistically significant relationship between network centrality, measured as the sum of tokens received, and a student's academic performance. These results suggest a link between cooperation and learning and open new avenues for the role of networks in education.
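The centrality used here, the sum of tokens received, is a weighted in-degree on the cooperation network. A minimal sketch with a made-up 4-student token matrix and made-up grades (purely illustrative, not the experimental data):

```python
import numpy as np

# tokens[i, j] = tokens student j sent to student i over all rounds;
# centrality is the row sum, i.e. total tokens received.
tokens = np.array([
    [0, 3, 2, 1],
    [4, 0, 3, 2],
    [1, 2, 0, 0],
    [5, 4, 3, 0],
])
centrality = tokens.sum(axis=1)

gpa = np.array([3.1, 3.6, 2.8, 3.9])   # made-up grades for illustration

def rank_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (assumes no ties, as in this toy example)."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

rho = rank_corr(centrality, gpa)
```

In this toy data the centrality and GPA rankings coincide, so the rank correlation is 1; the abstract's claim is the weaker, statistically tested version of this association.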

Developing a Moral NLP tool kit

ABSTRACT. How do people make moral judgements? Morality shapes our social lives, how we collaborate and communicate, and how we develop a sense of individual and collective identity. Moral Foundations Theory (MFT)\cite{Haidt:2007wg} proposes that moral judgements are made along five dimensions: harm vs.~care, subversion vs.~authority, outgroup vs.~ingroup, degradation vs.~sanctity, and cheating vs.~fairness. The divergence of moral structures and collective identity between various human social groups is assumed to be rooted in these foundations, each group placing a different weight on each foundation than other groups. MFT has enjoyed broad empirical validation, and has even produced an extensive lexicon of terms that either ``affirm'' or ``violate'' each foundation\cite{Haidt:2009gn,Graham:2009er}. The construction of lexicons in psychometrics and natural language processing (NLP) is underpinned by the so-called ``lexical hypothesis'', which holds that the most important issues in daily human life and communication will be encoded directly into language\cite{Goldberg:1981wp}. Since morality features prominently in human lives, by the lexical hypothesis we should find its traces in human language. However, the MFT lexicon has not been extensively confirmed against natural language data, nor has it been translated into NLP methods. Here we show a 2-step process to (a) validate the MFT lexicon against large-scale language data and (b) leverage it towards a well-vetted tool for the assessment of moral judgements in natural language.

First, we validate the structure of MFT against multiple word embeddings, refining the lexicon and ensuring self-consistency. As an example, we show that opposing word pairs for each foundation in the MFT lexicon are associated with linear substructures within an embedding; our figure shows this for the harm vs.~care foundation. Second, we develop a set of NLP methods to score large-scale text data along each of MFT's dimensions. The effectiveness of this approach is demonstrated in an analysis that evaluates the differential MFT ratings of Twitter users who identify with opposite ends of the political spectrum. Our work opens the possibility of rating the moral content of social media data to study how collective identity develops and evolves in terms of collective self-esteem and within/between-group moral judgements.
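One common way to realize the "linear substructure" idea, assumed here for illustration rather than taken from the paper, is to define a foundation axis as the difference between the mean vectors of the "affirm" and "violate" word lists and score words by cosine similarity with that axis. The tiny 3-d "embedding" below is hand-made; real work would use pretrained vectors such as word2vec or GloVe:

```python
import numpy as np

# Hand-made toy vectors: the first coordinate crudely encodes harm vs. care.
emb = {
    "hurt":    np.array([ 0.9,  0.1, 0.2]),
    "attack":  np.array([ 0.8, -0.1, 0.3]),
    "protect": np.array([-0.8,  0.2, 0.1]),
    "nurture": np.array([-0.9,  0.0, 0.2]),
    "shield":  np.array([-0.7,  0.1, 0.3]),
}

def foundation_axis(violate_words, affirm_words):
    """Unit direction separating a foundation's 'violate' and 'affirm' poles."""
    v_mean = np.mean([emb[w] for w in violate_words], axis=0)
    a_mean = np.mean([emb[w] for w in affirm_words], axis=0)
    axis = a_mean - v_mean
    return axis / np.linalg.norm(axis)

def moral_score(word, axis):
    """Cosine of the word vector with the axis: > 0 leans toward 'affirm'."""
    v = emb[word]
    return float(v @ axis / np.linalg.norm(v))

axis = foundation_axis(["hurt", "attack"], ["protect", "nurture"])
s = moral_score("shield", axis)   # a held-out care-leaning word
```

Document-level MFT ratings can then be built by aggregating such per-word projections over a text.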

Steps Towards a Computational Visualizer of Legal Globalization as a Complex Adaptive Network

ABSTRACT. 1. Goals of the talk. The aim of this talk is to present some of the results obtained in the development of a system to visualize the self-organizing processes taking place in legal globalization, specifically in the set of legal systems that are part of the Inter-American System of Human Rights. The second objective is to show that a new transdisciplinary approach combining mathematics and legal theory is possible.
2. Antecedents. This talk is the result of efforts to integrate complexity theory, law, contemporary cognitive sciences, and mathematics in what we call "Complex Legal Constructivism". This research is funded by CONACYT as part of its Frontiers of Science program.
3. Problem. One of the main problems of legal globalization is that traditional sources of law derived from national constitutions no longer work. The appearance of diverse factors in the constitutional environment is changing the dynamics of legal systems. Some of these factors are new information technologies, the intensification of international trade, and new globalized organizations not created by the norms of national legal systems or international law, such as intergovernmental organizations, trans-governmental cooperation networks, non-governmental organizations, transnational networks of private actors, multinational corporations, globalized organized-crime networks, etc. In this context no one is capable of knowing the connectivity happening at the global level, and therefore intersubjective control of the whole system becomes more and more difficult.
4. The experiment

Even within a "global" legal globalization, it is possible to speak of legal sub-globalizations. One of these corresponds to the Inter-American system, which includes all the countries that have signed the American Convention on Human Rights and accepted the competence of the Inter-American Court of Human Rights (Mexico included). Based on discrete mathematics, specifically graph theory, a hypertextual theory of law has been developed to identify certain words as semantic markers of normative connectivity between the rulings of national judges and the norms contained in the American Convention on Human Rights, which form the basis for second-level decisions made by the Inter-American Court. From a corpus of both national and Inter-American Court decisions, our goal is to construct a system capable of identifying and establishing connections among semantic markers (considered as network nodes) to obtain an image of a complex network. This visual image could help reveal different properties of the Latin American Ius Commune. For instance, the degree of specific nodes corresponding to certain articles of the American Convention (AC) would tell us something interesting about the different legal cases in the region; which AC norms are the hubs with the most important roles in the system's dynamics; changes in network morphology might represent changes in social problems, etc.

16:30-18:30 Session 10D: Economics and Finance - Macroeconomics and Economic Policy II

Parallel session

Location: Tulum 4
The Great Recession (2007 – 2009) and GDP fluctuations of US States

ABSTRACT. We investigate the interplay between the GDP fluctuations of the US states, looking for common behavior during crises or booms, for "leader" and "follower" states, and for possible early signs of a recession in the data. It is interesting to see how the smaller building blocks of the US economy add up to form the whole. Our study is similar in style to studies of rich countries [1], Chinese provinces [2], and Latin American countries [3].

We analyze Gross Domestic Product (GDP) data downloaded from the U.S. Bureau of Economic Analysis (BEA). Our data consist of the quarterly real GDP for each U.S. state and Washington D.C. These data are available from 2005. We calculate the fluctuations of the GDP over different time frames, ranging from three quarters to over a year, and calculate Pearson correlation coefficients and economic distances for each pair of states. From these, we construct a network of the states using the threshold method. We also consider the complete weighted network.
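
The pipeline just described (pairwise Pearson correlations, economic distances, threshold network) can be sketched as follows. The distance convention d = sqrt(2(1 − ρ)) is a common choice in the econophysics literature and is assumed here; the growth figures are invented toy numbers, not BEA data.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def economic_distance(rho):
    # A common convention: d = sqrt(2 (1 - rho)), ranging from 0 to 2
    return math.sqrt(max(0.0, 2 * (1 - rho)))

def threshold_network(series, d_max):
    """Link two states whenever their economic distance is below d_max."""
    edges, names = [], sorted(series)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = economic_distance(pearson(series[a], series[b]))
            if d < d_max:
                edges.append((a, b, d))
    return edges

# Invented quarterly GDP growth figures (per cent), not BEA data
growth = {
    "CA": [0.5, 0.7, -0.2, 0.4],
    "TX": [0.6, 0.8, -0.1, 0.5],
    "NY": [-0.3, 0.2, 0.9, -0.4],
}
print(threshold_network(growth, d_max=0.5))  # only CA-TX move in lockstep
```

Lowering `d_max` prunes the network down to the most strongly co-moving pairs, which is what makes changes in the distance distribution around a recession visible as changes in network structure.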

When plotting the rank distribution of economic distances in a Zipf-style plot, we notice that at the start of the Great Recession (December 2007 – June 2009), the economic distances shrink – the state economies become more similar. We observe a similar phenomenon for the near-recession in 2012, when the GDP was almost stagnant.

We consider the network structure, various centrality measures and correlation coefficients. While some network clusters have a clear geographic correlation, the largest cluster contains states that are geographically very distant. We note that the degree distributions do not form a power law, and they seem to change during the recession.

[1] M. Gligor and M. Ausloos, Eur. Phys. J. B 63 (2008) 533 – 539; M. Gligor and M. Ausloos, Eur. Phys. J. B 57 (2007) 139 – 146; M. Ausloos, R. Lambiotte, Physica A 382 (2007) 16 – 21. [2] H. Sen, Y. Hualei, C. Boliang, Y. Chunxia, Physica A 392 (2013) 3682 – 3697. [3] F. O. Redelico, A. N. Proto, M. Ausloos, Physica A 388(2009) 3527 – 3535.

What do central counterparties default funds really cover? A network-based stress test answer
SPEAKER: Giulio Cimini

ABSTRACT. In recent years, increasing effort has been put into the development of effective stress tests for financial institutions. Here we propose a stress test methodology for central counterparties based on a network characterization of clearing members, whose links correspond to direct credits and debits between them. This network constitutes the ground for the propagation of financial distress: equity losses caused by an initial shock with both exogenous and endogenous components reverberate within the network and are amplified through credit and liquidity contagion channels. Indeed, the default of one or more clearing members has the potential to impact other members as well: as highlighted by ESMA, a significant part of the protection central counterparties are equipped with is given by the resources provided by non-defaulting clearing members, which are in turn at risk of facing significant second-round losses. Our method allows us to quantify these potential equity losses (the vulnerabilities) of clearing members resulting from the dynamics of shock reverberation between the clearing members themselves. We can thus assess the adequacy of the central counterparty's default fund---which, according to the EMIR Regulation, is gauged to cover losses resulting from the default of the two most exposed clearing members (the so-called "cover 2" requirement). The stress test methodology we propose consists of the following operative steps: 1. use a Merton-like model to obtain daily balance sheet information for clearing members; 2. reconstruct the network of bilateral exposures between clearing members; 3. apply a set of initial shocks to the market (idiosyncratic, macroeconomic and on margins posted); 4. reverberate the initial distress through credit and liquidity shocks, and quantify the overall equity losses.
We apply the proposed framework to the Fixed Income asset class of CC&G, the central counterparty operating in Italy whose main cleared securities are Italian Government Bonds. If we simulate an initial shock corresponding to the cover 2 case, after an unlimited reverberation of shocks the system falls into a stationary configuration which is comparable to the one obtained with a more uniform distribution of initial shocks. However, if we stop the propagation after the first reverberation, the cover 2 shock appears more severe, triggering a greater number of additional defaults. On the one hand, this shows that---at least under "extreme but plausible" market conditions---the cover 2 requirement is a good proxy of a systemic shock; on the other hand, a network-based stress test can be deemed a more refined tool for calibrating central counterparties' default funds.
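
A minimal sketch of the shock-reverberation step (4.), assuming a pure credit-contagion channel in which creditors write off exposures to defaulted members. All figures are invented, and the liquidity channel of the actual methodology is omitted.

```python
def propagate_defaults(equity, exposure, shock, recovery=0.0):
    """Iterate credit-contagion rounds until no new defaults occur.

    equity[i]     : initial equity of clearing member i
    exposure[i][j]: credit of i towards j (written off if j defaults)
    shock[i]      : initial equity loss of member i
    """
    n = len(equity)
    eq = [equity[i] - shock[i] for i in range(n)]
    defaulted = set()
    changed = True
    while changed:
        changed = False
        for j in [i for i in range(n) if eq[i] <= 0 and i not in defaulted]:
            defaulted.add(j)
            changed = True
            for i in range(n):
                if i not in defaulted:
                    eq[i] -= (1 - recovery) * exposure[i][j]
    return defaulted, eq

# Toy 'cover 2'-style shock wiping out member 0, who owes member 1
result = propagate_defaults(
    equity=[10, 5, 4],
    exposure=[[0, 0, 0], [6, 0, 0], [0, 3, 0]],
    shock=[12, 0, 0],
)
print(result)  # the second-round loss also defaults member 1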

Economic and Political Effects on Currency Clustering Dynamics

ABSTRACT. We propose a new measure, which we call the symbolic performance, to better understand the structure of foreign exchange markets. Instead of considering currency pairs, we isolate a time series for each currency which describes its position in the market, independent of base currency. We apply the k-means clustering algorithm to analyze how the roles of currencies change over time, from reference status or average appreciations and depreciations with respect to other currencies to large appreciations and depreciations. We show how different central bank interventions and political and economic developments, such as the cap enforced by the Swiss National Bank or the Brexit vote, affect the position of a currency in the currency network. Additionally, we demonstrate how the symbolic performance encodes the correlation and dependence of currencies, allowing us to quantify the influence one currency has over another.
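
The clustering step can be sketched with a scalar Lloyd iteration over per-period performance values. The numbers, the choice k = 3, and the regime labels (depreciation / stable / appreciation) are invented for illustration; the authors' symbolic performance and cluster count may differ.

```python
def kmeans_1d(xs, k, iters=100):
    """Lloyd's k-means for scalar data, quantile-spread init (k >= 2)."""
    s = sorted(xs)
    cents = [s[round(i * (len(s) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda j: abs(x - cents[j]))
            groups[j].append(x)
        cents = [sum(g) / len(g) if g else cents[j]
                 for j, g in enumerate(groups)]
    labels = [min(range(k), key=lambda j: abs(x - cents[j])) for x in xs]
    return labels, cents

# Invented per-period performance values for one currency
perf = [-0.05, -0.04, 0.0, 0.01, 0.06, 0.07]
labels, cents = kmeans_1d(perf, k=3)
print(labels)  # groups the periods into three regimes
```

In the full analysis each currency contributes a vector of such values per time window, and regime switches of the cluster label mark events like the SNB cap or the Brexit vote.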

The Dance of Godzilla and the Earthquake: On the Sectoral and Structural Foundations of Macroeconomic Fluctuations

ABSTRACT. Our investigation takes as vantage points two recent empirical findings: firstly, there are significant co-movements between the business cycles of different countries (e.g. Johnson, 2014). Secondly, because firm sizes are heavy tailed, aggregate fluctuations in a given country can be traced back to output fluctuations of the largest firms (e.g. Carvalho and Grassi, 2015).

Against this backdrop we employ methods from network science and time series analysis to understand the extent to which business cycle co-movement between different countries can be traced back to the firm level and the network of multinational firm relationships. More precisely, we suppose that the co-movement of business cycles between countries can be explained partly by the fact that the large firms which determine business-cycle movements in both countries are linked through their multinational activities (such as trade, activity in the same value-added chain, or joint R&D activities). Furthermore, we relate these results to the theory of economic complexity and the product space (Hidalgo et al. 2007, Hidalgo & Hausmann 2009, Tacchella et al. 2012): controlling for the sectoral composition of the economies, we expect countries with higher complexity to be less vulnerable to economic volatility, both internally – induced by any of the predominant firms – and externally. Thus, countries with low complexity can be expected to show greater co-movements with more advanced countries. This relationship would carry important policy implications with regard to international development collaboration.

To quantify the effect of the firm level on business cycle co-movements we proceed as follows: Firstly, we quantify the business cycle co-movements by analyzing the dependency structure of the first moments of countries' time series of GDP, profits of the largest firms, and changes in the business activities of the largest firms. Secondly, we study the inter-firm networks and the connectedness between large firms in countries with strong business cycle co-movement. Thirdly, we investigate the possibility that information on firm relatedness and firm-level fluctuations can predict the business cycles of each of the other countries. Finally, we study the relationship between product complexity and business cycle co-movement. The resulting model will then be compared, in terms of its predictive power, with the importance of alternative sources of business-cycle co-movement, in particular the degree of sectoral similarity between the countries and the amount of bilateral trade.

References -----------

Carvalho, Vasco M., and Basile Grassi. 2015. “Large Firm Dynamics and the Business Cycle.” CEPR Discussion Paper 10587.

Hidalgo, Cesar A. et al. 2007. “The Product Space Conditions the Development of Nations.” Science, 317(7), 482–487.

Hidalgo, Cesar A., and Ricardo Hausmann. 2009. “The building blocks of economic complexity.” Proceedings of the National Academy of Science, 106(26), 10570-10575.

Johnson, Robert C. 2014. “Trade in Intermediate Inputs and Business Cycle Comovement.” American Economic Journal: Macroeconomics, 6(4): 39–83.

Tacchella, A. et al. 2012. “A New Metrics for Countries' Fitness and Products' Complexity.” Scientific Reports, 2, p.482.

Cognitive biases, perceived wealth and household debt accumulation

ABSTRACT. Recent findings in behavioural economics and social cognitive psychology show that differences in perceptions can modify individual behaviour in ways that have potentially relevant macroeconomic consequences. This is due to the presence of cognitive biases which lead individuals to make decisions that are inconsistent with the actual amount of available resources (Morewedge et al., 2007; Soman, 2001; Sussman and Shafir, 2012). In line with these findings, we introduce the construct of perceived wealth, which identifies a cognitive bias that creates a distorted perception of individual net worth, leading to consumption, saving and borrowing decisions that are not consistent with the actual level of wealth. Finally, we build a simple macro agent-based model (ABM) in order to study the macroeconomic consequences of individual consumption and borrowing decisions based on perceived wealth – measured by the product of deposits and the net-worth-to-liability (NWL) ratio – rather than actual net worth. Our results show that perceived wealth may trigger a process of overconsumption and massive debt accumulation which jeopardises macroeconomic stability. In addition, in the presence of the cognitive bias, individuals are also likely to overestimate their ability to pay back consumption loans in the future. As such, individual consumption and borrowing decisions become a potential source of instability for financial markets, as banks may accumulate non-performing loans that affect credit availability for future borrowers. Hence, perceived wealth can explain the emergence of booms and busts characterised by consumption euphoria that eventually results in a debt crisis and a credit crunch.
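
The core mechanism — consumption and borrowing keyed to perceived rather than actual net worth — can be sketched as below. The deposits × NWL proxy is the abstract's; the behavioural rule and all numbers are invented for illustration and are not the paper's ABM.

```python
def debt_path(income, deposits, debt, periods, c_frac=0.2, biased=True):
    """Toy household: each period consume c_frac of wealth and borrow any
    shortfall over income. Biased agents use perceived wealth (deposits
    times the net-worth-to-liability ratio); unbiased ones use net worth."""
    path = [debt]
    for _ in range(periods):
        net_worth = deposits - debt
        if biased and debt > 0:
            wealth = deposits * (net_worth / debt)  # perceived-wealth proxy
        else:
            wealth = net_worth
        consumption = c_frac * max(wealth, 0.0)
        debt += max(0.0, consumption - income)      # consumption loan
        deposits += max(0.0, income - consumption)  # saved surplus
        path.append(debt)
    return path

biased = debt_path(income=10, deposits=100, debt=50, periods=10)
unbiased = debt_path(income=10, deposits=100, debt=50, periods=10, biased=False)
print(biased[-1], unbiased[-1])  # the biased household accumulates debt
```

With these parameters the perceived wealth (100) exceeds actual net worth (50), so the biased household overconsumes and borrows every period, while the unbiased one keeps its debt constant.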

16:30-18:30 Session 10E: Infrastructure, Planning and Environment - Complex Urban Systems

Parallel session

Location: Xcaret 3
The hierarchical landscape of industries in the UK
SPEAKER: Elsa Arcaute

ABSTRACT. Co-location of firms is a well-recognised strategy to take advantage of agglomeration economies. Using data at the micro level, we explore whether there is a hierarchical structure underlying the clustering of firms in London and whether its composition changed after the 2008 crisis. We take particular interest in knowledge-based industries and look into the role of diversity and of the historical context in the observed patterns.

Urban Systems diversity measures

ABSTRACT. Digesting the complex temporal dynamics of hierarchical communities in a way that captures the changes in rank and size of their members is hard to attain. In urban systems in particular, scaling laws and rank-clock approaches have proved to capture much of this dynamic at the macro and micro scales respectively, correlating the variation of urban attributes with city size. Nevertheless, the former depends highly on a coherent city definition and the latter loses the actual population in the analysis. Fully aware that these problems are perhaps intractable, here we argue that adding simple diversity measures to the analysis could give some insight into the self-organization process that these urban hierarchical structures experience over time. Borrowing some ideas from linguistics and biology, we looked at the behaviour of the rank itself (measured as the number of different cities occupying a given rank over time) and related it to the mean rank-clock shifts and the cities' total turnover from one year to another, to compose a rational picture of the complex temporal evolution of the urban system in terms of its population size. We selected 10 urban systems (UK, France, Italy, Spain, Mexico, USA, Colombia, Canada, Japan, ex-Soviet Union) as case studies and applied these diversity measures over a 100-year period (1900 to 2010) divided into 12 points roughly corresponding to national official censuses. Our findings emphasize the differences between European systems and their Asian and American counterparts, reinforcing the notion that there is no ultimate rank-size universality to be found in cities. For example, the corpus of cities present in the lower ranks at all years sampled is much larger for European systems than for American ones, reflecting the fundamental differences in founding dates between the two continents.
American systems show "more" variety at middle ranks, reflecting a stronger interaction between their cities over these last 110 years. Finally, we tested our measures using an alternative definition of cities for the UK to explore their robustness.
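
The rank-occupancy measure described above — how many distinct cities ever pass through each rank — reduces to a few lines; the three-census toy system below is invented.

```python
def rank_diversity(populations_by_year):
    """For each rank, count the distinct cities that ever occupied it.

    populations_by_year: {year: {city: population}}.
    """
    occupants = {}
    for pops in populations_by_year.values():
        ranking = sorted(pops, key=pops.get, reverse=True)
        for rank, city in enumerate(ranking, start=1):
            occupants.setdefault(rank, set()).add(city)
    return {rank: len(cities) for rank, cities in occupants.items()}

# Invented three-census toy system
censuses = {
    1900: {"A": 100, "B": 90, "C": 50},
    1950: {"A": 120, "B": 95, "C": 60},
    2000: {"B": 200, "A": 150, "C": 70},
}
print(rank_diversity(censuses))  # rank 3 is 'frozen'; ranks 1-2 churn
```

A rank whose count stays at 1 over a century is "frozen" (the same city always holds it), whereas high counts at middle ranks indicate the churn the abstract attributes to the American systems.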

Networks of urban vulnerability

ABSTRACT. A city is a complex system, composed of a multitude of interacting actors of different types. And while complexity is one of the main reasons for a city's efficiency, it is also one of its major vulnerabilities. Urban systems are largely dependent on the functionality of their key components, and one or several local failures can often cause substantial disruption to the entire system. In that regard, understanding the vulnerabilities of urban systems is critical for urban planning, transportation and public safety.

Vulnerability of road networks, for example, is a well-studied area which has seen many advances, including recent ones – in particular, approaches have been developed to identify the potential impact of disruptions happening to a certain node or link of the network [Ukkusuri, S. V., & Holguín-Veras, J. (2007), in Network Science, Nonlinear Science and Infrastructure Systems; Li, J. and Ozbay, K. (2012), Journal of the Transportation Research Board (2284)].

Disruptions can happen for a variety of reasons, including infrastructural failures, planned interventions, natural or technogenic disasters, or even terrorist attacks. But in most cases they create a negative impact on urban mobility, resulting in delays for the urban population in reaching their locations of interest. Major disruptions can even cause people to cancel their plans and change their destinations, but those scenarios are beyond the scope of the present study.

However, under certain conditions a simultaneous disruption of two or more locations across the city (causing transportation system failures) can have a cumulative impact (delay) on urban mobility larger than the sum of their individual impacts taken separately, creating an effect one can call a disruptive synergy. One can represent this effect by constructing a vulnerability network whose nodes represent urban locations, while edges are weighted according to the surplus of the projected cumulative delay to expected urban mobility caused by a simultaneous disruption of a pair of nodes over the sum of their separate impacts.
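
The edge-weight definition can be made concrete with a toy delay model; the delay function and site roles are invented (in the study the delays come from multi-modal routing against LEHD/Twitter mobility demand).

```python
def delay(disrupted, critical_pair=frozenset({0, 1})):
    """Toy city-wide delay model: sites 0 and 1 are redundant river
    crossings; losing both severs the only remaining route."""
    if critical_pair <= set(disrupted):
        return 100
    return 10 * len(disrupted)

def synergy_weight(i, j, delay_fn=delay):
    """Vulnerability-network edge weight: surplus of the joint-disruption
    delay over the sum of the individual delays."""
    return delay_fn({i, j}) - delay_fn({i}) - delay_fn({j})

print(synergy_weight(0, 1))  # 80: disruptive synergy
print(synergy_weight(0, 2))  # 0: the impacts are simply additive
```

Only node pairs with strictly positive weight contribute edges to the vulnerability network, so its structure isolates exactly the redundancy-breaking combinations.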

In the present work we construct and study the vulnerability networks for NYC and several other major US cities. For that purpose, we leverage information on available urban multi-modal transportation options on one hand and expected mobility estimates on the other, based on the Longitudinal Employer-Household Dynamics (LEHD) and geo-tagged Twitter data. The latter, despite its limitations, is seen as a proxy for human mobility [Hawelka, B., et al. (2014), Cartography and GIS, 41(3); Kurkcu et al., 95th TRB Annual Conference, #16-3901], supplementing static LEHD data with temporal variations of transportation demand.

In this analysis we ask: to what extent do urban vulnerability networks for different cities exhibit common statistical and physical patterns, and to what extent are those patterns city-specific? We also apply the vulnerability networks to discovering the structure of the city from a vulnerability standpoint, defining communities of locations that represent a particular potential threat if disrupted together, and compare those communities to known patterns in urban infrastructure, identifying new insights for urban and public safety stakeholders.

Modeling hierarchy and specialization of a system of cities as a result of the dynamics of firms' interactions
SPEAKER: Mehdi Bida

ABSTRACT. The two main characteristics of systems of cities are the size distribution and the specialization of the cities. These characteristics have been studied extensively by geographers (Christaller, 1933; Berry, 1964; Pred, 1977; Bourne, 1984; Pumain, 1982; Batty, 2005; Pumain et al., 2006), and more recently by physicists (Makse et al., 1995; Schweitzer & Steinbrink, 1998; Bettencourt et al., 2007). All this literature underlines the remarkable constancy in space and time of Zipf's law for the distribution of city sizes (Zipf, 1941). Alongside the urban hierarchy, the degree of cities' economic specialization follows a trend opposite to size. It has been shown that accounting solely for the diffusion of innovations across the network that cities form is necessary and sufficient to reproduce the observed characteristics of urban systems (Bura et al., 1996; Pumain, Sanders, 2013). However, the models developed so far have always considered the entire city as the unit of the urban system, as in the Simpop model (Pumain et al., 2017).

In our approach, we propose a model where the cities forming the urban system are the result of micro-agents' interactions. This model also aims at reproducing the hierarchy and specialization of urban systems, but with a completely bottom-up approach: from the micro-agents, to the meso level of each single city, up to the macro level of the system of cities. We use an agent-based model whose agents are firms that evolve their cooperation network for innovation in a geographical space. Innovations stem from a firm, or a group of cooperating firms, and propagate across firms according to their geographical distance and their position in the agents' cooperation network. We will underline to what extent this basic model is able to reproduce the urban-system hierarchy. In a second step we will add a supplementary economic dimension consisting of different economic sectors. The new economic space, combined with the geographical one, will modulate the interaction patterns. In this configuration of the model, we will identify the necessary conditions to reproduce both the hierarchy and the specialization of urban systems.

Data-driven model of researchers’ migration

ABSTRACT. In the academic community there is a widely accepted belief that movement between institutions is beneficial to, possibly even essential for, a successful career. From doctorate to post-doc, lecturer to professor, many individuals relocate at some point in their career. Despite its common occurrence, it remains unclear how a researcher looking to relocate selects their next institution and at which point in time they decide to make this move.

Here we present an analysis of the APS publication database, which consists of over 400,000 papers, reconstructing the career trajectories of scientists to determine the driving forces behind their decisions to change institutions. The driving forces we consider include the relative performance of both the researcher and the institutions and the duration of employment, amongst others. We apply methods originating from machine learning, including decision tree regressions and random forest classifiers, in order to determine which factors are most influential in a researcher's decision to relocate.

Using this insight, we construct a mathematical model to describe the migration of researchers between institutions. The model may be used to determine both the probability that a researcher will migrate (i.e., change institution) and the probability of relocating to a given institution (i.e., the possible destinations). The insight gained from this work provides us with a deeper understanding of the factors that influence the migration decisions of researchers, alongside a general modelling approach to describe migration dynamics.
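
As a stand-in for the tree-ensemble machinery, a single-feature decision stump already illustrates how one asks which factor best predicts a move; the feature set and the career data below are invented.

```python
def best_stump(X, y):
    """Exhaustively find the single (feature, threshold) rule that best
    separates movers (1) from stayers (0); a crude proxy for the
    feature-importance question asked of the random forest."""
    best = (None, None, -1.0)
    for f in range(len(X[0])):
        for thr in sorted({row[f] for row in X}):
            pred = [1 if row[f] >= thr else 0 for row in X]
            acc = sum(p == t for p, t in zip(pred, y)) / len(y)
            acc = max(acc, 1 - acc)  # allow the inverted rule
            if acc > best[2]:
                best = (f, thr, acc)
    return best

# Invented careers: [years at institution, relative performance]
X = [[1, 0.9], [2, 0.4], [6, 0.8], [7, 0.2], [8, 0.5], [3, 0.6]]
y = [0, 0, 1, 1, 1, 0]   # 1 = relocated
print(best_stump(X, y))  # duration of employment is the decisive feature
```

In this toy data the tenure feature separates movers perfectly while relative performance does not, which is the kind of conclusion the abstract's feature-importance analysis aims at (with far richer models and data).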

Network properties of Mexican cities

ABSTRACT. Cities are complex systems par excellence. Human beings live in cities of various kinds, which are shaped by very specific and local processes, for example the topology of the land where they were built or the culture of the society that inhabits them. There are, however, universal properties that all human cities share. This talk discusses some of these universal properties in the context of Mexican cities. We analyze 3500 Mexican cities through the lens of complex networks. Our study encompasses cities over a broad range of populations, from cities with 2500 inhabitants to megacities with millions of inhabitants such as Mexico City. In a system like the city, different communities emerge with their own identities; this is due to multiple factors, among them the physical infrastructure of the city, of which the street structure is part. We represent the urban trace of each of the studied cities as a complex network and then analyze the statistics of the properties of these networks, their effects on human mobility, and how they change as a function of population. Representing the city as a complex network allows segmenting it into communities or subnetworks according to the connectivity properties of the network itself. These communities correspond to areas of the city with similar connectivity: structures that are topologically coherent with each other. These regions are compared directly with zones of the city obtained through other strategies, in particular in terms of socio-demographic variables such as social development, security, poverty, etc.
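
One lightweight way to segment a street network by connectivity is a single Girvan-Newman step: remove the highest-betweenness edge and read off the components. This is a sketch of the general idea, not necessarily the authors' algorithm, and the graph is invented.

```python
from collections import deque

def edge_betweenness(adj):
    """Brandes-style edge betweenness for an unweighted undirected graph
    given as {node: [neighbours]}."""
    bet = {}
    for s in adj:
        dist, sigma = {s: 0}, {v: 0 for v in adj}
        sigma[s] = 1
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:                         # BFS shortest-path counts
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):        # back-propagate dependencies
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                e = tuple(sorted((v, w)))
                bet[e] = bet.get(e, 0.0) + c
                delta[v] += c
    return bet

def split_once(adj):
    """One Girvan-Newman step: cut the most central edge, return components."""
    bet = edge_betweenness(adj)
    u, v = max(bet, key=bet.get)
    adj = {a: [b for b in nbrs if {a, b} != {u, v}] for a, nbrs in adj.items()}
    comps, seen = [], set()
    for s in adj:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            x = q.popleft()
            for w in adj[x]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    q.append(w)
        comps.append(comp)
    return comps

# Two street 'neighbourhoods' (triangles) joined by a single bridge 2-3
streets = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(split_once(streets))  # the bridge is cut; two communities remain
```

The bridge edge carries every shortest path between the two triangles, so it has the highest betweenness; cutting it recovers the two topologically coherent zones.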

The Interhospital Transfer Network for Very Low Birth Weight Infants in the United States

ABSTRACT. Background- Very low birth weight (VLBW) infants are frequently transferred among hospitals, yet the structure of transfers and how structural variation affects care is not well understood.

Objective- To identify structural relationships in the interhospital transfer network for VLBW infants in the United States and determine how structural variation relates to function, i.e. care.

Methods- We used data from the Vermont Oxford Network (VON) to construct the interhospital transfer network. The transfer network was partitioned into communities using modularity maximization via message passing. Centrality was quantified using message passing, and hierarchy was determined using spectral entropy. Significance was determined by hierarchical Bayesian modeling and permutation.

Results- In 2015, VON hospitals in the US cared for 44,859 VLBW infants. The interhospital transfer network included 2,126 hospitals with 10,185 infant transfers among the hospitals. The figure shows two views of the transfer network. The communities differed in terms of both the degree of hierarchy and pattern of transfers. These structural differences accounted for among-community variation in the frequency of hospital acquired infections and successful infant discharge.

Conclusions- The interhospital transfer network for VLBW infants in the US consists of structurally variable communities. This variation affects care, in terms of the percentage of successful discharges and hospital acquired infections, and function, in terms of the frequency of infant transfers. This study is the first to demonstrate that the structure of the interhospital transfer network for VLBW infants in the US varies and that this variation may affect care.

16:30-18:30 Session 10F: Biological and (Bio)Medical Complexity - Cellular dynamics and neurobiology II

Parallel session

Location: Tulum 1&2
Information theory, predictability, and the emergence of complex life

ABSTRACT. Darwinian dynamics emphasize fast replication and large progeny in a process devoid of final cause. To satisfy these constraints it pays to be small and relatively simple, like bacteria, while large, complex, costly structures are often penalized. And yet different levels of cognitive (and other forms of) complexity have been achieved by living systems – which poses an evolutionary conundrum. Stephen Jay Gould [1] argued that simple life forms still largely dominate the biosphere, and that the ‘incidental’ complexity we observe is the result of a random drift biased towards larger complexity only because a lower boundary exists: nothing much simpler than a bacterium can replicate autonomously, hence any random fluctuation is likely to yield more complex replicators. He insisted that explicit selection for complexity does not take place [1]. An alternative solution is that cognitive complexity might allow living systems to extract enough information from their environment to cope with the costs associated with the expensive processing structures. Under such circumstances complexity could be explicitly selected for by natural selection.

Inspired by Maynard-Smith [2], we modeled evolutionary dynamics as a message passed down across generations through a noisy channel (the environment) [3]. In [2], genes carry meaningful bits of information that must be protected against the action of the environment. Against this view, we show that messages that better cope with the environment naturally replicate faster; hence a better characterization is that of the environment pumping meaningful bits of information into the genome. This can be rewritten as a prediction task in which so-called bit-guessers [3] attempt to reduce the uncertainty of the environment. Based on these insights we develop a minimal mathematical model that allows us to address the tradeoff between complexity and replication. With it we can prove that complex living systems can be selected for beyond random drift, and we can quantify under what circumstances this will happen. The complexity of the environment is the key driver in these dynamics, but other actors can also be integrated into the model, as we discuss in our work [3]. The model has also proved extremely versatile in addressing other biological scenarios, so we expect this to be the first in a series of contributions on the same topic.
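
The tradeoff can be compressed into a single expected-fitness comparison. This is an illustrative reduction, not the model of [3]: a free guesser bets on the environment's majority bit, while a costly one reads cues with accuracy r at fitness cost c, and complexity wins only when the environment is unpredictable enough.

```python
def selected_guesser(p, r, c):
    """Which bit-guesser is favoured against an environment emitting 1s
    with probability p? The simple guesser bets the majority bit for
    free; the complex one reads cues with accuracy r at processing cost c."""
    simple = max(p, 1 - p)   # expected fraction of bits guessed, no cost
    complex_ = r - c         # higher accuracy, but the machinery is costly
    return "complex" if complex_ > simple else "simple"

print(selected_guesser(0.9, 0.95, 0.10))  # predictable world: stay simple
print(selected_guesser(0.5, 0.95, 0.10))  # unpredictable world: complexity pays
```

The break-even condition r − c > max(p, 1 − p) makes the abstract's claim concrete: environmental unpredictability (p near 0.5) is what opens the niche for costly cognition.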

[1] Gould SJ, 2011. Full house. Harvard, MA: Harvard University Press. [2] Maynard-Smith J, 2000. The concept of information in biology. Philos. Sci., 67(2), 177-194. [3] Seoane LF and Solé R, 2017. Information theory, predictability, and the emergence of complex life. Under review, J. R. Soc. Interface.

Numerical Take On Multidimensional Chemical Master Equation: Modelling Biological Switches
SPEAKER: Ivan Kryven

ABSTRACT. When discussing master equations that govern population dynamics, the term ‘high-dimensional’ is routine rather than exceptional. Here we mean ‘population’ in the broadest sense: whether molecules, cells, bacteria, colloids, people, or connected components in a random network, it is natural to represent the system state as a multidimensional probability (or mass) density function. This reflects a statistical view of the system as a population of samples with deviating vector-valued properties. In this talk, I will focus on the case when, even though the distribution that solves the master equation is multivariate, it is supported only on a ‘small’ manifold compared to the whole state space. Here, the support is defined as the region where the probabilities are larger than a pre-defined threshold. Such a manifold may have a non-trivial shape, and may even change its topology as the distribution progresses in time. Radial basis functions are employed to approximate the distribution in the interior of the manifold. At the same time, the shape of the manifold is tracked by the level set method, so that the approximation basis can be adapted to changes in the distribution support. The talk is fortified with examples inspired by problems from the cell-differentiation paradigm that feature metastable behaviour.
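
In one dimension the idea of a thresholded support is easy to demonstrate: integrate a birth-death chemical master equation on a truncated state space and keep only the states above the threshold. This explicit-Euler toy (invented rates) is far simpler than the talk's machinery — RBF interpolation plus level-set tracking, which targets genuinely multivariate problems.

```python
def cme_step(p, k, g, dt):
    """One explicit-Euler step of the birth-death master equation
    dp_n/dt = k p_{n-1} + g (n+1) p_{n+1} - (k + g n) p_n,
    with a reflecting truncation at n_max = len(p) - 1."""
    n_max = len(p) - 1
    new = p[:]
    for n in range(n_max + 1):
        inflow = (k * p[n - 1] if n > 0 else 0.0) \
               + (g * (n + 1) * p[n + 1] if n < n_max else 0.0)
        outflow = (k if n < n_max else 0.0) + g * n
        new[n] = p[n] + dt * (inflow - outflow * p[n])
    return new

def support(p, eps=1e-6):
    """States where the probability exceeds the threshold."""
    return [n for n, q in enumerate(p) if q > eps]

# Relax a point mass at n = 0 towards the Poisson(k/g) steady state
p = [1.0] + [0.0] * 20
for _ in range(5000):
    p = cme_step(p, k=2.0, g=1.0, dt=0.01)
mean = sum(n * q for n, q in enumerate(p))
print(round(mean, 2), support(p)[:5])
```

Tracking how `support(p)` grows, shrinks, or splits over time is the 1-D analogue of the manifold (and its possible topology changes) that the level-set method follows in higher dimensions.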

[1] Kryven, Ivan, Susanna Röblitz, Christof Schütte. "Solution of the chemical master equation by radial basis functions approximation with interface tracking." BMC systems biology 9.1 (2015): 67.

Global topological conservation versus local reorganization of the psychedelic brain

ABSTRACT. We analyse the homological structure of the functional connectivity of the human brain under the effect of two different psychedelic drugs, psilocybin [1] and LSD [2], via persistent homology [3], a technique in topological data analysis able to capture multiscale higher-order patterns. Previous work showed that subjects injected with psilocybin displayed a remarkably different functional topology compared with subjects who received a placebo [4]. The differences lay both in the overall modulation of connectivity and in the localization of the homological features in the brain. Global information about topology is summarised by persistence diagrams, which describe the lifespan of functional cycles, corresponding to areas of localized weaker connectivity bounded by strongly interacting cycles. Local information is instead encoded in a set of surrogate networks, called persistent homology scaffolds, which bring the information about functional cycles back to the brain-network level, effectively yielding its topological skeleton [4].

In this contribution, we first replicate the psilocybin results under a different preprocessing pipeline, showing that the homological properties detected are robust across processing pipelines and that psilocybin causes a strong reorganization of the local correlational structure. Then, by leveraging recent results on distance kernels for persistence diagrams [5], we compare the topology of functional brain networks for subjects under psilocybin and subjects under LSD, and we find that, at the topological level, psilocybin produces functional alterations that are more uniform across subjects than those produced by LSD. We then focus on the localization of these functional alterations (as expressed by the homological scaffolds) and find that the scaffolds for subjects under LSD and placebo share a large fraction of edges (~50%), with the difference stemming from a strong reduction of the edge weights of the LSD scaffold, which correlates with the self-reported intensity of the psychedelic experience; in the psilocybin case, instead, scaffolds under drug and placebo share a very small fraction of edges (~5%), and even on those there is no relationship between the edge weights in the two conditions. Considering the global (persistence diagram distances) and local (scaffold weight modulations) evidence, we find strong support for a different topological effect of the two drugs: LSD causes small effects on the topological structure that are inconsistent across subjects, whereas psilocybin produces a substantial network rearrangement that is more consistent across subjects.
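
For intuition on persistence diagrams: in dimension zero, the barcode of a point cloud is just the set of minimum-spanning-tree edge lengths (components merge when the filtration scale reaches the connecting edge). The brain analyses above concern one-dimensional cycles, which need dedicated software, but the zero-dimensional case fits in a few lines (toy points invented):

```python
import math

def h0_barcode(points):
    """Death times of the 0-dimensional persistent homology classes of a
    Euclidean point cloud: Prim's MST edge lengths (all classes are born
    at scale 0; the one essential class of the full cloud is omitted)."""
    n = len(points)
    deaths = []
    best = {i: math.dist(points[0], points[i]) for i in range(1, n)}
    while best:
        j = min(best, key=best.get)      # next point to join the tree
        deaths.append(best.pop(j))
        for i in best:
            best[i] = min(best[i], math.dist(points[j], points[i]))
    return sorted(deaths)

# Two well-separated pairs: two classes die at scale 1, one survives
# until the clusters merge at distance 10
print(h0_barcode([(0, 0), (0, 1), (10, 0), (10, 1)]))  # [1.0, 1.0, 10.0]
```

The long bar (10.0) is the persistent feature; in the brain setting its higher-dimensional analogues are the long-lived cycles that the scaffolds localize back onto edges.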

Social Intelligence in Pseudomonas aeruginosa

ABSTRACT. Pseudomonas aeruginosa (PA) is one of several species of bacteria that exhibit swarming motility, defined as the ability of a collection of bacterial cells to rapidly spread across a solid surface such as agar supplemented with nutrients. Bacterial swarming is a phenomenon that involves collective decision making, for instance, to estimate whether a critical population size has been reached or whether the conditions are favourable for swarming. Swarming bacteria display spatial patterns characteristic of their species, ranging from smooth circular patterns to the finger-like patterns formed by PA. The swarming patterns extend over a length scale (tens of centimeters) which is 4-5 orders of magnitude larger than the typical size of a single bacterium (about a micrometer). Given the extremely large number of individuals in a bacterial swarm (more than 10^7 cells), and the fact that individual bacteria are likely to encounter a highly stochastic environment in terms of local concentrations of nutrients and signalling molecules, bacterial swarms provide an excellent system to probe biological complexity. The relative ease of performing temporal and spatial multi-scale imaging enables detailed documentation of the emergence of robust collective behaviour from stochastic individual behaviour in bacterial swarming; it is more difficult to piece together such data in other systems studied in the context of biological complexity, such as reaction networks. We will describe interesting instances of robust collective control exhibited by PA swarms. Specifically, we focus on the ability of a PA swarm to sense its neighbourhood and to modulate its movement based on the sensory cues. Multiple PA colonies swarming on the same agar plate can sense an approaching colony from as far as a centimeter.
The response of a swarming PA tendril (a finger-like projection) to an approaching PA tendril suggests the dynamic emergence of a colony-level altruism along with reorganisation of the bacteria within the swarm. The swarm reorganisation process begins with a unique tendril retraction without any significant distortion of the swarm finger shape, as might be expected from jamming associated with the direction reversal of a large population of motile agents. Quite surprisingly, our experiments present strong evidence for the ability of a swarming PA tendril to sense non-biological obstacles made of inert polymers such as PDMS (polydimethylsiloxane). We propose a computational model of motion control in PA swarms based on the sensing of concentration gradients of nutrients and signalling molecules. While the current computational model is a coarse-grained fluid dynamical model, we are exploring the development of a microscopic single-cell-based model to better understand the strategies employed by the swarm to overcome stochasticity at the single-bacterium level. In summary, bacterial swarms present a rich system to study complexity in a biological setting. Understanding the rules governing the behaviour of bacteria at the single-cell level, and studying how such rules lead to the emergence of the robust collective behaviour observed in experiments, holds promise for developing successful strategies to deal with the complexity associated with large systems.

Diagnosis of epilepsy from the reconstruction of the attractor of EEG

ABSTRACT. In this work, we performed an ANOVA analysis of the embedding dimension and the delay time, two quantities that intervene in the reconstruction of the attractor of nonlinear time series from electroencephalograms (EEG), including alpha signals of both healthy and epileptic patients. The measurements corresponding to healthy patients were divided into two groups: one in which the EEG was recorded with the eyes open (type Z) and another with the eyes closed (type O). A second group of measurements, corresponding to intracranial EEG, included two types of epileptic patients: in the first, seizure-free group (type N), the measurements were obtained from the hippocampal zone, while in the second, the measurements were obtained from the epileptogenic zone. As in similar studies, we focused on the correlation dimension, but we additionally included the delay time to study their possible correlation. We found that, between the two kinds of patients, there are significant differences (p = 0.05) in the embedding dimension and the delay time. We conclude that, in the presence of the disease, the dynamics of the signals has fewer degrees of freedom than in its absence. These results could support the creation of software for routine medical use, which could be helpful in the diagnosis of the disease, as the embedding dimension would indicate the place in which epileptic seizures start.
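As a sketch of the reconstruction step on which the ANOVA operates, the function below builds delay vectors from a scalar series (Takens embedding); the embedding dimension m and the delay time tau are the two quantities compared between patient groups. The values used here are illustrative, not those estimated from the EEG data.

```python
# Sketch of time-delay (Takens) embedding: each point of the reconstructed
# attractor is an m-dimensional vector of samples spaced tau steps apart.

def delay_embed(x, m, tau):
    """Return the list of m-dimensional delay vectors built from series x."""
    n = len(x) - (m - 1) * tau
    return [tuple(x[i + j * tau] for j in range(m)) for i in range(n)]
```

In practice tau is often chosen at the first minimum of the mutual information (or of the autocorrelation) and m via false nearest neighbours, before estimating quantities such as the correlation dimension on the embedded cloud.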

Neuromodulation in crustacean circadian rhythm: a theoretical analysis

ABSTRACT. Circadian rhythms are universal biological clocks that synchronize cell and organism behaviors with natural rhythms. Their main role is probably to regulate the sleep-wake cycle. Although a “master clock”, a single physiological structure responsible for generating circadian rhythms, has yet to be found, specific organs and systems are responsible for robustly maintaining circadian activity in different organisms, for example, the suprachiasmatic nuclei (SCN) in the human hypothalamus, the pineal gland in birds, and the eyestalk in crustaceans. Robust maintenance of circadian rhythms relies on the endogenous release of hormones and neurotransmitters, the resonance of several ultradian (shorter than a day) rhythms, synchronization with external cycles such as food and light, and other properties particular to the organism, e.g. age [Fanjul-Moles1992, Escobar1999]. Thus, complexity ensues. Recently, cellular excitability and its modulation have also been identified as key players in the genesis and regulation of circadian rhythms [Nitabach2002].

In the case of crustaceans, such as crayfish of the genus Procambarus, the eyestalk serves not only as a sense organ but also coordinates rhythms involved in the individual’s locomotor system. The X organ-sinus gland system, located in the eyestalk, is of particular interest because it houses 150-200 neurosecretory cells that release various hormones, some of which are crucial to regulating blood sugar levels and molting [Garcia1998]. Recordings of X organ neurons during the circadian cycle show rich transitions in their electrical activity: within an 11-hour time-lapse they have been shown to exhibit tonic spiking, repolarizing blocks, low-amplitude oscillations, bursting and silence [Garcia1998]. These transitions are mediated by a plethora of neuromodulators, such as γ-aminobutyric acid (GABA), Met-enkephalin (Met-enk) and 5-hydroxytryptamine (5-HT), which regulate the type and amount of neurotransmitters and hormones released during the day by the X organ-sinus gland system.

We employ conductance-based modelling and novel non-linear sensitivity analysis techniques [Drion2015] to explore the maximal conductance space and to formulate predictions about the neuromodulatory mechanisms underlying the spiking-mode transitions observed in the X organ during the circadian cycle. We use a state-of-the-art conductance-based model of a stomatogastric ganglion (STG) neuron in Cancer borealis [Liu1998]. This model robustly reproduces all the activity patterns observed in the X organ and has been widely used in neuromodulation studies [Marder2014]. Our analysis leads to clear predictions about the possible paths in the maximal conductance space that realize the circadian neuromodulation in the X organ.
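As context for what "conductance-based" means here, the sketch below integrates a classic Hodgkin-Huxley single-compartment model with forward Euler. It is a generic stand-in, not the [Liu1998] STG model (which adds calcium dynamics and several more membrane currents); the maximal conductances are the textbook squid-axon values, and varying them is the kind of exploration of the maximal conductance space described above.

```python
# A minimal Hodgkin-Huxley neuron (textbook squid-axon parameters),
# integrated with forward Euler. Units: mV, ms, uA/cm^2, mS/cm^2.
from math import exp

def vtrap(x, y):
    """Safe evaluation of x / (1 - exp(-x/y)); the limit as x -> 0 is y."""
    return y if abs(x / y) < 1e-6 else x / (1.0 - exp(-x / y))

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01,
                g_na=120.0, g_k=36.0, g_l=0.3):
    e_na, e_k, e_l, c_m = 50.0, -77.0, -54.387, 1.0
    v, m, h, n = -65.0, 0.053, 0.596, 0.317   # approximate resting values
    trace = []
    for _ in range(int(t_max / dt)):
        # gating-variable rate constants
        a_m = 0.1 * vtrap(v + 40.0, 10.0)
        b_m = 4.0 * exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * vtrap(v + 55.0, 10.0)
        b_n = 0.125 * exp(-(v + 65.0) / 80.0)
        # total ionic current
        i_ion = (g_na * m ** 3 * h * (v - e_na)
                 + g_k * n ** 4 * (v - e_k) + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace
```

Moving the maximal conductances (g_na, g_k, g_l) through parameter space shifts such a model between firing regimes (silence, tonic spiking, block), loosely analogous to the activity transitions listed for the X organ.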

16:30-18:45 Session 10G: Socio-Ecological Systems (SES) - SES Complexity

Parallel session

Location: Cozumel 3
SPECIES: A platform for modeling spatial data and identifying ecological interactions

ABSTRACT. The data revolution of the last couple of decades has given rise to vast amounts of new data, as well as making old data available for analysis in new ways. Much of the data is spatio-temporal, giving information about where things (species, people, businesses, diseases, etc.) are, relative to one another in space and time. These spatio-temporal positions, in turn, are highly dependent on the interactions between objects. In physics, such information has yielded most of the knowledge we currently have about the interactions between physical objects. The most illustrative example is that of the gravitational interaction, associated with Brahe, Kepler and Newton, which was perhaps one of the first examples of data science: a database of the positions of planetary bodies (Brahe) was analysed and phenomenological regularities observed (Kepler), which led to the supposition of an interaction (Newton). Unlike physics, where the number of fundamental interactions is small and there is a very high degree of universality, in Complex Adaptive Systems we know much less about the interactions between organisms. Indeed, there are just too many to ever try to observe and characterize directly. We are therefore led to examine to what degree ecological interactions may be detected and characterized from data about where and when organisms are. The platform SPECIES (Sistema Para la Exploracion de Informacion Espacial) is an interactive on-line tool for creating predictive models at the level of ecological niche and community that include data, both abiotic and biotic, from arbitrary and distinct spatio-temporal resolutions. By constructing Complex Inference Networks, it can be, and has been, used to discover, identify and characterize ecological interactions. As examples, we consider emerging diseases, showing how the system can be used to identify vector-host interactions and the most important potential hosts for a given disease. Indeed, in the case of Leishmaniasis, the predictions have been validated by recent field work, leading to the prediction and confirmation of 23 new host species in Mexico, a 300% increase relative to what was known before. We will discuss the theoretical underpinnings of the system as well as various concrete applications: data validation, emerging diseases, risk analysis, biodiversity, habitat destruction, etc.

Evolution of mutualistic networks by speciation-divergence dynamics

ABSTRACT. Mutualistic networks have been shown to involve complex patterns of interactions among animal and plant species. The architecture of these webs seems to pervade some of their robust and fragile behaviour. Recent work indicates that there is a strong correlation between the patterning of animal-plant interactions and their phylogenetic organisation. Here we show that such pattern and other reported regularities from mutualistic webs can be properly explained by means of a very simple model of speciation and divergence. This model also predicts a co-extinction dynamics under species loss consistent with the presence of an evolutionary signal. Our results suggest that there is no need to assume that the ecological scale plays a major role in shaping mutualistic webs. Instead, the evolutionary unfolding of these webs would lead to the observed invariant properties.

Robustness of plant-pollinator mutualistic networks against phenological mismatches

ABSTRACT. Mutualistic interactions play an important role within many natural systems, with abundant examples ranging from the economic context to the biological world. Such mutualism occurs when two different species or agents engage in a relation that benefits both, instead of competing as in the case of predation. The paradigmatic subjects of study are, in fact, ecological networks, whose structural and dynamical features have been linked, thanks to a recently growing endeavour, to observations of ecosystems' biodiversity and stability. In this context, the extent to which the so-called mutualistic networks might be affected by global climate change remains a crucial question, especially for plant-pollinator communities, which seem to be particularly sensitive to climate alterations. Empirical studies report an advance in flowering and birth cycles (known as phenological shifts) as a response to environmental warming, which occasionally results in losses of temporal overlap between interacting species and, by extension, in an atrophy of the mutualistic functions.

In our work we determine the repercussions of such deterioration of pollination services in a scenario in which the probability of occurrence of phenological shifts can be tuned. Borrowing tools from population dynamics and statistical mechanics, we introduce a model accounting for such effects on networks. We apply it to a real system, based on the phenological and relational data collected by Burkle et al. [1], as well as to artificial networks. We observe that, as the noise in the initial dates of activity increases, the number of surviving species gradually descends until reaching a critical regime in which it sharply drops, indicating a massive disruption of the remaining subnetwork. These results suggest the existence of a non-equilibrium phase transition from an alive state to an absorbing state in which all species go extinct. Interestingly, we find that the quantity of phenological noise necessary to destroy the entire network entails a vanishing seasonality, pointing out that the system as a whole is extremely robust. Moreover, we find that pre-critical configurations are highly resilient, displaying a considerable inertia that slows down the process of extinction. At the critical regime, though, the resilience steeply declines and the rate of extinctions accelerates. Together with the disappearance of highly generalist species and the subsequent severe loss of nestedness, the existence of this final collapse alludes to a sort of cascading effect. Besides, our method permits us to identify the most vulnerable species, integrating the combined effect of the structure of interactions and their phenology distribution. It also allows us to assess the role that the core of generalists plays in enhancing the inertia against extinction. Future work should go in the direction of assessing the generality of these conclusions for other network sizes, as well as deciding whether the phase transition is continuous or not.

[1] Burkle, Laura A., John C. Marlin, and Tiffany M. Knight. "Plant-pollinator interactions over 120 years: loss of species, co-occurrence, and function." Science 339.6127 (2013): 1611-1615.

Interference between connectedness and species dynamics in in silico ecosystems governed by non-hierarchical competition
SPEAKER: Jan Baetens

ABSTRACT. Competition is one of the mechanisms that supports biodiversity in ecosystems. Previous works have shown that cyclic dominance among species can promote their coexistence and as such contributes to the maintenance of biodiversity. A classic example is the rock-paper-scissors (RPS) game, where three species interact in a non-hierarchical way, i.e., each species has one predator and, at the same time, preys on another species. Many food webs display such behavior, for example among certain bacterial species, in coral reefs, and among vertebrates. Moreover, some human decision-making processes are guided by similar interactions. Variants of the RPS game with more than three species can also be found in the literature, supporting the study of richer communities. Characteristics to be taken into account when modeling such evolutionary systems are community evenness and the individuals’ mobility. The first is related to the distribution of the species, which is often assumed to be uniform despite evidence to the contrary from real-world ecosystems. Secondly, mobility plays a crucial role in many ecosystems. For instance, it has been found that the coexistence of species is mediated by their dispersal and that there is a critical mobility threshold above which biodiversity is lost.

In computer simulations, square lattices are typically used to simulate the interactions among species, which strongly limits the individuals’ degrees of freedom. To a much lesser extent, graphs have been considered. Yet, other studies have led to the insight that the dynamics of a system can be affected by its underlying topology, making use of network models that are closer to real-world topologies. In the present work, we investigate how the structural properties of a set of graphs obtained from different network models (random, small-world, scale-free, geographic and regular) influence the dynamics of the RPS game. For that purpose, we generated 100 graphs per network model and per average vertex degree, for which we selected four values, namely 4, 6, 8 and 10. The RPS dynamics was then evolved for 2000 generations, and we investigated how the coexistence of species was affected by tracking the time until the first extinction, the evenness and richness of the community, and the community’s patchiness. In agreement with previous works, the following processes were simulated on these graphs: 1) selection according to the non-hierarchical competition structure, 2) reproduction and 3) migration. These interactions were stochastic, meaning that at every iteration one individual and one of its neighbors are randomly chosen, after which only one of the processes occurs, the choice being dictated by their respective reaction rates. Based on this extensive simulation study, and corroborating preliminary studies, we conclude that not only the mobility of the species but also their connectedness has an important impact on the evolved dynamics.
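The update rule just described (pick a random individual and a random neighbor, then apply selection, reproduction or migration with probabilities set by the reaction rates) can be sketched as follows; the graph, the rates and the species encoding are illustrative placeholders, not the configurations used in the study.

```python
# A minimal sketch of one stochastic RPS update on a graph given by an
# adjacency list. Species are 1, 2, 3 (0 marks an empty site); species s
# preys on s % 3 + 1, closing the rock-paper-scissors cycle.
import random

EMPTY = 0

def step(state, adjacency, rates, rng):
    sel, rep, mig = rates                      # reaction rates
    total = sel + rep + mig
    i = rng.randrange(len(state))              # random individual
    j = rng.choice(adjacency[i])               # random neighbor
    r = rng.uniform(0, total)
    a, b = state[i], state[j]
    if r < sel:                                # selection: predator kills prey
        if a != EMPTY and b == a % 3 + 1:
            state[j] = EMPTY
    elif r < sel + rep:                        # reproduction into empty site
        if a != EMPTY and b == EMPTY:
            state[j] = a
    else:                                      # migration: swap the two sites
        state[i], state[j] = b, a
```

Iterating this step on a ring, lattice or random graph and tracking richness, evenness and the time to first extinction reproduces the kind of experiment described above.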

Biogeographical network analysis of plant species distribution in the French Mediterranean area

ABSTRACT. The study of the distribution of biotic taxa on a territory represents a key step in the understanding, analysis and conservation of ecosystems, but it is often hindered by a level of diversity and complexity that may appear overwhelming at first glance. To better understand and visualize the biogeographical structure of a territory, it is therefore necessary to divide this territory into meaningful and coherent geographical regions, minimizing the heterogeneity in taxonomic composition within regions while maximizing the differences between them. While the delineation of biogeographical regions was long based on expert knowledge and qualitative data collection, the increasing availability of species-level distribution data and recent technological advances have allowed the development of more rigorous frameworks. Although limited consideration has been given to network approaches in biogeography, the generic nature of networks and the level of complexity they can capture at different scales make them a powerful tool for investigating the interactions among species occurring on a territory. In this work, we used a network approach to identify and characterize biogeographical regions in southern France, based on a large database containing information on millions of vegetation plant samples corresponding to more than 3,500 plant species. The methodology proceeds in five steps, from the construction of the biogeographical bipartite network to the identification of biogeographical regions and the analysis of their interactions based on plant species contribution indicators.

Mapping Social Ecological Systems Archetypes

ABSTRACT. Achieving sustainable development goals requires targeting and monitoring sustainable solutions tailored to different social and ecological contexts. Elinor Ostrom stressed that there are no panaceas or universal solutions to environmental problems, and developed a social-ecological systems (SES) framework, a nested multi-tier set of variables, to help diagnose problems, identify complex interactions, and tailor solutions to each SES arena. However, to our knowledge, the SES framework has been applied to only around a hundred cases, typically local case studies with relatively small coverage in space and time. While case studies are context rich and necessary, their conclusions might not reach policy-making instances. Here we develop a data-driven method for upscaling Ostrom's SES framework and apply it to a context where we expect data to be scarce and incomplete, but also where sustainable solutions are badly needed. The purpose of upscaling the framework is to create a tool that facilitates decision making in data-scarce environments such as developing countries. We mapped SES by applying the SES framework to poverty alleviation and food security issues in the Volta River basin in Ghana and Burkina Faso. We found archetypical configurations of SES in space given data availability, we study their change over time, and we discuss where agricultural innovations such as water reservoirs might have a stronger impact on increasing food security and therefore alleviating poverty and hunger. We conclude by outlining how the method can be used in other SES comparative studies.

Developing an early warning signal of a critical ecological threshold for gray whale breeding lagoons in Mexico

ABSTRACT. The use of early warning signals to prevent crossing undesired thresholds of social-ecological systems (SESs) is key to avoiding unsustainable pathways. To date, most of the research on SES thresholds has been done after such thresholds were crossed, and hence that research becomes irrelevant for preventing undesirable consequences. Consequently, it is crucial to develop approaches that enable both policy-makers and society to act in time and prevent undesired changes in SESs.

However, deep uncertainty about the interactions and feedbacks between the human and nature domains is pervasive in SESs. Thus, there is limited knowledge about when and how rapidly SES thresholds will be crossed.

One way to address deep uncertainty is through computational modeling, which allows for the exploration of multiple scenarios so that an early warning signal of an SES threshold to a catastrophic change can be identified. In that sense, SES modeling does not seek to predict the most likely future but rather to foster a more strategic vision of the future in the decision-making process. Through the case study of the gray whale breeding coastal lagoons in Baja California, Mexico, we illustrate the development of an SES computational model. The gray whale is protected by national and international law. Nonetheless, the Mexican government lacks the technical information to justify regulating the number of boats carrying out whale watching activities. Our model aims to produce an early warning signal to be used in the development of regulations for gray whale watching activities.

The SES computational model entailed the elicitation of scenarios regarding the carrying capacity of whale watching boats. We developed a system dynamics model assuming logistic growth of the gray whale population with a “harvesting” factor equivalent to the sublethal effect of whale watching activities. These activities, in turn, were coupled to a logistic model of tourism to complete the socio-economic loop of the overall system. We carried out a set of computational experiments through Monte Carlo simulations. These experiments allowed us to identify a critical ecological threshold by examining the relationship between the variance of gray whale abundance (the state variable) and the variance of the number of whale watching boats (the forcing variable). The early warning signal corresponded to the highest value of the quotient between the variances of the state and forcing variables.
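The coupled system just described can be sketched in a few lines: logistic whale growth with a "harvesting" term proportional to boat numbers, coupled to logistic tourism growth. All parameter values and names below are hypothetical placeholders for illustration; the actual model's parameters and Monte Carlo design are not reproduced here.

```python
# Stylized sketch of the whale-tourism system dynamics model: logistic
# whale growth minus a sublethal "harvesting" effect of boat traffic,
# coupled to logistic growth of the tourism fleet. Euler integration.

def simulate(steps=200, dt=0.1, r=0.5, K=1000.0, h=0.0005,
             g=0.3, B_max=120.0, N0=400.0, B0=5.0):
    N, B = N0, B0          # whale abundance, number of boats
    traj = []
    for _ in range(steps):
        dN = r * N * (1 - N / K) - h * B * N   # sublethal boat effect
        dB = g * B * (1 - B / B_max)           # tourism growth
        N += dN * dt
        B += dB * dt
        traj.append((N, B))
    return traj
```

The diagnostic in the abstract would then be computed over Monte Carlo draws of such parameters, as the quotient of the variance of N (the state variable) over the variance of B (the forcing variable).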

Hydro-geographic Complexity and Teleconnections Analysis: The Puerto Rico Stream Flow Regimes Case

ABSTRACT. In general, hydrological dynamics are studied through a time series analysis approach. Teleconnection detection uses cross-correlation analysis between the stream flow series and other indicators such as the SO or NAO indices (SOI and NAOI). However, hydrological dynamics can also be described in terms of information content, by observing the microstates of the series. Recently, measures of emergence, self-organization, complexity and relative complexity based on information theory have been developed, and their usefulness in hydrological studies can be evaluated (Fernández et al., 2014).

In this context, we carried out two types of analysis. The first was a time series analysis of the flow patterns in 19 streams of Puerto Rico considering their location; that is, a hydrological and hydro-geographic analysis. The second was a complexity analysis concerning the regularity and change of stream flow and the SOI. A comparison between the complexity of the rivers and that of the SOI was also performed to determine the relative complexity, or response of stream flows to the SO phenomenon.

Comparing the results of the time series analysis and the complexity analysis, we argue that the emergence, self-organization and complexity measurement approach is an alternative way to characterize the states of rivers at multiple scales. Future work could also verify the utility of our measures by including ecological aspects of migratory species that are affected by the discharge.
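The information-theoretic measures cited above can be sketched as follows, assuming the formulation of Fernández et al. (2014) in which emergence is normalized Shannon entropy, self-organization its complement, and complexity their balanced product. The discretization of a stream-flow series into symbols is left to the caller and is illustrative here.

```python
# Sketch of emergence (E), self-organization (S) and complexity (C) for a
# discretized series, following the information-theoretic definitions of
# Fernandez et al. (2014): E = H / H_max, S = 1 - E, C = 4 * E * S.
from collections import Counter
from math import log2

def eso_complexity(symbols, alphabet_size):
    counts = Counter(symbols)
    n = len(symbols)
    H = -sum((c / n) * log2(c / n) for c in counts.values())
    E = H / log2(alphabet_size)   # emergence: normalized entropy
    S = 1.0 - E                   # self-organization
    C = 4.0 * E * S               # complexity, maximal at E = 0.5
    return E, S, C
```

A perfectly regular (constant) discharge gives E = 0, S = 1, C = 0; a maximally variable one gives E = 1, S = 0, C = 0; complexity peaks for series balancing change and regularity.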

The structural drivers behind neoextractivism: a discussion about edge centralities
SPEAKER: Carlos Perez

ABSTRACT. Neo-extractivism has been the main economic model of Latin America in the last decade, and for a major part of its modern history. The almost exclusive concentration on exports of raw materials from agriculture and mining, with little added value, has prompted a broad research literature around this phenomenon: from the worsening of the terms of trade in the scope of the regional dependency theories of ECLAC, to the more recent natural resource curse and Dutch Disease of the North American and European literature, both of which pose extractivism as a syndrome of economic and policy ailments that permeates Latin American economic development. Recently, the definition of the concept has gone beyond mere export concentration to include the promotion of commodity exports by government (hence the prefix “neo”) and the ailments usually associated with such practice: environmental degradation, unequal distribution of income and international commerce imbalances. This last connotation, implicit in all the aforementioned literature, has a structural implication which characterizes global trade: a country which concentrates on exporting raw materials is highly likely to be at a disadvantage when compared to countries with other production specializations. This structural characteristic of global trade, when diligently and scientifically scrutinized, can shed light on imbalances in today’s commercial network, and thus provide leads for policy prescriptions that can make this system a better one for all. A network analysis of the world trade network, specifically focused on the differences between countries that specialize in raw materials from agriculture or mining and those that do not, is an apt methodology for such an inquiry.
However, in network theory the common unit of analysis is the node (the country, in the case of global trade), whereas for extractivism the edge (the flow of goods between countries) is of utmost importance. This article therefore develops the framework for an innovative measure of edge centrality that can capture the relevant information of such natural resource and economic flows. The straightforward application of such a measure is to determine the level of “extractivism” of a country relative to the global trade network. Current questions, such as whether Mexico is more or less extractive than Brazil, could be answered using such an edge centrality measure, which captures different features of what extractivism implies and would thus be a new analytical asset for the critical objective of sustainable (and egalitarian) economic development in Latin American countries.

16:30-18:30 Session 10H: Complexity in Physics and Chemistry - Interdisciplinary applications

Parallel session

Location: Xcaret 4
Indications of a critical transition in global climate timeseries

ABSTRACT. Climate change is one of the most pressing matters that humanity faces in the twenty-first century. The global land and ocean temperature average is the most direct measurable evidence of this change and has been the centerpiece of climate change discussion and research. Although these climate time series have been studied extensively, our work aims to analyze the modern data record --mainly that spanning the twentieth century-- in the context of dynamical complex systems and phase transitions. The driving question is whether the global temperature record exhibits some of the characteristic telltale signs shown by many dynamical systems near important transitions such as the critical points of phase transitions (the Ising model of ferromagnetism being a classical example). In this work we have analyzed the global temperature data (both global averages and surface distribution) published by the Berkeley Earth Group using traditional tools of time series analysis. We have found suggestive changes in the correlation properties of the power spectra, in the autocorrelation function, and in the evolution of the statistical properties of the spatial distribution of decadal temperature records as time progresses from 1880 to 2010. To gain a better understanding of the possible significance of this result, we have engaged in the theoretical study of a well-understood model of planetary homeostasis: Lovelock's Daisyworld. By understanding the dynamical differences exhibited by the time series of Daisyworld in the self-regulated and non-regulated regimes, we hope to draw connections hinting that climate change may be altering the regulatory stability of the global climatic system of the Earth and that we may be approaching a critical tipping point.
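One standard telltale sign alluded to above is critical slowing down, visible as rising lag-1 autocorrelation of the series in a sliding window as a transition is approached. A minimal sketch (window length and input series are illustrative):

```python
# Sketch of a critical-slowing-down indicator: lag-1 autocorrelation of a
# series, estimated in a sliding window. A sustained rise toward 1 in the
# windowed values is the classic early-warning signature.

def lag1_autocorr(x):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    if var == 0:
        return 0.0
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var

def sliding_ac1(series, window):
    return [lag1_autocorr(series[i:i + window])
            for i in range(len(series) - window + 1)]
```

In practice the series is first detrended within each window so that the slow warming trend itself does not inflate the indicator.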

Stable structures and dynamic aspects of sleep EEG

ABSTRACT. Electroencephalographic scalp recordings (EEG) are noise contaminated and highly non-stationary. One might therefore expect that the average of an interrelation measure like the Pearson coefficient, which may take positive and negative values with the same probability, should (almost) vanish when estimated over long data segments. However, the average zero-lag cross-correlation matrix estimated over the sleep stages of healthy subjects results in a pronounced, characteristic correlation pattern. This pattern seems to be a generic feature of the brain dynamics, because it is independent of the physiological state; even when calculated for different subjects, we find a striking similarity between the average correlation structures. Hence, dynamical aspects of the brain dynamics should be studied as deviations from this stable pattern. In the present study we confirm this hypothesis via the analysis of sleep-EEG recordings and discuss our results within the framework of established theories about the “sleeping brain”.
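The quantity at the heart of this analysis can be sketched as below: the zero-lag Pearson cross-correlation matrix between channels, averaged over epochs. The channel data and epoch segmentation here are illustrative, not an EEG pipeline.

```python
# Sketch: zero-lag Pearson cross-correlation matrix between channels,
# and its element-wise average over epochs (data segments).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def correlation_matrix(channels):
    """Zero-lag Pearson correlations between all channel pairs."""
    k = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(k)]
            for i in range(k)]

def average_matrix(matrices):
    """Element-wise mean over per-epoch correlation matrices."""
    k = len(matrices[0])
    return [[sum(m[i][j] for m in matrices) / len(matrices)
             for j in range(k)] for i in range(k)]
```

The abstract's observation is that, rather than washing out toward zero, this epoch average converges to a stable, subject-independent pattern.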

Climate Information Production Recorded in Water Isotopes from Deep Polar Ice Cores

ABSTRACT. The Earth’s climate system is a nonstationary complex system with intricate spatiotemporal dynamics and complicated external forcing, and it appears to be changing rapidly. One promising way to explore this is by framing what is currently happening to the climate in the context of its past history---e.g., the detailed histories that are laid down in ice cores. From the water isotope records in these cores, it is possible to reconstruct climatological factors like temperature and accumulation rates dating back to the last glacial period, and beyond.

For our initial study we used the two highest-resolution records available, one from Northern Greenland (NGRIP) and one from West Antarctica (WAIS). The NGRIP core, drilled in 1999-2003, covers 128,000 years at 5cm resolution. The WAIS core, completed in the past few years, covers a shorter timespan (68,000 years), but at 0.5cm sampling.

From these data, we would like to answer questions like: Do these records contain any information about the past, present or future climate? If so, what information can we reliably extract? Do extreme events like supervolcanic eruptions or abrupt temperature transitions (e.g., Dansgaard-Oeschger events) have detectable signatures?

As a first pass at answering these questions, we calculated weighted permutation entropy (WPE) in a sliding window across these records. This measured the average rate at which new information---unrelated to anything in the past---is produced by the climate.
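
The WPE calculation can be sketched as follows. This is a minimal, self-contained version on synthetic series, not the actual ice-core pipeline; the embedding `order` and `delay` values are illustrative choices:

```python
import numpy as np
from collections import defaultdict

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy: ordinal patterns of embedding vectors,
    with each occurrence weighted by the vector's variance."""
    n = len(x) - (order - 1) * delay
    weights = defaultdict(float)
    total = 0.0
    for i in range(n):
        v = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(v))   # ordinal pattern of the window
        w = np.var(v)                    # weight large-amplitude windows more
        weights[pattern] += w
        total += w
    p = np.array([w / total for w in weights.values()])
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)              # white noise: maximal information production
trend = np.sin(np.linspace(0, 20, 5000))   # smooth signal: low information production
print(weighted_permutation_entropy(noise)) # close to the maximum log2(3!) ≈ 2.585
print(weighted_permutation_entropy(trend))
```

In a sliding-window analysis, this function would simply be applied to successive overlapping segments of the isotope record, yielding an information-production curve along the core.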

Our preliminary results suggest that analytical techniques, as well as thermodynamic, climatic, and glaciological effects, impact the information production of the climate system. One such early finding suggests that WPE can detect differences in hydrogen and oxygen isotope records that are likely related to kinetic fractionation in the hydrologic cycle, including evaporation of source waters, diffusion in the firn column, and solid diffusion during geothermal heating. The second-order thermodynamic differences between these isotopes are known in theory, but detecting these effects in data has been elusive until now. Additionally, studying information production over time in these records has allowed us to detect extreme events that were not visually apparent in the raw data, such as instrumentation failure and supervolcanic eruptions.

Because of the physical and chemical processes that affect the ice, such as compression and deformation, the relationship between the depth in the core and the age of the material at that depth is nonlinear. Since the precise nature of those effects is unknown, it is a real challenge to deduce an age-depth model; this process involves a combination of layer counting, synchronization with tiepoints (e.g., eruptions), modeling, and interpolation. The intertwined mechanics of age, measurement resolution, accumulation variation and the art of age-depth models have created interesting challenges for us, which we will discuss in our talk.

Going forward, we believe that similar applications of information-theoretic methods to paleoclimate records may prove to be a powerful forensic tool for unraveling the mysteries of our ancient climate system. In turn, this may provide deep insights into the current climate system—such as quantifying the timing and impact of human civilization on the climate.

Temporal evolution of the giant component in Patent Citation Network

ABSTRACT. The patent citation network is formed from the references of patents to other patents: the nodes of the network are the patents, and a link between two nodes exists if one patent cites the other. All links are directed, and the network is acyclic because references point only to prior patents. We formed the network of all patents in the European Patent Office (EPO) and the Patent Cooperation Treaty (PCT) for the period 1978-2016. It includes 14,031,393 patents and 22,107,570 links. The majority of patents have only a few citations, while highly cited or highly citing patents are rare. A percolation method was applied to the network [1] to determine how many days after a patent is announced it takes for the giant component to form. Starting from day 1 of the data, the result is approximately 1200 days. The same procedure was followed for various later starting dates in the interval 1978-2016. The outcome is that the giant component forms in a shorter time period as we progress in time. While at the beginning of the data (1978) it takes more than 1000 days, by around 1983 it takes only ~500 days. After that year, the number of days required for the giant component to form decreases gradually; in the last decade (~2010 and later) it takes only one to two months. Possible causes for these drastic changes include the increase in interdisciplinarity in science, and thus in patents, as well as the internet and the ease of communication and exchange of ideas that it provides. [1] D. Stauffer and A. Aharony, Introduction to Percolation Theory (Taylor and Francis Ltd, London, United Kingdom, 1994).
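
Tracking giant-component formation in a growing citation network can be done incrementally with a union-find structure over weakly connected components. The following is a toy sketch, not the EPO/PCT analysis: one hypothetical patent arrives per "day" and cites a random number of earlier patents, so all assumed rates are illustrative:

```python
import random

class UnionFind:
    """Tracks weakly connected components (and the giant one) incrementally."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.giant = 1
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.giant = max(self.giant, self.size[ra])

random.seed(1)
n_days = 2000                  # one hypothetical patent filed per "day"
uf = UnionFind(n_days)
for day in range(1, n_days):
    for _ in range(random.randint(0, 3)):      # citations point only to prior patents (DAG)
        uf.union(day, random.randrange(day))
    if day in (10, 100, 1000, 1999):
        print(f"day {day}: giant component holds {uf.giant / (day + 1):.0%} of patents")
```

Repeating the growth from different starting dates, as in the abstract, amounts to re-running this loop on the corresponding subnetwork and recording the first day the giant component crosses a chosen size threshold.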

Continuous-Time Random Walk simulations of tracer transport with radial drift in 2D fracture networks

ABSTRACT. The transport of particles in 2D fracture networks is simulated by a continuous-time random walk (CTRW) approach, in which random walks with prescribed statistics are considered. The characteristics of the jump vector are related to the statistical properties of the fractures in the porous medium, specifically the orientation and the fracture segment length, which are given in terms of probability distribution functions (PDFs). The velocity of the tracer is incorporated by means of a conditional probability function which considers that a jump of size r takes a time t, i.e. we assume a coupled CTRW. In general the velocity need not be constant. By varying the functional forms chosen for the PDFs, and their respective parameters, several transport behaviors are observed. In this way classical Brownian motion can be recovered, and furthermore Levy walks and anomalous transport in general can be analyzed. The simulations also include the possible presence of a radial drift that gives particles a preferential jump orientation in the radial direction with respect to an origin; this drift can depend on the distance to the origin. The introduction of a radial drift in CTRW simulations is new, and resembles the effect of a continuous fluid injection at a given point in the fracture network. This fluid carries the tracer particles, giving rise to an advective motion that adds to the dispersive stochastic random-walk motion. The random-walk properties calculated in these simulations are: particle trajectories, particle concentration profiles and radial PDFs, tracer breakthrough curves at diverse radii, and the temporal evolution of the first and second moments of the radial PDFs. Results for diverse situations of interest in geosciences are presented.
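
A coupled CTRW with a radial drift can be sketched as below. All specific choices are illustrative assumptions, not the authors' model: a Pareto-tailed jump-length PDF, isotropic orientations, waiting times coupled as t = r/v, and a constant drift coefficient:

```python
import numpy as np

def ctrw_radial(n_walkers=2000, n_steps=200, alpha=1.5, drift=0.05, v=1.0, seed=42):
    """Coupled CTRW in 2D: heavy-tailed jump lengths, a waiting time t = r / v
    coupled to the jump size, plus a radial bias mimicking fluid injection at
    the origin (drift strength and PDFs are illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_walkers, 2))
    t = np.zeros(n_walkers)
    for _ in range(n_steps):
        r = rng.pareto(alpha, n_walkers) + 1.0        # heavy-tailed jump length PDF
        theta = rng.uniform(0, 2 * np.pi, n_walkers)  # fracture orientation PDF (isotropic here)
        step = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
        radial = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
        x += step + drift * r[:, None] * radial       # outward drift from the injection point
        t += r / v                                    # coupled waiting time
    return x, t

x, t = ctrw_radial()
msd = np.mean(np.sum(x**2, axis=1))
print("mean squared displacement:", msd)
```

From the ensemble `(x, t)` one can then accumulate the quantities listed in the abstract: concentration profiles, radial PDFs, breakthrough curves at chosen radii, and the moments of the radial PDF over time.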

16:30-18:30 Session 10I: Economics and Finance - Technology, innovation & growth

Parallel session

Location: Gran Cancún 1
Firms' Complexity: Technological Scope, Coherence and Performance

ABSTRACT. The aim of this work is to shed light on the relationship between firms' performance and their technological portfolios using tools borrowed from complexity science. In particular, we ask whether the accumulation of knowledge and capabilities related to a coherent set of technologies leads firms to experience advantages in terms of productive efficiency. To this end, we analyzed both the balance sheets and the patenting activity of about 70 thousand firms that filed at least one patent over the period 2004-2013. From this database it is possible to define a monopartite network of technological codes, which can be used to assess each firm's configuration, defined as the set of technologies in which the given firm is active. We then introduce the firms' coherent diversification, a quantitative assessment that does not evaluate a technological portfolio based only on the number of fields it encompasses, but also weighs each of its constituent fields of technology on the basis of their coherence with respect to the firm’s global knowledge base, as illustrated by Figure 1. Such a measure implicitly favors companies with a diversification structure comprising blocks of closely related fields over firms with the same breadth of scope but a more scattered diversification structure. We find that our measure of the coherent diversification of firms is quantitatively related to their economic performance and, in particular, we show on a statistical basis that it explains labor productivity better than standard diversification. This is empirical evidence that this measure of the coherent diversification of technological portfolios captures relevant information about the productive structure of firms. As a consequence, it can be used not only to investigate possible synergies within firms but also to recommend viable partners for mergers and acquisitions.
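
The intuition of coherence-weighted diversification can be illustrated with a small sketch. The relatedness matrix, the averaging rule and both portfolios below are hypothetical; the paper's actual measure may weigh fields differently:

```python
import numpy as np

# Hypothetical relatedness matrix between 5 technology fields
# (e.g. normalized co-occurrence of technological codes across firms).
R = np.array([[1.0, 0.8, 0.7, 0.1, 0.0],
              [0.8, 1.0, 0.6, 0.2, 0.1],
              [0.7, 0.6, 1.0, 0.1, 0.0],
              [0.1, 0.2, 0.1, 1.0, 0.3],
              [0.0, 0.1, 0.0, 0.3, 1.0]])

def coherent_diversification(portfolio, R):
    """Weigh each field in a firm's portfolio by its average relatedness
    to the rest of the portfolio, instead of just counting fields."""
    score = 0.0
    for f in portfolio:
        others = [g for g in portfolio if g != f]
        score += np.mean([R[f, g] for g in others])
    return score

coherent_firm = [0, 1, 2]    # a block of closely related fields
scattered_firm = [0, 3, 4]   # same breadth of scope, scattered structure
print(coherent_diversification(coherent_firm, R))   # -> 2.1
print(coherent_diversification(scattered_firm, R))  # -> 0.4
```

Both firms are active in three fields, so standard diversification cannot distinguish them, while the coherence-weighted score clearly favors the firm with the block of related fields.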

Shock Diffusion in the European Production Network
SPEAKER: Duc Thi Luu

ABSTRACT. The global economic system is a highly interlinked network composed of heterogeneous industries in different countries. In such a complex system, the production of any industrial sector has two distinct effects on the remaining industrial sectors: on the one hand, by increasing/decreasing production it will demand more/less inputs from other sectors (i.e. “upstream” propagation); on the other hand, it will be able to supply more/less output to the sectors that use its production as an input to their own production process (i.e. “downstream” propagation).
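
One common formalization of these two channels uses the Leontief and Ghosh inverses of an input-output table. The sketch below uses a tiny invented 4-sector table, not EMU data, and the mapping of the two inverses onto the abstract's "upstream"/"downstream" terminology is our reading, stated in the comments:

```python
import numpy as np

# Toy 4-sector input-output table; Z[i, j] is the illustrative flow
# from supplying sector i to using sector j (not EMU data).
Z = np.array([[10.,  5.,  0.,  2.],
              [ 4., 12.,  6.,  0.],
              [ 0.,  3.,  8.,  5.],
              [ 6.,  0.,  2., 10.]])
x = np.array([40., 50., 35., 45.])           # gross output by sector

A = Z / x                                     # technical coefficients (input shares)
B = Z / x[:, None]                            # allocation coefficients (output shares)

leontief = np.linalg.inv(np.eye(4) - A)       # demand shocks propagate to suppliers ("upstream")
ghosh = np.linalg.inv(np.eye(4) - B)          # supply shocks propagate to users ("downstream")

shock = np.array([0., -5., 0., 0.])           # final demand for sector 1 falls by 5
dx = leontief @ shock
print("sectoral output changes:", dx)
print("aggregate loss:", dx.sum())            # amplified beyond the initial -5
```

Ranking sectors by the column sums of these inverses is one standard way to identify the hubs that facilitate propagation, in the spirit of the sector-level analysis below.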

This work studies shock diffusion in the input-output production network of the Economic and Monetary Union (EMU). Our goal is to answer the following questions: (i) if a shock hits an industry or a country, what are the consequences for the aggregate output of the whole EMU as well as for the output of each other country?; and (ii) how does the network structure among industries affect shock diffusion and the possible emergence of crises at a national and international level?

At the sector level, we find that shocks initially triggered in different industries ignite cascades with different aggregate “downstream” and “upstream” severity. In addition, based on the individual and global damages that would result from the failure of each sector, we can rank each sector's economic importance as well as precisely identify the most important sectors (i.e. hubs) facilitating shock propagation in the EMU's input-output network.

At the country level, we find that, first, some countries such as DEU, FRA, ITA, ESP, NLD, and BEL are the key propagators to other countries. Second, the impacts on the other countries of a shock initially triggered in each country are highly heterogeneous, revealing that the way a shock propagates depends crucially on the intensity of bilateral trade linkages as well as on the tightly connected nature of certain sector clusters in some countries.

Furthermore, we show that when both upstream and downstream propagation channels are taken into account in cascading failures, large shocks to some key propagators can lead to a huge loss in the aggregate output of the whole EMU.

Our findings shed light on the way shocks are propagated in the EMU's input-output network. The results provide useful information for designing more effective strategies to mitigate cascades in this network.

Exaptation: a crucial mechanism for the open-endedness of biological and technological evolution

ABSTRACT. Vladar et al. (2017) have recently suggested that open-endedness in biological evolution can only be envisaged if evolution can add new functional dimensions to the phase-space of species, and suggest that exaptation enables the emergence of such new functional dimensions. Is this also true in technological innovation? It seems undisputable that technological evolution has been both open-ended and extremely rapid, especially since the Industrial Revolution. We posit that discoveries due to exaptation may have played a role in such dynamics. To assess the role of exaptation in technological innovation and in the open-endedness of technological evolution, we measured the radiation of emergent uses for a sample of FDA-approved drugs based on new molecular entities (1998-2000 sample, for a total of 83 drugs). First, we identified all their FDA-approved uses, and the emergent uses later discovered by clinicians as listed in the 2013 version of the Micromedex Drugdex compendium. Second, we associated each FDA-approved and emergent use with the respective disease(s) as classified in the ICD9-CM (WHO’s International Classification of Diseases, version 9-CM). Third, we compared each emergent use with the FDA-approved one to understand whether the emergent use represents a new functionality and hence an exaptation. Our results showed that:
o Slightly more than 40% of emergent uses appear to be exaptations.
o About 70% of these involve a first-order bifurcation and thus are significantly removed from the original use.
o The distribution of emergent uses and exaptations across drugs is a long-tailed distribution of the power-law type.
o A fraction of uses shows a radical impact, as measured by their capability of treating previously untreated diseases or providing substantial improvement over existing treatments. Almost all radical uses are characterized by a large distance from the original adaptive use and are exaptive.
o All the radical uses for which it was possible to reconstruct the history of the discovery indicate that the discovery was unanticipated and resulted from the serendipitous observation of a new function.
o These radical uses also seem to rely on different molecular pathways and (sometimes) different phenomena than the approved use.
o As an illustration, see the pattern of radiation of thalidomide (Figure 1). All uses are exaptive; seven of them are radical and rely on different pathways than the approved use. A few uses revealed unsuspected phenomena.
o Overall, the observations that: a) radical uses are exaptive and functionally distant from the approved uses; b) their discovery is mostly due to serendipitous events; c) they seem to rely on new pathways/phenomena; and d) their discovery may lead to systematic research meant to uncover the science behind it, suggest that exaptation represents a mechanism for discovery of the ‘adjacent possible’ that adds further dimensions to the complexity of the existing phase-space of technological evolution. We speculate that such discoveries occur when the exposure of current artifacts to very distant contexts activates ‘affordances’ in the artifact that reveal new mechanisms of action and occasionally unknown phenomena.

Figure 1: Functional diversification of thalidomide.

Data Protection Law and Data Controllers' Behaviors in Digital Trade: An Analysis Based on a Standing-Ovation Model
SPEAKER: Kunbei Zhang

ABSTRACT. Our paper deals with digital trading, i.e. selling and purchasing services via the internet, and its impact on the legal frameworks that aim to protect individuals' personal data and that have been developed in the last decade, notably in the European Union. As the description implies, personal data about individuals and their commercially relevant habits and preferences are important building blocks of digital trade. The increase in digital trade volume is consequently having a significant impact on personal-data protection, which has been under debate in several territories for years. Many relevant questions about the enforcement and effectiveness of data-protection law have emerged. How do domestic enforcement departments ensure compliance with data-protection law when online transactions may instantaneously transmit information around the world? Where and how are data located, if data rarely stay in one location? How are personal data themselves traded between personal-data users? What kinds of relationships do national data-protection authorities have with data processors? And what defines the effectiveness of data-protection law if divergent approaches to data privacy and protection, particularly as regards the United States and the European Union (EU), reportedly impose substantial costs and uncertainty on companies? These questions keep challenging current data-protection law regimes. We argue that current data-protection practice, based on a positivist legal-theory perspective, treats the effectiveness issue too narrowly and too statically. Drawing on complexity theory, we find agent-based modeling fit (i) to simulate and explain the core (inter)actions among participating stakeholders, (ii) to emulate the combined working mechanisms of the legal practice, and (iii) to support calibration of the model’s parametric adaptations.
We contend that the ability to emulate and calibrate toy versions of the processes by which enforcement departments, social-media service providers and commercial profile-data users interact is essential for understanding and, if necessary, adapting them. In this paper, we offer the results of our investigations. First, we employ and adapt Miller and Page’s standing ovation model to explain how, in the network of personal-data users, a race to the bottom in protection levels can emerge in commercial practices where data protection is concerned. We subsequently investigate whether the “workflow” of configured competition, bargaining and altruistic-punishment encounters (as harvested from an earlier project describing the co-evolution of the legal fair-use principle and the complex practice of music-file sharing from 1960-2017, and as discussed at the TILTing 2017 conference) can be adapted so that it can help analyze and possibly even stop the rot. We conclude with an example model that provides an existence proof of the possibility.
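
A standing-ovation dynamic of the Miller and Page kind can be sketched as below. The parameters, the linear mixing of private signal and peer pressure, and the re-reading of "standing" as adopting a low data-protection level are all our illustrative assumptions, not the paper's calibrated model:

```python
import random

def standing_ovation(n=500, quality=0.5, threshold=0.5, rounds=20,
                     peer_weight=0.5, seed=0):
    """Miller & Page-style sketch: each agent 'stands' if a weighted mix of
    its private signal and the fraction of standing peers exceeds its
    threshold. Reading 'standing' as adopting a low protection level, a full
    cascade models a race to the bottom."""
    rng = random.Random(seed)
    signal = [quality + rng.uniform(-0.4, 0.4) for _ in range(n)]
    standing = [s > threshold for s in signal]            # initial, signal-only decision
    for _ in range(rounds):
        frac = sum(standing) / n                          # visible social pressure
        standing = [(1 - peer_weight) * signal[i] + peer_weight * frac > threshold
                    for i in range(n)]
    return sum(standing) / n

print(standing_ovation(quality=0.55))   # above the tipping point: near-full cascade
print(standing_ovation(quality=0.35))   # below it: the ovation fizzles out
```

The tipping behavior is the point: a modest shift in the perceived payoff of weak protection flips the whole population, which is the "race to the bottom" mechanism the abstract invokes.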

Compressing Networked Markets
SPEAKER: Tarik Roukny

ABSTRACT. Unlike centrally organized markets, markets where participants trade on a bilateral basis can generate large networks of contractual obligations. These markets are known to be opaque, as market information is often very limited for most agents. In addition, they are extremely large in size: the aggregate volume of total bilateral obligations can amount to several trillion dollars. This size, coupled with the lack of transparency of these markets, has become an important concern for policy makers.

In this paper, we show both theoretically and empirically that the size and complexity of markets can be reduced without affecting individual trade balances. First, we find that the networked nature of these markets generates an excess of obligations: a significant share of the total market volume can be deemed redundant. Second, we show conditions under which such excess can be removed while preserving individual net positions. We refer to this netting operation as compression and identify feasibility and efficiency criteria, highlighting intermediation as the key element driving excess levels. We show that a trade-off exists between the amount of excess that can be eliminated from markets and the conservation of trading relationships. Third, we apply our framework to a unique and comprehensive transaction-level dataset on one of the largest types of financial market: markets for derivatives. We document large levels of excess across all markets and time. Furthermore, we show that compression - when applied at a global level - can eliminate a considerable fraction of total notional even under relationship-conservative approaches.
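
The mechanics of compression and the conservative/non-conservative trade-off can be illustrated on a three-dealer toy market. The numbers and the hand-picked cycle are illustrative assumptions, not the paper's algorithm or data:

```python
import numpy as np

# Toy dealer market: gross[i, j] = notional that dealer i owes dealer j.
gross = np.array([[0., 10., 0.],
                  [0.,  0., 10.],
                  [4.,  0.,  0.]])

net = gross.sum(axis=0) - gross.sum(axis=1)   # net position: owed to i minus owed by i

# Conservative compression: reduce obligations along the cycle 0 -> 1 -> 2 -> 0
# by the smallest notional on it, leaving every trading relationship intact
# and every net position unchanged.
cycle_min = min(gross[0, 1], gross[1, 2], gross[2, 0])
conservative = gross.copy()
for i, j in [(0, 1), (1, 2), (2, 0)]:
    conservative[i, j] -= cycle_min

# Non-conservative compression: rebuild the market from net positions alone;
# the minimal achievable total notional is the sum of positive net positions.
minimal_total = net[net > 0].sum()

print("gross notional:      ", gross.sum())        # -> 24.0
print("after conservative:  ", conservative.sum()) # -> 12.0
print("theoretical minimum: ", minimal_total)      # -> 6.0
```

The gap between 12.0 and 6.0 is exactly the trade-off the abstract describes: removing the remaining excess requires re-linking dealers who had no prior relationship.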

While some markets have already adopted compression in order to reduce their risk and size, these results show, for the first time, the efficiency and trade-offs of compression when systematically applied at a larger scale. Finally, our framework provides ways for regulators and policymakers to curb the impact of financial crises and improve the efficiency of markets by reducing the total aggregate size of markets and reconfiguring the web of obligations.

Network Externalities Revisited

ABSTRACT. The theory of network externalities, even though it is relatively new (it appeared in the mid-1980s), is now a well-established field of analysis in the economic literature and an important reference in numerous legal areas including antitrust, intellectual property and corporate law. Its fundamental idea is that each new participant in a network derives private benefits, but also confers external benefits (network externalities) on existing users. Moreover, because of network externalities, markets may “fail” by tipping in favor of an inferior technology, producing excess inertia and technological monopolies. To date, network externalities have been studied from two different theoretical perspectives: the game-theoretical approach and the Pólya-process conceptualization; despite their important findings, these approaches rest on very artificial maneuvers and ad hoc assumptions that reduce their explanatory capacity. In this paper I show that they rely excessively on the assumption that consumers will adopt the technology that a sufficient number of individuals have already adopted; expectations are thus given and fixed in advance and, as a consequence, uncertainty is implicitly eliminated from these models. To reduce the negative effects of these ad hoc conditions on the explanatory power of these models, this paper attaches a network-based arbitrary function to the standard Pólya process and shows that this kind of function is more realistic, for it allows describing the dynamics of the adoption process more accurately and introducing an adequate level of uncertainty into models of markets subject to network externalities.
In this paper I also argue that the revision of Pólya-process-based network externalities models brought about by this network-based arbitrary function can help distinguish winner-take-all situations that result from increasing returns produced by network externalities from those situations where the outcomes conform to a power law and are produced by other sources of increasing returns. Consequently, I argue, these results are relevant not only for economics but also for management, and more specifically for strategy, whose theories have to be revised accordingly. This paper has five additional sections. The following section briefly analyzes the disadvantages of game-theoretical models of technological adoption subject to network externalities. Section three provides an overview of the application of Pólya processes to technological competition in markets characterized by network externalities. Section four shows that these models are based on very artificial assumptions that substantially reduce their explanatory power. Section five provides a solution to this problem. The paper finishes with some conclusions.
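
For reference, the standard Pólya process the paper revises can be sketched in a few lines: a new adopter picks technology A with probability equal to A's current share, which generates path-dependent lock-in to essentially arbitrary limit shares across runs. (This is the textbook baseline only; the paper's network-based arbitrary function would replace the linear share rule.)

```python
import random

def polya(rounds=5000, seed=None):
    """Standard two-technology Pólya urn: adoption probability of A equals
    A's current market share, so early random adoptions are self-reinforcing."""
    a, b = 1, 1
    rng = random.Random(seed)
    for _ in range(rounds):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

shares = [polya(seed=s) for s in range(200)]
print(min(shares), max(shares))   # limit shares spread across (0, 1): lock-in is path-dependent
```

Replacing `a / (a + b)` with a nonlinear or network-dependent adoption function is what changes which limit outcomes are reachable, which is the modeling lever the paper exploits.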

16:30-18:30 Session 10J: Foundations of Complex Systems - Complexity 2

Parallel session

Location: Cozumel 2
Modeling the role of the microbiome in evolution

ABSTRACT. It is known that animals and plants host a myriad of bacterial species which live with them, both on their skin and inside them. All the bacteria living with an organism constitute its “microbiome”, and the system composed of the host organism and its microbiome is known as a “holobiont”. It has been assumed that through evolution a symbiotic relationship has been established between the host and its microbiome, which is why all higher organisms have a microbiome in the first place. Although experimental evidence of this symbiotic relationship has been found in specific cases, it is still unclear what advantages the microbiome confers on the host, and vice versa. In this talk I present a model of network evolution in which networks representing the host co-evolve with networks representing its microbiome. The main difference between the host and microbiome networks lies in their mutation rates, the former being smaller than the latter. When the host network is trained to perform a task, the training is achieved faster and more efficiently in the presence of the microbiome networks than in their absence. Furthermore, when the host network has to perform several tasks, it can only do so when different types (or species) of bacterial networks are present. The results presented here suggest general principles of species co-evolution that allow us to understand why all higher organisms, from insects to humans, have evolved very diverse and complicated microbiomes.

Complexity science approach to modelling atrial fibrillation

ABSTRACT. The 21st century will be characterised by the need to master chronic diseases as the population ages, and among the greatest challenges is the disrupted cardiac electro-mechanics of the diseased heart that leads to atrial fibrillation (AF), which is increasing in prevalence and is the single biggest cause of stroke. Because of its common occurrence, and because there is a developing treatment that involves targeting complex signals within the heart, any progress in characterising the complexity of heart activity in atrial fibrillation is likely to have a large and immediate beneficial effect. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. Moreover, the progression of AF with age, from short self-terminating episodes to persistence, varies between individuals and is poorly understood. An inability to understand the origin of AF and predict variation in AF progression has resulted in less patient-specific therapy.

We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells [1]. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age (e.g., due to fibrosis or gap junctional remodelling), beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. Hence, we explicitly relate the microstructural features of heart muscle tissue (myocardial architecture) with the emergent temporal clinical patterns of AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits, consistent with clinical observations.
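
A stripped-down version of this kind of model can be sketched as an excitable-medium update on a lattice with sparse transverse links. All parameters below (grid size, refractory period, pacing period, fractions of transverse links and dysfunctional cells) are illustrative stand-ins, not the values of [1]:

```python
import numpy as np

def step(state, vert, fail_mask, tau):
    """One synchronous update: cells at state == tau are excited and excite
    resting neighbours (state == 0); state then counts down the refractory period."""
    excited = state == tau
    nbr = np.zeros_like(excited)
    nbr[:, 1:] |= excited[:, :-1]                 # rightward conduction along fibres
    nbr[:, :-1] |= excited[:, 1:]                 # leftward conduction along fibres
    nbr |= np.roll(excited & vert, 1, axis=0)     # downward, only across present links
    nbr |= np.roll(excited, -1, axis=0) & vert    # upward, across the same links
    fire = (state == 0) & nbr & ~fail_mask
    state[state > 0] -= 1
    state[fire] = tau
    return state

def run(nu, L=100, tau=50, period=220, steps=1000, delta=0.05, eps=0.3, seed=3):
    """Pace the left edge periodically; nu is the transverse coupling fraction."""
    rng = np.random.default_rng(seed)
    vert = rng.random((L, L)) < nu                # sparse transverse couplings
    dysf = rng.random((L, L)) < delta             # permanently dysfunctional cells
    state = np.zeros((L, L), dtype=int)
    activity = []
    for t in range(steps):
        if t % period == 0:
            state[state[:, 0] == 0, 0] = tau      # sinus pacemaker on the left edge
        fail = dysf & (rng.random((L, L)) < eps)  # dysfunctional cells may not fire
        state = step(state, vert, fail, tau)
        activity.append(int((state == tau).sum()))
    return float(np.mean(activity))

print("mean activity, dense transverse coupling (nu=1.0):", run(1.0))
print("mean activity, sparse transverse coupling (nu=0.1):", run(0.1))
```

With dense coupling, paced plane waves cross the tissue and die at the far edge; sparsifying the transverse links is what allows wave break and reentry around dysfunctional regions, the mechanism for spontaneous AF described above.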

Furthermore, the model reveals how variation in AF behaviour arises naturally from microstructural differences between individuals: the stochastic nature of progressive transversal uncoupling of muscle strands with age results in variability in AF episode onset time, frequency, duration, burden, and progression between individuals [2]. Again, this is consistent with clinical observations. The uncoupling of muscle strands can cause critical architectural patterns in the myocardium that anchor microreentrant wave fronts and thereby trigger AF. It is the number of local critical patterns of uncoupling as opposed to global uncoupling that determines AF progression. This insight may eventually lead to patient-specific therapy when it becomes possible to observe the cellular structure of a patient’s heart.

[1] K. Christensen, K.A. Manani and N.S. Peters, Simple model for identifying critical regions in atrial fibrillation, Phys. Rev. Lett. 114, 028104 (2015).

[2] K.A. Manani, K. Christensen and N.S. Peters, Myocardial architecture and patient variability in clinical patterns of atrial fibrillation, Phys. Rev. E 94, 042401 (2016).

Information transfer enhanced by noise on a human connectome model

ABSTRACT. Stochastic resonance (SR) is a phenomenon in which noise enhances the response of a nonlinear system to an input signal. The nervous system, and particularly the brain, has to integrate extrinsic and intrinsic information in a noisy environment, suggesting that it is a good candidate to exhibit SR. Here, we aim to identify the optimal levels of noise that ensure the best transmission of a signal through a discrete dynamic model implemented on the human connectome [1]. We find a noise level (different from zero) that enhances the similarity between an input signal that is introduced through a seed node and the activity of all other nodes in the system (Fig. 1). Furthermore, the optimal noise level is not unique. Instead, we find that when the model parameters are such that the system enters a critical regime, noise is able to enhance the similarity between input signals and signals that have propagated through the network, with higher similarities detected for particular seed and output node pairs. Given the simplicity of the model presented here, future research could aim at finding the differences in the dynamical properties of networks extracted from diseased brains. If the transmission of information in damaged networks is different, the noise effect could be explored as a way to improve neural communication in these systems.
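
The core SR effect can be reproduced with a single threshold unit rather than a connectome model; the sketch below is only that minimal caricature, with illustrative signal amplitude, threshold and noise levels:

```python
import numpy as np

def threshold_detector(signal, noise_level, threshold=1.0, seed=0):
    """A unit fires when subthreshold signal + noise crosses the threshold.
    Moderate noise maximizes input-output similarity (stochastic resonance)."""
    rng = np.random.default_rng(seed)
    noisy = signal + noise_level * rng.normal(size=signal.size)
    return (noisy > threshold).astype(float)

t = np.linspace(0, 20 * np.pi, 5000)
signal = 0.6 * np.sin(t)                 # subthreshold: never crosses 1.0 on its own
levels = [0.2, 0.5, 3.0]                 # too little, intermediate, too much noise
corrs = [np.corrcoef(signal, threshold_detector(signal, s))[0, 1] for s in levels]
print(dict(zip(levels, corrs)))          # similarity peaks at the intermediate level
```

In the connectome setting, the same non-monotonic curve is measured between the signal injected at the seed node and the activity that has propagated through the network.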

Universal limits to parallel processing capability of neural systems

ABSTRACT. One of the most fundamental puzzles that any general theory of cognition must address is our limited capability for multitasking of control-dependent processes: why is the human mind in some cases capable of a remarkable degree of parallelism (e.g., locomotion, navigation, speech, and bimanual gesticulation), while in others it is radically limited (e.g., conducting mental arithmetic and planning a grocery list at the same time)? Multiple-resource theories of cognition identify shared neural resources between tasks as the primary limitation on parallel capacity: if two encoded tasks rely on the same resource (e.g. representations encoded in a neural network), then their task pathways will interfere when executed simultaneously. However, to date, such theories have been expressed only qualitatively. Here we provide the first analytical approach to capturing the limitations on multitasking ability in general neural networks. We first show that the maximum number of tasks a neural network can successfully perform in parallel corresponds to the maximum independent set (MIS) of its task-interference graph. Then we give a tight estimate of the MIS density based on the task-degree distribution alone, and show analytically that it is independent of graph size, decreases with average degree, and grows with increasing network degree heterogeneity (Figure 1, left). We compare these results with the average parallel task density of random task subsets of size equal to the MIS's: we find, in fact, that for any fixed network average degree and heterogeneity, the average parallel capacity decreases strongly with increasing network size. This provides a formally rigorous solution to the multitasking puzzle: while linear growth in network size provides an increase in capacity for a specific task set (the MIS), it simultaneously imposes increasing constraints on the parallel capacity of generic task subsets (Figure 1, right).
In short, even a small overlap between tasks strongly limits overall parallel capacity to a degree that substantially outpaces gains by increasing network size.
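
The MIS correspondence can be made concrete on a tiny example. This is only a brute-force illustration of the definition on an invented 5-task interference graph, not the paper's analytical estimate, which works from the degree distribution alone:

```python
from itertools import combinations

def max_independent_set(adj):
    """Brute-force maximum independent set of a task-interference graph:
    the largest set of tasks sharing no resource, i.e. executable in parallel."""
    n = len(adj)
    for size in range(n, 0, -1):                  # try the largest sets first
        for subset in combinations(range(n), size):
            if all(not adj[i][j] for i, j in combinations(subset, 2)):
                return set(subset)
    return set()

# Tasks interfere (share a representation) when adj[i][j] == 1.
adj = [[0, 1, 1, 0, 0],
       [1, 0, 0, 1, 0],
       [1, 0, 0, 0, 0],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 1, 0]]
print(max_independent_set(adj))   # -> {1, 2, 4}: at most 3 of 5 tasks run in parallel
```

Even with only four interference edges, two of the five tasks can never be added to the parallel set, illustrating how sparse overlap already caps capacity well below network size.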

Emergent situations in ecosystems and their detection
SPEAKER: Jiří Bíla

ABSTRACT. This paper introduces a novel method for the detection of emergent situations in ecosystems. The ecosystem is considered as a complex system with many interacting elements. The paper continues recently published works [1, 2, 3], where the detection of an emergent situation was done by indicating the violation of so-called structural invariants of the system. In this paper only one type of structural invariant is used: the matroid and matroid bases (M, BM). A calculus for computing the appearance of an emergent situation is introduced. The application of the presented approach and computation method is demonstrated on the violation of the so-called Short Water Cycle (SWC), with some consequences of such a situation (dry landscape, invasions of parasites in forests). The Short Water Cycle refers to the behavior of a local ecosystem in which the volume of water that comes into the ecosystem is evaporated and falls back into this system. When the SWC is violated, the evaporated water rises quickly in the transport zone and does not have time to condense before it is transported outside the ecosystem to distant mountains, where it condenses spontaneously in the rising air streams. (Due to the enormous volumes of vapor that are transported, the condensation is very dynamic and sometimes leads to torrential downpours.) Violation of the SWC is demonstrated by situations in the Sumava National Park in South Bohemia. (The reason was a small volume of groundwater in the soil and the drying of trees, followed by an invasion of wood parasites and complete devastation of the landscape.) The proposed detection method is universal and can also be used for retro-analysis (e.g., for the violation of the SWC in the Yucatan region of Mexico many years ago). (The paper also introduces other emergent situations that could be interpreted positively.)

Two factors are very important in the description of a complex system: the level of the description and the basic group (compartment) of complex-system elements. Two descriptive sets are used: the first contains symptoms, represented by external observational variables (e.g., biodiversity, maximum temperature, …), and the second contains drivers (e.g., high velocity in the transport layer, a decrease in the area of landscape vegetation, …). The calculus for an emergent situation in a complex system introduced in the paper associates two variables with an emergent situation: the power of the emergent phenomenon and the complexity of the emergent phenomenon. The power of the emergent phenomenon is computed from the quantified, actualized symptoms, and from the power the complexity of the emergent phenomenon is computed, which determines the "size" of the compartment. Further computations lead to the detection of a Possible Appearance of an Emergent Situation (PAES).

16:30-18:30 Session 10K: Socio-Ecological Systems (SES) - Economy & Innovation

Parallel session

Location: Cozumel 5
Adaptive policies, guided by knowledge generation, in order to avoid private monopolies in an emerging technological sector
SPEAKER: Dries Maes

ABSTRACT. Deep geothermal energy currently appears to be on the edge of a take-off and offers the potential to supply a major share of Belgium's renewable energy requirements. For the authorities, deep geothermal energy production is a standard showcase of an emerging innovative technology. Pioneer installations require significant financial support from public funds to be profitable. This can be justified by the important learning effects that quickly improve the profitability of individual geothermal projects and allow the sector to emerge in the medium to long term. In this respect, geothermal development is very similar to the development of solar technologies in the last century, or the growth of the co-generation sector at the start of this century. However, geothermal energy intrinsically starts from the utilization of the deep underground, which can be considered a public resource. Geothermal projects also require large scales and exhibit high levels of investment risk compared to other innovative energy solutions. Considering the particular characteristics of geothermal energy, and their impact on the economic viability of the projects, the sector's development may be more difficult than for other technologies. In the medium term, it can be expected that early investment will allow public and private actors to learn crucial information about the deep underground. However, the same learning effects increase the probability of a regional private monopoly in geothermal energy. This monopoly can emerge in an open market with public support if no legal requirements for data exchange and open innovation are imposed. In this paper, we review the situation of geothermal energy in Belgium, the characteristics of the technology and the different public support instruments that are applicable.
We build an evolutionary model to simulate the future development of the sector under different policy scenarios, and to estimate the probability of private monopolies emerging in the sector. The dynamics of sector emergence require interdisciplinary models. A large part of the development potential is determined by the geological and technical characteristics of each new project. An economic level of analysis is added, because investments are determined by the availability of capital, risk assessments and expectations of market evolution. Finally, the complexity of the sector's evolution is caused in reality by the knowledge created during each geothermal activity, and by the impact of policy measures on the speed and capture of newly generated knowledge. The model therefore also includes the endogenous creation and exchange of knowledge between private partners, as well as the learning effect for policy makers that makes adaptive policy scenarios possible. The results show that protected private geological knowledge can lead to private monopolization of a public resource. This implies that policy makers have to include safeguards early on during the emergence of the sector. The underground can be managed as a public resource, and legal options are necessary to impose the exchange of geological data. This obligation is best coupled with other policy measures to ensure an optimal investment climate and fast sector growth, while avoiding early monopolization.

Assessing sustainability in North America's ecosystems using criticality and information theory

ABSTRACT. Sustainability has become a key concern in every economic and political discussion, for good reasons. Concern about whether current trajectories of human demography and socioeconomic activity can continue in the face of their environmental impacts has led to calls for "sustainability." Nevertheless, sustainability is often used in a very qualitative sense, so a quantitative measure of it through a system-level index is greatly needed. In this work we use an informational approach to quantify some aspects of ecosystem sustainability, in particular health and stability.

We propose a novel conceptualization of ecosystem health as criticality, following well-established ideas in human health. In this framework, for example, a heart is healthy if the power spectrum of the fluctuations of its electrical activity is scale invariant (S ~ f^beta) and scales as pink noise (beta ~ -1). In this work, as an analogue of heart activity, we used ecosystem respiration data from the AMERIFLUX database, a network of more than one hundred monitoring sites covering most ecosystem types of North America.

After selecting only time series without gaps and removing periodicities, a traditional spectral analysis was performed by computing the Fast Fourier Transform of the fluctuation time series and computing spectral indices by fitting power laws to the spectrum. Two fits are obtained. The first is a direct single power-law fit, used to evaluate whether the series follows a pink-noise type. The second is a piecewise-defined double power law, composed of a low-frequency power law followed by a high-frequency one, used to probe scale invariance. A time series is scale invariant if, according to the BIC information criterion, the single power-law model is better than the two power-law model.
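
The model-selection step described above can be sketched as follows. This is a minimal illustration, not the authors' code: function names are ours, and the breakpoint of the double power law is found by a simple exhaustive scan.

```python
import numpy as np

def power_spectrum(x):
    """One-sided power spectrum of a de-meaned time series."""
    x = np.asarray(x, dtype=float)
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    f = np.fft.rfftfreq(len(x))
    return f[1:], p[1:]                      # drop the zero-frequency bin

def fit_power_law(logf, logp):
    """Least-squares line in log-log space: returns (slope, residual SS)."""
    A = np.vstack([logf, np.ones_like(logf)]).T
    coef, _, _, _ = np.linalg.lstsq(A, logp, rcond=None)
    rss = float(np.sum((logp - A @ coef) ** 2))
    return coef[0], rss

def bic(rss, n, k):
    """Gaussian BIC for k fitted parameters over n points."""
    return n * np.log(rss / n) + k * np.log(n)

def classify(x):
    """Spectral exponent beta, and whether the single power law wins
    (by BIC) against the best broken (double) power-law fit."""
    f, p = power_spectrum(x)
    logf, logp = np.log10(f), np.log10(p)
    n = len(logf)
    beta, rss1 = fit_power_law(logf, logp)
    best_rss2 = np.inf                       # exhaustive breakpoint scan
    for b in range(5, n - 5):
        _, lo = fit_power_law(logf[:b], logp[:b])
        _, hi = fit_power_law(logf[b:], logp[b:])
        best_rss2 = min(best_rss2, lo + hi)
    scale_invariant = bic(rss1, n, 2) < bic(best_rss2, n, 5)
    return beta, scale_invariant
```

White noise should yield beta near 0 and integrated (brown) noise beta near -2, matching the noise classes discussed in the abstract.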

In terms of stability, using the same fluctuation time series, we followed on the one hand a Fisher information approach developed by Frieden, Cabezas and others, which has proved to be a robust method for assessing the stability of a system over time. On the other hand, we used the statcomp library in R to measure permutation entropy and complexity, in terms of Michaelian's ideas about ecosystem stability and out-of-equilibrium thermodynamics.
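
The permutation entropy mentioned here is the Bandt-Pompe ordinal measure that statcomp implements; a minimal sketch (our own naming and normalization, not the authors' pipeline) is:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy in [0, 1]:
    0 for a fully predictable series, near 1 for white noise."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):                  # ordinal pattern of each window
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    entropy = -np.sum(probs * np.log(probs))
    return entropy / np.log(factorial(order))   # normalize by log(order!)
```

A monotone series produces a single ordinal pattern (entropy 0), while an uncorrelated series visits all patterns roughly equally (entropy near 1).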

We validate our results by comparing them to Landscape Condition from the Nature Serve Network, which commonly refers to the state of the physical, chemical, and biological characteristics of natural ecosystems and their interacting processes.

In this way we assess sustainability (health and stability) in North America's ecosystems, finding a complex set of patterns that we analysed using traditional statistics and classification trees built with the C4.5 algorithm in WEKA. In general, we found that ecosystems out of criticality are older forests or have been altered by human activity or by events such as wildfires. Sites with pink-noise behavior (beta ~ -1) are statistically in better Landscape Condition than sites with white (beta ~ 0) or brown (beta ~ -2) noise. In the same way, stability is greater for sites with pink noise. Some heuristics for decision making are proposed.

How to guard against a company's supply-chain risks using global inter-firm relationships

ABSTRACT. In this paper, after examining the structure of global inter-firm networks, we discuss the implications of global linkages at the firm level for the proliferation of 'conflict minerals' or 'dirty products' through global buyer-supplier linkages, and apply these implications to similar supply-chain risk issues. We first investigate the structure of global inter-firm relationships using a unique dataset that contains information on the customer-supplier relationships of 423,024 major incorporated firms. The global customer-supplier network has scale-free properties. We show through community structure analysis that firms cross national borders and form communities within the same industry. There are also firms that act as bridges between these communities, so that throughout the world each firm is connected with an average of six business partners. By exploiting this feature of inter-firm links, it can be used as a countermeasure against risks related to conflict minerals and slave labor. Conflict minerals are natural minerals (gold, tin, tungsten, etc.) that are extracted from conflict zones and sold to perpetuate fighting. The most prominent example is the natural minerals extracted in the Democratic Republic of the Congo (DRC) by armed groups and funneled through a variety of intermediaries before being purchased by multinational electronics firms in industrial countries. There is wide discussion on how to mitigate the worldwide spread of conflict minerals. Using a simple diffusion model and the empirical result that firms form communities with firms belonging to the same industry but different home countries, we show numerically that regulations on the purchase of conflict minerals by a limited number of G8 firms belonging to specific industries would substantially reduce their worldwide use.
When these firms refuse to buy conflict minerals from their suppliers, the supply chains of the many intermediaries positioned upstream suffer. We also deal with slave labor issues. The global indirect connections with illegal firms through the lawful trade of each country are also attracting attention. For example, nobody wants to import clothes made by a garment manufacturer that exploits the sweatshop laborers who make cheap clothing possible, even though trade with this garment manufacturer is lawful in its home country. We use the Dow Jones Risk & Compliance dataset, which covers about 40,000 firms that may have had adverse/negative media coverage related to specific topics: "Regulatory, Competitive/Financial, Environment/Production, Social/Labour". The firms associated with adverse media are concentrated in specific communities of the global inter-firm network. We can efficiently guard compliant firms against such supply-chain risks by cutting, on the basis of edge betweenness centrality, the links that connect these communities to the firms associated with adverse media. Our results improve supply-chain transparency and contribute to the sustainability of firms.
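
One way to operationalize this edge-betweenness cut on a toy graph is a Girvan-Newman-style procedure: repeatedly remove the highest-betweenness edge until the "risky" firms are disconnected from the "clean" ones. This is an illustrative sketch (our own code and names, not the authors' pipeline), using Brandes' algorithm for edge betweenness.

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Shortest-path edge betweenness (Brandes' algorithm) for an
    unweighted undirected graph given as {node: set(neighbours)}."""
    scores = defaultdict(float)
    for s in adj:
        dist, sigma = {s: 0}, defaultdict(float)
        sigma[s] = 1.0
        preds, order = defaultdict(list), []
        queue = deque([s])
        while queue:                      # BFS counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)        # back-propagate dependencies
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                scores[frozenset((v, w))] += c
                delta[v] += c
    return scores

def reachable(adj, s):
    """All nodes reachable from s."""
    seen, stack = {s}, [s]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def isolate(adj, risky, clean):
    """Remove highest-betweenness edges until no 'clean' firm
    is still connected to a 'risky' one; returns removed edges."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    removed = []
    while any(reachable(adj, r) & clean for r in risky):
        eb = edge_betweenness(adj)
        u, v = max(eb, key=eb.get)
        adj[u].discard(v)
        adj[v].discard(u)
        removed.append((u, v))
    return removed
```

On a "barbell" of two firm communities joined by one trade link, the procedure removes exactly that bridging edge.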

Complex adaptive processes in the oil and gas industry: a network perspective to knowledge management and the challenges of a retiring workforce
SPEAKER: Fabio Bento

ABSTRACT. Changes in workplace demographics have raised concern about the impact of mass retirement on knowledge management in the oil and gas industry. Some recent reports estimate that the industry may lose up to 50% of its workforce within five years. From a knowledge management perspective, this phenomenon is perceived as a knowledge loss crisis. The typical industry response to this challenge has consisted of strategies aimed at codifying and storing knowledge in databases and reports. However, such common managerial responses show important limitations in grasping the tacit and network-based dimensions of knowledge in rather complex oil production operations. In this conceptual article, I suggest a complex systems approach to this organizational problem by discussing the potential of social network analysis to illuminate the emergence of new patterns of interaction and knowledge in the oil and gas industry. Complex adaptive processes demand looking at organizations not only in terms of formal organization charts, but as units of adaptation characterized by emergent patterns. The focus on relational data that underpins organizational network analysis has important potential for uncovering the dynamics of the integrated operations that have permeated organizational changes in the oil and gas industry. Integration means more than the implementation of new technologies; it is a significant change in the business model of most companies, focusing on the interdependence among staff in different operational processes. This is a major shift from a traditional business model in which processes were modelled sequentially and understood with little focus on interactions with other processes. Organizational network analysis has been used to understand flows of information and to identify the roles of different agents in processes of communication.
However, understanding adaptive processes triggered by a knowledge crisis demands operationalizing the temporal dimension of complexity. First, we need to incorporate the temporal dimension of central concepts such as emergence and self-organization. Second, we need to go beyond the concern with system robustness that dominates most organizational network analysis and also develop an understanding of system resilience. Conceptualizing network analysis from the perspective of complex adaptive processes makes it possible to understand how systems behave in response to the retirement of central actors and losses in network-based expertise. Bearing that in mind, the conceptualization of knowledge loss crises in organizations seen as units of adaptation may contribute to our understanding of integrated operations and form the basis for new knowledge management initiatives in the industry.
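
The question raised here, how a knowledge network degrades as its members retire, can be made concrete with a simple percolation-style experiment (an illustrative sketch of our own, not drawn from the article): remove nodes in a given order and track the size of the largest connected component.

```python
from collections import deque

def largest_component(adj):
    """Size of the largest connected component of an undirected graph
    given as {node: set(neighbours)}."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:                       # BFS over one component
            v = queue.popleft()
            for w in adj[v]:
                if w not in comp:
                    comp.add(w)
                    queue.append(w)
        seen |= comp
        best = max(best, len(comp))
    return best

def retirement_curve(adj, order):
    """Fraction of nodes still in the giant component as the nodes in
    `order` (e.g. most-central experts first) retire one by one."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    n0, curve = len(adj), []
    for v in order:
        for w in adj.pop(v):               # retire v: drop all its ties
            adj[w].discard(v)
        curve.append(largest_component(adj) / n0 if adj else 0.0)
    return curve
```

In a star-shaped expert network, retiring the single central actor immediately fragments the whole system, which is exactly the robustness-versus-resilience concern raised above.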

Exogenous vs. Endogenous Critical Cascades, Community Building & Productive Bursts in Open Collaboration

ABSTRACT. Many social phenomena arise from human interactions, such as imitation and cooperation. Among them, collective action involves groups of individuals animated by the prospect of achieving a common goal. Although the conditions under which successful collective action arises are well documented [1], the mechanisms of interaction and triggering, as well as their long-term implications for the success of collective projects, remain unclear. Using version control data from 50 open collaboration projects, we show that the activity of contributors exhibits endogenous versus exogenous critical dynamics [2, 3] in their contribution timelines (Figure 1A) [4]. These contribution dynamics map onto self- and mutually excited Hawkes conditional Poisson processes [5, 6], with exogenous input activity and endogenous triggering with long memory (the memory kernel derives from human task prioritization and the economics of time as a non-storable resource, and is typically found to be exponential, stretched exponential or power law [7]). For a number of open collaboration projects, we have found that triggering is at or close to criticality, and may involve both individual and mutual excitation [8]. Critical cascades of contributions hence explain how a number of open collaboration projects remain active over long periods, even though they exhibit an overall marginally decreasing inflow of new contributors. Critical cascades also explain why open collaboration projects exhibit super-linear productive bursts, which concentrate the majority of contributions in very short time windows [9]. Such special moments [10] may include setting deadlines for software releases, kicking off new sub-projects, or simply convening regular co-located meetings. With some exceptions, these special moments are actively organized for the sake of maintaining or enhancing the spirit of community, for the fast-paced colliding of new ideas, and to make strategic choices about the project.
Hence, while short-lived (typically a few days), they often play a fundamental role in the long-term continuation and survival of an open collaboration project. A specific study of co-located special moments is carried out for two communities of data scientists (astrophysicists and neuroscientists), for which we have gathered detailed information on their physical meetings and we know that they are geared towards advancing data science projects hosted on an online repository platform (i.e., GitHub). We find that co-located meetings have a long-term positive impact on community building and contributions. Yet, each co-located event has its own impact on the community, as measured by the number of contribution cascades initiated on personal and shared code repositories (resp. c_i and c_s; see Figure 1C). Co-located events with a clear jump of activity exhibit decays of activity that are best fitted by power laws ~ t^(-α) with α ≈ 0.7 ± 0.1 for events c, d and e (both for individual and collective contribution cascades). Additionally, collective cascades (i.e., contributions by 2 or more individuals) follow a power-law distribution of sizes, while individual cascades follow a log-normal distribution (Figure 1B). This last result underlines the importance of co-located events for community building, which in turn fosters collaboration and critical cascades of self-propelled contributions.
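
A minimal simulation of the self-excited Hawkes dynamics invoked above, using an exponential memory kernel and Ogata's thinning method, can illustrate how endogenous triggering amplifies an exogenous Poisson input. The parameter values here are made up for illustration; the abstract also considers stretched-exponential and power-law kernels, and criticality corresponds to a branching ratio alpha/beta approaching 1.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    The branching ratio alpha/beta measures distance to criticality."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # The intensity at the current time bounds the intensity until
        # the next event, because the exponential kernel only decays.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)      # candidate next event time
        if t >= t_max:
            return events
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() * lam_bar <= lam_t:  # accept with prob lam_t/lam_bar
            events.append(t)
```

With a branching ratio of 0.8 the endogenous cascades multiply the exogenous event count by roughly 1/(1 - alpha/beta), producing the bursty timelines described in the abstract.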

Indifference-attractor bifurcation in discrete-time complex ecological-economic optimal control problems

ABSTRACT. We study the genesis of indifference thresholds for a class of single-state discrete-time optimal control problems as a system parameter changes. The class under consideration contains a wide range of ecological-economic models, like the discrete-time version of the lake pollution models introduced by Maler et al. (2003), which model the complex nonlinear responses of (shallow) lakes to increases in the stock of phosphorus in the water. This model is used here as an illustrative example. We consider state-costate (or phase) orbits that are associated with optimal state orbits, making use of the fact that these have to lie on the stable manifolds of saddle fixed points of the phase system. In particular, we show that if the phase system goes through a so-called heteroclinic bifurcation scenario, an indifference threshold and a locally optimal steady state are generated in an indifference-attractor bifurcation, and we then analyze the consequences for the optimal solutions. In the case of the lake model, the resulting bifurcation diagram summarizes the joint effect of the robustness of the lake and its economic importance on the form of the optimal policy. The diagram is partitioned into four parameter regions: unique steady state, low pollution, high pollution, and dependent on the initial state. Moreover, we analyze the effect of changing the 'stiffness', or responsiveness, of the lake. We find that in the pollution management of strongly responsive ecosystems, it is more likely that the optimal policy is 'green'.
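
The bistability underlying these indifference thresholds can be seen in a simple discretization of the shallow lake dynamics, x_{t+1} = x_t + a - b*x_t + x_t^2/(1 + x_t^2), with x the phosphorus stock, a the loading and b the sedimentation rate. The sketch below is ours, with illustrative parameter values chosen inside the bistable window; it is not the authors' optimal control model, only its uncontrolled state dynamics.

```python
def lake_step(x, a, b):
    """One step of a discretized shallow lake model: phosphorus stock x,
    external loading a, sedimentation rate b, and the sigmoidal
    internal-loading term x^2 / (1 + x^2)."""
    return x + a - b * x + x ** 2 / (1 + x ** 2)

def attractor(x0, a, b, n=500):
    """Iterate the map from x0 and return the state it settles on."""
    x = x0
    for _ in range(n):
        x = lake_step(x, a, b)
    return x
```

For a = 0.095 and b = 0.6 the map has a low-pollution (oligotrophic) and a high-pollution (eutrophic) stable steady state, so the long-run outcome depends on the initial state, which is precisely the structure that generates indifference thresholds in the optimal control problem.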

Approaching the Impact of Neoliberal Policies on Health Equity and Wellbeing from a Complex Systems Perspective: The Case of Mexican Industrial Cities

ABSTRACT. During the last 35 years, Mexico's industrial cities have experienced an accelerated transformation of migration and urbanization patterns, environmental deterioration and a growing precarious labor market, among other factors. Many of them have reconfigured their economies by becoming industrial manufacturing centers connected to global production chains. Through critical realism and network theory, we develop a theory-informed conceptual framework to guide the mapping of the most important determinants of population health and wellbeing in urban agglomerations in Mexico between 1980 and 2016. Cities are taken as the space of conflict in which historical, political, economic and cultural dimensions converge, and where territorial social pacts materialize, shaping the management of public goods at the local level. The conceptual model is a systemic representation that simultaneously considers social, spatial and temporal elements and the most relevant social actors, grouping the latter into four categories: a) factions of capital, b) the institutional state, c) civil society and d) the labor force. This synthesis tool aims to achieve a better understanding of the systemic processes behind the implementation of neoliberal policies and how these affect the political, economic, and cultural determinants of health equity and wellbeing, and provides some insight into how to organize the complexity involved in studying the health effects. It could therefore constitute a meaningful stimulus for further empirical research and a basis for the development of future scenarios in health inequalities research inspired by complex-systems approaches.

Re-thinking nested and modular networks in ecological and socio-technical complex systems

ABSTRACT. The identification of macroscale connectivity patterns in complex networks has been central to the development of the field. Beyond the methodological challenges, these patterns matter to the community inasmuch as they result from complex structure-dynamics interactions. It is in this context (network architecture as an emergent phenomenon) that nestedness and modularity arise as prominent macrostructural signatures to study. Nestedness was originally developed in ecology to characterise the spatial distribution of biotas in continental and isolated landscapes, and to describe species-to-species relations. In structural terms, a nested pattern is observed when specialists (nodes with low connectivity) interact with proper nested subsets of those species interacting with generalists (nodes with high connectivity); see Fig. 1A (left). A modular network structure (Fig. 1A, middle) implies the existence of well-connected subgroups, which can be identified given the right heuristics.

Nestedness has established itself as a landmark feature of mutualistic settings, with an emphasis on natural ecosystems, and has triggered a large amount of research spanning fieldwork, modelling and simulation. Modularity constitutes a sub-area of complex networks in itself, with an unmatched number of references regarding algorithms and heuristics, network dynamics, alternative measures, and, of course, the diverse empirical contexts in which it plays a role.

In recent years we have become increasingly aware that nestedness is not exclusive to plant-animal interaction assemblages. Rather, it often appears in systems where positive interactions play a role. Perhaps this fact, together with the remarkable evidence that modularity is observed in many systems, has spurred research on the (possible) co-existence of both features. Here we focus on particular settings in which modularity and nestedness are observed together, and discuss some possible explanations and methodological problems. We then present a new formulation of the problem in which nestedness and modularity can coexist in the form of nested blocks within the network (NeMo structures: Fig. 1A, right). Using a proper formulation of the problem, we first exploit synthetic networks as a testbed for our approach. Once validated, we proceed to analyse hundreds of real networks to show by example that this type of structure exists in both uni- and bi-partite networks (Fig. 1B, C), and discuss possible directions from here.
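
The degree of nestedness discussed above is commonly quantified with the NODF metric of Almeida-Neto et al. (2008); a compact version for binary matrices might look like this (our own sketch, not the authors' implementation):

```python
import numpy as np

def nodf(matrix):
    """NODF nestedness of a binary (e.g. bipartite interaction) matrix,
    ranging from 0 (no nestedness) to 100 (perfectly nested)."""
    m = np.asarray(matrix, dtype=bool)

    def paired_overlaps(mat):
        deg = mat.sum(axis=1)
        vals = []
        for i in range(len(mat)):
            for j in range(i + 1, len(mat)):
                hi, lo = (i, j) if deg[i] >= deg[j] else (j, i)
                if deg[hi] == deg[lo] or deg[lo] == 0:
                    vals.append(0.0)    # no decreasing fill: contributes 0
                else:
                    shared = (mat[hi] & mat[lo]).sum()
                    vals.append(100.0 * shared / deg[lo])
        return vals

    vals = paired_overlaps(m) + paired_overlaps(m.T)   # rows and columns
    return sum(vals) / len(vals)
```

A perfectly nested matrix (each specialist's interactions are a subset of every generalist's) scores 100, while a diagonal (modular, non-nested) matrix scores 0; the NeMo structures described above would show high NODF within blocks but not across them.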